Thursday, February 23, 2006

More Random Observations of Insignificant Phenomena

If a sheet of 8.5x11 in. paper is taped to a wall by the top two corners, over time the bottom half of the page tends to curl inward. The cause is unclear, but it may be some form of creep, or the stresses of temperature fluctuation while the top half of the paper is held fixed. Moisture might have something to do with it. The curl might act as a sort of temperature gauge, with the amount of curl depending on the heat the paper has seen and the temperature at which it was taped up. The curl might also be used as a way of moving and sorting papers, such as those used in copiers and printers.

Gum tends to accumulate on concrete sidewalks over time. The gum is always blackened and flat, and the amount of it grows with the age of the pavement. The amount of gum covering could be used as a measure of age or cultural customs. Perhaps it could also be used as a measure of economic prosperity, littering laws, and the perception of littering laws. The gum itself might be harvestable for some recycled purpose such as adhesives, mortar, or ceramic products.

In most modern toasters, bread crumbs tend to accumulate at the bottom over time. People rarely empty or clean these toasters out during the lifetime of the toaster. The bread crumbs are usually burnt or dried out and thus do not decompose very well. Anything that might decompose them is usually killed by the daily toasting temperatures, which exceed the boiling point of water. The bottom of a toaster might serve as a useful testbed for equipment to be used in space, due to the extreme variations in temperature and the biological barrenness. The bread crumbs might also be harvested and used as a source of food, or perhaps as the base of some recipe.

Saturday, January 28, 2006

Observations of the Palm Tree in Front of My Apartment

While observing a withered palm tree in front of my apartment, I noticed several different activities that this tree is simultaneously engaged in. For one, it is gradually trying to curve around the corner of my apartment building, since our building largely blocks out the sun. The tree is growing at an 80 degree angle from the ground plane. To balance its weight, there is a significant bulk of bark and husk where another tree once grew right next to and attached to the palm tree. The palm tree has taken this material for its own, wrapped it in its own bark, and is now using it as a counterweight to its sideways growth. It is not clear whether the tree is intentionally using this as a counterweight, or whether it actually grew this extra material for that purpose. However, it appears to be doing just that.

The dead tree stump right next to it was cut in some age past and one can notice how hollow it is by knocking on the wood. However, the bark seems to be alive and a product of the main living tree. The palm tree has also been cut in several different places over the past and terminated those branches with some new bark covering. Old battle scars from a gardener long past.

What intrigues me is the purpose of the dead palm fronds that hang on to a palm tree long after they've become brown and non-photosynthetic. One possible theory is that they discourage primates, jaguars, or humans from trying to climb the tall trees; another is that the dead fronds act as a sort of shade along the base of the tree to keep it from overheating and losing moisture during the hot summers. I've also observed that these fronds only seem to fall off during rain and wind storms. Perhaps this is a form of reproduction. It is not clear from observation of this tree how it goes about reproducing.

Jacob Everist

Tuesday, April 12, 2005

Node Synchronization

Suppose you have many robots in an area or perhaps you have wireless sensors distributed in an environment. However, you would like to get them all synchronized so that they have the same time or that they do an action all simultaneously. How would you do this?

This is a problem of time and how to keep it across multiple locations on different computing resources. One solution is to just start all the nodes at the same time so they all have the same digital clock time. But this requires time-keeping hardware and the guarantee that nothing will ever happen to the nodes that will cause them to lose power temporarily or drift from their synchronized time. In fact, processors track time at different rates for a variety of different reasons, but this isn't important right now. The question is, what are ways you can synchronize these nodes?

Assuming you have the ability to communicate with your neighbors over a wireless link, you can implement something called a pulse-coupled oscillator. This is essentially modeled after natural phenomena such as crickets, fireflies, and pacemaker cells. Each individual member of the population emits a periodic "cry" or "signal". In the case of crickets, it's the chirp. In the case of the firefly, it's the bioluminescent flash of its abdomen. Over time, the periodic pulses of the individuals begin to synchronize as each one gradually adjusts its period to that of its neighbors. This allows you to synchronize the whole population without any notion of "global time" or a centralized time-keeper.
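
To make the idea concrete, here's a minimal simulation sketch of pulse-coupled oscillators. This is not the scheme from the paper below, just the basic mechanism: every node advances its phase with its own slightly drifting clock, fires and resets when the phase wraps around, and nudges its phase forward whenever it hears a neighbor fire. The constants and the all-to-all broadcast are my own assumptions for illustration.

    import random

    N = 10          # number of nodes
    EPSILON = 0.05  # coupling strength: how far a received pulse advances a phase
    DT = 0.001      # simulation time step
    STEPS = 100000

    # Each node has a phase in [0, 1) and a slightly different natural
    # frequency, standing in for clock drift between cheap processors.
    phases = [random.random() for _ in range(N)]
    freqs = [1.0 + random.uniform(-0.02, 0.02) for _ in range(N)]
    fire_times = [[] for _ in range(N)]

    t = 0.0
    for step in range(STEPS):
        t += DT
        # advance every phase according to its own clock
        for i in range(N):
            phases[i] += freqs[i] * DT
        # any node whose phase crosses 1 "fires" (broadcasts a pulse) and resets
        fired = [i for i in range(N) if phases[i] >= 1.0]
        for i in fired:
            phases[i] = 0.0
            fire_times[i].append(t)
        # every node that hears a pulse nudges its own phase forward a little;
        # nodes already close to firing get pushed over the edge, so groups merge
        if fired:
            for j in range(N):
                if j not in fired:
                    phases[j] = min(1.0, phases[j] + EPSILON * len(fired))

    # after a while the most recent firing times nearly coincide across all nodes
    print([round(times[-1], 3) for times in fire_times])

After enough cycles the firing times cluster together, which is exactly the property a sensor network can exploit to have all the radios wake up and transmit at the same moment.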

This phenomenon is very interesting and if you're interested in knowing how to implement it in a technological system, you can read this paper[1] on how they used it in coordinating the data transmission times in a wireless sensor network to conserve power.

[1] Wakamiya, N., Murata, M., "Scalable and Robust Scheme for Data Fusion in Sensor Networks", Proceedings of the First International Workshop on Biologically Inspired Approaches to Advanced Information Technology, Lausanne, pp. 305-314, January 2004

Thursday, April 07, 2005

Robot Software

One of the difficulties of doing research on robotic platforms is the lack of available software to accomplish basic functions like obstacle avoidance, mapping, vision, etc. For those software components that are available, they only work in limited circumstances on certain platforms. This situation makes the prospect of doing your own original robotics application an especially daunting task because you will have to re-write a lot of software to accomplish basic functions-- in effect, re-inventing the wheel.

The study of Robot Software Architectures seeks to alleviate the numerous problems encountered by novice robot programmers by providing easy and simple frameworks in which to test applications and re-utilize old software written by other groups for possibly different robot platforms.

MARIE is a project that seeks to unify and integrate all the various disparate software packages and testing suites. This would allow a person to rapidly develop and test software using tools that were not designed to work together.

Player is an abstracted device interface to various robotics research platforms. This allows you to run your programs on the real robot as well as on the two simulation environments, Stage and Gazebo.
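
To illustrate what a device abstraction buys you, here's a toy sketch. This is not the actual Player API, and all the class names are made up: the point is just that control logic written against abstract sensor and motor interfaces can be pointed at a simulator or, in principle, at real hardware.

    # Toy illustration of a device abstraction layer (not the real Player API).

    class RangeSensor:
        """Abstract interface: distance to the nearest obstacle ahead, in meters."""
        def read(self):
            raise NotImplementedError

    class MotorBase:
        """Abstract interface: command forward and turning speeds."""
        def set_speed(self, forward, turn):
            raise NotImplementedError

    class SimulatedSensor(RangeSensor):
        def __init__(self, distance=2.0):
            self.distance = distance
        def read(self):
            return self.distance

    class SimulatedBase(MotorBase):
        def set_speed(self, forward, turn):
            print(f"sim base: forward={forward:.2f} m/s, turn={turn:.2f} rad/s")

    def avoid_obstacles(sensor, base, steps=5):
        # The control logic never mentions a concrete robot or simulator.
        for _ in range(steps):
            if sensor.read() < 0.5:
                base.set_speed(0.0, 0.5)   # too close: stop and turn
            else:
                base.set_speed(0.3, 0.0)   # clear ahead: drive forward

    avoid_obstacles(SimulatedSensor(), SimulatedBase())

Swapping in classes backed by real hardware would leave avoid_obstacles untouched, which is the whole point of an abstracted device interface.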

Thursday, March 10, 2005

Conflicts of Resources

One of the things that you will discover when you try to build your own autonomous robot is the competing and conflicting demands for resources. These generally include the following:
  • Sensors
  • Computation
  • Power
  • Communication
  • Motor Speed / Torque
  • Size
  • Time
You want your robot to do smart things so you need computation. But running an 800MHz processor takes a lot of power. Presumably you won't be plugging your robot into the wall, so you'll need to carry on-board batteries. If you need lots of power, you'll need big batteries which take up space and will increase the weight of your robot and demand more power from your motors which in turn demand more power from your batteries.
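
As a rough illustration of how these constraints chain together, here's a back-of-envelope power budget. Every number below is made up for the sake of the example.

    # Back-of-envelope power budget with invented numbers.
    processor_w = 6.0      # an 800MHz embedded board might draw a few watts
    sensors_w = 3.0
    radio_w = 1.5
    motors_w = 20.0        # average draw while driving; peaks are higher
    runtime_h = 2.0        # desired operating time in hours

    total_w = processor_w + sensors_w + radio_w + motors_w
    energy_wh = total_w * runtime_h

    # Assume a battery pack with roughly 100 Wh per kilogram (lithium-ion class).
    energy_density_wh_per_kg = 100.0
    battery_kg = energy_wh / energy_density_wh_per_kg

    print(f"total draw: {total_w:.1f} W, energy needed: {energy_wh:.1f} Wh")
    print(f"battery mass: ~{battery_kg:.2f} kg")

And of course a heavier battery means bigger motors and a bigger frame, which raises the power draw again, so in practice you iterate this calculation a few times.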

Communication is a requirement for some robots to transmit data to their masters or to their robot brethren. In some cases, communication is also a way of circumventing the need for bulky on-board computation and just running most of the code on a server. But communication also requires power. And communication also adds delay to the performance of your robot.



You can begin to see the interconnecting constraints that all of these resources impose on each other. This is where engineering comes in and you need to roll up your sleeves and just try some designs, gain some experience, and learn what you can. Talking to others who have run this gauntlet can be helpful, but they may have taken a different path and missed some shortcuts along the way. Plus, technology improves over time and becomes cheaper. Things that were impossible 5 years ago may now be practical.

Wednesday, March 09, 2005

Future of Construction Robots

One of the big applications I see for the future of robotics is in construction. In particular, I think the field is ripe for earth-moving robots. We currently have all the raw materials in the form of large manually-operated construction vehicles from the likes of John Deere and Caterpillar. All that is needed is to give these vehicles electronic controls and put a computer in charge of their operation.

What makes the task of earth-moving easier from an autonomous robot perspective is that there aren't many activities that require the dexterous manipulation of human hands. Most of the work can be done using large equipment and thus lends itself to automation.

However, there remains an economic question that needs to be resolved. Why should a construction company buy an autonomous tractor that is complicated to control, only good for a specialized purpose, a potential safety hazard, and a liability, instead of just hiring a worker at an hourly wage?

This is something that needs to be addressed, but I think the answer to this question is scale. An autonomous tractor would be useful and economic for very large projects. Suppose you are digging a strip-mine. It would save a lot of money if you put robots in charge of hauling dirt and digging into the earth instead of having to pay humans to do the work *and* have to shut down during the night.

Farming could be another application where an automaton could be of use. There are large expansive farms that require tilling, fertilizing, and harvesting. These farms are often thousands of acres in size. The current method of doing things is only possible with lots of cheap immigrant labor. There could be a niche for a robot if the cost of labor goes up dramatically, or the selling price of crops goes down dramatically. Or maybe yet, a robot could even undercut cheap labor.

Thursday, March 03, 2005

Walking Gaits

The study of walking gaits is the study of legged motion and how legs can be used to produce different kinds of movement. We're interested in learning how to control these legs, and we also want to extract general principles of legged motion so we can easily port them to novel robot designs.

There are the typical bipeds, two-legged creatures such as humans. And there are the quadrupeds and hexapods, creatures with 4 and 6 legs respectively.

We often look to biological organisms and their solutions for legged motion. You can look at Kimura for examples of taking biological concepts and porting them to robots. I'm studying quadrupeds right now, so most of the papers of his I've read are on quadrupeds, but I believe he does more general research.

Basically, they use neural oscillators to generate the movements of the joints. All of the oscillators for each joint are coupled, or "mutually entrained" as he says. I believe this also means that they're conditioned to respond to each other effectively just like you would train any neural network. This also makes the robot control adaptive since the oscillators are sensitive to the joint angles.
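
Here's a much-simplified sketch of that idea using coupled phase oscillators. This is not Kimura's actual neural-oscillator model, and it leaves out the feedback from the joint angles; it only shows how coupling pulls per-joint oscillators into fixed phase relationships, in this case a trot-like pattern for four hips. All the constants are made up.

    import math

    DT = 0.01
    FREQ = 2.0 * math.pi * 1.0         # 1 Hz stepping frequency
    K = 2.0                            # coupling gain
    # desired phase offsets for 4 legs: diagonal pairs move together (trot)
    offsets = [0.0, math.pi, math.pi, 0.0]
    phases = [0.3, 0.1, 2.0, 1.5]      # start out of sync

    for step in range(2000):
        new_phases = []
        for i in range(4):
            coupling = 0.0
            for j in range(4):
                # pull oscillator i toward its desired offset relative to j
                coupling += math.sin(phases[j] - phases[i] - (offsets[j] - offsets[i]))
            new_phases.append(phases[i] + DT * (FREQ + K * coupling))
        phases = new_phases

    # hip joint angle commands derived from the (now entrained) phases
    hip_angles = [0.4 * math.sin(p) for p in phases]
    print([round(a, 2) for a in hip_angles])

In Kimura's systems the oscillators are also driven by the sensed joint angles, which is what makes the control adaptive rather than a fixed pattern generator.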

Of course, the problem with this is that you need to be doing numerical operations in real-time and also communicate the entrainment between all the joints in real-time. This is easy to do if you build a custom robot for this application, but on a generic robot platform, communication and computation becomes an issue.

Monday, February 28, 2005

This link at Robots.net, and the article originally from the Guardian Unlimited, talk about DARPA's plan to introduce legged payload carriers to the US military. Specifically, a 4-legged robot named BigDog that's quite agile compared to most legged robots today.

I've heard of this before, but I can't remember where. I believe I attended a presentation on this concept. Basically this is all about logistics. Soldiers are carrying too much crap in the field, and all the new gadgets that are coming out are adding more weight than they can physically carry.

Enter an autonomous cargo carrier. This allows you to offload the duty of carrying crap onto a machine and leaves the soldier free to move about, scout, kill things, whatever. The robot carrier is also expendable, whereas the soldier is usually more valuable.

Given that one of the weakest points in today's US military is logistics transportation, as seen in the roadside attacks in Iraq, it would be a boon if they could offload this job to expendable autonomous cargo vehicles. The military is trying to achieve this from many different angles, and from what I remember from DARPA, their goal is to have something like 30%-40% of all military vehicles unmanned by 2015.

Jacob Everist

Robotics Research
Lab Work and Robotics Wiki

It's been a while since my last diary entry, but I've been
working hard on my lab projects. Unfortunately I can't talk
much about it until we get our NASA contract signed.

In other news, I've launched a robotics wiki to try and make a
collaborative resource for robotics. It's got a lot of
stuff in it already for being such a new wiki. I've also
decided to make the content available under a Creative Commons
license, so that it's essentially a free resource.

Most of my time is spent on IRC chatting about robots and
helping people with their projects. You can find me on
irc.freenode.org in #robotics. If you don't know how to use
IRC, read here

The wiki URL is: roboticswiki.org

I'm also very interested in compiling a robot zoo, or an exhaustive list of robots past and present. My vision is for it to become a kind of historical record of the robot population. I acknowledge that this is a highly difficult task, but I think it can be accomplished with a collaborative effort.

Jacob Everist

Tuesday, February 22, 2005

Hard Problems in Robotics

There are so many things that need to be solved in robotics that it's difficult to choose just one to work on. I think that's a problem I have with focus. Even a seemingly simple problem opens up a whole can of worms of unsolved issues.

Suppose you consider the problem of a robot exploring some unknown planetary terrain. Let's assume that all the high-level directives and navigation commands come from our human masters off-site or perhaps off-planet. Our focus should be on moving the robot around safely, keeping it stable, preventing damage, and conserving power. Our problem is this: how do we make this robot adapt to its environment?

First you have to define what you mean by environments. In this case, we mean variations in terrain such as rockiness, surface traction, soil looseness or hardness, inclines, non-uniformities in profile, pits or obstacles, and perhaps dynamic changes such as wind, water, or creatures.

What do we mean by adapt? Given a task and the current environment, we want the robot to tailor its actions over time to maximize performance. But what are we tuning? What performance are we maximizing? How is the robot measuring its own performance? These are all big issues that need to be solved, and their solutions may only apply in limited situations and may be very difficult to generalize.

Sunday, November 14, 2004

Robosphere 2004, Nov. 9-10 Eyewitness Account (Part 2)

I should mention that as a condition for being allowed to
attend the Robosphere workshop for free, I was supposed to
volunteer my time to help with the proceedings. I spent the
early part of lunch getting briefed on my responsibilities.
My job title for the first day was Miscellaneous. I was
essentially supposed to sit near the entrance hall and take
care of anything that didn't fall under the category of
registration, audio/visual, or video recording. I think the
only thing I did was to ask someone if they needed anything.
They said "no".

So after briefing, I went to get some lunch. I think the
first person I met was Greg Hornby. He was a former
student of Jordan Pollack at Brandeis University and is now
working at NASA Ames Research Center. I didn't know this at
the time, but I did ask him a
little about his work. He mentioned that his work was on
evolutionary algorithms. I wasn't particularly interested
at the time because I've heard that word thrown about so
often that I just considered it a dead-end buzzword. I came
to appreciate it later in the workshop.

His work is essentially in evolutionary algorithms for
computer-automated design. The way this works is that you
give the computer a bunch of parameters, a task, and a
simulation environment. The computer then proceeds to make
up a robot design and test it in the simulation environment.
If it does well, it tries to improve on that design. If it
does poorly, it discards those changes and tries another
route. What's interesting is that this is done in an
analogous way to natural selection in the process of
evolution. Many populations of different robots with
different designs are tested in a simulation environment.
The poor performers are terminated and the good performers
are allowed to continue existing. In addition, the good
performers are allowed to "mate" and create hybrid designs
that try to combine the best features of both performers.

I think I talked more about this here
in my Oct. 22 entry last year. This is a very interesting
approach to robot design that is starting to show that it
has more and more useful applications.

Someone else I met at lunch was Greg Chirikjian and one of
his students, Kiju Lee, from Johns Hopkins University. His
work involves self-replicating robots. I saw him once at a
robotics conference in New Orleans, but didn't have a chance
to talk to him.

The idea of self-replicating robots is that you send a small
colony of robots to some place like the Moon or Mars. Once
there, they utilize the resources that they find in the soil
or regolith, synthesize this into parts, and make copies of
themselves. This essentially allows the robots to multiply
completely on their own and saves you lots of time and money
in shipping extra robots and parts to these
extra-terrestrial environments.

This may sound cool or scary, but it's a really really
difficult problem. This idea has been around for quite a
while, but not taken very seriously because the technology
just isn't there. Chirikjian has shown several
demonstrations of an actual self-replicating robot making a
copy of itself in an autonomous manner. Admittedly, the
robot is made out of Legos and there are only four parts to
be assembled to make an extra robot copy. Nevertheless, the
feat is amazing to anyone who has actually tried to do something
like this. Go ahead and try to do it yourself. You'll be
surprised how hard it is to make a robot that can build and
assemble an exact copy of itself-- even a simplified assembly.

After lunch, it was time to head back into the conference
hall and assume my post as Miscellaneous man.

The first speaker was Ashley Stroupe from NASA JPL in
Pasadena, California. She's one of the staffers on the Mars
Rover projects, so she immediately has a superstar presence.
Not that people are asking her for her autograph, but
people are interested in her work in the normal humdrum way
of academics.

But this talk was not about the Mars rovers. This talk was
about autonomous construction robots. More specifically,
"Sustainable Cooperative Robotic Technologies for Human and
Robotic Outpost Infrastructure Construction and
Maintenance". Their work essentially consisted of two
rover-like robots, with manipulator arms, and operating in a
sandbox. There task was to go to two sides of a beam, pick
up the beam in a synchronized manner, turn around and carry
it to the construction site, and set the beam down aligned
on top of another beam. This was done in a completely
autonomous manner with no human intervention. This was the
culmination of about 9 months worth of work.

I was deeply impressed. I've had experience trying to get
mobile robots to interact with and manipulate objects in a
precise way, and have realized all the difficulties. It's particularly
difficult for mobile robots since you don't have a fixed
reference frame to work with. The robot has to be adaptive
and able to perceive the object well enough to know where it
is and understand how it affects the object. Their only
sensors in this feat were a camera and some force-torque
sensors on the arms to keep the robots synchronized while
carrying the beam. Some fiducial markers were placed on the
beams and the cameras were used to detect them so the robots
could more easily position themselves and the beams.

So currently the project at NASA is in very preliminary
stages and it's not clear whether it will continue to be
funded, but this project seems to align very well with
NASA's new space objectives outlined by President Bush a
year ago, which include robotic and manned missions to the
Moon and Mars.

More later.

Jacob Everist

Saturday, November 13, 2004

Robosphere 2004, Nov. 9-10
Eyewitness Account


Robosphere is a bi-annual robotics workshop that focuses on
deploying robots into space for long-term operations. This
includes adding robustness to existing robot platforms and
making robots more and more self-sufficient such as being
able to self-repair, perform habitat construction, utilize
in-situ resources, and be more autonomous.

I wasn't around for last year's workshop, but this year it
was at NASA Ames in Moffett Field, CA near San Jose. I live
in Los Angeles, so we decided to drive up, which takes
about 5.5 hours.

So three of my colleagues and I, plus Prof. Wei-Min Shen (http://www.isi.edu/~shen/),
gathered together at 5:30am and took a rental van up to San
Jose. We stopped once to have some breakfast at some
fast-food town, but we spent a lot of the time napping for
the sleep we didn't get during the night. I'm not sure what
we talked about, but I think we mostly did some joking around.

Finally, we arrived at NASA Ames at 11:15am. Only 3 hours
and 15 minutes late! Sadly, we missed some of the speakers
already who talked about habitat construction for
extra-terrestrial environments. According to the program, I
missed the following presentations:

  • "Mobile lunar and planetary base architectures", Marc
    Cohen, NASA Ames Research Center
  • "Mobitat: Mobile Modular Habitat", A. Scott Howe, Plug-in
    Creations Architecture
  • "LB1 - A Case Study: A Lunar Habitat as a Self-sustaining
    Intelligent Robotic System", Susmita Mohanty, Moonfront LLC
  • "Radiation and Micro-meteorite Shielded Lunar Habitat
    Formation by Semi-autonomous Robotic Excavation", Dr. Lyman
    Hazelton, KinetX Inc.

Notice that a lot of the speakers are from companies. I
think that's particularly interesting since from my
experience at academic conferences, most people have been
from universities or research institutions. There were
quite a few people from companies at this workshop.

So the next section of talks was about "Robotic
Colonies/Ecologies". These talks essentially boiled down to
how to control many robots over a long period of time and
have them adapt to the environment and needed tasks.

Anthony Enguirda of Griffith University in Australia started
off by describing the concept of the robot colony and how it
differs from the conventional paradigm. I arrived right in
the middle of this talk, so I didn't learn very much. I'm
looking at the accompanying paper in the proceedings, and it
looks interesting, but it doesn't seem like there's a lot of
substance. Of course a lot of this workshop was focused on
wild speculation and the introduction of new ideas, so I
think this is acceptable. I'll have to focus on this paper
more closely when I have time.

Hamid Berenji of Intelligent Inference Systems Corp. gave a
talk about Dynamic Case-Based Reasoning (DCBR). This was a
method for robots to ascertain their state and recover from
faults and error conditions. This looked a lot like an
expert system with the capability to generalize and adapt to
the situation. This seems effective, but the drawback to
any type of system like this is it requires a heck of a lot
of pre-programming of all the fault cases you can think of.

Finally, Zhihua Qu of University of Central Florida gave a
talk about a control-theoretic approach to controlling a
large population of robots. It basically takes a huge
matrix representing the state of every robot in the
population and you add an extra row/column that is the human
controller. Then you act on this matrix with your control
inputs. It's a very interesting approach and I wonder how
you could apply it. It seems in order for this control
system to actually work, you need to know the state of all
the robots and they need to receive your appropriate inputs.
How do you do this in a physical space with poor
communication? I don't know. Maybe there's an assumption in
here that I didn't get.

After this, we went to lunch. More later.

Monday, May 17, 2004

"Visualisation of Surveillance Coverage by Latency Mapping"

A paper by Patrick Chisan Hew in 2003 from the Defence Science and Technology Organisation in Australia.

Hew describes a technique for visualizing information about surveillance coverage by mobile sensor platforms.

There are three key concepts that explain the approach:

1. Swath - This is a space region R covered by a mobile sensor over a time interval T. That is, the union of all regions covered by the sensor during the time interval T.

2. Scan History - The record over a history of the value {1,0} at a point or region, indicating its instantaneous coverage by a sensor: 1 if the point or region is instantaneously covered and 0 otherwise. This reflects whether, and for how long, the given point or region is covered by a sensor during the history.

3. Latency Mapping - A visualization method where the regions of space that have a short time since they were last covered give a strong color or intensity and regions with a long time since they were last covered give a weak color or intensity. In addition there is a gradient of color in between the extrema of durations.

The use of latency mapping and latency history gives one the operational ability to visualize the sensor coverage of one's network or resources. This allows the ability to make judgements or design decisions.
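
As a toy illustration of latency mapping (my reading of the idea, not code from the paper), here's a small grid-world sketch: one sweeping sensor, a record of when each cell was last covered, and an intensity derived from the time since last coverage. The sensor path, grid size, and text rendering are all invented.

    # Toy latency map on a grid: '#' = covered recently, '+' = covered a while
    # ago, '.' = stale or never covered.
    GRID = 20
    RADIUS = 3
    T_END = 100

    # last time each cell was inside the sensor footprint (None = never covered)
    last_seen = [[None] * GRID for _ in range(GRID)]

    def sensor_position(t):
        # a single mobile sensor sweeping back and forth across the middle row
        x = t % (2 * GRID)
        return (x if x < GRID else 2 * GRID - 1 - x, GRID // 2)

    for t in range(T_END):
        sx, sy = sensor_position(t)
        for y in range(GRID):
            for x in range(GRID):
                if (x - sx) ** 2 + (y - sy) ** 2 <= RADIUS ** 2:
                    last_seen[y][x] = t

    def intensity(t_now, seen, horizon=50.0):
        # recently covered -> close to 1, stale or never covered -> close to 0
        if seen is None:
            return 0.0
        return max(0.0, 1.0 - (t_now - seen) / horizon)

    for row in last_seen:
        print("".join("#" if intensity(T_END, c) > 0.5 else
                      "+" if intensity(T_END, c) > 0.0 else "." for c in row))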

What is useful about this method is that the details of the sensor modality and environmental conditions are abstracted away from the visualization. However, this is also a drawback, because it does not convey how difficult it is to monitor a target or how costly it is to employ a sensor. In addition, the sensors, environmental conditions, and target characteristics need to be modeled to give good specifications of the coverage capability of each sensor.

Sunday, May 16, 2004

"Surveillance Coverage of Sensor Networks under a Random Mobility Strategy"

This is a paper by Kesidis, Konstantopoulus, and Phoha about the surveillance coverage of mobile sensors in a 2d or 3d space. They assume the sensors move in a Brownian motion strategy and that the sensors have a uniform spherical surveillance coverage around each device.

They give a mathematical formulation to calculate the following quantities about their strategy:

1. Contact time between sensor nodes

2. Time-until-detection of slow moving objects

3. Effects of mobility variance and sensor density

This is a good mathematical starting point to make proofs about surveillance coverage. However, you need some knowledge about set theory and probability to understand their formulation.
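
As a sanity check on a quantity like time-until-detection, you can also estimate it by simulation. Below is a Monte Carlo sketch (not the paper's derivation) for a stationary target and sensors doing a random walk, each with a circular coverage disc; all the numbers are arbitrary.

    import math, random

    AREA = 100.0        # side length of the square region
    N_SENSORS = 20
    R = 2.0             # sensing radius
    SIGMA = 1.0         # random-walk step scale per time step
    TRIALS = 100
    MAX_T = 5000

    def one_trial():
        target = (random.uniform(0, AREA), random.uniform(0, AREA))
        sensors = [[random.uniform(0, AREA), random.uniform(0, AREA)]
                   for _ in range(N_SENSORS)]
        for t in range(MAX_T):
            for s in sensors:
                # Brownian-like step, clamped at the region boundary
                s[0] = min(AREA, max(0.0, s[0] + random.gauss(0, SIGMA)))
                s[1] = min(AREA, max(0.0, s[1] + random.gauss(0, SIGMA)))
                if math.hypot(s[0] - target[0], s[1] - target[1]) <= R:
                    return t
        return MAX_T  # not detected within the horizon

    times = [one_trial() for _ in range(TRIALS)]
    print("mean time until detection:", sum(times) / len(times))

Varying SIGMA and N_SENSORS in a simulation like this is a quick way to build intuition for the mobility-variance and sensor-density effects the paper analyzes formally.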

There are some limitations however. This strategy uses random mobility which makes things easy to formulate, but it's not clear whether this formulation will apply to deliberative surveillance strategies. In addition, the spherical sensor coverage of mobile sensors may not be a good model of our system. One would need different shapes such as cones to account for a camera surveillance network.

Further research will give us insight into what other methodologies we can use.

Wednesday, February 11, 2004

Self-Assembly Videos

I just released a movie yesterday on the progress of my work in space self-assembly.

This demonstrates a single assembly step of connecting two beams together. The two robots are connected by a variable-length tether that holds them together. We call this the "mirror roll", since the robots need to be symmetrically aligned for this to work properly.

This is the first step in assembling a triangle. The ultimate goal is to create trusses out of this method. And eventually large-scale structures.

There was also the previous video of beam docking.

Wednesday, February 04, 2004

The Meaning of Task

I think one of the fundamental ways we evaluate nearly all robotics applications or theories is based on the almighty task. If we define a task, then we can say what the requirements are, what information is needed, and how well a particular robot is accomplishing something.

One needs to take into account that we define tasks in terms of human perceptual schemas and motor skills. What may seem easy to us can be very difficult for a robot to accomplish-- maybe even impossible.

Perhaps task needs to be defined a little better. Let's see what others have said about tasks.

From Donald, B. R. and Jennings J., Constructive Recognizability for Task-Directed Robot Programming, Proc. IEEE ICRA, Nice, France (1992):

There is a task we wish the mobile robot to perform, and the task is specified in terms of external (e.g., human-specified) perceptual categories. For example, these terms might be "concepts" like wall, door, hallway, or Professor Hopcroft. The task may be specified in these terms by imagining the robot has virtual sensors which can recognize these objects (e.g., a wall sensor) and their "parameters" (e.g., length, orientation, etc.) Now, of course the physical robot is not equipped with such sensors, but instead is armed with certain concrete physical sensors, plus the power to retain history and to compute. The task-level programming problem lies in implementing the virtual sensors in terms of the concrete robot capabilities.
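
To make the virtual-sensor idea concrete, here's a toy example of my own (not from the paper): a "wall sensor" implemented in terms of a concrete range sensor plus computation, by checking whether a range scan is roughly flat. The field of view, tolerance, and scan values are all assumptions.

    import math

    FOV = math.radians(90)  # assumed field of view of the range scan

    def scan_to_points(ranges):
        # convert a range scan (meters) across the field of view into (x, y) points
        n = len(ranges)
        angles = [-FOV / 2 + FOV * i / (n - 1) for i in range(n)]
        return [(r * math.cos(a), r * math.sin(a)) for r, a in zip(ranges, angles)]

    def wall_sensor(ranges, tolerance=0.05):
        """Virtual sensor: True if the scan looks like a single flat wall."""
        pts = scan_to_points(ranges)
        (x0, y0), (x1, y1) = pts[0], pts[-1]
        length = math.hypot(x1 - x0, y1 - y0)
        # perpendicular distance of each interior point from the endpoint line
        for (x, y) in pts[1:-1]:
            d = abs((x1 - x0) * (y0 - y) - (x0 - x) * (y1 - y0)) / length
            if d > tolerance:
                return False
        return True

    # a scan of a straight wall one meter ahead, versus an inside corner
    angles = [-FOV / 2 + FOV * i / 4 for i in range(5)]
    wall_scan = [1.0 / math.cos(a) for a in angles]
    print(wall_sensor(wall_scan))                   # True
    print(wall_sensor([1.0, 0.7, 0.5, 0.7, 1.0]))   # False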

This only talks about the perceptual aspects of task and says nothing of the functional implications. We as humans define the task, but our functional description is derived from our motor experience. Perhaps this is an egregious form of the inverse kinematics problem? There is no single answer to the functional translation of task to robot language.

Friday, January 30, 2004

Integrating the Physical and Computational Worlds

An agent is an abstract computer science term for a black box that takes a set of percepts from the environment, performs some "computation", and exerts a set of actions on that environment. This is a useful concept for talking about AI-like entities, or just about anything (even non-AI things), within a single theoretical framework.
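
A minimal sketch of that abstraction (the names here are illustrative, not from any particular framework): an agent is simply something that maps percepts to actions, and even a plain thermostat fits the mold.

    class Agent:
        def act(self, percepts):
            """Given a dict of percepts, return a dict of actions."""
            raise NotImplementedError

    class Thermostat(Agent):
        # even a decidedly non-AI thermostat fits the agent framework
        def __init__(self, setpoint):
            self.setpoint = setpoint
        def act(self, percepts):
            return {"heater_on": percepts["temperature"] < self.setpoint}

    agent = Thermostat(setpoint=20.0)
    print(agent.act({"temperature": 17.5}))   # {'heater_on': True}
    print(agent.act({"temperature": 22.0}))   # {'heater_on': False}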

Embodiment is when an agent resides inside a body of some kind within an environment. This means that the agent is part of the environment and has the restrictions and perspectives imparted by that body. Not all agents are embodied.

A robot is an example of a physically embodied agent, i.e. it does not live in a simulated world. However, we have now stepped over a line and have left the world of pure computer science, creeping into the difficult and unpleasant world of mechanics, dynamics, frictions, electronics, gases, other embodied agents, societies, uncertainty, noise, and risk.

I believe that in order to truly understand how one can live and act within a world, we need a truly integrative approach to the physical world and the transportation and processing of information. Simply calling this work "interdisciplinary" is just a marketing term that does not convey the full revolution I am proposing.

Animals and human beings have achieved a careful and articulate equilibrium between processing information and interacting with the world with naturally-formed mechanical bodies gradually created over millions of years of evolution. To understand not the details of the parts of humans and animals, but the general underlying principles of living and surviving in the world to accomplish tasks, we must derive a general theory that closely couples a theory of information with a theory of the physical world.

This might lead some of you into fuzzy philosophical questions, which I will not address here. However, some that I believe may be answered as we continue our work are the following:

What is information? What is computation?

How are these questions related? Are they the same? What vocabulary are we missing?

I don't claim to know the answer to these questions, but I hope that as we proceed to stack the building blocks of our theory, we may gradually achieve a glimpse of these notions.

Thursday, January 29, 2004

Robotic Basics

In order to create a general theory of robotics, you need to get down to simple basics of what exactly a robot is. I will ignore all the pop culture perception of robots and create my own working definition:

A robot is an autonomous machine composed of sensors and effectors that acts in the real-world to accomplish a task.

What's interesting is what this throws out as not being a robot. Sensorless mechanical arms are not included in this definition. A robot should perceive its environment in some way and act on it.

Programs that run in simulated worlds are not robots. In order to classify something as a robot, it must exist in the real physical world and deal with that world's uncertainties, noise, change, lack of structure, and physical laws. Although simulations can be used in design of robots, they must always be verified in the real world.

Autonomous machines that have no tasks to accomplish are not robots (although they could be if you defined their task as "wandering aimlessly"). This is just a working definition that makes our analysis much easier if we always assume there is a goal.

Saturday, January 24, 2004

Mars Rover to Land Tonight

The Opportunity Mars rover lands tonight, Saturday the 24th, at 9pm PST (Sunday the 25th at 5am UTC).

Apparently the command center at JPL in Pasadena will be televised on C-SPAN for those of you who have cable television.

NASA TV is also available on the web, but it will probably be very crowded around that time.

In the meantime, check out this awesome satellite picture animation of Spirit's landing zone.

Since Spirit has been having problems, there's been a slight shift in the plans. The team was originally going to split up, with the experts shifting their attention to Opportunity's deployment and Spirit continuing its exploration. However, since they still don't know what happened with Spirit, most of the experts are still working on it.

They don't want the same thing that happened to Spirit to happen to Opportunity, so it will be deployed much more slowly and wait for Spirit's diagnosis before taking much action.

Wednesday, October 29, 2003

Back From Modular Robotics Workshop

I just got back yesterday from Las Vegas where the IROS conference is happening. That's the "International Conference on Intelligent Robots and Systems". However, I showed up before it started and left the day it began. I attended a workshop that preceded the general conference organized by Mark Yim and his crew from the "smart matter" division at PARC.

Specifically, we got to learn how to use the Polybot modules, put them together into certain forms, and rapidly program them to do locomotion or other general movement. Basically, the Polybots are individual modules with one degree of freedom. You can manually screw these together into any configuration you want. After you have your desired form, you then use the software suite to program them to move.

One of the most impressive things was the record feature. Since every module adds 1 DOF to the system, and you typically use 10 or more modules to build arbitrary shapes, the system becomes very difficult to program and control! However, the record feature allows you to hold the robot in particular configurations, take a snapshot, then select the next configuration. You can then play back this series of configuration snapshots with a gradual transition between each one and create any type of locomotion you want. My colleague and I made a quadruped crawl with this feature in 10 minutes.
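
Here's a rough sketch of how a record-and-playback feature like that could work. This is my guess at the mechanism, not PARC's actual software: store joint-angle snapshots, then interpolate between them to get the gradual transitions. The snapshot values and timing are invented.

    # Record-and-playback sketch: recorded joint-angle snapshots (degrees) for,
    # say, four modules, played back in a loop with linear interpolation.
    SNAPSHOTS = [
        [0, 0, 0, 0],
        [30, -30, 30, -30],
        [0, 0, 0, 0],
        [-30, 30, -30, 30],
    ]
    SEGMENT_TIME = 1.0       # seconds to transition between consecutive snapshots

    def joint_targets(t):
        """Joint angles at time t, looping over the snapshot sequence."""
        cycle = SEGMENT_TIME * len(SNAPSHOTS)
        t = t % cycle
        i = int(t // SEGMENT_TIME)
        frac = (t % SEGMENT_TIME) / SEGMENT_TIME
        a = SNAPSHOTS[i]
        b = SNAPSHOTS[(i + 1) % len(SNAPSHOTS)]
        # linear interpolation gives the gradual transition between poses
        return [ai + frac * (bi - ai) for ai, bi in zip(a, b)]

    for t in [0.0, 0.5, 1.0, 1.5, 2.0]:
        print(t, [round(x, 1) for x in joint_targets(t)])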

This workshop was a day-long event where we did a lot of exercises to practice using the system. There were a few breakdowns, of course, especially when systems are taken outside of the laboratory. In particular, the serial modules that communicate with and power the Polybot arrays were frying for unknown reasons, so by the end of the day we were down to only 3 left. However, only a few individual modules malfunctioned, which is an enormous accomplishment considering how notoriously difficult it is to maintain hundreds of robots and the amount of abuse we were visiting upon them with our wild and erratic locomotions.

There were two competitions during the day. The first was a race to see who could come up with the fastest locomotions. There were some very interesting attempts from people who have never had experience with modular robots. I believe the winner was a snake performing a sinusoidal gait with some passive components to keep it stable and give it better traction. My friend and I wanted to make a really ambitious configuration, so we tried to implement the rolling gait where you connect modules together in a circle. However, we quickly learned that there is too much calibration to be done to get it to work quickly and we only had 8 modules to work with whereas Yim did the rolling gait with 10.

Realizing this, we quickly reconfigured the modules to implement a lazy biped that uses its two feet to push itself along while resting and sliding on a pair of crutches. Unfortunately this didn't work out very well, since we couldn't get effective forward motion. Finally, we decided on the quadruped, which took about 20 minutes to assemble. We were late in finishing the design, but I think people thought we had the second-most ingenious design. The most innovative design was a somersaulting robot designed by one of my other colleagues, who was inspired by one of the locomotions he saw in Karl Sims's work.

The final competition, following the theme of Las Vegas, was to design a robot that would approach a slot machine, put a coin in the slot, pull the lever, and catch the coins that popped out with a cup. This was not an autonomous requirement since we could control the robot serially from a laptop. We split into two teams and spent about 3 hours working on the problem. In fact we went over our allotted time by an hour.

My team was fairly successful. We realized that these modules are really good at implementing caterpillar gaits, but the gaits by themselves are very unstable and incapable of changing direction. We decided to implement two caterpillars in parallel to each other, connect them together by a beam, and attach a robotic arm to the beam. The result was a crazy robot that was able to move around and turn in place using four snake appendages, and a robotic arm that was used to manipulate the slot machine. To my knowledge, no one has ever come up with a design quite like this. I'm quite proud of our innovative design and the fact that we were able to put the coin in the slot and push the lever. Sadly, we were unable to catch the coins, since the time available to move the cup into position was too short for successful teleoperation. I should know, since I was the one at the controls!

The other team was not so successful. They tried to build a very long and large robotic arm and use recorded motion to interact with the slot machine. However, there were so many DOF (approx. 20), and the motor dead-reckoning was not accurate enough, that the errors quickly added up and the arm was unable to reach the correct positions. Implementing a manual interface would have been impractical in the amount of time allotted, so they reverted to a simpler design. However, the team had begun to disintegrate by this time, and they were unable to make a successful showing during their demonstration.

All in all, I really enjoyed this workshop. Mark has promised to send us all the videos he took of the workshop, so hopefully I will have some videos to link to that show our crazy snake monster. You'll be able to see our coin-slot operation in action with me at the controls.

In the meantime, I encourage you to browse the videos of Polybot in action.

Also, comment on this story here over at Frontier Files.

Wednesday, October 22, 2003

On Genetic Methods of Robot Design

In class today, we discussed the general method of solving problems using genetic algorithms which are loosely derived from the concept of natural selection in evolution. The idea is that you create a population of "genes" which serve as the basic building block in which you operate. These genes are interpreted in some way to attempt to solve a problem. Then you apply a fitness function to evaluate how well it solved the problem. Based on the fitness value, you apply a selection process for the fittest of your population of genes, do cross-over operations and mutations akin to the natural process of reproduction, and create a new generation of genes to form solutions. Then repeat ad infinitum.
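
To make the loop concrete, here's a bare-bones genetic algorithm on a toy problem: maximize the number of 1s in a bit string. The genome encoding, fitness function, and all the rates are arbitrary choices for illustration; a robot-design application would replace each of them with something far more elaborate.

    import random

    GENOME_LEN = 30
    POP_SIZE = 40
    GENERATIONS = 60
    MUTATION_RATE = 0.02

    def fitness(genome):
        return sum(genome)                       # toy fitness: count the 1s

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)    # single-point cross-over
        return a[:cut] + b[cut:]

    def mutate(genome):
        return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

    population = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
                  for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        # selection: keep the fitter half, terminate the rest
        population.sort(key=fitness, reverse=True)
        survivors = population[: POP_SIZE // 2]
        # reproduction: refill the population with mutated offspring of survivors
        children = [mutate(crossover(random.choice(survivors), random.choice(survivors)))
                    for _ in range(POP_SIZE - len(survivors))]
        population = survivors + children

    print("best fitness:", fitness(max(population, key=fitness)), "of", GENOME_LEN)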

One of the general drawbacks to this approach is that we have a very poor understanding of the dynamics of the system. There are lots of attempts to produce directed evolution to converge on a solution, but the fundamental drawback is that you must describe how the genes are interpreted to represent a solution, devise a suitable fitness function to weed out the underperforming solutions, do parameter tuning to balance your reproduction and mutation rates, and provide lots of computational resources and time.

That last one is a huge drawback. Natural evolution has operated on the order of millions of years, and it has only produced sub-optimal solutions (humans, flora and fauna) to the problem of survival. Consumption of resources is also a big problem, since if you want to find usable solutions in a reasonable amount of time, you want to have very large populations. If you want these solutions to work in the physical world, you need to provide physical resources in which to test them.

We mainly looked at these problems with respect to their applications to robotics. There have been stunning successes in the application of genetic methods to purely software worlds, but these don't transfer very well to physically realized systems such as robots. So the question is, how can we model the real world closely enough that we can evolve our robotic systems in simulation, or how can we closely couple to the physical world and marshal the resources necessary to evolve physical solutions that are guaranteed to be a) applicable to the real world and b) tractable in time and cost?

The former may be more practical but the latter is more interesting. The basic idea is that you are simultaneously producing mechanical designs with sets of actuators and sensors and the control system that drives it in its interactions with the world. Then you are applying a fitness function on it to determine its level of "goodness" and mixing it with other solutions to come up with similar but different candidates that hopefully extend whatever general strategy that the parents had devised.

The question is, how far do you go in the allowed variance of the physical design and how do you facilitate the rapid fabrication and testing during the evolution of many generations of solutions?

One of the key features of artificial designs is that they require a huge industrial infrastructure to be produced. You need manufacturing plants, special materials and the factories to produce them, harvesting of natural resources, swarms of engineers and factory workers, sophisticated operational methodologies, information management, etc. Can you really line these up in a row and start tweaking the parameters to come up with different materials, different designs, unique applications of existing technologies, and unforeseen optimizations in production?

What would be really neat is to come up with a "backyard industrial complex" in which you can control and tweak the whole process from basic materials to final design and somehow integrate this into an automatic guess-and-check system to facilitate computational genetic methods to come up with new and interesting solutions to a posed task. Perhaps this could be made a little easier by restricting ourselves to reusable parts that can be assembled and disassembled without the costly waste of resources. Something akin to Legos would suffice, but we'd still need a way to assemble the test design rapidly enough to make the process worth doing.

Just thinking out loud here.

To see some previous research relevant to genetic methods in robotics, see Karl Sims or Jordan Pollack.

Jacob Everist
