Wednesday, February 11, 2004

Self-Assembly Videos

Yesterday I released a movie showing the progress of my work on space self-assembly.

This demonstrates a single assembly step: connecting two beams. The two robots are joined by a variable-length tether. We call this the "mirror roll" since the robots need to be symmetrically aligned for the maneuver to work properly.

This is the first step in assembling a triangle. The ultimate goal is to build trusses with this method, and eventually large-scale structures.

There is also the earlier video of beam docking.

Wednesday, February 04, 2004

The Meaning of Task

I think one of the fundamental ways we evaluate nearly all robotics applications or theories is based on the almighty task. If we define a task, then we can say what the requirements are, what information is needed, and how well a particular robot accomplishes it.

One needs to take into account that we define tasks in terms of human perceptual schemas and motor skills. What seems easy to us can be very difficult for a robot to accomplish, maybe even impossible.

Perhaps "task" needs to be defined a little better. Let's see what others have said about tasks.

From Donald, B. R. and Jennings J., Constructive Recognizability for Task-Directed Robot Programming, Proc. IEEE ICRA, Nice, France (1992):

There is a task we wish the mobile robot to perform, and the task is specified in terms of external (e.g., human-specified) perceptual categories. For example, these terms might be "concepts" like wall, door, hallway, or Professor Hopcroft. The task may be specified in these terms by imagining the robot has virtual sensors which can recognize these objects (e.g., a wall sensor) and their "parameters" (e.g., length, orientation, etc.) Now, of course the physical robot is not equipped with such sensors, but instead is armed with certain concrete physical sensors, plus the power to retain history and to compute. The task-level programming problem lies in implementing the virtual sensors in terms of the concrete robot capabilities.

This passage only addresses the perceptual aspects of a task and says nothing of the functional implications. We as humans define the task, but our functional description is derived from our motor experience. Perhaps this is an egregious form of the inverse kinematics problem? There is no single answer to the functional translation of a task into robot language.
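The virtual-sensor idea in the quoted passage can be made concrete with a small sketch. This is my own illustration, not anything from the Donald and Jennings paper: a hypothetical "wall sensor" built on top of concrete range/bearing readings, which recognizes a wall only when the readings fit a straight line, and then reports the wall's "parameters" (orientation and length) just as the passage describes. All names and the planar-range-sensor assumption are mine.

```python
import math

def virtual_wall_sensor(ranges, bearings, tolerance=0.05):
    """Hypothetical virtual 'wall sensor' implemented from concrete readings.

    ranges, bearings: paired polar readings from a physical range sensor.
    Returns a dict with the wall's estimated orientation (radians) and
    length, or None if the readings are not wall-like.
    """
    # Convert the concrete polar readings into Cartesian points
    # in the robot's frame.
    pts = [(r * math.cos(b), r * math.sin(b)) for r, b in zip(ranges, bearings)]
    n = len(pts)
    mx = sum(x for x, _ in pts) / n
    my = sum(y for _, y in pts) / n

    # Fit a line through the centroid along the principal axis of the points.
    sxx = sum((x - mx) ** 2 for x, _ in pts)
    sxy = sum((x - mx) * (y - my) for x, y in pts)
    syy = sum((y - my) ** 2 for _, y in pts)
    angle = 0.5 * math.atan2(2 * sxy, sxx - syy)

    # Residual: worst perpendicular distance of any point from the fitted line.
    nx, ny = -math.sin(angle), math.cos(angle)
    residual = max(abs((x - mx) * nx + (y - my) * ny) for x, y in pts)
    if residual > tolerance:
        return None  # the readings do not look like a wall

    # The recognized wall's "parameters": orientation and extent along the line.
    tx, ty = math.cos(angle), math.sin(angle)
    proj = [(x - mx) * tx + (y - my) * ty for x, y in pts]
    return {"orientation": angle, "length": max(proj) - min(proj)}
```

The point of the sketch is the layering: the task may be stated in terms of walls, but the robot only ever computes over ranges and bearings, and the "wall" exists solely as a predicate plus derived parameters over those concrete readings.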