Some work related to my research ...

Saturday, September 13, 2008

Jargon

Since several terms are in use, it is better to get familiar with some of the commonly used jargon in this domain:
  1. Visual-motor coordination (VMC) basically makes use of techniques that try to establish a map between the end-effector position (or pose) and the joint angle vector (theta). In this sense, it can be considered a static mapping between the task space and the configuration space. Most of the time, it makes use of the forward kinematics map of the manipulator.

  2. Visual servoing (VS) tries to establish a map between the end-effector velocities and the joint angle velocities. It requires knowledge of the manipulator Jacobian and in some cases may involve the manipulator dynamics as well.

  3. Task space (workspace) of the manipulator is the 3-dimensional Cartesian, real-world space or volume in which the manipulator is supposed to move about and carry out its task.

  4. Configuration space (or joint angle space) of the manipulator is the set of all valid joint angle vectors that lead to an end-effector position and orientation within the workspace.

  5. Pose of a manipulator is the end-effector position and orientation with respect to a global reference frame. Usually, the base frame of the manipulator is taken as the global reference frame for all measurements.
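The first two definitions above can be made concrete with a toy example. The following sketch uses a hypothetical 2-link planar arm (link lengths chosen arbitrarily): the forward kinematics function is the static configuration-space-to-task-space map that VMC approximates, while the Jacobian is the velocity-level map that visual servoing relies on.

```python
import numpy as np

# Assumed link lengths for an illustrative 2-link planar arm.
L1, L2 = 1.0, 0.8

def forward_kinematics(theta):
    """Static map from the joint angle vector (configuration space)
    to the end-effector position (task space) -- the map VMC learns
    or approximates."""
    t1, t2 = theta
    x = L1 * np.cos(t1) + L2 * np.cos(t1 + t2)
    y = L1 * np.sin(t1) + L2 * np.sin(t1 + t2)
    return np.array([x, y])

def jacobian(theta):
    """Manipulator Jacobian: relates joint angle velocities to
    end-effector velocities, the quantity visual servoing needs."""
    t1, t2 = theta
    return np.array([
        [-L1 * np.sin(t1) - L2 * np.sin(t1 + t2), -L2 * np.sin(t1 + t2)],
        [ L1 * np.cos(t1) + L2 * np.cos(t1 + t2),  L2 * np.cos(t1 + t2)],
    ])
```

With the arm fully stretched out (theta = [0, 0]), the end-effector sits at (L1 + L2, 0), and the Jacobian shows that only the y-velocity responds to small joint motions at that configuration.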

Friday, August 15, 2008

Introduction

It is the coordination between the motors of a robot manipulator and the vision system. Just recall how indispensable our eyes are when we move around or do various things with our limbs. The idea is to get the real-world coordinates using a stereo camera and then feed these coordinates to the robot so that it can move to the desired target (as viewed by the camera). Technically speaking, this entire field of integrating vision with robot motion is known as visual servoing. A nice introduction in this regard is available here.
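Getting real-world coordinates from a stereo camera can be sketched with the standard pinhole model for a calibrated, rectified pair. All parameter values below (focal length, baseline, principal point) are illustrative assumptions, not values from any particular camera.

```python
def triangulate(u_left, u_right, v, f, baseline, cx, cy):
    """Recover a 3D point from a rectified stereo pair.

    The disparity d = u_left - u_right gives depth Z = f * b / d;
    X and Y then follow from back-projecting the left-image pixel.
    f is the focal length in pixels, baseline the camera separation
    in metres, (cx, cy) the principal point.
    """
    d = u_left - u_right
    Z = f * baseline / d
    X = (u_left - cx) * Z / f
    Y = (v - cy) * Z / f
    return X, Y, Z
```

For example, with an assumed focal length of 500 px, a 0.1 m baseline, and a disparity of 50 px, the recovered depth is 500 * 0.1 / 50 = 1.0 m.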

We can divide the entire problem into two parts:
  1. First, extracting useful features from the camera images that give a description of the manipulator workspace.
  2. Once the workspace description is available, guiding the manipulator to carry out a desired task in this workspace.

When a camera model is used to convert the features obtained in the image plane into useful quantities in Cartesian space, and the second problem is then solved in that space, we call it position-based visual servoing (PBVS). Most of the time, however, it is useful to solve the problem directly in the image plane itself, obviating the need for a precise camera model. This leads to image-based visual servoing (IBVS).
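The IBVS idea can be sketched with the standard control law for point features: stack one interaction (image) matrix per feature and command a camera velocity proportional to the pseudo-inverse of that stack times the image-space error. The gain value and feature coordinates below are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image) matrix for one normalized point feature
    (x, y) at depth Z -- the standard 2x6 form used in IBVS,
    mapping camera velocity to feature velocity in the image."""
    return np.array([
        [-1.0 / Z, 0.0,      x / Z, x * y,     -(1 + x * x),  y],
        [0.0,      -1.0 / Z, y / Z, 1 + y * y, -x * y,       -x],
    ])

def ibvs_velocity(features, desired, depths, lam=0.5):
    """Camera velocity command v = -lam * L^+ (s - s*), where s is the
    current feature vector, s* the desired one, and L^+ the
    pseudo-inverse of the stacked interaction matrices."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    e = (np.asarray(features) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e
```

Note that the error is computed and driven to zero entirely in the image plane; the only Cartesian quantity needed is the feature depth Z, which in practice is estimated or approximated rather than measured precisely.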


Another distinction can be made based on the location of the cameras relative to the robot base.

  1. Eye-to-hand configuration: The cameras are mounted over the workspace, so that their position relative to the robot base remains fixed throughout the operation.
  2. Eye-in-hand configuration: The cameras are mounted on the manipulator end-effector itself, so that their position relative to the robot base changes as the manipulator moves.