Some work related to my research ...

Friday, August 15, 2008

Introduction

This post is about the coordination between the motors of a robot manipulator and the vision system. Just recall how indispensable our eyes are when we move around or do various things with our limbs. The idea is to obtain real-world coordinates using a stereo camera and then feed these coordinates to the robot so that it can move to the desired target (as viewed by the camera). Technically speaking, this entire field of integrating vision with robot motion is known as visual servoing. A nice introduction in this regard is available here.
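To make the stereo idea concrete, here is a minimal sketch of how a rectified stereo pair yields 3-D coordinates from pixel disparity. The function name `triangulate` and the parameter values (focal length `f` in pixels, baseline `b` in metres) are my own illustrative assumptions, not from any particular camera:

```python
import numpy as np

def triangulate(xl, xr, y, f, b):
    """Recover 3-D coordinates from a rectified stereo pair (sketch).

    xl, xr : horizontal pixel coordinates of the same point in the
             left and right images
    y      : vertical pixel coordinate (equal in both after rectification)
    f      : focal length in pixels, b : baseline in metres (assumed)
    """
    d = xl - xr          # disparity: shift between the two views
    Z = f * b / d        # depth follows from similar triangles
    X = xl * Z / f       # back-project the pixel using the pinhole model
    Y = y * Z / f
    return np.array([X, Y, Z])
```

These Cartesian coordinates are what would then be handed to the robot controller as the target position.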

We can divide the entire problem into two parts:
  1. Extracting useful features from the camera images that give a description of the manipulator workspace.
  2. Once the workspace description is available, guiding the manipulator to carry out a desired task in this workspace.

When a camera model is used to convert the features obtained in the image plane into useful quantities in Cartesian space, and the second problem is then solved in that space, we call it position-based visual servoing (PBVS). But most of the time it is useful to solve the problem directly in the image plane itself, obviating the need for a precise camera model. This leads to image-based visual servoing (IBVS).
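The core of IBVS can be sketched in a few lines: drive the image-plane feature error to zero through the interaction matrix (image Jacobian) of a point feature. This is a generic textbook form, not code from any specific system; the function name `ibvs_velocity`, the gain `lam`, and the assumption that rough depth estimates `Z` are available are all mine:

```python
import numpy as np

def ibvs_velocity(features, goals, depths, lam=0.5):
    """One IBVS control step (sketch).

    features, goals : lists of (x, y) normalized image-plane points
                      (current and desired)
    depths          : rough depth estimate Z for each point
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    L_rows, err = [], []
    for (x, y), (xg, yg), Z in zip(features, goals, depths):
        # interaction matrix of a point feature in its classic form
        L_rows.append([-1 / Z, 0, x / Z, x * y, -(1 + x * x), y])
        L_rows.append([0, -1 / Z, y / Z, 1 + y * y, -x * y, -x])
        err.extend([x - xg, y - yg])
    L = np.array(L_rows)
    # proportional control on the image error via the pseudo-inverse
    return -lam * np.linalg.pinv(L) @ np.array(err)
```

Note that only coarse depth estimates enter through `Z`; the error itself is measured entirely in the image, which is exactly why IBVS tolerates an imprecise camera model.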


Another distinction can be made based on the location of the camera relative to the robot base.

  1. Eye-to-hand configuration: the cameras are mounted over the workspace, so their position relative to the robot base stays fixed throughout the operation.
  2. Eye-in-hand configuration: the cameras are mounted on the manipulator end-effector itself, so their position relative to the robot base changes as the manipulator moves.
