We can divide the entire problem into two parts:
- First, extracting useful features from the camera images that give a description of the manipulator workspace.
- Second, once the workspace description is available, guiding the manipulator to carry out a desired task in this workspace.
When a camera model is used to convert the features obtained in the image plane into useful quantities in Cartesian space, and the second problem is then solved in Cartesian space, we call it position-based visual servoing (PBVS). Often, however, it is more practical to solve the problem directly in the image plane itself, obviating the need for a precise camera model. This leads to image-based visual servoing (IBVS).
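To make the IBVS idea concrete, here is a minimal sketch of the classic control law for point features, v = -λ L⁺ (s - s*), assuming normalized image coordinates and rough depth estimates; the function names, the gain value, and the Python/NumPy framing are illustrative, not taken from the original post.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction (image Jacobian) matrix for one point feature with
    normalized image coordinates (x, y) and depth estimate Z."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    """Camera velocity command from the classic IBVS law
    v = -gain * pinv(L) @ (s - s*), working purely with image features."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features) - np.asarray(desired)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

Note that only a depth estimate per feature is needed, not a full calibrated camera model mapping features to Cartesian coordinates, which is the practical appeal of IBVS over PBVS.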
Another distinction can be made based on the location of the camera relative to the robot base.
- Eye-to-hand configuration: Cameras are mounted over the workspace, so their position relative to the robot base remains fixed throughout the operation.
- Eye-in-hand configuration: Cameras are mounted on the manipulator end-effector itself, so their position relative to the robot base changes as the manipulator moves (see the sketch after this list).
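The practical difference shows up in how the camera pose in the base frame is obtained. A minimal sketch, assuming 4x4 homogeneous transforms and a calibrated hand-eye transform; all names here are illustrative placeholders.

```python
import numpy as np

# Eye-to-hand: the camera is rigidly mounted in the workcell, so its pose in
# the robot-base frame is a constant transform found once by extrinsic
# calibration (identity here is only a placeholder value).
T_base_camera_fixed = np.eye(4)

def camera_pose_eye_in_hand(T_base_endeffector, T_endeffector_camera):
    """Eye-in-hand: the camera rides on the end-effector, so its pose in the
    base frame is the forward-kinematics pose composed with the calibrated
    hand-eye transform, and it changes every time the robot moves."""
    return T_base_endeffector @ T_endeffector_camera
```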