Georgia Tech Research Institute | Agricultural Technology Research Program

PoultryTech

TECHNOLOGY FOCUS

Could Advanced Robotics Techniques Lead the Way to the Next Generation of Poultry Processing?

Written by J. Michael Matthews


Sample images of a chicken model during manipulation. The top row shows 3D images of the chicken; the bottom row shows the corresponding simulated model.

The poultry industry relies heavily on fixed automation as well as specialized robotics and perception techniques, implemented in a step-by-step “perception then manipulation” process. For many tasks, especially those involving rigid materials (package sorting and stacking) or static objects (chicken breast water-jet portioning), this approach works well and is very efficient. However, for tasks in which a deformable product must be manipulated (gripped, pulled, flipped, cut, etc.), the fixed “perception then manipulation” process breaks down: the manipulation itself changes the object in real time, invalidating prior perception measurements.

On such tasks, fixed automation and current robotics techniques suffer losses in accuracy, precision, and yield. Consider gripping, flipping, or cutting: imagine taking a single look at the object, closing your eyes, and performing the task without further perception; this is how conventional step-by-step automation operates. A human worker, in contrast, has no difficulty continuing to sense the object during manipulation.

Researchers with the Georgia Tech Research Institute’s Agricultural Technology Research Program are working on an exploratory project with the goal of developing perception algorithms that will allow continual “perception during manipulation.” In addition, the algorithms will be designed to work with any type of object, having the ability to learn new objects and thus adapt to many different applications. This is particularly beneficial to the poultry industry due to the difficult nature of sensing and handling meat products. Combined with robotics and manipulation research, this effort will enable advanced robotics platforms for the next generation of poultry processing.

The envisioned full system will be capable of learning a new object model and then using that model to detect and track the product throughout manipulation. In essence, the system will perform general object modeling (learning) with skeletal and shape components that identify key features of deformable objects, allowing those objects to be tracked in real time. Development will take place in four steps.
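The two-phase structure described above (learn a model, then use it for detection and tracking) can be sketched as follows. All class and method names here are hypothetical illustrations, not the actual GTRI design:

```python
# Hypothetical sketch of the envisioned two-phase system: first learn an
# object model, then use it to detect and track the product during
# manipulation. Names and internals are illustrative only.

class DeformableObjectSystem:
    def __init__(self):
        self.model = None                     # learned mesh/skeleton model

    def learn(self, segmented_frames):
        """Learning phase: accumulate segmented surface data into a model."""
        self.model = {"frames_seen": len(segmented_frames)}

    def detect_and_track(self, frame):
        """Run-time phase: the learned model drives detection and tracking."""
        if self.model is None:
            raise RuntimeError("no learned model; run learn() first")
        # A real tracker would morph the model to match `frame` here.
        return {"input": frame, "model": self.model}

system = DeformableObjectSystem()
system.learn(["frame0", "frame1", "frame2"])
result = system.detect_and_track("frame3")
```

The key design point is the gate in `detect_and_track`: run-time tracking is only meaningful once a learned model exists to morph and match against.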

Sensing: For this application, both 3D depth and 2D color data are expected to be required. The end application will determine the requirements and sensors used in a final product; for research, however, a generic sensor is sufficient. The team chose the Microsoft Kinect, a consumer gaming sensor now widely used in robotics research. It includes a standard color camera along with an infrared pattern projector and infrared camera, which together sense depth at every pixel.
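A depth image like the Kinect's becomes useful for manipulation once it is back-projected into 3D points. A minimal sketch using the standard pinhole camera model (the intrinsic values `fx`, `fy`, `cx`, `cy` below are toy numbers, not Kinect calibration values):

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth image (meters) into 3D points per pixel
    using the pinhole camera model."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return np.stack([x, y, z], axis=-1)   # shape (h, w, 3)

# Toy example: a flat 4x4 depth image, every pixel 1 m from the camera.
depth = np.ones((4, 4))
pts = depth_to_points(depth, fx=2.0, fy=2.0, cx=2.0, cy=2.0)
```

Each pixel now carries both its color (from the RGB camera) and its 3D location, which is the per-pixel depth-plus-location data the article describes.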

Segmentation: Segmentation determines which pixels in the field of view belong to the object and which belong to the background. During learning, a non-model-based method will detect the object surface, and this data will be sent to the tracker for modeling. During run time, the model itself will be used to detect the object surface, and the tracker will align the model to complete the detection.
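One simple non-model-based approach of the kind usable during learning is depth-based background subtraction: any pixel significantly closer to the camera than the stored empty-scene depth is labeled as object. A minimal sketch (the threshold and scene values are illustrative):

```python
import numpy as np

def segment_by_depth(depth, background_depth, tol=0.02):
    """Label as object any pixel more than `tol` meters closer to the
    camera than the stored background depth."""
    return (background_depth - depth) > tol

background = np.full((4, 4), 1.0)   # empty belt, 1 m from the camera
frame = background.copy()
frame[1:3, 1:3] = 0.8               # object sitting 20 cm above the belt
mask = segment_by_depth(frame, background)
```

The resulting boolean mask is exactly the object/background pixel labeling described above, ready to hand to the tracker.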

Tracker: The tracker takes data from segmentation and tracks the object over successive frames, updating the model during learning or morphing the model during run time to align with the current state of the object. This alignment and matching occurs in 2D, to track segmented image data as the object moves, and in 3D, to solve for the full 3D state of the object. This process is standard for rigid objects; meat, however, is not rigid and has few image features to track, which makes alignment difficult. Learning a deformable object model and matching it to a constantly deforming object is harder still.
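The rigid core of frame-to-frame 3D alignment is the least-squares fit of a rotation and translation between corresponding point sets (the Kabsch algorithm, used inside ICP-style trackers). The deformable case the article targets builds on, and goes well beyond, this baseline. A sketch with known correspondences:

```python
import numpy as np

def kabsch_align(src, dst):
    """Least-squares rotation R and translation t mapping src points onto
    dst points with known correspondences (the rigid core of an ICP step)."""
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t

# Toy check: recover a pure translation between two frames.
src = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [0., 0., 1.]])
dst = src + np.array([0.1, 0.2, 0.3])
R, t = kabsch_align(src, dst)
```

For deforming meat products, no single rigid (R, t) fits the whole object, which is precisely why the project's morphing-model approach is needed.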

Model: The model will be a basic triangle mesh that stores color and texture, size and shape, and variable parameters for object motion. This type of model is crucial because it must handle various constraints while remaining feasible for real-time detection and tracking. Traditional 3D animation and modeling techniques, such as those used in video games and movie special effects, will be considered in developing skeletal and skin models for poultry objects. As the object is deformed during learning, the mesh will grow and adapt; during run time, the mesh will stretch and move to match the object within its learned constraints.
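A triangle mesh of this kind reduces to a few arrays plus deformation limits. The sketch below is a hypothetical minimal version: vertices, triangular faces, per-vertex color, and a single "stretch" parameter clamped to a learned bound, standing in for the richer constraint set the article describes:

```python
import numpy as np

class TriMesh:
    """Minimal triangle-mesh sketch: geometry, color, and one learned
    deformation constraint (maximum allowed stretch)."""

    def __init__(self, vertices, faces, colors, max_stretch=1.2):
        self.vertices = np.asarray(vertices, dtype=float)  # (n, 3) positions
        self.faces = np.asarray(faces, dtype=int)          # (m, 3) vertex ids
        self.colors = np.asarray(colors, dtype=float)      # (n, 3) RGB
        self.max_stretch = max_stretch                     # learned limit

    def stretch(self, factor):
        """Scale the mesh about its centroid, clamped to the learned limit."""
        factor = min(factor, self.max_stretch)
        center = self.vertices.mean(axis=0)
        self.vertices = center + factor * (self.vertices - center)
        return factor

# A single triangle as a toy mesh; request 2x stretch, get the 1.2x limit.
mesh = TriMesh([[0, 0, 0], [1, 0, 0], [0, 1, 0]],
               [[0, 1, 2]],
               [[1, 1, 1]] * 3)
applied = mesh.stretch(2.0)
```

The clamp in `stretch` illustrates the run-time behavior described above: the mesh moves to match the observed object, but only within the deformation limits learned during training.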

System development is under way, and the research team expects initial results soon.



J. Michael Matthews is a research engineer in the Georgia Tech Research Institute’s Food Processing Technology Division. His areas of research expertise are robotics, controls, and computer vision. He can be contacted by email at james.matthews@gtri.gatech.edu.