Fingertip Sensitivity for Robots

Introduction

Determined to enhance fingertip sensitivity in robotics, scientists have created a sensor in the shape of a thumb. The sensor conceals a camera, and a deep neural network trained on the camera’s images extracts tactile contact information. When the finger touches an object, the system computes a three-dimensional force map from the visible deformations of the supple outer shell. This invention significantly improves a robot finger’s haptic perception, bringing it closer to the sensitivity of human skin.

Insight – A Tactile Sensor

According to a paper published on February 23rd, 2022 in Nature Machine Intelligence, a team of scientists at the Max Planck Institute for Intelligent Systems (MPI-IS) has created a robust, soft haptic (tactile) sensor. The sensor, named “Insight”, uses a deep neural network and computer vision to determine exactly where objects touch the sensor and how large the applied forces are. This research is a major step toward robots that can sense and feel their environment as accurately as animals and humans. Like its natural counterpart, the fingertip sensor is robust, sensitive, and high-resolution.

Insight consists of a soft shell assembled around a stiff, lightweight skeleton. The skeleton holds the structure in place, just as bones support soft finger tissue. The shell is made of an elastomer mixed with dark but reflective aluminum flakes, giving it an opaque dark-grey color that keeps external light out. Concealed inside this finger-sized cap is a small fish-eye camera with a roughly 160-degree field of view, which records color images of the interior lit by a ring of LEDs.
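As a purely illustrative sketch of the imaging side (not the authors’ code), a capture loop for such an internal camera could look like the following, assuming OpenCV and a hypothetical device index:

```python
import cv2

# Hypothetical device index for the internal fish-eye camera; the real
# Insight hardware and driver details are not described here.
CAMERA_INDEX = 0

cap = cv2.VideoCapture(CAMERA_INDEX)
cap.set(cv2.CAP_PROP_FRAME_WIDTH, 640)
cap.set(cv2.CAP_PROP_FRAME_HEIGHT, 480)

while True:
    ok, frame = cap.read()  # one color image of the LED-lit inner shell
    if not ok:
        break
    # In a full pipeline, `frame` (BGR, uint8) would be passed to the
    # force-map network described in the next section.
    cv2.imshow("inner shell", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```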

How Insight Works

When an object touches the sensor’s shell, the color pattern inside the sensor changes. The camera records images many times per second and feeds them to a deep neural network. The algorithm detects even the slightest change of light in every pixel. Within a fraction of a second, the Machine Learning (ML) model can pinpoint exactly where the finger is touching an object, estimate the strength of the applied forces, and specify the force direction. The model thereby produces what scientists call a force map: a force vector for every point on the three-dimensional fingertip.
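To make the image-to-force-map idea concrete, here is a minimal sketch in PyTorch. The network name, layer sizes, and the number of sampled surface points are illustrative assumptions, not the architecture described in the paper:

```python
import torch
import torch.nn as nn

class ForceMapNet(nn.Module):
    """Hypothetical sketch: maps one camera image to a force map, i.e.
    a 3D force vector for each of N points on the fingertip surface.
    The real Insight network differs."""

    def __init__(self, num_surface_points: int = 1024):
        super().__init__()
        self.num_surface_points = num_surface_points
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
        )
        # Three force components (x, y, z) per surface point.
        self.head = nn.Linear(64, num_surface_points * 3)

    def forward(self, image: torch.Tensor) -> torch.Tensor:
        # image: (batch, 3, H, W) color frames of the lit inner shell
        features = self.backbone(image)
        forces = self.head(features)
        return forces.view(-1, self.num_surface_points, 3)

# Example: one 640x480 frame in, one (1, 1024, 3) force map out.
frame = torch.rand(1, 3, 480, 640)
force_map = ForceMapNet()(frame)
print(force_map.shape)  # torch.Size([1, 1024, 3])
```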

Georg Martius, Max Planck Research Group Leader at MPI-IS, states: “We achieved this excellent sensing performance through the innovative mechanical design of the shell.” One outstanding feature of the thumb-shaped sensor is a nail-shaped zone covered by a thinner elastomer layer. This tactile fovea, analogous to the fovea centralis of the human eye, can detect even tiny forces and fine object shapes. In this super-sensitive zone, the elastomer is only 1.2 mm thick.

Teaching the Sensor to Learn

To teach the sensor, Huanbo Sun developed a testbed that generates the training data for the ML model, enabling it to learn the association between applied forces and changes in the raw image pixels. The testbed probes the sensor all around its surface, collecting close to 200,000 measurements. With this data, the ML model could be trained within a single day.
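A supervised training loop in this spirit might look like the following sketch. The placeholder tensors stand in for the testbed’s roughly 200,000 image-force pairs, and the tiny network, shapes, and hyperparameters are all illustrative assumptions rather than the authors’ setup:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_POINTS = 1024  # assumed number of sampled surface points

# Placeholder tensors standing in for the testbed recordings: each sample
# pairs one camera image with the force measured while a probe pressed a
# known point on the shell. Shapes and sizes are illustrative.
images = torch.rand(512, 3, 120, 160)      # downsampled camera frames
forces = torch.rand(512, NUM_POINTS, 3)    # 3D force per surface point
loader = DataLoader(TensorDataset(images, forces), batch_size=32, shuffle=True)

# A deliberately tiny stand-in for the real network: image in, force map out.
model = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(16, NUM_POINTS * 3),
    nn.Unflatten(1, (NUM_POINTS, 3)),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()  # regress predicted against measured forces

for epoch in range(5):
    for batch_images, batch_forces in loader:
        optimizer.zero_grad()
        loss = loss_fn(model(batch_images), batch_forces)
        loss.backward()
        optimizer.step()
    print(f"epoch {epoch}: loss {loss.item():.4f}")
```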

Conclusion

The software and hardware design can be transferred to a wide variety of robot parts with different shape and precision requirements. The ML architecture, training, and inference process are all general and applicable to many other sensor designs.