Palpation, the use of touch in medical examination, is almost exclusively performed by humans. We investigate a proof of concept for an artificial palpation method based on self-supervised learning. Our key idea is that an encoder-decoder framework can learn a \textit{representation} from a sequence of tactile measurements that contains all the relevant information about the palpated object. We conjecture that such a representation can be used for downstream tasks such as tactile imaging and change detection. With enough training data, it should capture intricate patterns in the tactile measurements that go beyond a simple map of forces -- the current state of the art. To validate our approach, we both develop a simulation environment and collect a real-world dataset of soft objects and corresponding ground truth images obtained by magnetic resonance imaging (MRI). We collect palpation sequences using a robot equipped with a tactile sensor, and train a model that predicts sensory readings at different positions on the object. We investigate the representation learned in this process, and demonstrate its use in imaging and change detection.
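The encoder-decoder idea above can be sketched as a permutation-invariant set encoder over (position, reading) pairs whose aggregated latent is then decoded at a query position. The following is a minimal NumPy forward pass with untrained random weights; all dimensions, layer sizes, and names are illustrative assumptions, not the paper's actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, weights):
    # Simple feed-forward pass with tanh hidden activations.
    for W, b in weights[:-1]:
        x = np.tanh(x @ W + b)
    W, b = weights[-1]
    return x @ W + b

def init(sizes, rng):
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

# Hypothetical dimensions: 2-D probe positions, 4-D tactile readings,
# 16-D latent representation of the palpated object.
POS, TACT, LATENT = 2, 4, 16
encoder = init([POS + TACT, 32, LATENT], rng)
decoder = init([LATENT + POS, 32, TACT], rng)

# A palpation sequence: N measurements of (position, tactile reading).
N = 10
positions = rng.uniform(-1, 1, (N, POS))
readings = rng.normal(0, 1, (N, TACT))

# Encode each (position, reading) pair, then mean-aggregate into a single
# order-invariant representation of the object.
per_poke = mlp(np.concatenate([positions, readings], axis=1), encoder)
representation = per_poke.mean(axis=0)          # shape (LATENT,)

# Decode: predict the tactile reading at an unseen query position.
query = np.array([0.3, -0.5])
pred = mlp(np.concatenate([representation, query])[None, :], decoder)
print(pred.shape)  # → (1, 4)
```

Training such a model with a reconstruction loss on held-out pokes is what would force `representation` to summarize the object; downstream tasks like imaging and change detection then read off that latent.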
To better appreciate the difficulty of the task, we fabricated a special insert with a transparent gel in place of the regular one. As the video shows, the lump moves when touched, making it hard to infer the hidden 3D structure from tactile contact alone. Because of these dynamics, non-data-driven approaches often fail.
We use a robotic arm to automatically palpate our fabricated objects. Using a QR code and a mounted camera, we can also associate each palpated object with its corresponding ground truth.
Here, we visualize one of our MRI scans, sweeping through the slices at different z values. As can be seen, we scanned six inserts per session to conserve time and resources.
In the preprocessing stage described in the paper, we separate the individual inserts from each scan and discretize them.
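This separate-then-discretize step can be sketched in a few lines of NumPy: crop the scanned volume into per-insert sub-volumes, then quantize voxel intensities into a small number of levels. The grid layout, volume shape, and three-level quantization here are illustrative assumptions, not the paper's exact preprocessing pipeline.

```python
import numpy as np

# Hypothetical MRI volume (z, y, x) containing a 2x3 grid of inserts.
volume = np.random.default_rng(0).random((40, 120, 180))

# Separate: crop the volume into six equal sub-volumes along y and x.
rows, cols = 2, 3
h, w = volume.shape[1] // rows, volume.shape[2] // cols
inserts = [volume[:, r * h:(r + 1) * h, c * w:(c + 1) * w]
           for r in range(rows) for c in range(cols)]

# Discretize: quantize intensities into a few levels,
# e.g. background / gel / lump.
levels = 3
edges = np.linspace(0.0, 1.0, levels + 1)[1:-1]   # interior bin edges
discretized = [np.digitize(v, edges) for v in inserts]
print(len(discretized), discretized[0].shape)  # → 6 (40, 60, 60)
```

In practice the crop boundaries would come from the known mold geometry or from locating each insert in the scan, rather than an even grid split.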
Here we show example videos of the poke trajectories from our proposed PalpationSim simulation environment.
@inproceedings{rimontoward,
  title={Toward Artificial Palpation: Representation Learning of Touch on Soft Bodies},
  author={Rimon, Zohar and Shafer, Elisei and Tepper, Tal and Shimron, Efrat and Tamar, Aviv},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025}
}