Fig 2 - uploaded by Teodor-Andrei Sauciuc
SegNet results for the semantic segmentation task, training (top) and testing (bottom): input depth images (left) and resulting label maps (right)


Source publication
Conference Paper
Learning by demonstration, human-to-robot skills transfer, and visual perception and control are tasks that can be tackled with techniques combining machine vision and deep learning. In this paper, we take a step further from classic deep learning based on the special Euclidean group SE(3) parameterization of displacement and motion. Considering an...

Context in source publication

Context 1
... table shows the false negative rate for each class. As expected, the most inaccurate detections correspond to the segments which are partially occluded during the movement of the arms. In the next sections, the approximation of the screw parameters is discussed for the segments of the left arm (Table II - Some results of SegNet are exemplified in Fig. 2. The semantic maps show a good localisation of all the parts of the robot, including the small joints. These results validate the encoder's ability to produce meaningful representations of the input images and motivate the transfer of learning into SO 3 -CNN and ...