Figure 6 - uploaded by Robert Manthey
Selected images from four sensors processed by OpenPose. The upper two illustrate good detection results for the frontal and side views. The lower left illustrates a similar result for the person, but with a false detection at the shadow between the two loudspeakers. The lower right shows no result, indicating the limitation of the pose detection when looking downward.

Source publication
Conference Paper
Full-text available
Nowadays, many computer devices are used to visually detect objects, people and activities. Their quality and performance depend on limited datasets created and annotated by error-prone and expensive human handwork. But to reach high quality for complex detection tasks, extensive datasets with error-free annotations are needed. To overcome this d...

Context in source publication

Context 1
... In order to explore the usefulness of the presented system, we synthesise a scenario based on a model of our laboratory and a humanoid performing a beckoning pose. The images of all ten sensors are captured and processed by OpenPose. The results are embedded into the images as an overlay of colored lines at the detected positions, as shown in Fig. ...
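The following is a minimal sketch of how such a per-image pose-detection pass could look using the OpenPose Python bindings (pyopenpose). The model folder, the directory layout of the rendered sensor images and the output naming are assumptions for illustration, not part of the source publication, and the exact binding calls may differ between OpenPose versions.

    import glob
    import cv2
    import pyopenpose as op  # OpenPose Python bindings (OpenPose built with the Python API)

    # Configure OpenPose; the model folder is a placeholder and must point to the
    # local OpenPose installation.
    params = {"model_folder": "/path/to/openpose/models"}
    wrapper = op.WrapperPython()
    wrapper.configure(params)
    wrapper.start()

    # Process every rendered sensor image and save the overlay that OpenPose
    # draws (colored limb lines at the detected joint positions).
    for path in sorted(glob.glob("renders/sensor_*/frame_*.png")):  # assumed layout
        datum = op.Datum()
        datum.cvInputData = cv2.imread(path)
        wrapper.emplaceAndPop(op.VectorDatum([datum]))
        # datum.poseKeypoints holds the detected joint coordinates (empty if no person
        # is found); datum.cvOutputData is the input image with the overlay rendered in.
        cv2.imwrite(path.replace(".png", "_overlay.png"), datum.cvOutputData)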

Similar publications

Conference Paper
Full-text available
BPMN diagrams are increasingly used to visualise scenarios in use-case-driven IT projects. The verification of the consistency of BPMN diagrams is subsequently needed to identify errors in requirements at an early stage of the development process. This consistency verification is challenging due to the semi-formal nature of BPMN diagrams. We exami...

Citations

... Based on our experience with synthetic data for testing multimedia workflows from [4] and [5], we further developed our solution into the Synthetic Ground Truth Generation for Testing, Technology Evaluation and Verification (SyntTEV) framework [6]. It uses the open-source 3D modelling tool Blender to generate the scenarios, the ground truth and the corresponding image sequences, as shown in Fig. 2. With the tool MakeHuman we set up 3D humanoids like the one in Fig. 3, which are imported into Blender and animated with captured activities such as walking, jumping, etc. ...
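As a rough illustration of the per-camera rendering step described above, the following Blender Python (bpy) sketch renders the current scene once from every camera in the file. The output directory and file naming are assumptions for illustration; the actual SyntTEV framework may organise this step differently.

    import os
    import bpy  # Blender's Python API; this script runs inside Blender

    # Placeholder output directory; adjust to the local setup.
    output_dir = "/tmp/synttev_renders"

    scene = bpy.context.scene
    cameras = [obj for obj in scene.objects if obj.type == 'CAMERA']

    # Render one still per camera ("sensor"); a whole animation could be rendered
    # instead via bpy.ops.render.render(animation=True).
    for cam in cameras:
        scene.camera = cam
        scene.render.filepath = os.path.join(output_dir, cam.name + "_frame")
        bpy.ops.render.render(write_still=True)

Such a script can be executed headlessly, e.g. blender scenario.blend --background --python render_cameras.py, where scenario.blend and render_cameras.py are placeholder names.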
Chapter
Many modern systems use image understanding components to inspect, observe and react. Often, their training relies on, and is limited to, manually annotated real-world data, in which dangerous or resource-expensive scenarios are rare. We create a solution to overcome these limitations and reduce the manual annotation effort by producing synthetic scenarios of arbitrary content and composition.