Figure 2 - uploaded by Carme Torras
Examples of color and depth images from the generated database (observe that the entire garment may not be visible). 

Source publication
Conference Paper
Full-text available
We present a system to deal with the problem of classifying garments from a pile of clothes. This system uses a robot arm to extract a garment and show it to a depth camera. Using only depth images of a partial view of the garment as input, a deep convolutional neural network has been trained to classify different types of garments. The robot can r...

Contexts in source publication

Context 1
... continuous repetition of this operation, we generated a dataset containing 4 types of garments: "Shirt", "Trouser", "Towel" and "Polo" (see Fig. 2). We also detected if the garment fell during the scan. When this occurred, we labelled the empty image as "Nothing". This resulted in the final 5 categories, although the empty cases are not taken into account for the result scoring. A total of 4272 depth images were obtained from the different categories. 80% of those images were used to train the network and the remaining 20% to test ...
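The 80/20 split described in this context can be sketched in a few lines. This is a minimal illustration only: the file names, labels, and the shuffling scheme are placeholders, not the authors' pipeline; only the image count (4272) and the split ratio come from the text.

```python
import random

# Hypothetical file names and labels standing in for the 4272 depth images
# reported in the paper (categories "Shirt", "Trouser", "Towel", "Polo";
# "Nothing" frames are excluded from scoring, so they are omitted here).
random.seed(0)
categories = ["Shirt", "Trouser", "Towel", "Polo"]
samples = [(f"depth_{i:04d}.png", random.choice(categories))
           for i in range(4272)]

random.shuffle(samples)
split = int(0.8 * len(samples))          # 80% train / 20% test
train_set, test_set = samples[:split], samples[split:]

print(len(train_set), len(test_set))     # 3417 855
```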

Similar publications

Conference Paper
Full-text available
Large diameter monopiles are still the preferred foundation method for offshore wind farm developments. The performance of the pile under cyclic lateral load is the leading design criterion for this foundation option. In order to investigate the cyclic response of the soil surrounding the pile, it is simplistically assumed that soil elements loc...

Citations

... Since a neural network is optimised by minimising the loss function, L2 regularisation encourages smaller weights and thus less complex models. Our optimal value of 10⁻⁴ for λ was determined by Bayesian optimisation and has also been found to be effective in training other image classifiers (Gabas et al., 2016). We also used dropout, which randomly turns off neurons in a layer at a given rate (Srivastava, 2013). ...
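The two regularisers mentioned in this context can be sketched as follows. This is an illustrative NumPy stand-in, not the authors' implementation; the layer size and dropout rate are placeholders, and only the L2 coefficient of 10⁻⁴ comes from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((128, 10))   # placeholder layer weights
lam = 1e-4                           # L2 coefficient lambda from the text

def l2_penalty(weights, lam):
    # Added to the task loss, so minimising the total loss shrinks weights.
    return lam * np.sum(weights ** 2)

def dropout(activations, rate, rng):
    # Zero each neuron with probability `rate`; rescale survivors to keep
    # the expected activation unchanged (inverted dropout).
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

h = rng.standard_normal(128)
h_train = dropout(h, rate=0.5, rng=rng)
print(l2_penalty(W, lam) > 0, np.count_nonzero(h_train) < h.size)
```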
Article
Full-text available
Recent work shows that the developmental potential of progenitor cells in the HH10 chick brain changes rapidly, accompanied by subtle changes in morphology. This demands increased temporal resolution for studies of the brain at this stage, necessitating precise and unbiased staging. Here we asked if we could train a deep convolutional neural network to sub-stage HH10 chick brains using a small dataset of 151 expertly labelled images. By augmenting our images with biologically-informed transformations and data-driven preprocessing steps, we successfully trained a classifier to sub-stage HH10 brains to 87.1% test accuracy. To determine whether our classifier could be generally applied, we re-trained it using images (<250) of randomised control and experimental chick wings, and obtained similarly high test accuracy (86.1%). Saliency analyses revealed that biologically relevant features are used for classification. Our strategy enables training of image classifiers for various applications in developmental biology with limited microscopy data.
... The amount and variety of clothing options have greatly increased in the fashion sector with the growth of globalization [1]. Meaningfully classifying these items presents a substantial difficulty for retailers and online shopping services [2]. In recent years, the importance of developing precise and effective classification systems for clothes has increased. ...
... RELATED WORK The following section discusses some of the significant earlier work on fashion clothing image classification, covering a variety of techniques and approaches that yield positive results and help address some of the shortcomings and challenges encountered in earlier experimental work. Gabas et al. [2] offer a solution to the issue of categorizing items from a pile of clothing. A robot arm is used in this system to remove a garment and present it to a depth camera. ...
Conference Paper
The classification of fashion cloth images is an important and challenging task in the field of computer vision. In recent years, deep learning (DL) techniques, especially Convolutional Neural Networks (CNNs), have shown remarkable performance in image classification tasks. The proposed study presents a hybrid model for the multi-classification of fashion cloth images by combining the strengths of both CNNs and SVMs. Using binary classification, the authors first divide the fashion clothing photographs into male and female categories. Then, they multi-classify the images into four categories: ethnic, casual, formal, and sportswear. The 5000 images that make up the dataset have been divided into training and testing sets. The proposed hybrid model combines the feature extraction capabilities of CNNs and the decision-making power of SVMs to produce improved classification results. The experiments show that binary classification achieves an accuracy of 95.5%, while multi-classification achieves its best accuracy of 96.24% for the formal class of fashion cloth.
... containing robots with many degrees of freedom (DoF) operating in continuous action spaces [3], [4]. ...
Article
Full-text available
The increasing demand for applications in competitive fields, such as assisted living and aerial robots, drives contemporary research into the development, implementation and integration of power-constrained solutions. Although deep neural networks (DNNs) have achieved remarkable performances in many robotics applications, energy consumption remains a major limitation. The paper at hand proposes a hybrid variation of the well-established deep deterministic policy gradient (DDPG) reinforcement learning approach to train a 6 degree-of-freedom robotic arm in the target-reach task (https://www.youtube.com/watch?v=238693OpvD0). In particular, we introduce a spiking neural network (SNN) for the actor model and a DNN for the critic, aiming to find an optimal set of actions for the robot. The deep critic network is employed only during training and discarded afterwards, allowing the deployment of the SNN on neuromorphic hardware for inference. The agent is supported by a combination of RGB and laser scan data exploited for collision avoidance and object detection. We compare the hybrid DDPG model against a classic DDPG one, demonstrating the superiority of our approach.
... Dataset limitation [52,56,65]: large datasets with different categories of clothes should be created to avoid the dataset limitation. The more information networks are given, the easier it is for them to produce good results ...
... The object grasped is unknown to the robot [30]: in principle, this limitation could be overcome simply by collecting data from many object manipulation scenarios, so as to learn a single model that generalizes effectively across objects. A more nuanced approach might involve correlating the behaviour of objects in human demonstrations with other previously experienced manipulations, to put them into correspondence and infer the behaviour of an object for which prior experience is unavailable. Experimental and simulation phase: a limitation of some studies is that a soft mannequin is used as a subject, or the dressing task is only simulated [49,56,70,71]; the robot should work with a real person so that researchers can get feedback about the force applied by the robot or other problems that can arise during the dressing task. Neural network limitations [65]: neural networks should be compared to see the differences between them and find the better and faster approach for the task. Better planning algorithms [25]: algorithms that account for the limitations of a robot's arm movements, as well as uncertainty in perception, would also improve performance and the safety of people working with the robot. Improved manipulator trajectories [57,72]: improving manipulator trajectories should be studied in the future to make the robot more user friendly and to reduce computation time ...
... This approach is very accurate for recognition of seen clothes using the training dataset [50]. 2. Given data and labels are very helpful for classification and for detecting clothes grasping points in cloth applications [64]. 1. Without prior knowledge about specific garments, clothes classification and recognition is inaccurate [64]. 2. It is not easy to extend to complex cloth manipulation scenarios [65]. LfD: 1. This approach can transfer the learned motion to unseen scenarios [4]. 2. The demonstration provides a high-level plan that is used to execute low-level control for cloth manipulation [66]. 1. ...
Article
Full-text available
Background: Service robots are defined as reprogrammable, sensor-based mechatronic devices that perform useful services autonomously or semi-autonomously for human activities in an everyday environment. As the number of elderly people grows, service robots that can perform complex tasks, like dressing disabled people, are increasingly in demand. Consequently, there is a growing interest in studying dressing tasks, such as putting on a t-shirt, a hat, or shoes. Service robots or robot manipulators have been developed to accomplish these tasks using several control approaches. The robots used in this kind of application are usually bimanual manipulators (e.g. the Baxter robot) or single manipulators (e.g. the UR5 robot). These arms are usually used for recognizing clothes and then folding them, or for putting an item on the arm or the head of a person.
Methods: This work provides a comprehensive review of the most relevant works in robotic dressing assistance, with a focus on the control methodology used for dressing tasks. Three main areas of control methods for dressing tasks are proposed: Supervised Learning (SL), Learning from Demonstration (LfD), and Reinforcement Learning (RL). Methods that cannot be classified into these three areas are placed in the other-methods section. This research was conducted within three databases: Scopus, Web of Science, and Google Scholar. Accurate exclusion criteria were applied to screen the 2594 articles found (in the end, 39 articles were selected). For each work, an evaluation of the model is made.
Conclusion: Current research in cloth manipulation and dressing assistance focuses on learning-based robot control approaches. Inferring the cloth state is integral to learning the manipulation, and current research uses principles of Computer Vision to address the issue.
This makes the larger problem of learning-based robot control data-intensive; therefore, there is a pressing need for standardized datasets representing different cloth shapes, types, materials, and human demonstrations (for LfD). Simultaneously, efficient simulation capabilities, which closely model the deformation of clothes, are required to bridge the reality gap between real-world and virtual environments for deploying the RL trial-and-error paradigm. Such powerful simulators are also vital for collecting valuable data to train SL and LfD algorithms that will help reduce human workload.
... The items are either spread out or crumpled on a flat surface, 13,15 or they are in a hanging state when grasped by a robotic gripper. 9,31-33 The robotics community has mostly focused on task-specific, handcrafted feature extraction, such as edges and corners 34 and wrinkles. 35-37 Due to the 3D nature of the manipulation task, the use of physics and volumetric simulators is more common in robotics. ...
Article
Full-text available
Cloth manipulation remains a challenging problem for the robotic community. Recently, there has been an increased interest in applying deep learning techniques to problems in the fashion industry. As a result, large annotated data sets for cloth category classification and landmark detection were created. In this work, we leverage these advances in deep learning to perform cloth manipulation. We propose a full cloth manipulation framework that performs category classification and landmark detection based on an image of a garment, followed by a manipulation strategy. The process is performed iteratively to achieve a stretching task where the goal is to bring a crumpled cloth into a stretched-out position. We extensively evaluate our learning pipeline and show a detailed evaluation of our framework on different types of garments in a total of 140 recorded and available experiments. Finally, we demonstrate the benefits of training a network on augmented fashion data over using a small robotic-specific data set.
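The iterative classify-detect-manipulate loop this framework describes might be structured as follows. This is a schematic sketch only: every callable is a hypothetical placeholder for the learned models and the robot interface, not the authors' code.

```python
# Schematic of an iterative cloth-stretching loop: classify the garment,
# detect landmarks, execute one manipulation step, repeat until stretched.
# All callables passed in are hypothetical placeholders.

def stretch_garment(image, classify, detect_landmarks, manipulate,
                    is_stretched, max_iters=10):
    for _ in range(max_iters):
        category = classify(image)
        landmarks = detect_landmarks(image, category)
        image = manipulate(landmarks)     # returns a new observation
        if is_stretched(image):
            return True
    return False

# Toy stand-ins: the "observation" is an integer that a single
# manipulation step advances to the goal state.
done = stretch_garment(
    image=0,
    classify=lambda img: "towel",
    detect_landmarks=lambda img, cat: [(0, 0)],
    manipulate=lambda lm: 1,
    is_stretched=lambda img: img >= 1,
)
print(done)  # True under these toy stubs
```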
... We chose the numerical value of λ through Bayesian optimisation (S6 Table) as 10⁻⁴. This value has also been found to be effective in training image classifiers (55). We also used dropout, which randomly turns off neurons in a layer at a given rate (56). ...
Preprint
Full-text available
Recent work has indicated a need for increased temporal resolution for studies of the early chick brain. Over a 10-hour period, the developmental potential of progenitor cells in the HH10 brain changes, and concomitantly, the brain undergoes subtle changes in morphology. We asked if we could train a deep convolutional neural network to sub-stage HH10 brains from a small dataset (<200 images). By augmenting our images with a combination of biologically informed transformations and data-driven preprocessing steps, we successfully trained a classifier to sub-stage HH10 brains to 87.1% test accuracy. To determine whether our classifier could be generally applied, we re-trained it using images (<250) of randomised control and experimental chick wings, and obtained similarly high test accuracy (86.1%). Saliency analyses revealed that biologically relevant features are used for classification. Our strategy enables training of image classifiers for various applications in developmental biology with limited microscopy data. SUMMARY STATEMENT We train a deep convolutional network that can be generally applied to accurately classify chick embryos from images. Saliency analyses show that classification is based on biologically relevant features.
... In [42], two RGB-D sensors continuously captured a garment lying on a surface, first to locate the grasping point via a heuristic algorithm; recording then continued until the garment could be classified into a predefined category. A contrasting approach was used in [43], wherein after a garment was picked up from a pile, a time-of-flight (ToF) sensor, a newer generation of depth sensor, captured a single partial image of the garment (partial because depth sensors are usually used at close distances), and a convolutional neural network (CNN) was then used to classify the garment into one of four garment categories: "shirt", "trouser", "towel" or "polo". Besides detecting edges and formations, 3D sensors are also widely used for classification purposes. ...
Article
Full-text available
While in most industries most processes are automated and human workers have either been replaced by robots or work alongside them, fewer changes have occurred in industries that use limp materials, like fabrics, clothes, and garments, than might be expected given today’s technological evolution. Integration of robots in these industries is a relatively demanding and challenging task, mostly because of the natural and mechanical properties of limp materials. In this review, information on sensors that have been used in fabric-handling applications is gathered, analyzed, and organized based on criteria such as their working principle and the task they are designed to support. Categorization and related works are presented in tables and figures so that someone who is interested in developing automated fabric-handling applications can easily get useful information and ideas, at least regarding the necessary sensors for the most common handling tasks. Finally, we hope this work will inspire researchers to design new sensor concepts that could promote automation in the industry and boost the robotization of domestic chores involving flexible materials.
... One common pose estimation method during physical interaction is using visual features from an RGB or depth camera. For example, several works have used depth sensing for estimating human pose [68], [69], [70], [71] and tracking cloth features [72], [73], [74] while a robot helps a person or mannequin put on a clothing garment. Jiménez et al. [75] provide an overview of various perception techniques for tracking cloth during assistive robotic tasks. ...
Preprint
Towards the goal of robots performing robust and intelligent physical interactions with people, it is crucial that robots are able to accurately sense the human body, follow trajectories around the body, and track human motion. This study introduces a capacitive servoing control scheme that allows a robot to sense and navigate around human limbs during close physical interactions. Capacitive servoing leverages temporal measurements from a multi-electrode capacitive sensor array mounted on a robot's end effector to estimate the relative position and orientation (pose) of a nearby human limb. Capacitive servoing then uses these human pose estimates from a data-driven pose estimator within a feedback control loop in order to maneuver the robot's end effector around the surface of a human limb. We provide a design overview of capacitive sensors for human-robot interaction and then investigate the performance and generalization of capacitive servoing through an experiment with 12 human participants. The results indicate that multidimensional capacitive servoing enables a robot's end effector to move proximally or distally along human limbs while adapting to human pose. Using a cross-validation experiment, results further show that capacitive servoing generalizes well across people with different body size.
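One step of the feedback loop described in this abstract can be sketched as a proportional controller acting on an estimated limb distance. This is an illustrative stand-in only: the inverse sensor model, the gain, and the numeric values are assumptions, not the study's data-driven estimator or controller.

```python
# Sketch of one capacitive-servoing step: map a capacitance reading to an
# estimated limb distance, then command the end effector proportionally.
# `estimate_distance` is a toy stand-in for the data-driven pose estimator.

def estimate_distance(capacitance):
    # Toy inverse model: capacitance grows as the limb gets closer.
    return 1.0 / capacitance

def capacitive_servo_step(capacitance, target_dist, gain=0.5):
    error = estimate_distance(capacitance) - target_dist
    return gain * error    # positive command: move toward the limb

cmd = capacitive_servo_step(capacitance=10.0, target_dist=0.05)
print(round(cmd, 3))       # sensed 0.1 m vs 0.05 m target -> move closer
```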
... Similarly to Doumanoglou's method, Corona et al. [16,17] detect specific points for each garment using deep convolutional neural networks to find the grasping points on a garment after a neural network identifies the garment type. ...
... If the garment is grasped at a random point, then the lowest point of the garment from a frontal view corresponds to one of the corners (see Fig. 3). The same observation was used in [5,17]. ...
... Then we validated the results of our method by having the robot unfold several previously unseen garments. Finally, to demonstrate the effectiveness of the global detector, we show an ablation study comparing the local + global detector with the local-only detector from previous work [17]. ...
Article
Full-text available
Compared with more rigid objects, clothing items are inherently difficult for robots to recognize and manipulate. We propose a method for detecting how cloth is folded, to facilitate choosing a manipulative action that corresponds to a garment’s shape and position. The proposed method involves classifying the edges and corners of a garment by distinguishing between edges formed by folds and the hem or ragged edge of the cloth. Identifying the type of edges in a corner helps to determine how the object is folded. This bottom-up approach, together with an active perception system, allows us to select strategies for robotic manipulation. We corroborate the method using a two-armed robot to manipulate towels of different shapes, textures, and sizes.
... Similarly to Doumanoglou's method, Corona et al. [16,17] detect specific points for each garment using deep convolutional neural networks to find the grasping points on a garment after a neural network identifies the garment type. Some methods try to locate specific points such as corners; however, cloth tends to curl over, which can hide these points. ...
Preprint
Full-text available
Compared with more rigid objects, clothing items are inherently difficult for robots to recognize and manipulate. We propose a method for detecting how cloth is folded, to facilitate choosing a manipulative action that corresponds to a garment's shape and position. The proposed method involves classifying the edges and corners of a garment by distinguishing between edges formed by folds and the hem or ragged edge of the cloth. Identifying the type of edges in a corner helps to determine how the object is folded. This bottom-up approach, together with an active perception system, allows us to select strategies for robotic manipulation. We corroborate the method using a two-armed robot to manipulate towels of different shapes, textures, and sizes.