Figure 4 - uploaded by Agnese Augello
Block diagrams for the variational autoencoder 

Source publication
Article
Full-text available
What we appreciate in dance is the ability of people to spontaneously improvise new movements and choreographies, surrendering to the music rhythm, being inspired by the current perceptions and sensations and by previous experiences, deeply stored in their memory. Like other human abilities, this, of course, is challenging to reproduce in an...

Similar publications

Preprint
Full-text available
Conceptual Blending (CB) theory discusses a basic mechanism that allows humans to understand and generate creative artefacts. CB theory has been primarily employed as a method for interpreting creative ideas and pieces of art, while recently algorithmic frameworks have been developed for methodologies that make generative use of CB towards achieving...

Citations

... The first consists of deep learning approaches that often rely on recurrent networks [1] and (variational) autoencoders [2]. These act as generators for new movement ideas, either by randomly generating sequences [3,4,5,6] or by responding to a specific movement [5,6,7] or music prompt [8,9,10,11]. Yet, the user's creative control over these sequences is restricted to the choice of training data or of an external prompt. ...
Preprint
Full-text available
We summarize the model and results of PirouNet, a semi-supervised recurrent variational autoencoder. Given a small amount of dance sequences labeled with qualitative choreographic annotations, PirouNet conditionally generates dance sequences in the style of the choreographer.
... Other instances of innovative techniques achieving this include self-organizing maps [38] and autoencoders for reacting to live movement [39]. Recent research has also introduced variational autoencoders [40,41] that encode sequences into, and generate sequences from, a lower-dimensional latent space. This enables the generation of variations on given sequences as well as the sampling of new sequences. ...
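The encode/sample/decode pipeline these citations describe can be sketched in a few lines. The sketch below is purely illustrative: the dimensions, the random stand-in weights, and the function names are assumptions, not details of the cited models, and a trained network would replace the linear maps with recurrent or deep layers.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions: a pose sequence flattened to 60 values,
# compressed into a 4-dimensional latent space.
INPUT_DIM, LATENT_DIM = 60, 4

# Randomly initialised weights stand in for a trained encoder/decoder.
W_enc = rng.normal(scale=0.1, size=(INPUT_DIM, 2 * LATENT_DIM))
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM, INPUT_DIM))

def encode(x):
    """Map a sequence to the mean and log-variance of a latent Gaussian."""
    h = x @ W_enc
    return h[:LATENT_DIM], h[LATENT_DIM:]

def sample(mu, log_var):
    """Reparameterisation trick: z = mu + sigma * eps."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def decode(z):
    """Map a latent vector back to a movement sequence."""
    return np.tanh(z @ W_dec)

x = rng.normal(size=INPUT_DIM)               # a given movement sequence
mu, log_var = encode(x)
variation = decode(sample(mu, log_var))      # a variation on the given sequence
novel = decode(rng.normal(size=LATENT_DIM))  # an entirely new sampled sequence
```

Both generation modes mentioned above appear here: decoding a perturbed encoding yields a variation on an input sequence, while decoding a latent vector drawn from the prior yields a new sequence.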
Preprint
Full-text available
Using Artificial Intelligence (AI) to create dance choreography with intention is still at an early stage. Methods that conditionally generate dance sequences remain limited in their ability to follow choreographer-specific creative intentions, often relying on external prompts or supervised learning. In the same vein, fully annotated dance datasets are rare and labor intensive. To fill this gap and help leverage deep learning as a meaningful tool for choreographers, we propose "PirouNet", a semi-supervised conditional recurrent variational autoencoder together with a dance labeling web application. PirouNet allows dance professionals to annotate data with their own subjective creative labels and subsequently generate new bouts of choreography based on their aesthetic criteria. Thanks to the proposed semi-supervised approach, PirouNet only requires a small portion of the dataset to be labeled, typically on the order of 1%. We demonstrate PirouNet's capabilities as it generates original choreography based on the "Laban Time Effort", an established dance notion describing intention for a movement's time dynamics. We extensively evaluate PirouNet's dance creations through a series of qualitative and quantitative metrics, validating its applicability as a tool for choreographers.
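The conditional generation PirouNet performs can be sketched generically as a decoder that consumes a latent sample together with a label vector. This is not PirouNet's architecture: the sizes, the placeholder label classes, and the single linear layer are assumptions standing in for the paper's semi-supervised recurrent model.

```python
import numpy as np

rng = np.random.default_rng(1)

LATENT_DIM, LABEL_DIM, OUTPUT_DIM = 4, 3, 60  # illustrative sizes

# One-hot label in the spirit of a "Laban Time Effort" annotation;
# the three classes here are placeholders, not the paper's scheme.
label = np.eye(LABEL_DIM)[0]

# Random weights stand in for a trained conditional decoder.
W_dec = rng.normal(scale=0.1, size=(LATENT_DIM + LABEL_DIM, OUTPUT_DIM))

def conditional_decode(z, y):
    """Generate a sequence from latent sample z conditioned on label y."""
    return np.tanh(np.concatenate([z, y]) @ W_dec)

# Different latent samples under the same label yield different sequences
# that a trained model would constrain to share the requested quality.
seq_a = conditional_decode(rng.normal(size=LATENT_DIM), label)
seq_b = conditional_decode(rng.normal(size=LATENT_DIM), label)
```

The design point is that the choreographer's intent enters through the label, while the latent sample supplies variety; the semi-supervised training is what lets the label dimension be learned from only ~1% annotated data.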
... Some creative domains may necessitate this split more than others in order to comprehensively model the creative processes within. Dance choreography represents a prime example (Augello et al., 2017;Carlson et al., 2016), but embodiment has also been embraced in e.g. music (Schorlemmer et al., 2014) and painting (Schubert and Mombaur, 2013;Singh et al., 2017) to model creative processes that rely on sensorimotor feedback between an agent and their surroundings. ...
Preprint
Full-text available
We conjecture that creativity and the perception of creativity are, at least to some extent, shaped by embodiment. This makes embodiment highly relevant for Computational Creativity (CC) research, but existing research is scarce and the use of the concept highly ambiguous. We overcome this situation by means of a systematic review and a prescriptive analysis of publications at the International Conference on Computational Creativity. We adopt and extend an established typology of embodiment to resolve ambiguity through identifying and comparing different usages of the concept. We collect, contextualise and highlight opportunities and challenges in embracing embodiment in CC as a reference for research, and put forward important directions to further the embodied CC research programme.
... Regarding social robotics, some generative approaches are being applied with different objectives. In [17] Manfrè et al. use HMMs for dance creation, and in a later work they instead try variational autoencoders for the same purpose [2]. ...
Preprint
The goal of the system presented in this paper is to develop a natural talking gesture generation behavior for a humanoid robot, by feeding a Generative Adversarial Network (GAN) with human talking gestures recorded by a Kinect. A direct kinematic approach is used to translate from human poses to robot joint positions. The provided videos show that the robot is able to use a wide variety of gestures, offering a non-dreary, natural expression level.
... Focusing on social robotics, some generative approaches are being applied for different purposes. In [24] Manfrè et al. use HMMs for dance creation, and in a later work they instead try variational autoencoders for the same purpose [25]. Regarding the use of adversarial networks, Gupta et al. [26] extend the use of GANs to generate socially acceptable motion trajectories in crowded scenes in the context of self-driving cars. ...
Article
This paper presents a talking gesture generation system based on Generative Adversarial Networks, along with an evaluation of its adequacy. The talking gesture generation system produces a sequence of joint positions of the robot's upper body which keeps in step with an uttered sentence. The suitability of the approach is demonstrated with a real robot. In addition, the motion generation method is compared with other (non-deep) generative approaches. A two-step comparison is made. On the one hand, a statistical analysis is performed over the movements generated by each approach by means of Principal Coordinate Analysis. On the other hand, the adequacy of the robot motion is measured by calculating the end effectors' jerk, path lengths and 3D space coverage.
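The jerk and path-length measures used in this evaluation can be sketched with finite differences over an end-effector trajectory. The function names, the sampling setup, and the example trajectory below are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def path_length(traj):
    """Total distance travelled by an end effector (traj: T x 3 positions)."""
    return np.linalg.norm(np.diff(traj, axis=0), axis=1).sum()

def mean_jerk(traj, dt):
    """Mean magnitude of the third derivative of position (lower = smoother)."""
    jerk = np.diff(traj, n=3, axis=0) / dt**3
    return np.linalg.norm(jerk, axis=1).mean()

# Example: a hand moving in a straight line at constant velocity
# covers exactly the segment length and has zero jerk.
t = np.linspace(0.0, 1.0, 50)[:, None]
straight = t * np.array([0.3, 0.0, 0.1])
length = path_length(straight)          # == sqrt(0.3**2 + 0.1**2)
smoothness = mean_jerk(straight, dt=1 / 49)  # == 0.0
```

Comparing these scalars across motion generators gives a simple quantitative proxy for how smooth and how expansive each method's gestures are, complementing the statistical comparison.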
... There have been several studies on generating a robot's novel actions using machine learning techniques [5,7,8]. In [5], the generation of creative goal-directed behaviors by exploiting cortical chaos was investigated. ...
... In [7], a cognitive architecture for artificial creativity was proposed and used to generate a humanoid robot's dance movements. Recently, Augello et al. [8] showed how a robot can generate novel dance behaviors using a variational autoencoder. In line with those previous studies, this study aims to investigate the generation of novel actions from a dynamic neural network perspective. ...
Preprint
In this study, we investigate how a robot can generate novel and creative actions from its own experience of learning basic actions. Inspired by a machine learning approach to computational creativity, we propose a dynamic neural network model that can learn and generate a robot's actions. We conducted a set of simulation experiments with a humanoid robot. The results showed that the proposed model was able to learn the basic actions and also to generate novel actions by modulating and combining those learned actions. The analysis of the neural activities illustrated that the ability to generate creative actions emerged from the model's nonlinear memory structure, self-organized during training. The results also showed that different ways of learning the basic actions induced the self-organization of memory structures with different characteristics, resulting in the generation of different levels of creative actions. Our approach can be utilized in human-robot interaction, in which a user can interactively explore the robot's memory to control its behavior and also discover other novel actions.
Article
Computational creativity comprises a collection of activities that are capable of achieving or simulating behaviors which can be deemed creative. A frequently articulated criticism of related systems is that the creative capability still remains with the software designer rather than with the computational creative system itself. The rise of machine learning enables new ways of combining, exploring, and transforming conceptual spaces to achieve creative results. This paper demonstrates that the learning occurring within the computational machine through machine learning enables creative capabilities therein, allowing the computational creative system to be more creative on its own than ever before. Thus, we perceive machine learning as a key enabler of computational creativity. In this conceptual study, we consolidate research from the Computer Science, Computational Creativity, and Information Systems communities, which has been treated separately so far. We build on a framework of human creativity to examine the relationship between creative capabilities and machine learning mechanisms in machine learning-based computational creative systems. Specifically, we explicate which creative capabilities are already established through machine learning mechanisms in computational creative systems as strengths. Further, we explicate challenges pointing towards further potential of machine learning-based computational creative systems to enhance their inherent creative capabilities. Our results reveal that machine learning-based computational creative systems advance the previously static and explicit principles of non-machine learning-based computational creative systems, yielding creative capabilities of the machine's own, which have so far been in the realm of human actors.
Chapter
The goal of the system presented in this paper is to develop a natural talking gesture generation behavior for a humanoid robot. With that aim, human talking gestures are recorded by a human pose detector and the captured motion data is afterwards used to feed a Generative Adversarial Network (GAN). The motion capture system is capable of properly estimating the limbs/joints involved in human expressive talking behavior without any kind of wearable. Tested on a Pepper robot, the developed system is able to generate natural gestures without becoming repetitive over long talking periods. The approach is compared with a previous work in order to evaluate the improvements introduced by a computationally more demanding approach. This comparison is made by calculating the end effectors' trajectories in terms of jerk and path lengths. Results show that the described system is able to learn natural gestures just by observation.