Fig. 1: Schematics for the implementation of the hierarchical serial order model.

Source publication
Conference Paper
Robotic researchers face fundamental challenges when designing autonomous humanoid robots that can interact with real, dynamic environments. In such unstructured environments, the robot has to autonomously segment objects, detect and categorize relevant situations, and decide when to initiate and terminate actions. As humans are very good in t...

Similar publications

Conference Paper
Push recovery is a prime ability that is essential to incorporate when developing a robust humanoid robot that supports bipedalism. In a real environment, it is essential for a humanoid robot to maintain balance. In this paper, we generate a control system and a push recovery controller for humanoid robot walking. We apply different...
Conference Paper
It is known that bipedal robots with passive compliant structures have clear advantages over stiff robots, as they are able to manage potential energy. This paper therefore presents a jumping pattern generation method that exploits this property via the base resonance frequency, which is...

Citations

... In this work, an architecture for serial order memory proposed in the framework of dynamic neural fields (Sandamirskaya and Schöner, 2010b; Duran and Sandamirskaya, 2012) was realized in neuromorphic hardware in the following way. ...
... Note that memory groups keep firing until the end of the teaching or replay period, keeping track of the unfolding sequence. This activity is achieved by strong recurrent connections in the memory groups and can be used to monitor sequence learning and replay by a higher-level system in a hierarchical sequence representation architecture (Duran and Sandamirskaya, 2012). Figure 7B shows plastic synapses on the ROLLS chip after learning. ...
... The length of the sequence can be arbitrary and is limited only by the required number of neurons, which grows linearly with the sequence length. The model can easily be extended to represent hierarchical sequences (Duran and Sandamirskaya, 2012), sequences of states coming from different modalities (Sandamirskaya and Schöner, 2010b), or sequences with intrinsic timing of transitions (Duran and Sandamirskaya, 2018). Finally, and most importantly, a sequence here can be learned with a very simple Hebbian learning rule in a fast, one-shot learning process. ...
Article
Full-text available
Neuromorphic Very Large Scale Integration (VLSI) devices emulate the activation dynamics of biological neuronal networks using either mixed-signal analog/digital or purely digital electronic circuits. Using analog circuits in silicon to physically emulate the functionality of biological neurons and synapses enables faithful modeling of neural and synaptic dynamics at ultra-low power consumption in real time, and may thus serve as a computational substrate for a new generation of efficient neural controllers for intelligent artificial systems. Although one of the main advantages of neural networks is their ability to perform on-line learning, only a small number of neuromorphic hardware devices implement this feature on-chip. In this work, we use a reconfigurable on-line learning spiking (ROLLS) neuromorphic processor chip to build a neuronal architecture for sequence learning. The proposed neuronal architecture uses the attractor properties of winner-takes-all (WTA) dynamics to cope with mismatch and noise in the ROLLS analog computing elements, and it uses the chip's on-chip plasticity features to store sequences of states. We demonstrate, in a proof-of-concept feasibility study, how this architecture can store, replay, and update sequences of states induced by external inputs. Controlled by the attractor dynamics and an explicit destabilizing signal, the items in a sequence can last for varying amounts of time, so reliable sequence learning and replay can be robustly implemented in a real sensorimotor system.
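The mechanism this abstract describes (WTA attractor states holding the current item, with plasticity storing transitions between items) can be illustrated with a rate-based sketch. The one-shot rule, the readout, and all names and constants below are simplifying assumptions, not the ROLLS chip implementation:

```python
import numpy as np

# Rate-based sketch of sequence storage in a WTA network. All names
# and constants are illustrative; this is not the ROLLS chip code.
n_items = 4                       # one WTA group per sequence item
W = np.zeros((n_items, n_items))  # plastic transition weights

def one_shot_hebbian(W, prev_item, next_item, eta=1.0):
    """Store a transition by potentiating the synapse from the
    previously active group onto the newly activated one."""
    W[next_item, prev_item] += eta
    return W

# Teaching phase: external input induces the sequence 0 -> 2 -> 1 -> 3.
sequence = [0, 2, 1, 3]
for prev, nxt in zip(sequence[:-1], sequence[1:]):
    W = one_shot_hebbian(W, prev, nxt)

# Replay phase: a destabilizing signal releases the current attractor;
# the learned weights then select the next winner.
state = sequence[0]
replay = [state]
for _ in range(len(sequence) - 1):
    state = int(np.argmax(W[:, state]))  # winner-takes-all readout
    replay.append(state)

print(replay)  # -> [0, 2, 1, 3]
```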
... Using affordances, however, is not without complexity. Further work should focus on how objects' action potentials are perceived and modeled within DNFs, similar to what is presented in [115]. DNFs can be used to attribute affordances to objects by modeling a neuronal pool of properties that must all be present for a given affordance to be turned on. ...
Article
People exhibit a robust ability to understand the actions of others around them. In this work, we identify two biologically inspired mechanisms that we hypothesize to be central to action understanding. The first module is a contextual predictor of the observed action, given the goal-directed movement towards objects and the actions that are allowed to be performed on the object. The second module is a kinematic trajectory parser that validates the previous prediction against a set of learned templates. We model both mechanisms, link them to the environment using the cognitive framework of Dynamic Field Theory, and present our first steps toward integrating the aforementioned modules into a consistent framework for action understanding. The two modules and the combined architecture as a whole are experimentally validated using a recording of an actor performing a series of intentional actions, testing the ability of the architecture to understand context and parse actions dynamically. Our initial qualitative results show that action understanding benefits from the combination of the two modules, while either module alone would be insufficient to resolve ambiguity in the perceived actions.
... The intention node's activation, $v_i(t)$ (Eq. 10a), follows the neural field attractor dynamics ($F$) with two inhibitory terms: one from the CoS node ($v_s(t)$) and one from the CoD node ($v_d(t)$). $I_i(t)$ is an external (motivational) input that comes from other EBs and context ("precondition") nodes of the overall architecture for behavioral organisation [79]. ...
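As a rough numerical reading of this dynamics (Eq. 10a itself is not reproduced in the excerpt), one Euler step of such a node might look as follows; the sigmoid gain and all coupling constants are assumptions:

```python
import numpy as np

def sigmoid(x, beta=4.0):
    """Sigmoidal output nonlinearity typical of DNF node dynamics."""
    return 1.0 / (1.0 + np.exp(-beta * x))

def intention_node_step(v_i, v_s, v_d, I_i, dt=0.01, tau=0.1,
                        h=-1.0, c_exc=2.0, c_s=4.0, c_d=4.0):
    """One Euler step of a self-excitatory intention node that is
    inhibited by the CoS node (v_s) and the CoD node (v_d).
    All gain constants here are illustrative assumptions."""
    dv = (-v_i + h + c_exc * sigmoid(v_i) + I_i
          - c_s * sigmoid(v_s) - c_d * sigmoid(v_d)) / tau
    return v_i + dt * dv

# With motivational input present and both inhibitory nodes off, the
# node converges to a self-sustained "on" attractor state.
v = -1.0
for _ in range(1000):
    v = intention_node_step(v, v_s=-1.0, v_d=-1.0, I_i=2.0)
print(round(v, 2))
```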
Article
Storing and reproducing temporal intervals is an important component of perception, action generation, and learning. How temporal intervals can be represented in neuronal networks is thus an important research question, both in the study of biological organisms and in artificial neuromorphic systems. Here, we introduce a neural-dynamic computing architecture for learning the temporal durations of actions. The architecture uses a Dynamic Neural Field (DNF) representation of the elapsed time and a memory trace dynamics to store the experienced action duration. Interconnected dynamical nodes signal the beginning of an action, its successful accomplishment, or failure, and activate the formation of the memory trace that corresponds to the action's duration. The accumulated memory trace influences the competition between the dynamical nodes in such a way that the failure node gains a competitive advantage earlier if the stored duration is shorter. The model uses neurally-based DNF dynamics and is a process model of how temporal durations may be stored in neural systems, both biological and artificial. The focus of this paper is on the mechanism to store and use duration in artificial neuronal systems. The model is validated in closed-loop experiments with a simulated robot.
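A minimal sketch of the memory-trace idea, assuming a simple build-and-decay dynamics over a discretized elapsed-time axis (the parameters and the Gaussian elapsed-time peak below are illustrative assumptions, not the paper's DNF equations):

```python
import numpy as np

n = 100                    # discretized elapsed-time axis
dt = 0.01
tau_build, tau_decay = 0.5, 50.0

def update_trace(u_mem, active):
    """Build the memory trace where the elapsed-time representation is
    active; let it decay slowly everywhere else."""
    build = active * (1.0 - u_mem) / tau_build
    decay = (1.0 - active) * u_mem / tau_decay
    return u_mem + dt * (build - decay)

# A localized peak moves along the axis while the action unfolds; when
# the CoS signals success, the trace marks the experienced duration.
u_mem = np.zeros(n)
x = np.arange(n)
for step in range(300):            # action lasts 300 steps
    peak = np.exp(-0.5 * ((x - step / 3.0) / 3.0) ** 2)
    u_mem = update_trace(u_mem, peak)

print(int(np.argmax(u_mem)))       # trace is strongest near the final duration
```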
... While the DNFs are relevant for intra-behavior dynamics, such as the selection of appropriate perceptual inputs for a given behavior, the nodes play a role at the level of inter-behavior dynamics (i.e., switching between behaviors). In previous work, we have shown how EBs may be chained according to rules of behavioral organization [15,16], serial order [5,6,18], or the value function of a goal-directed representation [12]. ...
Article
In order to proceed along an action sequence, an autonomous agent has to recognize that the intended final condition of the previous action has been achieved. In previous work, we have shown how a sequence of actions can be generated by an embodied agent using a neural-dynamic architecture for behavioral organization, in which each action has an intention and a condition of satisfaction. These components are represented by dynamic neural fields and are coupled to the motors and sensors of the robotic agent. Here, we demonstrate how the mappings between intended actions and their resulting conditions may be learned, rather than pre-wired. We use reward-gated associative learning, in which, over many instances of externally validated goal achievement, the conditions that are expected to accompany goal achievement are learned. After learning, the external reward is not needed to recognize that the expected outcome has been achieved. This method was implemented using dynamic neural fields and tested on a real-world E-Puck mobile robot and a simulated NAO humanoid robot.
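The reward-gated rule can be sketched as a Hebbian update that is active only while the external reward signal is present; the unit indices, dimensions, and learning rate below are illustrative assumptions:

```python
import numpy as np

def reward_gated_hebbian(W, pre, post, reward, eta=0.05):
    """Associate pre- and postsynaptic activity only when the external
    reward (goal-validation) signal is present."""
    return W + eta * reward * np.outer(post, pre)

# Hypothetical example: intention unit 3 repeatedly co-occurs with
# perceived outcome unit 7 on externally rewarded trials.
intention = np.zeros(10); intention[3] = 1.0
outcome = np.zeros(20);  outcome[7] = 1.0
W = np.zeros((20, 10))
for _ in range(50):                       # many validated instances
    W = reward_gated_hebbian(W, intention, outcome, reward=1.0)

# After learning, the expected outcome is predicted from the intention
# alone, so the external reward is no longer needed for recognition.
predicted = W @ intention
print(int(np.argmax(predicted)))          # -> 7
```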
... The networks in Fig. 4 allow the system to perform reaching movements toward visually perceived targets and update the mapping in order to generate precise arm movements autonomously. Switching between these two regimes is done manually here, but could be organised in a hierarchical system for behavioral organisation [28]. ...
... The condition-of-failure node detects when the hypothesized action is not perceived and thus is withdrawn. The proof-of-concept architecture presented here can be extended to encompass a larger action repertoire, based on our work on behavioral organization and hierarchical serial order [9,23,24]. Other approaches to action parsing often lack the capability to autonomously detect and represent critical events from the sensory flow. For instance, Lee and Demiris use low-level action detectors to analyze movements of the hand relative to the object, as well as the presence of objects and the distance between the hand and the object [25]. ...
Article
Parsing of action sequences is the process of segmenting observed behavior into individual actions. In robotics, this process is critical for imitation learning from observation and for representing an observed behavior in a form that may be communicated to a human. In this paper, we develop a model for action parsing based on our understanding of the principles of grounded cognitive processes, such as perceptual decision making, behavioral organization, and memory formation. We present a neural-dynamic architecture in which action sequences are parsed using a mathematical and conceptual framework for embodied cognition, the Dynamic Field Theory. In this framework, we introduce a novel mechanism that allows us to detect and memorize actions that are extended in time and parametrized by the target object of an action. The core properties of the architecture are demonstrated in a set of simple, proof-of-concept experiments.
... In previous work, we have shown how EBs may be chained according to rules of behavioral organization [25,27], serial order [29,7,6], or the value function of a goal-directed representation [17]. Multiple EBs can be composed into chains [28], where behaviors execute one after another, in parallel, and/or in response to sensory information [27]. ...
Conference Paper
We present a simulated model of a mobile KUKA youBot that uses Dynamic Field Theory for its underlying perceptual and motor control systems while learning behavioral sequences through reinforcement learning. Although dynamic neural fields have previously been used for robust control in robotics, high-level behavior has generally been pre-programmed by hand. In the present work, we extend a recent framework for integrating reinforcement learning and dynamic neural fields by using the principle of shaping to reduce the search space of the learning agent.
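Shaping here means staging the reward criterion so that early learning is rewarded for easier subtasks. A generic sketch of that idea follows; the stages and names are assumptions, not the paper's exact scheme:

```python
# Generic sketch of shaping as a staged reward criterion; the stages
# and names are illustrative, not the paper's exact scheme.
def make_stage_reward(stage):
    """Return a reward function that rewards progress only up to the
    current stage of the shaped task."""
    def reward(progress):
        # progress: 0 = none, 1 = reached object, 2 = grasped, 3 = delivered
        return 1.0 if progress >= stage else 0.0
    return reward

# Train against easier criteria first, raising the bar once the agent
# succeeds reliably, which shrinks the effective search space.
for stage in (1, 2, 3):
    reward_fn = make_stage_reward(stage)
    # ... run RL episodes with reward_fn until the success rate is high ...
```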
Chapter
The text below is an excerpt from the third chapter of a thesis. It is intended as an introductory guide for interested readers to dynamic field theory and the Amari equation. The text also includes some information on the basics of neural modelling and on very common neural models such as Hodgkin-Huxley. This text is not a published document, but the thesis can be cited.
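For reference, the Amari field equation that the chapter introduces is commonly written as follows (notation varies slightly across texts):

```latex
\tau \,\dot{u}(x,t) = -u(x,t) + h + S(x,t)
  + \int w(x - x') \, f\bigl(u(x',t)\bigr) \, dx'
```

Here $u(x,t)$ is the field activation, $h < 0$ the resting level, $S(x,t)$ the external input, $w$ the lateral interaction kernel (typically local excitation with surround inhibition), and $f$ a sigmoidal output nonlinearity.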