Neural Network Architecture for Control

Allon Guez, James L. Eilbert, and Moshe Kam
ABSTRACT: Two important computational features of neural networks are (1) associative storage and retrieval of knowledge and (2) uniform rate of convergence of network dynamics, independent of network dimension. This paper indicates how these properties can be used for adaptive control through the use of neural network computation algorithms and outlines resulting computational advantages. The neuromorphic control approach is compared to model reference adaptive control on a specific example. The utilization of neural networks for adaptive control offers definite speed advantages over traditional approaches for very-large-scale systems.
Introduction

A computing architecture for adaptive control based on computational features of nonlinear neural networks is proposed. These features are the associative storage and retrieval of knowledge, and the uniform rate of convergence of the network dynamics, independent of network dimension. The benefits expected from this new architecture are: (1) fast adaptation rate, (2) simpler controller structure, and (3) adaptation over both discrete- and continuous-parameter domains.

The following discussion gives an introduction to neural networks for adaptive control. The subsequent section describes the proposed neuromorphic architecture, and the next section presents a solution to a simple problem using both the neuromorphic architecture and model reference adaptive control (MRAC) [1]. The final section compares the properties of neuromorphic and mainstream controllers.
Computational Features of Neural Networks

Interest in the computational capabilities of neural networks has been spurred to new heights by the increasing availability of fast parallel architectures, including very-large-scale-integrated, electro-optical, and dedicated architectures. The most popular computational features of current neural network models are associative storage and retrieval [2]-[7] and signal regularity extraction [8]. An important, but currently underutilized, computational feature of neural networks has been demonstrated in simulations: unlike other large-scale dynamic systems, the rate of convergence toward a steady state of neural networks is essentially independent of the number of neurons in the network [5].

The future of neurocomputing depends on its appeal to system analysts and designers. Before neuromorphic systems can interest practical end users, they must have tools and algorithms for steady-state topology assignment. Some initial approaches to solving this problem have been incorporated into learning [2], [8], [9] and design [10], [11] algorithms. At present, these algorithms can only place equilibrium states at the desired positions. However, the basins of attraction around these equilibrium points (which are needed for completion of the design process) cannot be controlled, nor can one control the spurious equilibrium points that may appear in other parts of the state space.

(An early version of this paper was presented at the 1987 IEEE International Conference on Neural Networks, San Diego, California, June 21-24, 1987. Allon Guez and Moshe Kam are with the Department of Electrical and Computer Engineering, and James L. Eilbert is with the Department of Mathematics and Computer Science, Drexel University, Philadelphia, PA 19104.)
Neural networks are large-scale systems comprising simple nonlinear processors (neurons). Each neuron is characterized by a state, a list of weighted inputs from other neurons, and an equation governing the neuron's dynamic behavior. Two popular models that can serve as the neural estimator in the controller scheme are presented here.

(1) The asynchronous binary neural network (Hopfield's associative memory) [3] comprises $N$ neurons, each characterized by the parameters $u_i$, the (binary) activity state of the $i$th neuron; $w_{ij}$, the weight of the connection strength from the $j$th neuron to the $i$th; and $t_i$, the threshold of the $i$th neuron. The binary state of the neuron is either $+1$ or $-1$. The dynamics of the network are such that the neuron reassesses its state according to the rule

$$u_i \leftarrow \operatorname{sgn}\Big[\sum_{j} w_{ij}\, u_j - t_i\Big] \qquad (1)$$

where $\operatorname{sgn}[x]$ is the algebraic sign of $x$. The network's state at each instant of time is described by the $N$ binary elements $u_i$. For a real, symmetric matrix $\{w_{ij}\}$, a Lyapunov function exists, which guarantees that all the attractors in the system of difference equation (1) are fixed points. These fixed points serve to store desirable patterns (such as commands for a controller). When the network is started from an initial pattern that is close, say, in terms of Hamming distance, to a certain fixed point, the system state rapidly converges to that fixed point. An environmental pattern that falls into the region of attraction associated with a fixed point is said to recall the pattern stored by that fixed point.

As this explanation indicates, the information that a neural network stores and retrieves is in its stable states. Thus, this neural architecture can serve as a content-addressable memory. Information is placed in this memory through design or adaptation. These processes assign values to the dynamic parameters that make certain patterns local minima of the network, and endow them with a suitable region of attraction.
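To make the update rule concrete, the following minimal sketch (with a hypothetical four-neuron network and a single stored pattern; an illustration, not part of the original design) simulates the asynchronous dynamics of Eq. (1):

```python
import numpy as np

def hopfield_sweep(u, W, t):
    """One asynchronous sweep of Eq. (1): u_i <- sgn(sum_j W[i, j]*u[j] - t[i])."""
    for i in np.random.permutation(len(u)):   # neurons update one at a time
        u[i] = 1 if W[i] @ u - t[i] >= 0 else -1
    return u

# Hypothetical example: store the pattern p with Hebbian weights (zero diagonal)
p = np.array([1, -1, 1, -1])
W = np.outer(p, p) - np.eye(4)                # real, symmetric weight matrix
t = np.zeros(4)                               # thresholds
u = np.array([1, -1, -1, -1])                 # start at Hamming distance 1 from p
for _ in range(5):
    u = hopfield_sweep(u, W, t)
print(u)                                      # recalls the stored pattern p
```

Starting the network one bit away from the stored pattern, the sweep drives the state into the corresponding fixed point, which is exactly the content-addressable recall described above.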
(2) The Cohen-Grossberg family of models [12] is a generalization of the Hopfield model. In these models, the neuron's state is a real number rather than a binary number. A neuron's state changes continuously according to a set of ordinary differential equations. For the $i$th neuron, the state $u_i$ is governed by the differential equation

$$\frac{du_i}{dt} = a_i(u_i)\Big[\, b_i(u_i) - \sum_{j} w_{ij}\, d_j(u_j) \Big] \qquad (2)$$

where $a_i(u_i)$ is a shunting term, $b_i(u_i)$ the self-excitatory term, and, for each $j$, the term $w_{ij} d_j(u_j)$ is a weighted inhibitory input from neuron $j$ to neuron $i$. Stability properties of the model have been studied in [12]. As in the previous model, the main application of the network is as an associative memory with information stored in the many local minima of the system.
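A similarly minimal sketch integrates Eq. (2) with a forward-Euler step; the shunting, self-excitatory, and inhibitory functions below are illustrative choices, as the general model leaves them unspecified:

```python
import numpy as np

def cg_step(u, W, dt=0.01):
    """Forward-Euler step of Eq. (2):
    du_i/dt = a_i(u_i)*[b_i(u_i) - sum_j W[i, j]*d_j(u_j)].
    Illustrative choices: a_i(u) = 1, b_i(u) = -u + 2*tanh(u), d_j(u) = tanh(u)."""
    a = np.ones_like(u)                       # shunting term
    b = -u + 2.0 * np.tanh(u)                 # self-excitatory term
    d = np.tanh(u)                            # inhibitory signal function
    return u + dt * a * (b - W @ d)

W = 0.1 * (np.ones((3, 3)) - np.eye(3))       # weak symmetric inhibition
u = np.array([0.5, -0.2, 0.1])
for _ in range(2000):
    u = cg_step(u, W)
print(u)                                      # the state settles into a local minimum
```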
Adaptive Control Problem

Adaptive control techniques are applied when dealing with classes of unknown or partially known systems. The criterion of successful control is a performance index consisting of the system state, input, and output variables. The performance index may only be concerned with the qualitative behavior of the system, e.g., stability and tracking characteristics of the closed-loop system.

Adaptation mechanisms are devised to generate the control input that satisfies this performance index. In conjunction with this underlying methodology, several other design considerations must be incorporated, such as the need for parameter identification, in order to improve the design of an adaptive controller. For example, local parameter optimization is employed in MRAC, usually via gradient methods, such as the steepest descent method or the Newton-Raphson method. The stability analysis of the closed-loop system is usually based on the Lyapunov second method, as well as on Popov's hyperstability method.
Proposed Architecture for Control

Figure 1 illustrates the proposed neuromorphic adaptation architecture for control. Sensors make the status of the plant and important parameters available to the controller and the neural estimator. Based on these inputs, the neural estimator changes its state, moving in the state space of its variables. The state variables of the estimator correspond exactly to the parameters of the controller. Thus, the stable-state topology of this space can be designed so that the local minima correspond to optimal control laws for the parameters of the controller. When the estimator status is transmitted to the controller, the controller modifies its parameters to match the recommended parameters of the control law, and then generates and transmits a command to be executed by the plant.

This preliminary architecture exploits neural net features in the parameter-identification process only. Architectures are also being studied that employ parallel and adaptive mechanisms for the entire control subsystem. The principal advantage of the proposed architecture is that it can focus the controller on the correct parameters very quickly. The stable-state topology of the neural estimator serves as a model of the important states of the environment. The ability of the estimator to categorize directly sensed environmental input allows it to select the most appropriate control law, while its uniform convergence property guarantees fast adaptation of the optimal control scheme.

Notice also that the state space of the neural estimator could be discrete (e.g., [3]) or continuous, thereby accounting for both discrete and continuous control parameter domains within which adaptation takes place. Another important aspect of this proposed architecture is that, based on simulation results [5], it is expected that the convergence rate of the estimator toward the optimal control parameters will be independent of the number of parameters that have to be adapted. This allows multiparameter adaptive control schemes to be realized with similar architectures and convergence rates.
Example

In order to explain the approach, a second-order example (e.g., a single-degree-of-freedom manipulator) is presented. The proposed solution will be demonstrated using a two-neuron neural network, and then the MRAC approach will be sketched (following [13]). This simple second-order example may be solved more easily using traditional approaches; however, for high-dimensional systems, the neuromorphic approach should prove advantageous when compared with MRAC (see, for example, [14]).

[Fig. 1. General neuromorphic architecture for control.]

Let the linearized dynamics of the single-degree-of-freedom manipulator be described by the second-order differential equation

$$J\ddot q + B\dot q + Fq = T(t) \qquad (3)$$

where $q$, $\dot q$, and $\ddot q$ are the joint angle, velocity, and angular acceleration, respectively; $T(t)$ is the applied torque; $B$ the viscous friction; $F$ the compliance coefficient; and $J$ the unknown moment of inertia, which includes the known motor and link inertias and the unknown payload inertia.
Let the proportional-derivative controller be described by the two coefficients $k_1$ and $k_2$:

$$T(t) = -k_1 q - k_2 \dot q \qquad (4)$$

The closed-loop dynamics are represented by a (nominally) constant-coefficient second-order equation

$$\ddot q + c_2 \dot q + c_1 q = 0, \qquad c_2 = (B + k_2)/J, \quad c_1 = (F + k_1)/J \qquad (5)$$
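The closed-loop behavior of Eqs. (3)-(5) is easy to reproduce numerically. The sketch below (parameter values taken from the scenarios discussed later; the Euler integration is an illustrative choice, not the authors' scheme) simulates the manipulator under the PD law:

```python
import numpy as np

def simulate(J, B, F, k1, k2, q0=10.0, qd0=0.0, dt=1e-3, t_end=10.0):
    """Euler simulation of Eq. (3), J*q'' + B*q' + F*q = T(t),
    with the PD law of Eq. (4), T(t) = -k1*q - k2*q'."""
    q, qd = q0, qd0
    for _ in range(int(t_end / dt)):
        torque = -k1 * q - k2 * qd               # Eq. (4)
        qdd = (torque - B * qd - F * q) / J      # Eq. (3)
        q, qd = q + dt * qd, qd + dt * qdd
    return q, qd

# Well-damped plant versus the degraded plant of the later example
print(simulate(J=1.0, B=6.0, F=8.0, k1=0.0, k2=0.0))   # fast convergence to the origin
print(simulate(J=1.0, B=0.3, F=0.4, k1=0.0, k2=0.0))   # slow, oscillatory convergence
```

With $B = 6$, $F = 8$ the closed-loop poles are at $-2$ and $-4$; with $B = 0.3$, $F = 0.4$ they move to roughly $-0.15 \pm j0.61$, which produces the long spiral shown later in Fig. 3.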
The parameters $k_1$ and $k_2$ of the controller are determined by a two-element neural network of the Cohen-Grossberg type, governed by the two differential equations

$$\dot k_1 = -a_1 k_1 + T_{11}\tan^{-1}(k_1) + T_{12}\tan^{-1}(k_2) + I_1$$
$$\dot k_2 = -a_2 k_2 + T_{21}\tan^{-1}(k_1) + T_{22}\tan^{-1}(k_2) + I_2 \qquad (6)$$

where $a_1$, $a_2$, $T_{11}$, $T_{12}$, $T_{21}$, and $T_{22}$ are constants, designed to guarantee a set of stable points in the $k_1, k_2$ state space, and $I_1$, $I_2$ are external impulsive inputs used to switch the neural estimator to the appropriate basin of attraction, thereby selecting the stable point into which the network converges.
Equation (6) describes the dynamics of the neural estimator, whose state space is isomorphic to the parameter space of the controller $(k_1, k_2)$. This equation is obtained by following the design algorithm in [11], where it is shown that one can select $a_1$, $a_2$, $T_{11}$, $T_{12}$, and $T_{22}$ such that the steady-state equilibria of the neural estimator will be identical to the optimal values for $k_1$ and $k_2$. The inputs $I_1$ and $I_2$ are functions of the manipulator states $q$, $\dot q$, which are assumed measurable (see Fig. 1), such that an estimate of the actual (real-time) payload inertia is available to the estimator and places it within the appropriate basin of attraction. The "neural" implementation of the estimator inputs $I_1$, $I_2$ is currently under study and will be reported elsewhere.
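With the constants used in the example below ($a_1 = a_2 = 1$, $T_{11} = T_{22} = 4/\pi$, $T_{12} = T_{21} = 0$), Eq. (6) can be integrated directly. The following sketch is an illustration only; the impulsive inputs are modeled simply as the initial condition they have already produced:

```python
import numpy as np

def estimator_step(k, a, T, I, dt=1e-2):
    """Forward-Euler step of Eq. (6):
    dk_i/dt = -a_i*k_i + sum_j T[i, j]*arctan(k_j) + I_i."""
    return k + dt * (-a * k + T @ np.arctan(k) + I)

a = np.array([1.0, 1.0])
T = (4.0 / np.pi) * np.eye(2)     # T11 = T22 = 4/pi, T12 = T21 = 0
I = np.zeros(2)                   # impulses have already acted
k = np.array([0.1, 2.0])          # initial condition set by the impulsive inputs
for _ in range(5000):
    k = estimator_step(k, a, T, I)
print(k)                          # converges to the stable point near (1, 1)
```

With these constants each component obeys $\dot k = -k + (4/\pi)\tan^{-1}(k)$, whose stable equilibria are at $k = \pm 1$, so the four combinations $(\pm 1, \pm 1)$ are exactly the local minima cited in the example.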
When MRAC is applied to this second-order example, a second-order reference model is defined as shown in Fig. 2.

[Fig. 2. General control block diagram for model reference adaptive control.]

The difference between the output of the actual system and the output of the reference model
is an error signal $e$, which is used to adjust the coefficients of the feedback gain. In most MRAC algorithms, the adjustment attempts to minimize the squared error by gradient steepest descent,

$$\dot k = -c\, e\, \frac{\partial e}{\partial k} \qquad (7)$$

where $k$ is the parameter adjustment vector, $c$ an adaptation constant, $e$ the error signal, and $\partial e/\partial k$ the partial derivative of the error with respect to the parameters.
In practice, the error term can be the weighted sum of the error $e$, the error rate $\dot e$, and the error acceleration $\ddot e$. With the second-order example, a second-order differential equation can be used to determine the partial derivative term for each of the components of the parameter adjustment vector. Simulation results are not presented here for MRAC, but this explanation gives some understanding of the complexity involved [13].
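Equation (7) is the classical gradient (MIT-rule) adjustment. A minimal runnable illustration (a sketch with a deliberately trivial static plant, not the second-order example itself) shows the mechanism:

```python
import numpy as np

# Eq. (7), kdot = -c*e*(de/dk), applied to adapting one feedforward gain theta
# so a static plant with unknown gain kp matches a reference model with gain km.
kp, km = 2.0, 1.0                     # unknown plant gain, reference-model gain
theta, c, dt = 0.0, 0.5, 1e-2
for step in range(5000):
    u = np.sin(0.05 * step)           # persistently exciting input
    e = theta * kp * u - km * u       # error between plant and reference model
    de_dtheta = kp * u                # error sensitivity (assumed available here)
    theta -= dt * c * e * de_dtheta   # steepest-descent update, Eq. (7)
print(theta)                          # approaches km / kp = 0.5
```

Even in this scalar case the sensitivity term involves the unknown plant gain, which in practice must be replaced by a surrogate of known sign; for the second-order example, each component of the adjustment vector requires its own sensitivity equation, which is the complexity referred to above.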
[Fig. 3. Convergence without a neural estimator.]

The combined structure of Fig. 1 has been examined under a host of different conditions. In the example in Fig. 3, the convergence of $q$ and $\dot q$ to the origin (i.e., zero position and velocity) is considered for two scenarios that do not involve the neural estimator:
(1) Convergence to the origin with high values of the viscous friction and the compliance coefficient ($B = 6$, $F = 8$, $J = 1$, $k_1 = 0$, $k_2 = 0$).

(2) Convergence to the origin starting with the same values of $B$ and $F$; however, after four time units, the parameters drop in their values to $B = 0.3$ and $F = 0.4$. As a result, the convergence to the origin is slowed and the trajectory in the $q$-$\dot q$ state space follows a longer spiral.
The neural estimator is then invoked, which is considered in the region $|k_2| < 2$. Using the parameters $a_1 = a_2 = 1$, $T_{11} = T_{22} = 4/\pi$, and $T_{12} = T_{21} = 0$, the neural estimator has local minima at $(-1, -1)$, $(-1, 1)$, $(1, -1)$, and $(1, 1)$. It is easy to verify that only $(1, 1)$ will yield a stable closed-loop system. The system is designed such that a drop in $F$ and $B$, which yields unacceptable "sluggish" convergence, activates the neural estimator. The initial conditions on $k_1$ and $k_2$ are determined by impulsive inputs $I_1$ and $I_2$, which guarantee that the neural estimator operates in the basin of attraction of $(1, 1)$.
In Fig. 4, trajectories are compared when there is a switch from high to low values of $B$ and $F$ under two conditions: with and without aid from the neural estimator. Once the sudden fall in $B$ and $F$ had been sensed, the neural estimator was activated with initial conditions $(0.1, 2.0)$, which are in the basin of attraction of $(1, 1)$. The improvement due to the neural estimator is evident.
[Fig. 4. Impact of the neural estimator (NE) on convergence trajectories.]

[Fig. 5. Impact of the neural estimator (NE) on position.]
The time-domain comparison (Fig. 5) illustrates the accelerated convergence that the neural estimator helps to produce. Good trajectories are obtained even when the estimates of $B$, $F$, and $J$ are poor due to deficiencies in modeling, and in accuracy of measurements and computations. Thus, good convergence is expected if the basin of attraction of the desirable operating point in the $k_1, k_2$ state space is large enough.
Discussion

The equations for both the neural network and the MRAC are nonlinear coupled sets of differential equations. These properties account for the adaptability of the closed-loop controllers. However, using the second-order example, the following distinctions can be made:

- The MRAC utilizes an explicit reference model, via the error signal $e$, to be supplied by the designer, whereas the neurocontroller implicitly employs as many "reference models" as the number of stored stable equilibria. This allows accommodation of a much larger a priori knowledge base.

- The MRAC algorithm must be programmed for each new reference model, while the neurocontroller can be taught by examples [3], [4], [8]. In other words, no reprogramming is necessary.
- The stability of MRAC is local, is highly dependent on the unknown plant dynamics, and exists only for restricted parameter domain sets. The stability of the neurocontroller is guaranteed when the neural network architecture can be designed to have sufficiently large basins of attraction about its equilibrium set. Moreover, due to its fast convergence, it may be less sensitive to the plant dynamics.

- With increasing dimension of the unknown parameter vector, the MRAC equations become very complex, possibly slowing the convergence and demanding greater processing power. For the neurocontroller, simulations indicate that the convergence rate is independent of the network's dimension [5].
Despite the potential benefits listed here, neuromorphic architectures are yet to be thoroughly investigated, since most of their properties are only partially and qualitatively known.
References

[1] Y. D. Landau, Adaptive Control: The Model Reference Approach, New York: Marcel Dekker, 1979.
[2] J. A. Anderson, "Cognitive and Psychological Computation with Neural Models," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, no. 5, pp. 799-814, 1983.
[3] J. J. Hopfield, "Neural Networks and Physical Systems with Emergent Collective Computational Abilities," Proc. Nat. Acad. Sci. U.S., vol. 79, pp. 2554-2558, 1982.
[4] J. J. Hopfield, "Neurons with Graded Response Have Collective Computational Properties Like Those of Two-State Neurons," Proc. Nat. Acad. Sci. U.S., vol. 81, pp. 3088-3092, 1984.
[5] J. J. Hopfield and D. W. Tank, "'Neural' Computation of Decisions in Optimization Problems," Biol. Cybern., vol. 52, pp. 141-152, 1985.
[6] T. Kohonen, "Self-Organized Formation of Topologically Correct Feature Maps," Biol. Cybern., vol. 43, pp. 59-69, 1982.
[7] J. L. McClelland and D. E. Rumelhart, Parallel Distributed Processing, vol. 2, Cambridge, MA: MIT Press, 1986.
[8] G. A. Carpenter and S. Grossberg, "A Massively Parallel Architecture for a Self-Organizing Neural Pattern Recognition Machine," Computer Vision, Graphics, and Image Processing, vol. 37, pp. 54-115, 1987.
[9] D. H. Ackley, G. E. Hinton, and T. J. Sejnowski, "A Learning Algorithm for Boltzmann Machines," Cognitive Sci., vol. 9, no. 1, pp. 147-169, 1985.
[10] L. N. Cooper, F. Liberman, and E. Oja, "A Theory for the Acquisition and Loss of Neuron Specificity in the Visual Cortex," Biol. Cybern., vol. 33, pp. 9-28, 1979.
[11] A. Guez, V. Protopopescu, and J. Barhen, "On the Stability, Storage Capacity, and Design of Nonlinear Continuous Neural Networks," IEEE Trans. Syst., Man, Cybern., Jan. 1988, to appear.
[12] M. Cohen and S. Grossberg, "Absolute Stability of Global Pattern Formation and Parallel Memory Storage by Competitive Neural Networks," IEEE Trans. Syst., Man, Cybern., vol. SMC-13, no. 5, pp. 815-826, 1983.
[13] K. Fu, R. Gonzalez, and C. Lee, Robotics: Control, Sensing, Vision, and Intelligence, New York: McGraw-Hill, 1987.
[14] M. I. Mufti, "Model Reference Adaptive Control for Large Structural Systems," J. Guid., Contr., Dyn., vol. 10, no. 5, pp. 507-509, 1987.
Allon Guez was born in Tunisia in 1952. He received the B.S.E.E. degree from the Technion, Israel Institute of Technology, in 1978, and the M.Sc. and Ph.D. degrees in electrical engineering from the University of Florida in Gainesville in 1980 and 1982, respectively. He worked with the Israeli Defense Forces from 1970 to 1975, with System Dynamics, Inc., from 1980 through 1983, and with Alpha Dynamics, Inc., since 1983. In 1984, he joined the faculty of the Electrical and Computer Engineering Department at Drexel University. He has done research and development and design in control systems, neurocomputing, and optimization.

James L. Eilbert was born in Pittsburgh, Pennsylvania, in 1950. He holds a B.S. (1973) in physics from the State University of New York at Stony Brook, an M.S. (1975) in applied mathematics from the Courant Institute at New York University, and a Ph.D. (1980) in biomathematics from North Carolina State University. Currently, he is an Assistant Professor at Drexel University in Philadelphia, Pennsylvania. He has done work in hierarchical neural models and in modeling attentional mechanisms in neural networks. His other areas of research include model-based vision and machine learning.

Moshe Kam received the B.Sc. degree from Tel Aviv University in 1977 and the M.S. and Ph.D. degrees from Drexel University in 1985 and 1987, respectively. From 1977 to 1983, he was with the Israeli Defense Forces. Since September 1987, he has been an Assistant Professor in the Department of Electrical and Computer Engineering at Drexel University. His research interests are in the areas of large-scale systems, control theory, and information theory. He organized the session on "Relations Between Information Theory and Control" at the 1987 American Control Conference, and the IEEE Educational Seminar on "Neural Networks and Their Applications" (Philadelphia, December 1987). Dr. Kam serves as the Chairman of the joint Circuits and Systems and Control Systems Chapter of the Philadelphia IEEE Section.