Article

Theory and Practice of Recursive Identification

... The key tool is the definition of a virtual off-line estimator, which has a very accurate characterization. Although we cannot compute it in practice, its on-line approximation, obtained along the lines of [11], [12], is computable, leading to an adaptive input design method. This data-driven approach is presented first for open-loop systems. ...
... Thus the virtual estimator θ_n(*) is optimal from the perspective of input design. The asymptotic estimation problem in the spirit of [12] is defined by the algebraic equation ...
... Following the ideas of [11], extended in [12], a computable recursive PE estimator θ_N(*) can be constructed. The viability of the proposed approach for data-driven input design has been demonstrated, with all technical details included, in [9] for the case of ARMAX systems excited with inputs u generated by a FIR filter. ...
Article
Full-text available
We show that a class of optimal input design problems have only discrete spectral measures as solutions. If we fix any finite set of possible frequencies then a randomized version of the resulting convex problem has a unique (sparse) solution with probability 1. We also propose a data-driven approach to optimal input design via virtual off-line estimators that coincide with the optimized PE estimator modulo a negligible error, both for open loop and closed loop systems.
... Because this paradigm approximates the gradient of the expected risk with a single sample (or a small, randomly chosen batch of samples), it alleviates the computational complexity problem when using batch optimization methods. Stochastic approximation algorithms are also referred to as recursive identification in the field of system identification [19], sequential estimation in statistics, adaptive algorithms in signal processing, and online learning in machine learning [20]. Currently, the most popular stochastic approximation algorithms in machine learning are stochastic gradient descent (SGD) and its variants, which have become the main workhorse for training NN models. ...
... It is apparent that (11) is a stochastic linear least squares problem, which is convex if θ is fixed. In this paper, we follow the approach of Ljung & Söderström [19] to solve this stochastic convex optimization problem. The scheme is actually a stochastic Newton algorithm which employs iterations of the following form ...
... Algorithm 2 (online algorithm for stochastic separable nonlinear least squares): initialize α_0, θ_0, B_0; for k = 1, 2, ...: update α_k using (17) and (18); update θ_k using (19); end for. ...
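As a rough illustration of this alternating scheme (not the authors' code; the model is assumed to take the separable form y ≈ φ(x; θ)ᵀα with linear parameters α and nonlinear parameters θ, and P below plays the role of the recursion matrix B_k), one RLS/SGD round might look like:

    import numpy as np

    def rls_update(alpha, P, phi, y, lam=1.0):
        """One recursive least squares step for the linear parameters alpha."""
        k = P @ phi / (lam + phi @ P @ phi)      # gain vector
        alpha = alpha + k * (y - phi @ alpha)    # correct alpha with the prediction error
        P = (P - np.outer(k, phi @ P)) / lam     # covariance update (lam = forgetting factor)
        return alpha, P

    def sgd_update(theta, grad_theta, lr=1e-3):
        """One stochastic gradient step for the nonlinear parameters theta."""
        return theta - lr * grad_theta

In the cited algorithm the gradient with respect to θ would be obtained by differentiating the squared prediction error through the feature map φ(x; θ); here it is simply taken as an input.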
Preprint
We propose an online learning algorithm for a class of machine learning models under a separable stochastic approximation framework. The essence of our idea lies in the observation that certain parameters in the models are easier to optimize than others. In this paper, we focus on models where some parameters have a linear nature, which is common in machine learning. In one routine of the proposed algorithm, the linear parameters are updated by the recursive least squares (RLS) algorithm, which is equivalent to a stochastic Newton method; then, based on the updated linear parameters, the nonlinear parameters are updated by the stochastic gradient method (SGD). The proposed algorithm can be understood as a stochastic approximation version of block coordinate gradient descent approach in which one part of the parameters is updated by a second-order SGD method while the other part is updated by a first-order SGD. Global convergence of the proposed online algorithm for non-convex cases is established in terms of the expected violation of a first-order optimality condition. Numerical experiments have shown that the proposed method accelerates convergence significantly and produces more robust training and test performance when compared to other popular learning algorithms. Moreover, our algorithm is less sensitive to the learning rate and outperforms the recently proposed slimTrain algorithm. The code has been uploaded to GitHub for validation.
... 5 These applications are all massive because feedback controllers need to be deployed to serve many simultaneous users, and therefore recursive identification methods that excel in low memory consumption and low computational complexity become interesting. 6 Identification of systems with specific structures, for example, obtained by physical modeling with remaining unknown physical parameters, is known as grey-box identification. 7 The sequential Monte-Carlo (SMC) methods are well suited for adaptation to grey-box identification. ...
... 31 Additional advantages as compared to SMC methods include reduced memory and computational requirements. 6 The paper is organized with the derivation of the identification algorithm in Sections 2 and 3. The tools and conditions needed for averaging analysis are introduced in Section 4. Local convergence of the algorithm is analyzed in Section 5. ...
... The recursive prediction error identification method (RPEM) proposed in the paper minimizes the expected squared error, between the measured output data and a predicted output signal obtained by simulation of a nonlinear state space model. This prediction error principle is well established, 6 with different algorithmic variants resulting from different selected models. The definition of the state space model and the gradient required for the iterative search algorithm are therefore specified next. ...
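A batch, finite-difference sketch of this prediction-error principle is given below (the cited RPEM is recursive and uses analytic gradient filters; the functions f, g and the data x0, u, y, h are user-supplied and purely illustrative):

    import numpy as np

    def simulate_euler(f, g, theta, x0, u, h):
        """Simulate dx/dt = f(x, u; theta), y = g(x; theta) with an Euler discretization."""
        x, y_hat = x0, []
        for u_k in u:
            y_hat.append(g(x, theta))
            x = x + h * f(x, u_k, theta)
        return np.array(y_hat)

    def prediction_error_step(f, g, theta, x0, u, y, h, lr=1e-2, eps=1e-6):
        """One gradient step on the mean squared prediction error (finite differences)."""
        def cost(th):
            e = y - simulate_euler(f, g, th, x0, u, h)
            return 0.5 * np.mean(e ** 2)
        grad = np.array([(cost(theta + eps * d) - cost(theta)) / eps
                         for d in np.eye(len(theta))])
        return theta - lr * grad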
Article
Full-text available
The convergence of a recursive prediction error method is analyzed. The algorithm identifies a nonlinear continuous time state space model, parameterized by one right‐hand side component of the differential equation and an output equation with a fixed differential gain, to avoid over‐parametrization. The method minimizes the criterion by simulation using an Euler discretization. A stability analysis of the associated differential equations results in conditions for (local) convergence to a minimum of the criterion function. Simulations verify the theoretical analysis and illustrate the performance in the presence of unmodeled dynamics, by identification of the nonlinear drum boiler dynamics of a power plant model.
... The simplest way to accomplish this is to perform a step response test with the system in open loop and record the control action u(t) and the ore flowrate y(t). Since the time delay Td of the process is already known, the recorded flowrate signal y(t) can be shifted backwards by Td to be synchronized with u(t), and then used to estimate the pole p through a system identification method [8]. Figure 9 shows the control action and the flowrate signals obtained from a step response test on the feeding system. ...
... The red mark "×" is the pole of the controller C(s) at s = 0, and the mark "o" is the zero of the controller at -KI/KP = -0.1331. The pink mark "×" corresponds to the three poles of the low-pass filter F(s) in (8). Notice that the root locus now passes at the desired poles si. ...
... The complete model of the ore feeding system, including the process model (6), the low-pass filter (8), and the controller (12), was implemented in a specific software [9] for simulation. Figure 13 shows the simulated flowrate for a set-point of 1,000 t/h. ...
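A small sketch of the pole-estimation step described above (illustrative only; it assumes uniformly sampled step-response data, a known delay expressed in samples, and a first-order discrete ARX fit y[k] = a·y[k-1] + b·u[k-1]):

    import numpy as np

    def estimate_first_order_pole(u, y, Ts, delay_samples):
        """Fit a first-order model to step-response data after removing the known delay."""
        y = np.asarray(y)[delay_samples:]            # shift y back by Td to synchronize with u
        u = np.asarray(u)[:len(y)]
        Phi = np.column_stack([y[:-1], u[:-1]])      # regressors y[k-1], u[k-1]
        a, b = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]
        p = -np.log(a) / Ts                          # continuous-time pole from a = exp(-p*Ts)
        K = b / (1.0 - a)                            # steady-state gain of the discrete model
        return p, K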
Article
Full-text available
Brazil is the world's leading producer and exporter of iron ore, mainly due to operations of VALE, a global mining company with several operations in the country, delivering iron, copper, manganese, and nickel ores. Those operations are located mainly in the Northern state of Pará, and in the Southeastern state of Minas Gerais. From 2014 to 2019, the company’s average yearly production of iron ore was about 342 million t/year. Most of such production was exported overseas, mainly to Asian and European countries. Mineral processing plants implement several stages of ore beneficiation, necessary to convert raw ore from the mines into ore products. Those plants comprise several processing facilities such as crushing, screening, grinding, flotation, and others, which are operated by means of a plantwide digital control system, either SCADA- or DCS-based, using many process control loops. The most important process variable in a mineral processing plant is the ore mass flowrate, measured in t/h. The transportation of ore between mineral facilities is usually done by belt conveyors, and the ore flowrate is measured by a conveyor weigher (also known as dynamic weigher or belt scale), which is a heavy instrument assembled directly in the conveyor structure [18]. The conveyor weigher has two primary sensors: piezoelectric loading cells to sense the unit weight of material on the belt, in t/m; and an incremental encoder to sense the belt moving speed, in m/s. By combining the unit load and the belt speed measurements, the weigher computes the material flowrate, in t/h, with a typical precision of ±0.5% of its full measurement range. The ore flowrate in a conveyor is not defined by the belt speed, but by the speed of the feeder that discharges the ore on the conveyor belt. Therefore, by changing the feeder (actuator) speed it is possible to control the ore flowrate using its measurements provided by the conveyor weigher (sensor). This work describes the modeling and control of ore flowrate for the ore feeding system “RF-A” of the Fábrica Iron Ore Processing Plant, in the city of Ouro Preto, Minas Gerais state, Brazil, operated by VALE. The next section introduces the industrial process and the control problem.
... We use recursive least squares (RLS) method [66], [67] to estimate the HDVs' car-following parameters for state prediction to address Problem 1, as shown in Fig. 2. The essential steps of the proposed framework are outlined as follows. ...
... In this section, we use a recursive least-squares formulation [66] to estimate the parameters of the internal car-following model residing in CAV-1's mainframe to represent the driving behavior of each of the following HDVs. To this end, we consider the constant time headway relative velocity (CTH-RV) model [67], [68] ...
... , T h as new data becomes available. Therefore, we employ the following recursive form of (42) known as the recursive least squares algorithm [66] ...
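A generic recursive least squares loop of the kind referred to above (a sketch, not the paper's equation (42); for the car-following use case the regressor φ could collect spacing and relative-velocity terms and the target the observed acceleration, but these choices are assumptions):

    import numpy as np

    def rls_stream(phis, targets, n_params, delta=100.0):
        """Run plain recursive least squares over a stream of (regressor, target) pairs."""
        theta = np.zeros(n_params)
        P = delta * np.eye(n_params)                 # large initial covariance
        for phi, target in zip(phis, targets):       # phi: 1-D numpy array of length n_params
            k = P @ phi / (1.0 + phi @ P @ phi)      # gain vector
            theta = theta + k * (target - phi @ theta)
            P = P - np.outer(k, phi @ P)
        return theta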
Preprint
Full-text available
Platoon formation with connected and automated vehicles (CAVs) in a mixed traffic environment poses significant challenges due to the presence of human-driven vehicles (HDVs) with unknown dynamics and control actions. In this paper, we develop a safety-prioritized, multi-objective control framework for creating platoons of HDVs led by a CAV. Our framework ensures indirect control of the trailing HDVs by directly controlling the leading CAV given the system constraints and initial conditions. The framework utilizes three different prediction models for estimating the HDV trajectories: (1) a naive constant speed model, (2) a nonlinear car-following model with nominal parameters, and (3) a data-driven model that estimates the driving behavior of the HDVs in real-time using a recursive least squares algorithm to predict future trajectories. To demonstrate the efficacy of the proposed framework, we conduct numerical simulations and provide the associated sensitivity, robustness, and performance analyses.
... In (18), P denotes the covariance matrix and is updated in each time step of the phasor estimation. Several methods have been introduced for updating the covariance matrix to preserve the adaptive nature of the RLS algorithm, including the forgetting factor method, covariance resetting, random walking, and hybrid algorithms [29]. While discussing the performance of these algorithms is not the intention of this paper, a hybrid algorithm for updating the covariance matrix, which is suggested in [29], is utilized as follows: ...
... On the contrary, by decreasing σ², the speed of convergence is decreased. According to [29], σ² is selected as 0.1. ...
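One possible form of such a hybrid covariance update (a sketch under assumed parameter names; the exact rule of [29] is not reproduced here) combines exponential forgetting, a small random-walk term σ²I, and a resetting safeguard:

    import numpy as np

    def hybrid_covariance_update(P, phi, lam=0.98, sigma2=0.1, trace_max=1e4):
        """Forgetting-factor RLS covariance update plus random-walk term and resetting."""
        denom = lam + phi @ P @ phi
        P = (P - np.outer(P @ phi, phi @ P) / denom) / lam   # exponential forgetting
        P = P + sigma2 * np.eye(len(P))                      # random-walk (regularizing) term
        if np.trace(P) > trace_max:                          # covariance resetting safeguard
            P = np.eye(len(P))
        return P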
Article
Full-text available
Power transformer differential protection may malfunction when discriminating between inrush and internal fault currents. To tackle this malfunction, a new two-stage algorithm based on the phase content of the current signals of the current transformers (CTs) is put forward. The proposed algorithm is designed based on the fact that the fundamental phase angle of a fault signal ideally remains constant during the fault, whereas during inrush cases the phase angle varies. Also, during an internal fault the phase angles of the CT current signals are in phase, while during an external fault they are 180° out of phase. In the first stage, the proposed algorithm calculates the fundamental phase angles of the CT current signals using sub-cycle modified recursive least squares (MRLS). Afterward, the normalized mean residue (NMR) is employed to measure the distance between the estimated phase angles of the CTs' currents. The MRLS and NMR algorithms require only a limited number of samples (10 and 5 samples, respectively) for their calculations. Performance evaluation with simulated and experimentally recorded current signals shows the ability of the proposed method to discriminate internal faults from inrush and external fault currents.
... This can be formally proved by showing how this problem is a particular instance of an LQ problem and utilizing the LQ solution (see Appendix). Substituting the control rule into the process mean estimate gives the estimate in (11), and the corresponding adjustment rule is (12). We conclude that Grubbs' extended rule minimizes the expected sum of squared deviations provided the setup error mean and variance are known. If the errors are all normally distributed, Grubbs' extended rule is the optimal solution for criterion (9). ...
... is set equal to one and Kt = 1/(t + P0) is substituted in (14), the so-called recursive least squares (Young, 1984; Ljung and Söderström, 1987) estimate of d is obtained: ...
... It was mentioned in the previous section that the Kalman filter model that yields Grubbs' extended rule (equations (11)-(12)) is optimal for the expected squared-deviation criterion, and this constitutes a simple instance of a linear quadratic control problem. The more general LQ formulation of the appendix allows one to derive optimal adjustment rules for more complex setup adjustment problems. ...
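A minimal sketch of the recursive estimate with the decreasing gain K_t = 1/(t + P0) mentioned above (illustrative; d stands for the constant setup offset being estimated and P0 acts as a prior weight on the initial guess):

    def recursive_offset_estimate(observations, P0=1.0, d0=0.0):
        """Recursively estimate a constant offset d with gain K_t = 1/(t + P0)."""
        d = d0
        for t, y in enumerate(observations, start=1):
            K = 1.0 / (t + P0)
            d = d + K * (y - d)      # innovation-weighted update
        return d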
... Zhang et al. 2015), although the problem is not new. In the field of control theory, calibrating mathematical models of physical systems has been common practice (Ljung and Söderström 1983). This approach is known as system identification and consists of tuning the simulation parameters such that the simulated behaviour is as close as possible to the real behaviour. ...
... This approach is known as system identification and consists of tuning the simulation parameters such that the simulated behaviour is as close as possible to the real behaviour. Control practitioners have calibrated models using local search techniques that search for the optimum by using a gradient-following technique (Ljung and Söderström 1983). Later, genetic algorithms have been shown to be competitive to deal with non-differentiable and non-linear search spaces (Kristinsson and Dumont 1992). ...
Thesis
Endowing robots with dexterous manipulation skills could spur economic welfare, create leisure time in society, reduce harmful manual labour, and provide care for an ageing population. However, while robots are producing our cars, we are still left to our own devices for doing the laundry at home. This shortcoming is due to the major difficulties in perceiving and handling the variability the real world presents. Robots in modern manufacturing require engineers to produce a safe and predictable environment: objects arrive at the same location, in the same orientation, and the robot is preprogrammed to perform a specific manipulation. Unfortunately, the need for a predictable environment is undesirable in dynamic environments that handle a wide range of objects, often in the presence of human activity. For example, should a human worker first unfold all clothes such that a robot can easily find the corner points and perform folding? Indeed, the high variability in modern production environments and households requires robots to handle objects that can take arbitrary shapes, weights, and configurations. This diversity renders traditional robotic control algorithms and grippers unsuitable for deployment in dynamic environments. To find methods that can handle the ever-changing nature of human environments, we study the perception and manipulation of objects that provide an infinite amount of variations: deformable objects. A deformable object changes shape on force interaction. Deformable objects are omnipresent in industry and society: food, paper, clothes, fruit, cables and sutures, among others. In particular, we study the task of automating the folding of clothes. Folding clothes is a common household task that will potentially be performed by service robots in the future. Handling cloth is also relevant in manufacturing, where technical textile is processed, and in the fashion industry. Dealing with the deformable nature of textiles requires fundamental improvements in both hardware and software. Mechanical engineering needs to incorporate actuators, links, joints and sensors into the limited space of a hand while using soft materials similar to the human skin. In addition to engineering more capable hands, control algorithms need to loosen assumptions about the environment in which robots operate. It is unattainable to expect highly deformable objects like cloth to always be in the same configuration before manipulating them with a robot. A solution for dealing with real-world variability can be found in the machine learning field. In particular, deep RL combines the function approximation capabilities of deep neural networks with the learn by trial-and-error formalism provided by RL. Deep RL has been shown to be capable of driving cars, flying helicopters and manipulating rigid objects. However, the data requirements for training highly parameterized functions, like neural networks, are considerable. This data-hungry property causes an incongruity between the representational learning features of deep neural networks and the high cost of generating real robotic trials. Our research focuses on reducing the required learning data for systems that perceive and manipulate clothing items. We implement a cloth simulation method to generate synthetic data, utilize smart textiles for state estimation of cloth, crowdsource a dataset of people folding clothing, and propose a method to estimate how well people are folding clothing without providing labels.
Actuating a physical robot is slow, expensive and potentially dangerous. For this reason, roboticists resort to physics simulators that simulate the robot's and environment's dynamics. However, there exists no integrated robot and cloth simulator for use in learning experiments. Cloth simulators are built for offline render farms in the film industry, or for the game industry that sacrifices fidelity for real-time rendering. Unfortunately, cloth simulation for robotic learning requires performance characteristics similar to online rendering and accuracy aspects found in offline rendering. For this reason, we implement a custom cloth dynamics simulation on GPU and integrate it in the robotic simulation functionality of the Unity game engine. We found that we can utilize deep RL to train an agent in our simulation to fold a rectangular piece of cloth twice within 24 hours of wall time on standard computational hardware. The developed cloth simulation assumes full accessibility to the state of the cloth. However, state estimation of cloth in the real world relies on complex vision-based pipelines or high-cost sensing technology. We avoid this complexity and cost by integrating inexpensive tactile sensing technology into a cloth. The cloth becomes an active smart cloth by training a classifier that uses the tactile sensing data to estimate its state. We use this smart cloth to train a low-cost robotic platform to fold the cloth using RL. Our results demonstrate that it is possible to develop a smart cloth with off-the-shelf components and use it effectively for training on a real robotic platform. Our smart cloth bridges the gap between our cloth simulation on GPU and state estimation in the real world. However, it is still required to distil a scalar value that indicates task progress in order to acquire manipulation skills using RL. We believe that learning the reward function from demonstrations may overcome human bias in reward engineering. Unfortunately, when starting our research there existed no large dataset with people folding clothing. We fill this gap by crowdsourcing a dataset of people folding clothes. Our dataset consists of roughly 300000 multiperspective RGB-D frames, annotated with pose trajectories, quality labels and timestamps indicating substeps. This dataset can be used to benchmark research in action recognition and bootstrap learning by using example demonstrations. Learning from demonstrations is a prevalent domain in the robotics learning community. However, using our cloth folding dataset requires mapping the movements of demonstrators to the embodiment of a robot. Additionally, behavioural cloning is prone to blindly imitating trajectories instead of understanding how actions relate to solving the task. For this reason, estimating how well a process is being executed is preferred to learning the policy from demonstrations. Unfortunately, existing methods couple the learning of rewards with policy learning, thereby inheriting all problems associated with RL. To decouple reward and policy learning, we propose a method to learn the task progression from multiperspective videos of example demonstrations. We avoid incorporating human bias in the labelling process by using time as a self-supervised signal for learning. We demonstrate the first results on expressing task progression of people folding clothing, without labelling any data. General-purpose robots are not yet among us. 
Robots that are capable of working in dynamic environments will require a holistic view of software and hardware. We demonstrated the benefits of this approach by outsourcing the intelligence for state estimation to the cloth instead of the robot. By developing a smart cloth, we trained a robot to fold cloth in-vivo within a day. Extrapolating an integrated approach on hardware and software leads to embodied intelligence in which morphology closes the loop with control: co-optimizing the body and brain will allow evolving manipulators tailored to the tasks, and use them to build a representation of how the world works. Robots can use that feedback to understand how actions influence the environment and learn to solve tasks by using human examples, instrumented objects and their own experiences. This holistic process will enable future robots to understand human intent and solve a large repertoire of manipulation tasks.
... A method to exploit the sensitivity of the predicted currents to the model parameters has been discussed in relation to the three-phase induction machine in [2], three-phase IPMSM in [3]-[5] and dual three-phase IPMSM in [6]. As per Ljung's definition in [7], this method is termed the prediction-gradient (Ψ^T)-based Recursive Prediction Error Method (RPEM). Under this approach, the stator currents are predicted using the Full-Order Model (M_uθ) and used to generate the prediction error, ϵ, which is assumed to exist only if parameter discrepancies are present. ...
... RPEM is a family of parameter identification methods presented by Ljung in [7] that can be generalized as in (3). ...
Conference Paper
Full-text available
Due to the sensitivity to parametric errors, open-loop current predictor-based estimators offer good online tracking of electric machine parameters, however at the price of rotor-speed-dependent predictor pole-trajectories; thus the digital implementation needs caution. This paper aims to investigate, firstly, the discrete-time domain stability of the predictor when implemented using different discretization methods in the processor. Secondly, to assess the impact of the integration time-step (h) on the predictor stability, the microprocessor- and Field-Programmable Gate Array (FPGA)-based implementations are investigated. A System-on-Chip (SoC) that integrates both the processor and the FPGA is exploited in this study, for online identification of the permanent magnet flux linkage Ψ_m and stator resistance R_s of an Interior Permanent Magnet Synchronous Machine (IPMSM). The experiments are conducted using an Embedded Real-Time Simulator (ERTS), which shows that when the approach is based on the DSP, the trapezoidal rule offers full speed-range stability, while the FPGA-based implementation with its inherently shorter h yields overall better performance irrespective of the discretization strategy.
... where Φ(s_k) is an instrumental variable to avoid the asymptotic bias (Ljung and Söderström, 1983; Bradtke and Barto, 1996). It can be seen from (3.45) that LS-TD does not involve the step-size sequence η_n, but computes a matrix inverse at each step, which means that LS-TD has a computational complexity of O(H^3). ...
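A batch sketch of the LS-TD solution described above (standard LSTD(0); the features act as the instrumental variables, and the weights come from a linear solve rather than a step-size iteration; shapes and names are illustrative):

    import numpy as np

    def lstd(phis, phis_next, rewards, gamma=0.99, reg=1e-6):
        """Solve A w = b with A = Phi^T (Phi - gamma * Phi'), b = Phi^T r.
        phis, phis_next: arrays of shape (T, H); rewards: array of shape (T,)."""
        A = phis.T @ (phis - gamma * phis_next)
        b = phis.T @ rewards
        return np.linalg.solve(A + reg * np.eye(A.shape[1]), b)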
... By Lemma 2.1 of Ljung and Söderström (1983), we have ...
Thesis
This thesis deals with the advance scheduling of elective surgeries in an operating theatre that is composed of operating rooms and downstream recovery units. The arrivals of new patients in each week, the duration of each surgery, and the length-of-stay of each patient in the downstream recovery unit are subject to uncertainty. In each week, the surgery planner should determine the surgical blocks to open and assign some of the surgeries in the waiting list to the open surgical blocks. The objective is to minimize the patient-related costs incurred by performing and postponing surgeries as well as the hospital-related costs caused by the utilization of surgical resources. Considering that the pure mathematical programming models commonly used in literature do not optimize the long-term performance of the surgery schedules, we propose a novel two-phase optimization model that combines Markov decision process (MDP) and stochastic programming to overcome this drawback. The MDP model in the first phase determines the surgeries to be performed in each week and minimizes the expected total costs over an infinite horizon, then the stochastic programming model in the second phase optimizes the assignments of the selected surgeries to surgical blocks. In order to cope with the huge complexity of realistically sized problems, we develop a reinforcement-learning-based approximate dynamic programming algorithm and several column-generation-based heuristic algorithms as the solution approaches. We conduct numerical experiments to evaluate the model and algorithms proposed in this thesis. The experimental results indicate that the proposed algorithms are considerably more efficient than the traditional ones, and that the resulting schedules of the two-phase optimization model significantly outperform those of a conventional stochastic programming model in terms of the patients' waiting times and the total costs in the long run.
... The filtering problem is the problem of determining the best estimate of its condition, given its observations Y_k = (y_0, y_1, . . . , y_k) [17][18][19][20][21][22][23][24][25]. When ...
... Discrete-time linear state-space models and Kalman filtering (KF) have been employed since the 1960s, mostly in the control and signal processing areas. The KF has been extensively employed in many areas of estimation; the extensions and applications of discrete-time linear state-space models can be found in almost all disciplines [17][18][19][20][21][22][23][24][25]. Let us consider a general discrete-time stochastic system represented by the state and measurement models given by ...
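For reference, a minimal sketch of the textbook discrete-time Kalman filter recursions for a linear-Gaussian state-space model (generic notation, assumed rather than taken from the cited text):

    import numpy as np

    def kalman_step(x, P, y, A, C, Q, R):
        """One predict/update cycle for x_{k+1} = A x_k + w_k, y_k = C x_k + v_k,
        with process noise covariance Q and measurement noise covariance R."""
        # predict
        x_pred = A @ x
        P_pred = A @ P @ A.T + Q
        # update with the new measurement y
        S = C @ P_pred @ C.T + R                 # innovation covariance
        K = P_pred @ C.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (y - C @ x_pred)
        P_new = (np.eye(len(P)) - K @ C) @ P_pred
        return x_new, P_new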
Article
Full-text available
Since the beginning of 2020, the world has been struggling with a viral epidemic (COVID-19), which poses a serious threat to the collective health of the human race. Mathematical modeling of epidemics is critical for developing such policies, especially during these uncertain times. In this study, the reproduction number and model parameters were predicted using AR(1) (autoregressive time-series model of order 1) and the adaptive Kalman filter (AKF). The data sample used in the study consists of the weekly and daily number of cases amongst the Ziraat Bank personnel between March 11, 2020, and April 19, 2021. This sample was modeled in the state space, and the AKF was used to estimate the number of cases per day. It is quite simple to model the daily and weekly case number time series with the time-varying parameter AR(1) stochastic process and to estimate the time-varying parameter with online AKF. Overall, we found that the weekly case number prediction was more accurate than the daily case number (R2 = 0.97), especially in regions with a low number of cases. We suggest that the simplest method for reproduction number estimation can be obtained by modeling the daily cases using an AR(1) model. JEL classification numbers: C02, C22, C32. Keywords: COVID-19, Modeling, Reproduction number estimation, AR(1), Kalman filter.
... Modeling accuracies are shown in Table 1, including Akaike's final prediction error (FPE) [36] and the mean square error (MSE). ...
Article
Full-text available
In this paper, a new approach for head camera stabilization of a humanoid robot head is proposed, based on a bio-inspired soft neck. During walking, the sensors located on the humanoid’s head (cameras or inertial measurement units) show disturbances caused by the torso inclination changes inherent to this process. This is currently solved by a software correction of the measurement, or by a mechanical correction by motion cancellation. Instead, we propose a novel mechanical correction, based on strategies observed in different animals, by means of a soft neck, which is used to provide more natural and compliant head movements. Since the neck presents a complex kinematic model and nonlinear behavior due to its soft nature, the approach requires a robust control solution. Two different control approaches are addressed: a classical PID controller and a fractional order controller. For the validation of the control approaches, an extensive set of experiments is performed, including real movements of the humanoid, different head loading conditions or transient disturbances. The results show the superiority of the fractional order control approach, which provides higher robustness and performance.
... Many investigations studied the effectiveness of utilizing the recursive least squares technique (RLS) [25,26] and EKFs [27,28] in neural networks. In the 1990s, several studies autonomously used an EKF in training a multilayer perceptron and demonstrated that it performs better than utilizing a traditional backpropagation training approach [29][30][31][32]. ...
Chapter
Determination of suitable sites for small hydropower projects could offer new opportunities for sustainable developments. However, the non-scalable initial investigation costs are one of the biggest burdens when planning small projects. Moreover, solving a complex problem with only a few available parameters is almost impossible with many traditional models, and lack of data may make many design studies infeasible for remote, hard to access or developing areas. Artificial neural networks (ANNs) could help reduce investigation costs and make many projects feasible to study by acting as input–output mapping algorithms. This study provides an easy to understand and implement method to develop fast ANN-based estimation models using the multilayer perceptron (MLP) neural network and extended Kalman filter (EKF) or gradient descent (GD) as the training algorithm. Also, three approaches to feeding training data to the models were studied. Estimating runoff is an important challenge in water resources engineering, especially for development and operation plans. Therefore, the proposed method is applied for a runoff estimating problem using only easily measured precipitation and temperature. Results of this case study indicate that for a relatively similar performance, ANN models using EKF required a fewer number of neurons and training epochs than GD. Compared to the prior research in this study area, the methods in this study are much easier to understand and implement and are not dependent on data mining techniques or continuous long-term time series. Based on the results, a combination of the proposed data feeding methods and the EKF training algorithm improved estimation models by reducing the number of training epochs and the size of the network. —— Keywords: Artificial Neural Network; ANN; Extended Kalman Filter; EKF; Gradient descent; Multilayer Perceptron; MLP; Machine learning; Runoff; Taleghan basin.
... For example, the least mean squares method [8,9], multiple-model methods [10,11], algorithms with integral weighting functions [12], or the dynamic extension of the linear regressor vector [13,14]. Toward the end of the 20th century and the beginning of the 21st century, such approaches lost their popularity. ...
Article
The problem of output-feedback control for time-varying linear systems (i.e., without measuring the state vector or derivatives of the output signal) is considered. For the control design, the well-known online procedure for solving the Riccati matrix differential equation is chosen. For the synthesis of the state observer, a new approach is proposed that provides monotonic convergence of the estimates with adjustable transient time. The main idea of the observer synthesis is to transform the original dynamic system into a linear regression model containing unknown parameters, which in turn are functions of the initial conditions of the state variables of the control object. The DREM algorithm is then used to estimate the state variables. Simulations in Matlab/Simulink confirm the theory and demonstrate the good quality of the transient responses.
... Note that the smaller the value of λ, the sharper the decay of the weighting factor. In this study we use λ = 0.99 as suggested in [37,38]. Delay parameters were selected as 5 s to obtain more stable results. ...
Article
A 45-month continuous seismic monitoring of a multi-span highway bridge by wireless sensor network (WSN) is described in this paper. The WSN monitoring system is developed using a simultaneous high-speed flooding technique and deployed to monitor a continuous 12-span 381.8 m long highway bridge in Katsuta city, Ibaraki prefecture, Japan. The WSN monitoring system consisted of 20 triaxial accelerometers placed on bridge girder, pier caps, abutment, and on the ground. The monitoring system has successfully recorded seismic responses of 63 earthquakes from August 2017 to December 2020. The paper describes the WSN monitoring system, analyses of seismic responses, and structural assessment using the recorded seismic responses. The analyses include investigation on seismic response characteristics of girder, piers, and bearings, system identifications, and seismic performance evaluation of isolation bearings. The WSN monitoring system provides high quality of seismic responses. Evaluations of ground motions recorded by WSN with the existing strong motion KNET seismograph network have demonstrated the accuracy, reliability, and robustness of WSN monitoring system. Using three system identification methods, dominant bridge modal parameters and their variations with respect to earthquake levels were investigated. High-frequency filtering effect was utilized as indicator in assessment of isolation bearing performance. Systematic classification by clustering algorithm was conducted to determine whether the isolation bearings have functioned normally.
... which is called a linear regression model (Ljung and Söderström, 1983). ...
Conference Paper
Full-text available
Proportional-Integral-Derivative (PID) control is one of the most widely used forms of control in industry, and there are several variations of its architecture. PID 2-DOF is one of them and aims to reject disturbances in the control actions more quickly. This paper presents the identification, through the non-recursive least squares (LS) method, of a rotational joint of a cylindrical manipulator and the application of PID and PID 2-DOF controllers. The identification of the 1st-order model of the manipulator joint is performed to apply the Ziegler/Nichols (ZN), Chien-Hrones-Reswick (CHR), Internal Model Control (IMC) and Skogestad IMC (SIMC) tuning methods to the proposed controllers. To compare the simulation results, the following performance criteria are used: rise time (t_r), settling time (t_s), and overshoot. In the end, it is concluded that the CHR and IMC tuning methods obtained better results, and the PID 2-DOF controller obtained better results after fine tuning.
... Following the landmark article [47], many extensions have been developed within the statistics and automatic control communities. Some notable works include convergence results [48,49,50], online parameter estimation and system identification in [51,52], adaptive control strategies [53], and general books in the area [54,55,56]. The primary focus of current research activity is directed towards improving convergence rates. ...
Preprint
Identification of nonlinear systems is a challenging problem. Physical knowledge of the system can be used in the identification process to significantly improve the predictive performance by restricting the space of possible mappings from the input to the output. Typically, the physical models contain unknown parameters that must be learned from data. Classical methods often restrict the possible models or have to resort to approximations of the model that introduce biases. Sequential Monte Carlo methods enable learning without introducing any bias for a more general class of models. In addition, they can also be used to approximate a posterior distribution of the model parameters in a Bayesian setting. This article provides a general introduction to sequential Monte Carlo and shows how it naturally fits in system identification by giving examples of specific algorithms. The methods are illustrated on two systems: a system with two cascaded water tanks with possible overflow in both tanks and a compartmental model for the spreading of a disease.
... The model is able to predict the next value of the input variable [17]. The NARX model is a nonlinear discrete time model structure that can be used for univariate as well as multivariate analysis [21,22,61]. In addition, the classification aspect of model structure selection is crucial for the effectiveness of the identification process. ...
Article
Full-text available
Due to greater accessibility, healthcare databases have grown over the years. In this paper, we practice locating and associating data points or observations that pertain to similar entities across several datasets in public healthcare. Based on the methods proposed in this study, all sources are allocated using AI-based approaches to consider non-unique features and calculate similarity indices. Critical components discussed include accuracy assessment, blocking criteria, and linkage processes. Accurate measurements develop methods for manually evaluating and validating matched pairs to purify connecting parameters and boost the process efficacy. This study aims to assess and raise the standard of healthcare datasets that aid doctors’ comprehension of patients’ physical characteristics by using NARX to detect errors and machine learning models for the decision-making process. Consequently, our findings on the mortality rate of patients with COVID-19 revealed a gender bias: female 15.91% and male 22.73%. We also found a gender bias with mild symptoms such as shortness of breath: female 31.82% and male 32.87%. With congestive heart disease symptoms, the bias was as follows: female 5.07% and male 7.58%. Finally, with typical symptoms, the overall mortality rate for both males and females was 13.2%.
... Many investigations studied the effectiveness of utilizing the recursive least squares technique (RLS) [168], [169] and EKFs [170], [171], [189]. ...
Thesis
Full-text available
The Archimedes/Archimedean screw generator (ASG) is a new form of small hydroelectric powerplant technology that can operate under a wide range of flow heads and flow rates. ASGs have low impacts on wildlife, especially fish, and can generate power from almost any flow, even wastewater. Simplicity, low maintenance requirements and moderate costs make ASGs suitable even for remote or developing regions. However, there are few analytical methods for designing ASGs and almost no general and easy-to-use guidelines to design Archimedes screw power plants. This has been a major unanswered problem for about three decades. Therefore, the overall goal of this study was to develop design tools and guidelines to facilitate and support the design and operation of Archimedean screw hydropower plants. This goal was achieved by developing analytical equations and models to design Archimedes screw generators, estimate the volume of flow passing through the ASGs, and a guideline for designing Archimedes screw powerplants. In addition, methods for developing rapid estimation models were studied to facilitate initial studies of ASG hydropower projects, such as determining and evaluating suitable sites. Experimental measurements of a grid-connected ASG and five laboratory screws led to characterizing the phenomena of supply flow into ASGs and developing new equations for estimating the volumetric flow entering ASGs as a function of the screw geometry, rotation speed, and inlet depth. At most ASG installations, water is supplied by an inlet channel and controlled using sluice gates. Five analytical models were evaluated using laboratory experiments. New equations and methods are proposed to facilitate the design, calibration, modelling and operation of sluice gates. Studying worldwide currently operating Archimedes screw power plants led to characterizing the important design aspects of ASGs and developing empirical screw sizing equations. An analytical equation was developed to estimate Archimedes screw geometry based on the common design characteristics of ASGs. These achievements led to the development of a design guideline for ASG hydropower plants. This work also helps remediate one of the biggest burdens when designing small hydropower projects, the unscalable initial investigation costs, by enabling the evaluation of site-specific possibilities of green and renewable Archimedes screw hydropower generation.
... As is well-known, the main drawback of offline estimators is their inability to track parameter variations, which is very often the main objective in applications. This situation motivates the interest in developing bona-fide on-line estimators that relax the PE condition while preserving the scheme's alertness [26]. ...
Preprint
In this note a new high performance least squares parameter estimator is proposed. The main features of the estimator are: (i) global exponential convergence is guaranteed for all identifiable linear regression equations; (ii) it incorporates a forgetting factor allowing it to preserve alertness to time-varying parameters; (iii) thanks to the addition of a mixing step it relies on a set of scalar regression equations ensuring a superior transient performance; (iv) it is applicable to nonlinearly parameterized regressions verifying a monotonicity condition and to a class of systems with switched time-varying parameters; (v) it is shown that it is bounded-input-bounded-state stable with respect to additive disturbances; (vi) continuous and discrete-time versions of the estimator are given. The superior performance of the proposed estimator is illustrated with a series of examples reported in the literature.
... The RLS algorithm is a recursive form of the ZF equalizer (14) [125]. The RLS algorithm offers performance close to that of the ZF algorithm, at the cost of a preprocessing stage and matrix manipulations, and achieves optimal performance for a Gaussian noise source. ...
Article
In order to meet the user demands in performance and quality of services (QoS) for beyond fifth generation (B5G) communication systems, research on decentralized and distributed massive multiple-input multiple-output (M-MIMO) is initiated. Data detection techniques are playing a crucial role in realization and implementation of M-MIMO networks. Although most of detection techniques were proposed for centralized M-MIMO, there is a notable trend to propose efficient detection techniques for decentralized and distributed M-MIMO networks. This paper aims to provide insights on data detection techniques for decentralized and distributed M-MIMO to generalists of wireless communications. We garner the detection techniques for decentralized and distributed M-MIMO and present their performance, computational complexity, throughput, and latency so that a reader can find a distinction between different algorithms from a wider range of solutions. We present the detection techniques based on the following architectures: decentralized baseband processing (DBP), feedforward fully decentralized (FD), and feedforward partially decentralized (PD), FD based on coordinate descent (FD-CD), and FD based on recursive methods. In addition, the role of expectation propagation algorithm (EPA) in decentralized architectures is comprehensively reviewed. In each section, we also discuss the pros, cons, throughput, latency, performance, and complexity profile of each detector and related implementations. Moreover, the energy efficiency of several decentralized M-MIMO architectures is also illustrated. The cell-free M-MIMO (CF-M-MIMO) architecture is discussed with an overview of deployed detection schemes. This paper also illustrates the challenges and future research directions in decentralized and distributed M-MIMO networks.
... The output-error method [10] is based on an information vector which includes the output of the model instead of the output of the system. Such a procedure represents the basis for identification of the fast-rate model in the identification of dual-rate systems. ...
Article
Full-text available
ABSTRACT: In this paper, an outlier-robust identification algorithm for dual-rate stochastic systems is proposed. The algorithm is an accelerated stochastic approximation based on averaging in both iterates and observations. In comparison with an algorithm based only on averaging in iterates, the proposed algorithm is more stable in the initial period. The derivation of the algorithm is based on a modified Newton-Raphson algorithm. The paper considers identification of the fast-rate model without any transformation, using the output-error model philosophy. The auxiliary model is a FIR (finite impulse response) model. The identification consists of two stages. In the first stage, the FIR model is estimated: its parameters are estimated using outlier-robust accelerated stochastic approximation, and the order of the model is determined with an outlier-robust Bayesian information criterion. In the second stage, the fast-rate model is estimated using robust accelerated stochastic approximation. The main contributions of the paper are: (i) the design of a robust accelerated stochastic approximation algorithm based on modified Huber theory; (ii) the design of a robust Bayesian information criterion.
... In all these cases, conditions under which learning, that is, accurate parameter estimation, can take place were precisely articulated. Both necessary and sufficient conditions were derived (Morgan and Narendra, 1977; Ljung, 1977a,b; Ljung and Söderström, 1983; Anderson, 1985). Yet another link between adaptation and learning is due to Yakov Tsypkin who proposed a unified framework based on stochastic approximation machinery. ...
Article
This article provides a historical perspective of the field of adaptive control over the past seven decades and its intersection with learning. A chronology of key events over this large time-span, problem statements that the field has focused on, and key solutions are presented. Fundamental results related to stability and robustness of adaptive systems and learning of unknown parameters are sketched. A brief description of various applications of adaptive control reported over this period is included.
... A related but slightly different context in chemical engineering is the calibration of non-linear dynamic process models using system identification techniques mostly for process control [Ljung and Söderström, 1983]. The models might be coupled ordinary differential equations or time series auto-regressive models and are generally not computationally expensive. ...
Preprint
Full-text available
Recent advances in machine learning, coupled with low-cost computation, availability of cheap streaming sensors, data storage and cloud technologies, have led to widespread multi-disciplinary research activity with significant interest and investment from commercial stakeholders. Mechanistic models, based on physical equations, and purely data-driven statistical approaches represent two ends of the modelling spectrum. New hybrid, data-centric engineering approaches, leveraging the best of both worlds and integrating both simulations and data, are emerging as a powerful tool with a transformative impact on the physical disciplines. We review the key research trends and application scenarios in the emerging field of integrating simulations, machine learning, and statistics. We highlight the opportunities that such an integrated vision can unlock and outline the key challenges holding back its realisation. We also discuss the bottlenecks in the translational aspects of the field and the long-term upskilling requirements of the existing workforce and future university graduates.
... Following the guidelines given in [50, p. 379] and [51, p. 68], we use λ = 0.98. However, other ways of selecting λ are also possible [51], [52]. The matrix P_k is initialized as P_0 = 0.05I. ...
Preprint
Full-text available
In this paper, we develop a novel data-driven method for Deformable Mirror (DM) control. The developed method updates both the DM model and DM control actions that produce desired mirror surface shapes. The novel method explicitly takes into account actuator constraints and couples a feedback control algorithm with an algorithm for recursive estimation of DM influence function models. In addition to this, we explore the possibility of using Walsh basis functions for DM control. By expressing the desired and observed mirror surface shapes as sums of Walsh pattern matrices, we formulate the control problem in the 2D Walsh basis domain. We thoroughly experimentally verify the developed approach on a 140-actuator MEMS DM, developed by Boston Micromachines. Our results show that the novel method produces the root-mean-square surface error in the 14 − 40 nanometer range. These results can additionally be improved by tuning the control and estimation parameters. The developed approach is also applicable to other DM types, such as for example, segmented DMs.
... Discrete-time linear state-space models and Kalman filtering (KF) have been employed since the 1960s, mostly in the control and signal processing areas. The KF has been extensively employed in many areas of estimation; the extensions and applications of discrete-time linear state-space models can be found in almost all disciplines [18][19][20][21][22][23][24][25][26]. ...
Article
Full-text available
In this study, cumulative and daily cases are estimated online using a discrete-time Gompertz model (DTGM) and an Adaptive Kalman Filter (AKF) based on the total COVID-19 cases between February 29 and July 28, 2020 in the USA, Germany, India, Russia, Italy, Spain, France, the United Kingdom, and Brazil. Employing the data collected over this period, it is shown that the DTGM in conjunction with the AKF provides a good analysis tool for modeling the daily cases in terms of mean square error (MSE), mean absolute percentage error (MAPE), and R².
... 41, 103]. Numerous extensions of RLS have been developed to address initialization, forgetting, and numerical stability, for example, [43][44][45][46]. The development of RLS that is closest to this dissertation is given in [47, pp. ...
Thesis
This dissertation develops data-driven retrospective cost adaptive control (DDRCAC) and applies it to flight control. DDRCAC combines retrospective cost adaptive control (RCAC), a direct adaptive control technique for sampled-data systems, with online system identification based on recursive least squares (RLS) with variable-rate forgetting (VRF). DDRCAC uses elements of the identified model to construct the target model, which defines the retrospective performance variable. Using RLS-VRF, optimization of the retrospective performance variable updates the controller coefficients. This dissertation investigates the ability of RLS-VRF to provide the modeling information needed to construct the target model, especially nonminimum-phase (NMP) zeros, which are needed to prevent NMP-zero cancellation. A decomposition of the retrospective performance variable is derived and used to assess target-model matching and closed-loop performance. These results are illustrated by single-input, single-output (SISO) and multiple-input, multiple-output (MIMO) examples with a priori unknown dynamics. Finally, DDRCAC is applied to several simulated flight control problems, including an aircraft that transitions from minimum-phase to NMP lateral dynamics, an aircraft with flexible modes, aeroelastic wing flutter, and a nonlinear planar missile.
... In all these cases, conditions under which learning, that is, accurate parameter estimation, can take place were precisely articulated. Both necessary and sufficient conditions were derived [66,67,59,68,69]. Yet another link between adaptation and learning is due to Yakov Tsypkin who proposed a unified framework based on stochastic approximation machinery. ...
Preprint
Full-text available
This article provides a historical perspective of the field of adaptive control over the past seven decades and its intersection with learning. A chronology of key events over this large time-span, problem statements that the field has focused on, and key solutions are presented. Fundamental results related to stability, robustness, and learning are sketched. A brief description of various applications of adaptive control reported over this period is included.
... Even when measurement samples span a broad range of variation of system behaviours, each successive observation yields progressively smaller gain in the model estimation (Ljung and Söderström, 1983). Initially, each new observation provides relatively novel information, resulting in significant marginal improvement for the estimation. ...
... Xu et al. (2002) and Geramifard et al. (2006) develop a more efficient way to implement this solution in a recursive way. The reason why it is called "least square" comes from the instrumental variable approach to regression problems (Ljung and Söderström, 1983). 11 Bradtke and Barto (1996) show that the basis functions in LSTD are indeed instrumental variables. ...
Preprint
Full-text available
We propose a unified framework to study policy evaluation (PE) and the associated temporal difference (TD) methods for reinforcement learning in continuous time and space. We show that PE is equivalent to maintaining the martingale condition of a process. From this perspective, we find that the mean--square TD error approximates the quadratic variation of the martingale and thus is not a suitable objective for PE. We present two methods to use the martingale characterization for designing PE algorithms. The first one minimizes a "martingale loss function", whose solution is proved to be the best approximation of the true value function in the mean--square sense. This method interprets the classical gradient Monte-Carlo algorithm. The second method is based on a system of equations called the "martingale orthogonality conditions" with "test functions". Solving these equations in different ways recovers various classical TD algorithms, such as TD($\lambda$), LSTD, and GTD. Different choices of test functions determine in what sense the resulting solutions approximate the true value function. Moreover, we prove that any convergent time-discretized algorithm converges to its continuous-time counterpart as the mesh size goes to zero. We demonstrate the theoretical results and corresponding algorithms with numerical experiments and applications.
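Since the excerpts above mention LSTD and its instrumental-variable interpretation, here is a minimal sample-based LSTD(0) sketch for a two-state Markov reward process with tabular features. The chain, reward, and discount factor are illustrative assumptions; the estimate is compared with the exact value function for reference.

```python
# Hedged sketch: sample-based LSTD(0) on a small Markov reward process.
import numpy as np

rng = np.random.default_rng(1)
gamma = 0.9
P = np.array([[0.9, 0.1], [0.2, 0.8]])   # 2-state transition matrix
r = np.array([0.0, 1.0])                  # reward per state
phi = np.eye(2)                           # tabular features

A = np.zeros((2, 2))
b = np.zeros(2)
s = 0
for _ in range(20000):
    s_next = rng.choice(2, p=P[s])
    A += np.outer(phi[s], phi[s] - gamma * phi[s_next])
    b += phi[s] * r[s]
    s = s_next

w_lstd = np.linalg.solve(A, b)                       # LSTD value estimate
v_true = np.linalg.solve(np.eye(2) - gamma * P, r)   # exact value function
print(w_lstd, v_true)
```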
... Secondly, to the best of the authors' knowledge, almost all existing estimation algorithms for stochastic systems with binary-valued observations and given thresholds are based on first-order gradients. Such algorithms use the same step size for every coordinate, which may simplify the convergence analysis but sacrifices the convergence rate of the algorithms (Ljung and Söderström (1983)). To improve the convergence properties, it is necessary to consider estimation algorithms whose adaptation gain is a matrix (e.g., the Hessian matrix or a modification of it), rather than a scalar. ...
Preprint
Dynamical systems with binary-valued observations are widely used in the information industry, biopharmaceutical technology, and other fields. Although much effort has been devoted to the identification of such systems, most previous investigations are based on first-order gradient algorithms, which usually have a much slower convergence rate than Quasi-Newton algorithms. Moreover, persistence of excitation (PE) conditions are usually required in the existing literature to guarantee consistent parameter estimates, and these conditions are hard to verify or guarantee for feedback control systems. In this paper, we propose an online projected Quasi-Newton type algorithm for parameter estimation of stochastic regression models with binary-valued observations and varying thresholds. By using both a stochastic Lyapunov function and martingale estimation methods, we establish the strong consistency of the estimation algorithm and provide the convergence rate, under a signal condition that is considerably weaker than the traditional PE condition and coincides with the weakest possible excitation known for the classical least squares algorithm for stochastic regression models. Convergence of adaptive predictors and their applications in adaptive control are also discussed.
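The point about scalar step sizes versus a matrix adaptation gain can be illustrated on a plain linear regression (not the binary-valued observation model of the paper): below, a first-order recursion with one scalar gain for all coordinates is compared with an RLS-type recursion whose gain is a matrix. All dimensions, scalings, and step sizes are illustrative assumptions.

```python
# Hedged sketch: scalar-gain first-order recursion vs. matrix-gain recursion.
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([1.0, -2.0, 0.5])
n = 2000
Phi = rng.standard_normal((n, 3)) * np.array([1.0, 5.0, 0.2])  # poorly scaled regressors
y = Phi @ theta_true + 0.1 * rng.standard_normal(n)

theta_sgd = np.zeros(3)   # first-order: one scalar gain for all coordinates
theta_qn = np.zeros(3)    # quasi-Newton type: matrix adaptation gain (RLS)
P = 1e3 * np.eye(3)
for k in range(n):
    phi, yk = Phi[k], y[k]
    step = 1.0 / ((k + 1) * (1.0 + phi @ phi))   # normalized, decreasing scalar gain
    theta_sgd = theta_sgd + step * phi * (yk - phi @ theta_sgd)
    Pphi = P @ phi
    gain = Pphi / (1.0 + phi @ Pphi)
    theta_qn = theta_qn + gain * (yk - phi @ theta_qn)
    P = P - np.outer(gain, Pphi)

print("first-order error :", np.linalg.norm(theta_sgd - theta_true))
print("matrix-gain error :", np.linalg.norm(theta_qn - theta_true))
```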
Article
This paper evaluates how initial beliefs uncertainty can affect data weighting and the estimation of models with adaptive learning. One key finding is that misspecification of initial beliefs uncertainty, particularly with the common approach of artificially inflating initials uncertainty to accelerate convergence of estimates, generates time-varying profiles of weights given to past observations in what should otherwise follow a fixed profile of decaying weights. The effect of this misspecification, denoted as diffuse initials, is shown to distort the estimation and interpretation of learning in finite samples. Simulations of a forward-looking Phillips curve model indicate that (i) diffuse initials lead to downward biased estimates of expectations relevance in the determination of actual inflation, and (ii) these biases spill over to estimates of inflation responsiveness to output gaps. An empirical application with U.S. data shows the relevance of these effects for the determination of expectational stability over decadal subsamples of data. The use of diffuse initials is also found to lead to downward biased estimates of learning gains, both estimated from an aggregate representative model and estimated to match individual expectations from survey expectations data.
Article
Full-text available
Nonlinear feedback shift registers (NFSRs) are the main components of stream ciphers and convolutional decoders. Recent years have seen an increased demand for information security, which has spurred NFSR research. However, the study of NFSRs remains far from complete, largely because of the lack of appropriate mathematical tools. Many scholars have found in recent years that introducing the semi-tensor product (STP) of matrices can overcome this issue, since the STP converts an NFSR into a quasi-linear form; as a result, new NFSR research has emerged from a different angle. In view of this, and in order to consolidate the latest STP-based achievements on NFSRs and indicate some directions for future development, this paper summarizes and organizes the research results, broadly covering the modeling of NFSRs, the analysis of their structure, and the study of their properties.
Chapter
This chapter deals with the discrete-time control design of linear systems in the polynomial domain. The basic premise of predictive control is simple. It involves calculating a sequence of future control signals that minimizes a set of cost functions over a finite prediction horizon. The chapter discusses both prediction-based controllers and their application in adaptive control. It focuses on single-input, single-output (SISO) systems in the discrete-time domain. The first step in developing a predictive control is to construct a model. Generalized predictive control was first introduced by D. W. Clarke. Generally, a model known as the controlled autoregressive integrated moving average (CARIMA) process is used for this controller design. An adaptive quantity is able to adapt to changes in its environment. The chapter also discusses different types of self-tuning controllers.
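To make the "basic premise" above concrete, here is a stripped-down receding-horizon predictive controller for a first-order SISO plant. Clarke's GPC is built on a CARIMA model and penalizes control increments; this sketch keeps only the core idea (optimize a sequence of future controls over a finite horizon and apply the first one), and the plant, horizon, and weights are illustrative assumptions.

```python
# Hedged sketch: unconstrained receding-horizon control for x[k+1] = a*x[k] + b*u[k].
import numpy as np

a, b = 0.9, 0.5          # illustrative plant
N, lam = 10, 0.1         # prediction horizon, control weight
w = 1.0                  # setpoint

# Prediction matrices: x_pred = F*x + G @ u_seq over the horizon.
F = np.array([a ** (i + 1) for i in range(N)])
G = np.zeros((N, N))
for i in range(N):
    for j in range(i + 1):
        G[i, j] = a ** (i - j) * b

K = np.linalg.solve(G.T @ G + lam * np.eye(N), G.T)  # least-squares gain

x = 0.0
for k in range(30):
    u_seq = K @ (w * np.ones(N) - F * x)  # optimal future control sequence
    u = u_seq[0]                          # receding horizon: apply first move
    x = a * x + b * u
    print(k, round(x, 3))
```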
Conference Paper
Full-text available
We consider the problem of interaction-aware motion planning for automated vehicles in general traffic situations. We model the interaction between the controlled vehicle and surrounding road users using a generalized potential game, in which each road user is assumed to minimize a common cost function subject to shared (collision avoidance) constraints. We propose a quadratic penalty method to deal with the shared constraints and solve the resulting optimal control problem online using an Augmented Lagrangian method based on PANOC. Secondly, we present a simple methodology for learning preferences and constraints of other road users online, based on observed behavior. Through extensive simulations in a highway merging scenario, we demonstrate the practical efficacy of the overall approach as well as the benefits of the proposed online learning scheme.
Chapter
This paper describes an impedance control strategy based on model reference adaptation in an unstructured environment, addressing the uncertainty of the environmental stiffness and the unknown dynamic changes of the environmental position during force tracking. First, the contact force model between the robot and the environment is established, and the environmental stiffness is identified with a BP (back propagation) neural network. Then, a Simulink-Adams co-simulation model of dynamics-based adaptive force control is established; the change in contact force adaptively adjusts the parameters of the impedance model online, which compensates for the unknown dynamic changes of the environment. Finally, the simulation results show that the strategy achieves a good force-control effect and that the control method is strongly robust, increasing the reliability of the system and making it suitable for robotic-arm force-interaction scenarios in such environments.
Article
Education can be viewed as a control theory problem in which students seek ongoing exogenous input, either through traditional classroom teaching or other alternative training resources, to minimize the discrepancies between their actual and target (reference) performance levels. Using illustrative data from Dutch elementary school students as measured with the Math Garden, a web-based computer adaptive practice and monitoring system, we simulate and evaluate the outcomes of using off-line and finite-memory linear quadratic controllers with constraints to forecast students' optimal training durations. By integrating population standards with each student's own latent change information, we demonstrate that adoption of the control theory-guided, person- and time-specific training dosages could yield increased training benefits at reduced costs compared to students' actual observed training durations and a fixed-duration training scheme. The control theory approach also outperforms a linear scheme that provides training recommendations based on observed scores under noise and in the presence of missing data. Design-related issues, such as ways to determine the penalty cost of input administration and the size of the control horizon window, are addressed through a series of illustrative and empirically (Math Garden) motivated simulations.
Article
The applicability of the proposed dynamic response model for buildings is investigated using shaking-table tests with a four-storey steel specimen. The approach derives the equation of motion for a multi-degree-of-freedom linear building based on microtremor measurements. Under a linear assumption, the equation can estimate the seismic response accelerations, velocities, and displacements at microtremor sensor locations without requiring the mass, damping, or stiffness matrices, or structural design documents, to estimate peak responses linked to seismic damage of structural and non-structural components. The modelling is unconstrained by structural shape, composition of frames, connections of structural members, or the assumption of a rigid floor. In comparison with previous methods, which assume a simple or regular building shape with a standard rigid floor, the proposed model is also applicable to large-scale low-rise buildings with irregular shapes, flat expanses, and open spaces such as large atria and skylights. The applicability study considers two practical scenarios: natural frequencies and damping ratios based on microtremors, which can be updated using an earthquake record, and a standard assumption used for structural design. The prediction accuracy is best when the participation vector for seismic input is obtained from sensors located on the upper floors, the structure mostly exhibits elastic response, a modal system identification is applied to the seismic measurement, and local damage does not affect the global seismic response of the structure. The reason is that this method assumes that the identified mode shapes do not change due to the occurrence of an earthquake.
Chapter
In this paper, a simple IIR filter is used in system identification. A uniform white sequence is used as the input signal for the unknown system. A white noise sequence, uncorrelated with the input signal, is added to the system output. An illustrative example is solved, and the optimization is performed using the simplex method of Nelder and Mead. A comparison is made with results from a genetic algorithm and a simulated annealing algorithm. It is demonstrated that a gradient-based algorithm gets stuck in a local minimum. The obtained results confirm the efficiency and efficacy of this approach. Keywords: System identification, Adaptive filtering, Convex optimization
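The experiment described in this abstract is easy to reproduce in outline; the sketch below identifies a simple IIR filter from white-noise input/output data using the Nelder-Mead simplex method (via scipy.optimize.minimize). The filter orders, coefficients, and noise level are illustrative assumptions, not those of the paper.

```python
# Hedged sketch: IIR system identification with Nelder-Mead on synthetic data.
import numpy as np
from scipy.signal import lfilter
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 5000
u = rng.uniform(-1.0, 1.0, n)                 # uniform white input
b_true, a_true = [0.2, 0.3], [1.0, -0.7]      # "unknown" IIR system
y = lfilter(b_true, a_true, u) + 0.01 * rng.standard_normal(n)  # noisy output

def cost(p):
    """Mean-square output error of a first-order IIR model with parameters p."""
    b0, b1, a1 = p
    y_hat = lfilter([b0, b1], [1.0, a1], u)
    return np.mean((y - y_hat) ** 2)

res = minimize(cost, x0=[0.0, 0.0, 0.0], method="Nelder-Mead")
print(res.x)   # should be close to [0.2, 0.3, -0.7]
```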
Article
Univariate filters used in output gap estimation are subject to the criticism of being purely statistical and having no economic content. The information content of output gap measures estimated with standard multivariate filtering techniques, on the other hand, can be distorted by the possibly unrealistic restriction that system parameters stay constant over time. In this study, we seek to address these shortcomings by proposing an output gap estimation method that takes changing economic relations into account. We employ a nonlinear time series framework together with the extended Kalman filter, in which economic content is provided by the inflation and output gap dynamics and the parameters are allowed to be time-varying. We use the Turkish economy as a laboratory to show that our method provides useful results, both in terms of the properties of the output gap estimates and for the assessment of change in macroeconomic dynamics.
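As a minimal illustration of filtering with time-varying parameters, the sketch below runs an extended Kalman filter on an augmented state containing both a latent variable and a slowly drifting coefficient. The scalar model, noise levels, and drift are illustrative assumptions; the paper's state-space model for inflation and the output gap is considerably richer.

```python
# Hedged sketch: EKF for joint state and time-varying parameter estimation.
import numpy as np

rng = np.random.default_rng(3)
T = 400
a = 0.95 + 0.0005 * np.cumsum(rng.standard_normal(T))  # drifting coefficient
x = np.zeros(T)
y = np.zeros(T)
for k in range(1, T):
    x[k] = a[k] * x[k - 1] + 0.1 * rng.standard_normal()
    y[k] = x[k] + 0.1 * rng.standard_normal()

# EKF on augmented state z = [x, a]; the transition f(z) = [a*x, a] is nonlinear.
z = np.array([0.0, 0.5])
P = np.eye(2)
Q = np.diag([0.1 ** 2, 1e-5])
R = 0.1 ** 2
H = np.array([[1.0, 0.0]])
for k in range(1, T):
    Fj = np.array([[z[1], z[0]], [0.0, 1.0]])   # Jacobian of f at current z
    z = np.array([z[1] * z[0], z[1]])            # predict
    P = Fj @ P @ Fj.T + Q
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    z = z + K[:, 0] * (y[k] - z[0])              # measurement update
    P = (np.eye(2) - K @ H) @ P

print(z[1], a[-1])   # filtered coefficient vs. true drifting value
```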
Thesis
At the roots of every engineering field there are mathematical models. They allow us to make predictions on the evolution of a process, monitor the health of a plant and design a control scheme. System Identification provides techniques for obtaining such a model directly from experimental data collected from the system we want to model, leading to a model which is accurate enough. In order to obtain a good model using the tools of System Identification, a user has to choose a model structure, the experimental data and an estimation method.
The choice of the experimental data relies on designing the experiment, and it has important consequences for the final quality of the model. Indeed, if we consider the identification of a model among a set of transfer functions (model structure) in the Prediction Error framework, the "larger" the power spectrum of the excitation signal, the more accurate the model. On the other hand, a "large" power spectrum for the excitation signal represents a high cost for the experiment. In this context, the least-costly experiment design framework has been proposed, where the cost is minimized while requiring a model which is just accurate enough.
In all optimal experiment design problems, the underlying optimisation problem depends on the unknown true system that we want to identify. This problem is generally circumvented by replacing the true system with an initial estimate. One important consequence of this approach is that we can underestimate the actual cost of the experiment and that the accuracy of the identified model can be lower than desired. Many efforts have been made in the literature to make this optimisation problem robust, leading to the research area of robust optimal experiment design. However, except for simple cases, the approaches proposed so far do not completely robustify the optimisation problem. In this thesis, based on an a-priori uncertainty set for the true system, we propose a convex optimization approach that guarantees that the experiment cost will not be higher than a computed upper bound and that the accuracy of the model is at least the desired one. We do this considering that the excitation signal is a multisine signal.
In recent years we have observed in control engineering a rising interest in networks. Even though many identification problems in the network context have recently been studied, this is not the case for optimal experiment design. In this thesis, we consider optimal experiment design for the identification of one module in a given network of locally controlled systems. The identification experiment is designed in such a way that we obtain a sufficiently accurate model of the to-be-identified module at the smallest identification cost, i.e. with the least perturbation of the network.
Finally, in the second part of this thesis we consider the drive mass system of a MEMS gyroscope. This drive mass system is meant to oscillate at its resonance frequency in order to achieve the desired performance. However, during operation the gyroscope undergoes environmental changes, such as temperature changes, that affect the resonance frequency of the resonator. It is then important to track these changes during the operation of the gyroscope. To do so, in this thesis we investigate two solutions: one coming from adaptive control, the extremum seeking scheme, and one coming from System Identification, the recursive least squares algorithm.
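Since the thesis assumes a multisine excitation signal, here is a minimal sketch of how such a signal is constructed; in least-costly experiment design the frequencies, amplitudes, and phases would be the decision variables of the optimization, whereas below they are arbitrary illustrative choices.

```python
# Hedged sketch: constructing a multisine excitation signal.
import numpy as np

fs, T = 1000.0, 2.0                        # sampling rate [Hz], duration [s]
t = np.arange(0, T, 1.0 / fs)
freqs = np.array([5.0, 15.0, 40.0])        # excited frequencies [Hz]
amps = np.array([1.0, 0.5, 0.25])          # spectral amplitudes
phases = 2 * np.pi * np.random.default_rng(0).random(3)  # randomized phases

u = sum(a * np.cos(2 * np.pi * f * t + p) for a, f, p in zip(amps, freqs, phases))
print(u.shape, u.std())
```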
Preprint
Full-text available
Within the context of recursive least squares (RLS) parameter estimation, the goal of the present paper is to study the effect of regularization-induced bias on the transient and asymptotic accuracy of the parameter estimates. We consider this question in three stages. First, we consider regression with random data, in which case persistency is guaranteed. Next, we apply RLS to finite-impulse-response (FIR) system identification and, finally, to infinite-impulse-response (IIR) system identification. For each case, we relate the condition number of the regressor matrix to the transient response and rate of convergence of the parameter estimates.
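A small numerical sketch of the theme of this preprint: in RLS, regularization can be viewed as entering through the initial matrix P_0 = (1/alpha) I, and the transient error depends on both alpha and the conditioning of the regressors. The dimensions, scalings, and alpha values below are illustrative assumptions, not the settings analyzed in the paper.

```python
# Hedged sketch: effect of RLS regularization (via P0) and regressor conditioning.
import numpy as np

def rls(Phi, y, alpha):
    """Run RLS over the data with initial P = I / alpha (alpha = regularization)."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = np.eye(n) / alpha
    for phi, yk in zip(Phi, y):
        Pphi = P @ phi
        gain = Pphi / (1.0 + phi @ Pphi)
        theta = theta + gain * (yk - phi @ theta)
        P = P - np.outer(gain, Pphi)
    return theta

rng = np.random.default_rng(7)
theta_true = np.array([1.0, -1.0, 2.0])
Phi = rng.standard_normal((200, 3)) * np.array([1.0, 1.0, 0.05])  # ill-conditioned
y = Phi @ theta_true + 0.01 * rng.standard_normal(200)

print("cond(Phi) =", np.linalg.cond(Phi))
for alpha in (1e-6, 1e-2, 1e2):
    print(alpha, np.linalg.norm(rls(Phi, y, alpha) - theta_true))
```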