Question
Asked 17th Mar, 2015

Why is the deadbeat controller considered not robust?

It has been written in the literature that the deadbeat controller is not robust to perturbations, but I could never find any mathematical or logical proof of this statement. Can somebody explain this? A link to related literature would be appreciated.

Popular answers (1)

Hugh Lachlan Kennedy
Daronmont Technologies
Deadbeat control is a very optimistic and aggressive approach, particularly for short sampling periods. By "optimistic" I mean that it aims to obtain zero tracking error in M time steps, where M is the order of the plant. By "aggressive" I mean that this usually requires very large control commands (i.e. plant inputs), typically of opposing signs. Furthermore, even when the plant is known precisely, you may end up with oscillatory closed-loop behaviour in the intra-sample response, due to (stable) poles near z = -1, even though the plant output may look fine at the sample times!
Like many other controller design techniques that utilize a plant model (e.g. linear state-space with observer, internal model control, Smith predictor, polynomial design), you need to be realistic and not "ask too much" of your controller (i.e. not place the closed-loop poles too close to the origin, approached from the right, in the z-plane). An aggressive control strategy will hurt you if your plant model is not perfect: a highly under-damped closed-loop system at best, and an unstable system at worst. These problems can often be lessened somewhat by incorporating a low-pass filter into your controller, e.g. with repeated poles on the real axis (at 0 < z < 1, but closer to 1 for more smoothing).
Don't worry about proofs; you will see these effects if you play around with Simulink for an hour or two.
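The Simulink experiment suggested above can also be sketched in a few lines of plain Python. This is my own illustration with made-up numbers, not from the answer: a first-order plant y[k+1] = a·y[k] + b·u[k] under a deadbeat controller derived from the nominal model (a_hat, b_hat). With a perfect model the output reaches a unit step in one sample; with a gain error it overshoots and rings, because the closed-loop pole migrates from z = 0 toward z = -1.

```python
# Deadbeat control of a first-order plant y[k+1] = a*y[k] + b*u[k].
# The control law u[k] = u[k-1] + (e[k] - a_hat*e[k-1]) / b_hat places the
# nominal closed-loop pole at z = 0 (deadbeat in one sample).

def simulate(a_true, b_true, a_hat=0.9, b_hat=0.5, n=12):
    """Closed-loop unit-step response, model (a_hat, b_hat) vs true plant."""
    y = [0.0]
    e_prev, u_prev = 0.0, 0.0
    for _ in range(n):
        e = 1.0 - y[-1]                            # tracking error
        u = u_prev + (e - a_hat * e_prev) / b_hat  # deadbeat control law
        y.append(a_true * y[-1] + b_true * u)      # true plant update
        e_prev, u_prev = e, u
    return y

nominal   = simulate(a_true=0.9, b_true=0.5)  # model matches plant exactly
perturbed = simulate(a_true=0.9, b_true=0.8)  # 60% gain error in the plant

print(nominal[:4])    # settles exactly at 1.0 after one sample
print(perturbed[:4])  # overshoots to 1.6, then rings: pole has moved to z = -0.6
```

Note how large the first control command is (u[0] = 2 to move the output by 1 in one step) — this is the "aggressive" part, and it is exactly what amplifies the model mismatch.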
3 Recommendations

All Answers (2)

Osman Çakıroğlu
TAAC Aerospace Technologies
The deadbeat controller works by cancelling the system poles, which means the numerator of the controller is the denominator of the system. In practice there is always a modelling error or parameter perturbation that changes the actual system, so the mathematically designed controller will not work properly.
I suppose this can be seen in the mathematical controller design, or you can see the difference in Simulink.
Hope this helps.
2 Recommendations

