


Results from physical measurements show that the MRAC system was able to accommodate the nonlinearities associated with the DC motor while maintaining good control of the motor without voltage overshoot, in contrast to the PID controller.

Recent developments in magnetic materials, microprocessors, semiconductor technology, and mechatronics have expanded the range of high-performance drive applications. DC servomotors are usually the choice for such applications; they are widely deployed because of their reliability and ease of control, which follows from the decoupled nature of the field and armature magnetomotive forces (Sheel, Chandkishor, and Gupta). DC servomotor systems have two outputs that can be controlled: angular speed and angular position.

For some applications, such as disk drives and robotics, position control is more important than speed control (Makableh). One of the effects that most hinders efficient control of a DC servomotor is overshoot. In control theory, overshoot refers to an output exceeding its final steady-state value.

Overshoot can be seen as a form of distortion that degrades the rise time, settling time, and related transient metrics. Reviews show that conventional controllers such as the Proportional-Integral-Derivative (PID) controller are not well suited to handling the nonlinearities associated with DC motors while simultaneously mitigating the effects of overshoot.

Thus, one drawback of conventional tracking controllers for electric drives is that they cannot capture unknown load characteristics over a wide range of operating points.

This makes tuning of the controller parameters very difficult. There are many ways to overcome these difficulties, but four basic approaches are common in adaptive control: (1) model reference adaptive control (MRAC), (2) self-tuning control, (3) dual control, and (4) gain scheduling.

Usually the load torque is a nonlinear function of a combination of variables such as the speed and position of the rotor. Therefore, identifying the overall nonlinear system through a model linearized around a widely varying operating point, under fast switching frequencies, can introduce errors that lead to unstable or inaccurate performance of the system (Åström and Wittenmark). The DC motor model has both an electrical and a mechanical part.

The torque developed by the motor is proportional to the air-gap flux and the armature current. To see this, assume a current-carrying conductor is placed in a magnetic field with flux φ, located at a distance r from the center of rotation.

The relationship among the developed torque, the flux φ, and the armature current ia is

Tm = Kt φ ia (1a)

In addition to the developed torque, when the conductor moves in the magnetic field a voltage is generated across its terminals. This voltage is known as the back emf; it is proportional to the shaft velocity ωm and tends to oppose the current flow:

eb = Kb ωm (1b)

Equations (1a) and (1b) form the fundamentals of dc-motor operation. Generally speaking, MRAC is composed of four parts: the plant containing unknown parameters, a reference model that compactly specifies the desired output of the control system, a feedback control law containing adjustable parameters, and an adaptation mechanism that updates those parameters. The adaptation law of an MRAC system extracts parameter information from the tracking errors. However, unlike NARMA-L2, the model reference architecture requires that a separate neural network controller be trained offline, in addition to the neural network plant model.
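The four-part MRAC structure can be sketched for a first-order plant. Everything below is illustrative rather than taken from the paper: the plant, reference model, adaptation gain gamma, and the Lyapunov-based adaptive law are standard textbook choices, assuming the sign of the control gain b is known and positive.

```python
def simulate_mrac(a=1.0, b=1.0, a_m=2.0, gamma=2.0, r=1.0, dt=1e-3, t_end=20.0):
    """Minimal first-order MRAC sketch (Lyapunov rule, assumes b > 0).

    Plant:           dy/dt  = -a*y + b*u          (a, b unknown to the controller)
    Reference model: dym/dt = -a_m*ym + a_m*r     (specifies the desired response)
    Control law:     u = th_r*r + th_y*y          (adjustable parameters)
    Adaptive laws:   th_r' = -gamma*e*r, th_y' = -gamma*e*y, with e = y - ym
    """
    y = y_m = th_r = th_y = 0.0
    for _ in range(int(t_end / dt)):
        u = th_r * r + th_y * y             # control law with adjustable gains
        e = y - y_m                         # tracking error drives the adaptation
        th_r -= gamma * e * r * dt          # adaptation law: extract parameter
        th_y -= gamma * e * y * dt          # information from the tracking error
        y += (-a * y + b * u) * dt          # plant (forward Euler)
        y_m += (-a_m * y_m + a_m * r) * dt  # reference model
    return y, y_m

y_final, ym_final = simulate_mrac()
```

With a constant reference the tracking error converges to zero even though the individual gains need not converge to their ideal values (the regressor is not persistently exciting).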

The controller training is computationally expensive because it requires dynamic backpropagation (Beale, Hagan, and Demuth). It has also been emphasized that complete controllability and observability of the process must be assumed for successful neural network modeling and control (Saerens and Soquet). Moreover, Narendra and Parthasarathy indicated that considerable progress in nonlinear control theory is still needed to obtain rigorous solutions to identification and control problems using neural networks.

The PID control system computes the error signal between a measured output value and a reference (input) value; the controller then works to drive this error to a minimum, so that the measured output is as close as possible to the reference signal.

The mathematical model of the PID controller has been presented by many authors and is represented by

u(t) = Kp e(t) + Ki ∫ e(τ) dτ + Kd de(t)/dt (2)

where u(t) is the controller output signal, e(t) is the error signal, Kp is the proportional gain, Ki is the integral gain, and Kd is the derivative gain. The DC motor takes a single input, the applied voltage, and generates a single output, the speed: it is a single-input, single-output (SISO) system. Figure 2 is the electromechanical representation of a DC motor; the diagram is used to develop the system-level transfer function that characterizes the behavior of the motor.
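Equation (2) discretizes into a few lines. The sketch below uses a rectangular-rule integral and a backward-difference derivative; the first-order speed model in the usage loop and all gain values are illustrative assumptions, not the paper's DC motor.

```python
class PID:
    """Discrete-time realization of Equation (2)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def update(self, ref, meas):
        e = ref - meas
        self.integral += e * self.dt           # accumulate the integral term
        deriv = (e - self.prev_err) / self.dt  # backward-difference derivative
        self.prev_err = e
        return self.kp * e + self.ki * self.integral + self.kd * deriv

# usage: regulate a hypothetical first-order speed model dw/dt = -w + u
dt = 0.01
pid = PID(kp=2.0, ki=5.0, kd=0.0, dt=dt)
w = 0.0
for _ in range(2000):            # 20 s of simulated time
    u = pid.update(1.0, w)       # track a unit speed reference
    w += (-w + u) * dt           # forward-Euler plant update

speed = w
```

The integral term removes the steady-state error, so the simulated speed settles at the reference.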

Fig 2 Electrical Model of DC Motor

The armature is modeled as a circuit with resistance Ra connected in series with an inductance La and a voltage source ea, with eb representing the back electromotive force (emf) induced in the armature when the rotor rotates. From the diagram it can be seen that control of the dc motor is applied at the armature terminals in the form of the applied voltage ea(t), and that the torque developed by the motor is proportional to the air-gap flux and the armature current.

In Equations (3) through (6), the applied voltage ea(t) is considered the cause, and Equation (5) gives the immediate effect due to the applied voltage.

From Equation (3), the armature current ia(t) produces the motor torque, while Equation (6) defines the back emf. It can also be seen from Equation (7) that the motor torque in turn produces the angular velocity and displacement θm(t) of the rotor. The transfer function between the motor displacement and the input voltage is obtained as Equation (9); note that the load torque TL has been set to zero in Equation (9).
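The electrical and mechanical equations above can be simulated directly. The parameter values below are assumed for illustration only (they are not from the paper); the final speed can be checked against the analytic steady state Kt·ea / (Ra·Bm + Kt·Kb), which follows from setting both derivatives to zero with TL = 0.

```python
# Illustrative (assumed) motor parameters
Ra, La = 1.0, 0.01      # armature resistance [ohm] and inductance [H]
Kt, Kb = 0.5, 0.5       # torque constant [N*m/A] and back-emf constant [V*s/rad]
Jm, Bm = 0.01, 0.1      # rotor inertia [kg*m^2] and viscous friction [N*m*s/rad]

def step_response(ea=10.0, TL=0.0, dt=1e-4, t_end=2.0):
    """Forward-Euler simulation of the coupled motor equations:
       La*di/dt = ea - Ra*i - Kb*w   (electrical side: back emf opposes current)
       Jm*dw/dt = Kt*i - Bm*w - TL   (mechanical side: load torque TL)
    """
    i = w = 0.0
    for _ in range(int(t_end / dt)):
        di = (ea - Ra * i - Kb * w) / La
        dw = (Kt * i - Bm * w - TL) / Jm
        i += di * dt
        w += dw * dt
    return w

w_ss = step_response()
# expected steady state: Kt*ea / (Ra*Bm + Kt*Kb) = 5.0 / 0.35 ≈ 14.29 rad/s
```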

Fig 2 shows a block diagram of the DC motor system for speed control; from the diagram one can see clearly how the transfer function relates to each block. From Equation (9) it can be seen that a factor s can be taken out of the denominator, meaning that the dc motor acts as an integrating device between the input voltage and the displacement.

From fig 3 it can also be seen that the motor has a built-in feedback loop created by the back emf Eb.

Fig 3 Simulink Model of a DC Servomotor in terms of speed

Physically, the back emf represents the feedback of a signal proportional to the negative of the motor speed.

From Equation (9), it can be noted that the back-emf constant Kb contributes an added term in the denominator alongside the resistance Ra and the viscous-friction coefficient Bm.

Effectively, the back emf acts like an electrical friction, which tends to improve the stability of the motor and hence of the overall system. Before simulation of ANN-based control of the DC motor can be performed, an equivalent discrete-time model of the motor must be constructed.

Note that the choice of load torque here is arbitrary; a load torque that varies with speed is a common characteristic of most propeller-driven loads.
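A discrete-time model of the kind described can be sketched by forward-Euler discretization with a propeller-type load torque TL = c·w·|w|. The sampling period, motor parameters, and load coefficient below are all illustrative assumptions.

```python
# Illustrative (assumed) parameters; c is the propeller load coefficient
Ra, La, Kt, Kb, Jm, Bm, c = 1.0, 0.01, 0.5, 0.5, 0.01, 0.1, 0.01
T = 1e-3   # sampling period [s]

def step(i, w, ea):
    """One step of the discrete-time motor model: returns (i_next, w_next)."""
    TL = c * w * abs(w)                            # speed-dependent load torque
    i_next = i + T * (ea - Ra * i - Kb * w) / La   # armature current update
    w_next = w + T * (Kt * i - Bm * w - TL) / Jm   # speed update with load
    return i_next, w_next

i = w = 0.0
for _ in range(5000):              # 5 s at T = 1 ms, constant 10 V input
    i, w = step(i, w, 10.0)
```

With the load engaged, the steady-state speed solves Kt(ea - Kb·w)/Ra = Bm·w + c·w², which is lower than the no-load value; a model in this step form can also be iterated to generate input-output training data for an ANN.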

Alternatively, direct substitution can be used, substituting for the position in Equations (4), (5), and (10).

Figure 1. Direct adaptive control structure.

The main problem in indirect adaptive control is to choose the class of control laws C(θc); we will study this problem in great detail in Chapters 5 and 6. In direct adaptive control, the estimate θc(t) is used in the control law without intermediate calculations. The choice of the class of control laws C(θc) and of the parameter estimators that generate θc(t), so that the closed-loop plant meets the performance requirements, is the fundamental problem in direct adaptive control.

As a result, direct adaptive control is restricted to certain classes of plant models. In general, not every plant can be expressed in a parameterized form involving only the controller parameters, which is also a suitable form for online estimation.

As we show in Chapter 5, a class of plant models that is suitable for direct adaptive control for a particular control objective consists of all SISO LTI plant models that are minimum phase, i.e., whose zeros are located in the open left half of the s-plane.

In general, the ability to parameterize the plant model with respect to the desired controller parameters is what gives us the choice to use the direct adaptive control approach. Note that the two structures in the figures are identical in form; this identical-in-structure interpretation is often used in the adaptive control literature to argue that the separation of adaptive control into direct and indirect is artificial and is maintained simply for historical reasons.

In general, direct adaptive control is applicable to SISO linear plants which are minimum phase, since for this class of plants the parameterization of the plant with respect to the controller parameters for some controller structures is possible.

Indirect adaptive control can be applied to a wider class of plants with different controller structures, but it suffers from a problem known as the stabilizability problem, explained as follows. As shown in Figure 1, the controller parameters are calculated from the estimated plant parameters; such calculations are possible provided that the estimated plant is controllable and observable, or at least stabilizable and detectable. Since these properties cannot be guaranteed by the online estimator in general, the calculation of the controller parameters may not be possible at some points in time, or it may lead to unacceptably large controller gains.

As we explain in Chapter 6, solutions to this stabilizability problem are possible at the expense of additional complexity. The principle behind the design of direct and indirect adaptive control is the following: the form of the control law is the same as the one used in the case of known plant parameters. In the case of indirect adaptive control the unknown controller parameters are calculated at each time t using the estimated plant parameters generated by the online estimator, whereas in the direct adaptive control case the controller parameters are generated directly by the online estimator.

In both cases the estimated parameters are treated as the true parameters for control design purposes. This design approach is called certainty equivalence (CE) and can be used to generate a wide class of adaptive control schemes by combining different online parameter estimators with different control laws.

In some approaches, the control law is modified to include nonlinear terms, and this approach deviates somewhat from the CE approach. The principal philosophy, however, that as the estimated parameters converge to the unknown constant parameters the control law converges to that used in the known parameter case, remains the same.

Gain scheduling structure. In this class of schemes, the online parameter estimator is replaced with search methods for finding the controller parameters in the space of possible parameters, or it involves switching between different fixed controllers, assuming that at least one is stabilizing or uses multiple fixed models for the plant covering all possible parametric uncertainties or consists of a combination of these methods.

We briefly describe the main features, advantages, and limitations of these non-identifier-based adaptive control schemes in the following subsections. Since some of these approaches are relatively recent and research is still going on, we will not discuss them further in the rest of the book. Transitions between different operating points that lead to significant parameter changes may be handled by interpolation or by increasing the number of operating points.

The two elements that are essential in implementing this approach are a lookup table to store the values of the gains Ki and plant measurements that correlate well with changes in the operating points.

The approach is called gain scheduling and is illustrated in Figure 1. With this approach, plant parameter variations can be compensated by changing the controller gains as functions of the input, output, and auxiliary measurements. The advantage of gain scheduling is that the controller gains can be changed as quickly as the auxiliary measurements respond to parameter changes.
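The lookup-table mechanism can be sketched as follows. The operating points and gain values are entirely hypothetical; the point is only that the gains are precomputed offline and interpolated online from an auxiliary measurement.

```python
import numpy as np

# Hypothetical schedule: auxiliary measurement (operating point) -> PI gains.
op_points = np.array([0.0, 0.5, 1.0])   # stored operating points
kp_table  = np.array([4.0, 2.5, 1.2])   # proportional gains at those points
ki_table  = np.array([8.0, 5.0, 2.0])   # integral gains at those points

def scheduled_gains(op):
    """Interpolate the controller gains at the measured operating point,
    clipping to the table range outside the stored points."""
    op = float(np.clip(op, op_points[0], op_points[-1]))
    kp = float(np.interp(op, op_points, kp_table))
    ki = float(np.interp(op, op_points, ki_table))
    return kp, ki
```

Transitions between operating points are handled here by linear interpolation; a finer table reduces the interpolation error at the cost of more stored entries.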

Frequent and rapid changes of the controller gains, however, may lead to instability.

Figure 1. Multiple model adaptive control with switching.

One of the disadvantages of gain scheduling is that the adjustment mechanism of the controller gains is precomputed offline and therefore provides no feedback to compensate for incorrect schedules. Large and unpredictable changes in the plant parameters, due to failures or other effects, may then lead to deterioration of performance or even to complete failure. Despite these limitations, gain scheduling is a popular method for handling parameter variations in flight control [3, 6] and other systems [7].

While gain scheduling falls into the generic definition of adaptive control, we do not classify it as adaptive control in this book due to the lack of online parameter estimation which could track unpredictable changes in the plant parameters. These schemes are based on search methods in the controller parameter space [8] until the stabilizing controller is found or the search method is restricted to a finite set of controllers, one of which is assumed to be stabilizing [22, 23].

In some approaches, after a satisfactory controller is found, it can be tuned locally using online parameter estimation for better performance.

Since the plant parameters are unknown, the parameter space is parameterized with respect to a set of plant models which is used to design a finite set of controllers so that each plant model from the set can be stabilized by at least one controller from the controller set.

Without going into specific details, the general structure of this scheme, often called multiple model adaptive control with switching, is shown in Figure 1. The set of plant models covering the possible parametric uncertainty must be chosen first, and this by itself could be a difficult task in practical situations where the plant parameters are unknown or change in an unpredictable manner.

Adaptive Control Tutorial (Advances in Design and Control)

Furthermore, since there is an infinite number of plants within any given bound of parametric uncertainty, finding controllers to cover all possible parametric uncertainties may also be challenging. In other approaches [22, 23], it is assumed that the set of controllers with the property that at least one of them is stabilizing is available.

This is achieved by the use of a switching logic that differs in detail from one approach to another. While these methods provide another set of tools for dealing with plants with unknown parameters, they cannot replace the identifier-based adaptive control schemes where no assumptions are made about the location of the plant parameters. One advantage, however, is that once the switching is over, the closed-loop system is LTI, and it is much easier to analyze its robustness and performance properties.
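A toy sketch of the multiple-model idea follows, under two simplifying assumptions of mine: the plant exactly matches one candidate model, and the switching logic is a simple accumulated-prediction-error criterion. The candidate parameters and the deadbeat control law paired with each model are illustrative choices, not any particular published scheme.

```python
# Candidate models y[k+1] = a*y[k] + b*u[k]; each paired with a certainty-
# equivalence deadbeat law u = (r - a*y)/b (illustrative controller choice).
candidates = [(0.5, 1.0), (0.9, 0.5), (0.99, 0.1)]
true_a, true_b = 0.9, 0.5          # the "unknown" plant matches candidate 1

y, u, r = 0.0, 0.0, 1.0
errs = [0.0] * len(candidates)     # accumulated squared prediction errors
for _ in range(50):
    y_next = true_a * y + true_b * u                  # unknown plant response
    for i, (a, b) in enumerate(candidates):           # score every model
        errs[i] += (y_next - (a * y + b * u)) ** 2    # against measured output
    best = min(range(len(candidates)), key=lambda i: errs[i])
    a, b = candidates[best]                           # switch to the best model
    y = y_next
    u = (r - a * y) / b                               # apply its control law
```

After one informative sample the correct model dominates and, since its controller is fixed thereafter, the closed loop is LTI between switches, which is exactly the analysis advantage noted above.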

This LTI nature of the closed-loop system, at least between switches, allows the use of the well-established and powerful robust control tools for LTI systems [29] for controller design.

These approaches are still in their infancy, and it is not yet clear how they affect performance, since switching may generate bad transients with adverse effects on performance. Switching may also increase the controller bandwidth and lead to instability in the presence of high-frequency unmodeled dynamics.

Guided by data that do not carry sufficient information about the plant model, the wrong controllers could be switched on over periods of time, leading to internal excitation and bad transients before the switching process settles to the right controller.

Some of these issues may also exist in classes of identifier-based adaptive control, as such phenomena are independent of the approach used. The following simple examples illustrate situations where adaptive control is superior to linear control.

Consider the scalar plant

ẋ = a x + u,

where u is the control input and x is the scalar state of the plant. The parameter a is unknown. We want to choose the input u so that the state x remains bounded and is driven to zero with time.


If a is a known parameter, then the linear control law u = -kx, with k > a, can meet the control objective. The conclusion is that in the absence of an upper bound for the plant parameter, no fixed linear controller can stabilize the plant and drive the state to zero; the switching schemes described in Section 1 are one possible remedy. As we will establish in later chapters, an adaptive control law guarantees that all signals are bounded and that x converges to zero no matter what the value of the parameter a is.
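One standard adaptive law for this example is u = -kx with the gain adjusted as k' = γx²; the gain ratchets up while |x| is large and stops growing once the state is regulated. The simulation below uses illustrative values (a = 2, so the open-loop plant is unstable, and γ = 1).

```python
# Scalar plant dx/dt = a*x + u with a unknown to the controller (here a = 2).
# Adaptive law: u = -k*x with k' = gamma*x**2; no upper bound on a is needed.
a, gamma, dt = 2.0, 1.0, 1e-3      # illustrative values
x, k = 1.0, 0.0
for _ in range(int(20.0 / dt)):    # 20 s of simulated time, forward Euler
    u = -k * x                     # adaptive feedback law
    k += gamma * x * x * dt        # gain grows while |x| is large...
    x += (a * x + u) * dt          # ...until k exceeds a and x decays to zero
```

The state initially grows (k starts below a), but k overtakes a and x then decays; k settles at a finite value above a rather than growing without bound.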

This simple example demonstrates that adaptive control is a potential approach to use in situations where linear controllers cannot handle the parametric uncertainty. Another example where an adaptive control law may have properties superior to those of the traditional linear schemes is the following.

It is clear that by increasing the value of the controller gain k, we can make the steady-state value of x as small as we like. This leads to a high-gain controller, however, which is undesirable, especially in the presence of high-frequency unmodeled dynamics. In principle, we cannot guarantee that x will be driven to zero for any finite control gain in the presence of a nonzero disturbance d. The adaptive control approach is instead to estimate the disturbance d online and cancel its effect via feedback.
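This estimate-and-cancel idea can be sketched for a first-order plant. The plant, the disturbance value, and the Lyapunov-motivated update d_hat' = γx are illustrative assumptions of mine, not a scheme from the text.

```python
# Plant dx/dt = -x + d + u with an unknown constant disturbance d. Rather
# than using a high gain, estimate d online and cancel it with u = -d_hat.
d, gamma, dt = 3.0, 5.0, 1e-3      # illustrative values
x, d_hat = 0.0, 0.0
for _ in range(int(20.0 / dt)):    # 20 s of simulated time, forward Euler
    u = -d_hat                     # cancel the estimated disturbance
    d_hat += gamma * x * dt        # disturbance estimator driven by the state
    x += (-x + d + u) * dt         # plant with the true disturbance acting
```

Unlike any finite fixed gain, the estimator drives x all the way to zero: the residual forcing term d - d_hat itself converges to zero.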

Therefore, in addition to stability, adaptive control techniques can be used to improve performance in a wide variety of situations where linear techniques would fail to meet the performance characteristics. This by no means implies, however, that adaptive control is the most appropriate approach to use in every control problem. The purpose of this book is to teach the reader not only the advantages of adaptive control but also its limitations. Adaptive control involves learning, and learning requires data which carry sufficient information about the unknown parameters.

For such information to be available in the measured data, the plant has to be excited, and this may lead to transients which, depending on the problem under consideration, may not be desirable.

Furthermore, in many applications there is sufficient information about the parameters, and online learning is not required. In such cases, linear robust control techniques may be more appropriate. The adaptive control tools studied in this book complement the numerous control tools already available in the area of control systems, and it is up to the knowledge and intuition of the practicing engineer to determine which tool to use for which application.

The theory, analysis, and design approaches presented in this book will help the practicing engineer to decide whether adaptive control is the approach to use for the problem under consideration. Starting in the early 1950s, the design of autopilots for high-performance aircraft motivated intense research activity in adaptive control.

High-performance aircraft undergo drastic changes in their dynamics when they move from one operating point to another, changes which cannot be handled by constant-gain feedback control.

A sophisticated controller, such as an adaptive controller, that could learn and accommodate changes in the aircraft dynamics was needed.

Model reference adaptive control was suggested by Whitaker and coworkers in [30, 31] to solve the autopilot control problem. Sensitivity methods and the MIT rule were used to design the online estimators or adaptive laws of the various proposed adaptive control schemes. An adaptive pole placement scheme based on the optimal linear quadratic problem was suggested by Kalman in [32]. The work on adaptive flight control was characterized by a "lot of enthusiasm, bad hardware and nonexisting theory" [33].

The lack of stability proofs, the limited understanding of the properties of the proposed adaptive control schemes, and a disaster in a flight test [34] caused interest in adaptive control to diminish. The 1960s then became the most important period for the development of control theory, and of adaptive control in particular. State-space techniques and Lyapunov-based stability theory were introduced, and developments in dynamic programming [35, 36], dual control [37] and stochastic control in general, and in system identification and parameter estimation [38, 39] played a crucial role in the reformulation and redesign of adaptive control.

By 1966, Parks [40] and others had found a way of redesigning the MIT-rule-based adaptive laws used in the model reference adaptive control (MRAC) schemes of the 1950s by applying the Lyapunov design approach. Their work, even though applicable to a special class of LTI plants, set the stage for further rigorous stability proofs in adaptive control for more general classes of plant models. The advances in stability theory and the progress in control theory in the 1960s improved the understanding of adaptive control and contributed to a strong renewed interest in the field in the 1970s.

On the other hand, the simultaneous development of and progress in computers and electronics, which made the implementation of complex controllers such as the adaptive ones feasible, contributed to an increased interest in applications of adaptive control. The 1970s witnessed several breakthrough results in the design of adaptive control. MRAC schemes using the Lyapunov design approach were designed and analyzed.

The concepts of positivity and hyperstability were used in [45] to develop a wide class of MRAC schemes with well-established stability properties. At the same time, parallel efforts for discrete-time plants in deterministic and stochastic environments produced several classes of adaptive control schemes with rigorous stability proofs [44, 46].

The excitement of the 1970s and the development of a wide class of adaptive control schemes with well-established stability properties were accompanied by several successful applications [47-49].

The successes of the 1970s, however, were soon followed by controversies over the practicality of adaptive control. As early as 1979 it was pointed out by Egardt [41] that the adaptive schemes of the 1970s could easily go unstable in the presence of small disturbances. The nonrobust behavior of adaptive control became very controversial in the early 1980s, when more examples of instabilities were published by Ioannou et al.

Rohrs's example of instability stimulated a lot of interest, and the objective of many researchers was directed towards understanding the mechanism of instabilities and finding ways to counteract them.

By the mid-1980s, several new redesigns and modifications had been proposed and analyzed, leading to a body of work known as robust adaptive control.

An adaptive controller is defined to be robust if it guarantees signal boundedness in the presence of "reasonable" classes of unmodeled dynamics and bounded disturbances, as well as performance error bounds that are of the order of the modeling error. The work on robust adaptive control continued throughout the 1980s and involved the understanding of the various robustness modifications and their unification under a more general framework [41].

In discrete time, Praly [57, 58] was the first to establish global stability in the presence of unmodeled dynamics, using various fixes and a dynamic normalizing signal of the kind used in Egardt's work to deal with bounded disturbances.

The use of the normalizing signal together with the switching σ-modification led to the proof of global stability in the presence of unmodeled dynamics for continuous-time plants in [59].


The solution of the robustness problem in adaptive control led to the solution of the long-standing problem of controlling a linear plant whose parameters are unknown and changing with time. By the end of the 1980s, several breakthrough results had been published in the area of adaptive control for linear time-varying plants [5].

The focus of adaptive control research in the late 1980s to early 1990s was on performance properties and on extending the results of the 1980s to certain classes of nonlinear plants with unknown parameters.

In this study, we consider a simple estimation algorithm based on online parameter identification.

This phenomenon is known as "bursting," and it may take considerable simulation time to appear.

Several survey papers [74, 75] and books and monographs [5, 39, 41, 49, 50, 66] have already been published.

