
Particle Swarm Optimization approach for optimal design of PID controllers

Ashwin R Nair (122101004), Nakul C (122101024)

Abstract: Proportional-Integral-Derivative (PID) control is one of the most common control algorithms used in industry. A PID controller reads a sensor, computes its proportional, integral, and derivative responses, and combines them to produce the control output. It is used to improve the dynamic response and to reduce the steady-state error. Random search methods such as the genetic algorithm (GA) and simulated annealing have been used in control systems to search for optimal PID parameters, but their computational complexity and degradation in efficiency motivate a search for other suitable methods. One such method is Particle Swarm Optimization (PSO). In this project we compare the characteristics and performance of PSO-PID and GA-PID controllers using Matlab and the Control Systems Toolbox.

I. OBJECTIVES

The Proportional-Integral-Derivative (PID) controller is a widely used control method in industry. However, it is quite difficult to tune its gains because of high plant order, time delays, and nonlinearities. One of the classic methods used to tune PID gains is the Ziegler-Nichols method, but even with it, near-optimal PID parameters are hard to determine. The genetic algorithm has high potential for optimization but requires considerable computational effort. Particle Swarm Optimization (PSO) is one of the modern heuristic algorithms and is very robust in solving nonlinear optimization problems. In this project, our objective is to compare the characteristics and performance of GA-PID and PSO-PID controllers and assess their computational efficiency.

II. MOTIVATION / ORIGIN OF THE PROJECT

PID controllers are among the most widely used control systems in industry, so optimizing the gains of PID controllers is crucial. Even though random search methods such as GA exist, they are computationally expensive. This prompts us to search for other algorithms that can provide optimal solutions for effective tuning of PID controllers. PSO is one such method: it is simple, converges quickly, and is very efficient in tuning PID controllers. This motivates us to compare the characteristics of GA-PID and PSO-PID controllers, assess the advantages and disadvantages of both methods, and conclude which of them is better suited to real-world control systems.

These methods, inspired by nature, offer innovative solutions to complex optimization problems by emulating the behavior of natural systems. PSO, for example, is inspired by the social behavior of bird flocks and fish schools, where individuals adjust their movements based on the best-performing peers. Similarly, GA mimics the process of natural selection and evolution, where individuals with better traits have a higher chance of survival and reproduction. By harnessing these principles, nature-inspired optimization methods can efficiently search vast solution spaces and find optimal solutions in complex and dynamic environments.

III. STATE-OF-THE-ART

Over the years, different methods have been introduced to tune PID controllers. There are gradient-based optimization methods such as gradient descent, stochastic gradient descent (SGD), and the Adam optimizer. There are also many metaheuristic algorithms, such as Particle Swarm Optimization (PSO), Genetic Algorithms (GA), Ant Colony Optimization (ACO), Differential Evolution (DE), and Harmony Search (HS), which are also good
optimization techniques, each offering unique convergence properties.

In recent times, machine-learning-based techniques have seen a rise, with neural networks, deep learning, and reinforcement learning that can integrate with PID controllers and learn optimal control from system feedback. There is ongoing work that combines one or more of these methods into optimization strategies that improve convergence speed.

IV. METHODOLOGY / APPROACH

A. PID Controller

    C(s) = K_p + \frac{K_i}{s} + K_d s    (1)

The PID controller enhances the dynamic response and minimizes the steady-state error of a system.

• The derivative controller improves the transient response by introducing a finite zero into the open-loop plant transfer function.
• The integral controller adds a pole at the origin, raising the system type by one and reducing the steady-state error caused by step inputs to zero.

Hence, the PID controller fine-tunes the system's behavior, making it respond faster and more accurately to changes.

B. Modeling a Linearized AVR System

The Automatic Voltage Regulator (AVR) maintains a steady voltage at the generator terminals. It is made up of four parts: amplifier, exciter, generator, and sensor. We linearized the AVR system by simplifying each part's behaviour:

• Amplifier: boosts signals quickly. We represent it with a gain (K_A = 10) and a small time constant (\tau_A = 0.01).

    \frac{V_R(s)}{V_e(s)} = \frac{10}{1 + 0.01s}    (2)

• Exciter: regulates the generator's field and therefore has a longer time constant (\tau_e = 0.4).

    \frac{V_F(s)}{V_R(s)} = \frac{1}{1 + 0.4s}    (3)

• Generator: converts mechanical energy into electrical energy. Its response depends on the load and has a time constant (\tau_g = 1).

    \frac{V_t(s)}{V_F(s)} = \frac{1}{1 + s}    (4)

• Sensor: detects changes in voltage and hence has a small time constant (\tau_s = 0.01).

    \frac{V_s(s)}{V_t(s)} = \frac{1}{1 + 0.01s}    (5)

C. Performance Estimation of PID Controller

A performance criterion proposed in [1] was used. The criterion is evaluated for a candidate set of gains k_p, k_i, and k_d, and the tuning task is to minimize it. The criterion, denoted W(K), depends on the maximum overshoot (M_p), rise time (t_r), settling time (t_s), and steady-state error (E_{ss}):

    W(K) = (1 - e^{-\beta})(M_p + E_{ss}) + e^{-\beta}(t_s - t_r)    (6)

where K is [k_p, k_i, k_d] and β is the weighting factor. β is set below 0.7 to emphasize reducing the rise time and settling time, and above 0.7 to emphasize reducing the maximum overshoot and steady-state error. We used β in the range 0.8 to 1.5.
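The project's experiments were carried out in Matlab with the Control Systems Toolbox. Purely as an illustration, the sketch below assembles the same linearized AVR loop and evaluates W(K) in Python with scipy. The function name avr_performance, the simulation horizon, the unity feedback structure with the sensor in the feedback path (as in [1]), and the 10-90% rise-time and 2% settling-time conventions are our own assumptions, since the paper does not state them.

```python
import numpy as np
from scipy import signal

def series(*tfs):
    """Multiply transfer functions given as (numerator, denominator) coefficient lists."""
    num, den = np.array([1.0]), np.array([1.0])
    for n, d in tfs:
        num, den = np.polymul(num, n), np.polymul(den, d)
    return num, den

def avr_performance(kp, ki, kd, beta=1.0, t_end=10.0):
    """Closed-loop AVR step-response metrics and the criterion W(K) of Eq. (6)."""
    pid = ([kd, kp, ki], [1.0, 0.0])             # C(s) = (Kd s^2 + Kp s + Ki) / s, Eq. (1)
    forward = series(pid,
                     ([10.0], [0.01, 1.0]),      # amplifier, Eq. (2)
                     ([1.0], [0.4, 1.0]),        # exciter,   Eq. (3)
                     ([1.0], [1.0, 1.0]))        # generator, Eq. (4)
    sensor = ([1.0], [0.01, 1.0])                # sensor,    Eq. (5)

    # Unit-step reference with the sensor in the feedback path (assumed, as in [1]):
    # T(s) = G / (1 + G*H)
    num = np.polymul(forward[0], sensor[1])
    den = np.polyadd(np.polymul(forward[1], sensor[1]),
                     np.polymul(forward[0], sensor[0]))
    t, y = signal.step(signal.TransferFunction(num, den),
                       T=np.linspace(0.0, t_end, 4000))

    y_final = y[-1]
    mp = max(y.max() - y_final, 0.0) / max(abs(y_final), 1e-9)   # fractional overshoot
    ess = abs(1.0 - y_final)                                     # error w.r.t. unit reference
    t10 = t[np.argmax(y >= 0.1 * y_final)]                       # assumed 10-90% rise time
    t90 = t[np.argmax(y >= 0.9 * y_final)]
    tr = t90 - t10
    outside = np.nonzero(np.abs(y - y_final) > 0.02 * abs(y_final))[0]
    ts = t[outside[-1]] if outside.size else 0.0                 # assumed 2% settling time
    w = (1.0 - np.exp(-beta)) * (mp + ess) + np.exp(-beta) * (ts - tr)   # Eq. (6)
    return w, {"Mp": mp, "Ess": ess, "tr": tr, "ts": ts}

# Example: the gains reported later for the PSO-PID controller
print(avr_performance(0.65315, 0.55575, 0.24551, beta=1.0))
```

Because the overshoot, rise-time, and settling-time conventions above are assumptions, the printed numbers will not exactly reproduce the W values reported later in the paper; the sketch only shows how the criterion is formed.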
D. Genetic algorithm

In GA, the individuals of the initial population of candidate solutions are called chromosomes. These chromosomes are created randomly, with parameters such as the population size, chromosome length, and encoding scheme defined in advance.

Chromosomes with higher fitness are favored for reproduction, simulating the principle of "survival of the fittest." Common selection methods include roulette wheel, tournament, rank-based, and stochastic universal sampling.

Mutation introduces diversity by randomly altering genes or bits in offspring chromosomes, helping to prevent premature convergence. The mutation rate controls the probability that a mutation occurs. The next step replaces individuals in the current population with the new offspring and mutated chromosomes.
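The paper gives no GA listing; the following is a minimal real-coded GA sketch in Python over the gain vector [Kp, Ki, Kd], using roulette-wheel selection (formalized as Eq. (7) below), arithmetic crossover, and Gaussian mutation. The fitness function here is a stand-in placeholder; in the project it would be derived from the criterion W(K) above, and the bounds, rates, and operators are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(gains):
    """Stand-in fitness; in the project this would come from W(K), e.g. 1 / W(K)."""
    kp, ki, kd = gains
    return 1.0 / (1e-6 + (kp - 0.7) ** 2 + (ki - 0.5) ** 2 + (kd - 0.25) ** 2)

def roulette_select(pop, fit):
    """Pick one parent with probability proportional to fitness, cf. Eq. (7)."""
    return pop[rng.choice(len(pop), p=fit / fit.sum())]

def run_ga(pop_size=50, generations=100, bounds=(0.0, 1.5),
           crossover_rate=0.8, mutation_rate=0.1):
    lo, hi = bounds
    pop = rng.uniform(lo, hi, size=(pop_size, 3))          # chromosomes = [Kp, Ki, Kd]
    for _ in range(generations):
        fit = np.array([fitness(ind) for ind in pop])
        children = []
        while len(children) < pop_size:
            a, b = roulette_select(pop, fit), roulette_select(pop, fit)
            if rng.random() < crossover_rate:               # arithmetic crossover
                alpha = rng.random()
                a, b = alpha * a + (1 - alpha) * b, alpha * b + (1 - alpha) * a
            for child in (a, b):
                if rng.random() < mutation_rate:            # Gaussian mutation
                    child = child + rng.normal(0.0, 0.05, size=3)
                children.append(np.clip(child, lo, hi))
        pop = np.array(children[:pop_size])                 # replacement population
    fit = np.array([fitness(ind) for ind in pop])
    return pop[fit.argmax()]                                # best [Kp, Ki, Kd] found

print(run_ga())
```

The sketch is deliberately minimal (no elitism, fixed mutation scale); it is meant only to make the selection-crossover-mutation-replacement cycle described above concrete.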
Fig. 1: AVR system with a PSO-PID controller

    P(C_i) = \frac{f(C_i)}{\sum_{j=1}^{n} f(C_j)}, \qquad p_m = \text{mutation rate}    (7)

The algorithm repeats these steps until a termination criterion is met. This iterative process allows continual improvement and exploration of the solution space until an optimal or near-optimal solution is obtained.

E. Particle Swarm Optimization

Particle Swarm Optimization, first described by James Kennedy and Russell C. Eberhart in 1995 [3], was formulated on the idea of swarm intelligence, based on observations of the swarming habits of animals such as bees and birds.

PSO begins by initializing a population of potential solutions known as particles. Each particle represents a solution within the problem's search space. These particles collectively explore the solution space to find an optimal or near-optimal solution.

In the next step, each particle's objective function value is calculated. This assesses how well a particle performs in solving the optimization problem. The fitness function provides a quantifiable measure of the solution quality represented by each particle, guiding the search towards better solutions.

In each iteration of PSO, particles update their velocities based on their current velocities, the best position found by the particle itself (local best), and the best position found among all particles in the population (global best). This velocity adjustment mechanism balances exploration and exploitation.

After updating velocities, particles adjust their positions accordingly in the search space. The position update equation computes the new position of each particle from its current position and updated velocity. This iterative process allows particles to traverse the solution space dynamically, converging towards optimal solutions over successive iterations.

    x_{ij}^{t+1} = x_{ij}^{t} + v_{ij}^{t+1}, \qquad i = 1, 2, \ldots, n, \quad j = 1, 2, \ldots, m    (8)

If a particle's current position yields a better fitness value than its local best, the local best is updated to the current position. Similarly, the global best position among all particles is updated if a particle discovers a better solution than the current global best. This mechanism encourages information sharing among particles, facilitating the collective discovery of better solutions.

    v_{ij}^{t+1} = w \cdot v_{ij}^{t} + c_1 r_1 (\text{local best}_{ij} - x_{ij}^{t}) + c_2 r_2 (\text{global best}_{ij} - x_{ij}^{t})    (9)

The constants c_1 and c_2 are called acceleration constants. Low values of c_1 and c_2 allow a particle to roam far from the target regions, while high values result in movements towards the target region. In this project we use c_1 = c_2 = 2 as mentioned in [].
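As with the GA, the following is only an illustrative Python sketch (the authors worked in Matlab). It implements the position and velocity updates of Eqs. (8)-(9) with c_1 = c_2 = 2 and a linearly decreasing inertia weight (formalized as Eq. (10) in Section IV-F below); the stand-in objective and the gain bounds are our own assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sphere_like(gains):
    """Stand-in objective; in the project it is the AVR criterion W(K)."""
    kp, ki, kd = gains
    return (kp - 0.7) ** 2 + (ki - 0.5) ** 2 + (kd - 0.25) ** 2

def run_pso(objective, n_particles=50, iters=50, bounds=(0.0, 1.5),
            c1=2.0, c2=2.0, w_max=0.9, w_min=0.4):
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_particles, 3))       # positions = [Kp, Ki, Kd]
    v = np.zeros_like(x)                                  # velocities
    pbest, pbest_val = x.copy(), np.array([objective(p) for p in x])
    g = pbest_val.argmin()
    gbest, gbest_val = pbest[g].copy(), pbest_val[g]      # global best

    for it in range(iters):
        w = w_max - (w_max - w_min) * it / iters          # linearly decreasing inertia weight
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # Eq. (9)
        x = np.clip(x + v, lo, hi)                                   # Eq. (8)
        vals = np.array([objective(p) for p in x])
        better = vals < pbest_val                         # update local (personal) bests
        pbest[better], pbest_val[better] = x[better], vals[better]
        if vals.min() < gbest_val:                        # update global best
            gbest, gbest_val = x[vals.argmin()].copy(), vals.min()
    return gbest, gbest_val

print(run_pso(sphere_like))
```

The objective is passed in as a function so that the same loop can later be pointed at the AVR performance criterion instead of the placeholder.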
The algorithm continues iterating through these steps until a termination criterion is met. Common termination criteria include reaching a maximum number of iterations or achieving a satisfactory solution within a specified threshold. Once the termination criterion is satisfied, the best solution found, either the global best or a satisfactory solution, represents the optimized solution to the given optimization problem.

F. PSO-PID controller

The PSO-PID controller was modelled in Matlab. The following PSO parameters were used:

• Each individual's members are K_p, K_i and K_d
• Population size = 50
• w_max = 0.9 and w_min = 0.4, where the inertia weight w decreases with the iteration count as

    w = w_{max} - \frac{w_{max} - w_{min}}{iter_{max}} \times iter    (10)

• c_1 = c_2 = 2
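Putting the pieces together: if the avr_performance() and run_pso() sketches above are placed in one file, the tuning step described in this section reduces to minimising W(K) over [Kp, Ki, Kd]. This glue code is ours, not the authors' Matlab implementation.

```python
# Assumes avr_performance() and run_pso() from the earlier sketches are in scope.
def w_criterion(gains):
    kp, ki, kd = gains
    return avr_performance(kp, ki, kd, beta=1.0)[0]   # W(K) with beta = 1

best_gains, best_w = run_pso(w_criterion, n_particles=50, iters=50)
print("tuned [Kp, Ki, Kd]:", best_gains, "  W:", best_w)
```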
Simulation was run for 50 iterations, and PSO showed good convergence.

First, we plotted the step response of the AVR system without a PID controller.

Fig. 2: Step response of AVR system without PID controller

From this response we calculated the transient characteristics:

• Steady-state error (E_ss) = 0.0894
• Maximum overshoot (M_p) = 0.6502
• Settling time (t_s) = 6.9878
• Rise time (t_r) = 0.2613

Fig. 3: Performance criterion of PSO-PID with each iteration

Next, we modelled the PSO-PID controller with the PSO constants listed above. It iterated to minimise the performance criterion. After convergence of the PSO algorithm, the final tuned parameters were:

• K_p = 0.65315
• K_i = 0.55575
• K_d = 0.24551

which gave a performance criterion (with β = 1) of W = 0.0591 for the PSO-PID controller.

Fig. 4: Step response of AVR system with PSO-PID controller

G. GA-PID controller

We also implemented the GA-PID controller to compare it with PSO-PID. We gave it the same objective
function of minimizing the performance criterion, as in the PSO-PID case. Hence, it iterated to minimise the performance criterion.

Fig. 5: Performance criterion of GA-PID with each generation

The values of the tuned parameters obtained were:

• K_p = 0.84721
• K_i = 0.72751
• K_d = 0.26716

which gave a performance criterion (with β = 1) of W = 0.20105 for the GA-PID controller. Shown below is the step response of the AVR system with the GA-PID controller.

Fig. 6: Step response of AVR system with GA-PID controller

V. RESULTS AND DISCUSSION

We observed that PSO-PID controllers demonstrated faster convergence and greater stability in reaching near-optimal solutions than GA-PID controllers.

Fig. 7: Step response of AVR system with GA-PID and PSO-PID controllers

As Fig. 7 shows, the PSO-PID controller produced a near-ideal step response of the AVR system, indicating that the PSO-PID controller outperforms the GA-PID controller. Moreover, the performance criterion of the PSO-PID controller (W = 0.0591) is far smaller than that of the GA-PID controller (W = 0.20105).

VI. CONCLUSION

We have looked into the strengths and weaknesses of GA-PID and PSO-PID for improving PID controller performance. PSO works well in specific control scenarios because it converges quickly and handles difficult situations effectively. GA takes a different approach, exploring various possibilities to find the best solution and adapting to changes in the system. However, GA can be unpredictable and give different results even with the same starting conditions.

Deciding between PSO and GA depends on the control goals, system characteristics, and optimization preferences. To make control systems even better, we can explore combining PSO and GA, integrating machine learning, and deploying the optimized controllers in real-time applications.

VII. FUTURE PERSPECTIVE

Looking ahead, there are several possible advancements that can be made in PID controller tuning. We can look into combining the PSO and GA algorithms into a hybrid algorithm that leverages the advantages of both. This may lead to faster convergence and better optimization.

There are different variants and extensions of the PSO and GA algorithms that bring incremental improvements. We can also investigate these methods and compare how they enhance the optimization process.

In recent times, machine-learning-based approaches have seen rapid growth. We can also investigate ML-based approaches such as reinforcement learning, neural networks, and fuzzy design. The advantage of an ML-based approach is that it yields an adaptive, self-tuning PID controller whose parameters are continuously learned and updated from system feedback.

REFERENCES

[1] Z.-L. Gaing, "A particle swarm optimization approach for optimum design of PID controller in AVR system," IEEE Transactions on Energy Conversion, vol. 19, no. 2, pp. 384-391, June 2004, doi: 10.1109/TEC.2003.821821.
[2] M. Solihin, L. Tack, and L. K. Moey, "Tuning of PID Controller Using Particle Swarm Optimization (PSO)," Proceedings of the International Conference on Advanced Science, Engineering and Information Technology, vol. 1, 2011, doi: 10.18517/ijaseit.1.4.93.
[3] J. Kennedy and R. Eberhart, "Particle swarm optimization," Proceedings of ICNN'95 - International Conference on Neural Networks, Perth, WA, Australia, 1995, pp. 1942-1948 vol. 4, doi: 10.1109/ICNN.1995.488968.
