The Role of Block Particle Swarm Optimization in Enhancing the PID-WFR Algorithm

In the conventional Proportional-Integral-Derivative (PID) controller, the parameters are often adjusted according to formulas and the actual application. However, this empirical method has two disadvantages. First, testing the program takes much time and usually fails to reach the optimal solution. Second, the PID parameters do not adapt to a new environment when the situation changes. This paper proposes a method that employs Block Particle Swarm Optimization (BPSO) to enhance the conventional PID algorithm and overcome these disadvantages. A genetic algorithm (GA) was first used to optimize the PID parameters, but its optimization time is relatively long, so the BPSO algorithm was designed to solve the problem of long optimization time. The method was then applied to the wall-following robot problem and its performance was confirmed by realistic simulation. Compared with conventional methods, the proposed method yields a relatively stable solution.


Introduction
Currently, the PID algorithm is widely used in industrial process control because of its simplicity.
The essential idea is to construct a PID controller that linearly combines the proportional, integral, and derivative of the system deviation to control the object (Adriansyah et al., 2019). It gives good control results, high security, and good stability. Since the invention of the PID controller (largely due to Elmer Sperry's ship autopilot) in 1910 and the Ziegler-Nichols (Z-N) PID parameter tuning method in 1942 (Joseph et al., 2022), the applications of PID controllers have been developed and popularized extensively. PID parameter tuning includes engineering tuning methods and theoretical-calculation tuning methods (Lawrence et al., 2022). The engineering tuning method tunes the parameters directly according to engineering experience; it requires much experience, and different objective functions correspond to different experiences. The theoretical-calculation tuning method obtains the PID parameters from the mathematical model of the system; it takes much time to calculate and test the program, and usually cannot reach the optimal solution. PID parameter tuning is thus a complex and tedious process, and research on PID parameter tuning methods has been an essential issue in the control field (Kvascev & Djurovic, 2022; Song et al., 2022).
Many intelligent optimization algorithms, such as the genetic algorithm, particle swarm optimization, and the ant colony optimization algorithm, have been applied to PID parameter tuning and have achieved good control effects, improving the efficiency of PID controller parameter tuning. However, these algorithms have various problems: some have slow convergence speeds, some can only find local optimal solutions, and some have high error rates. This paper uses the block particle swarm optimization (BPSO) algorithm for PID parameter tuning. It is an optimization algorithm that combines the concept of initial-population partitioning with the traditional particle swarm algorithm, and it has a small population size, fast convergence speed, and strong global search capability.
The remainder of the paper is organized as follows. In Section 2, two forms of the PID algorithm and their applications are briefly reviewed (Wang et al., 2022). The genetic algorithm and the particle swarm optimization algorithm are then taken as examples to illustrate parameter optimization and explain their working mechanisms, and the BPSO algorithm is introduced. In Section 3, the method is applied to the simulation of a wall-following robot and the results are discussed.

PID Algorithm
The PID controller consists of a proportional unit P, an integral unit I and a differential unit D, tuned through the three parameters Kp, Ki and Kd. The PID controller is mainly suitable for systems whose essential linearity and dynamic characteristics do not change with time. The proportional segment acts on the error between the actual and expected values (Behera & Choudhury, 2022; Havaei & Sandidzadeh, 2022; Wiangtong & Sirapatcharangkul, 2017):

e(t) = r(t) − y(t)

The integral segment is the sum of previous errors, used for adjustment when the error is small:

E(t) = E(t − 1) + e(t) · Δt

The differential segment is the change in error, used to predict what the following error might be:

d(t) = (e(t) − e(t − 1)) / Δt

A PID control system is mainly composed of the PID controller and the controlled object, and a typical PID control system can be described as in Figure 1.

Figure 1 PID Control System
Position PID algorithm
PID control is a linear control law built from the three terms above. The continuous form of the PID algorithm is Equation 1:

u(t) = Kp [ e(t) + (1/Ti) ∫ e(t) dt + Td · de(t)/dt ] (1)

where u(t) is the output of the control system and e(t) is the input of the control system, generally the difference between the set quantity and the controlled quantity, that is, e(t) = r(t) − y(t); Kp is the proportional coefficient of the control system; Ti is the integral time of the control system; Td is the differential time of the control system; and T is the sampling period of the system (Kotb et al., 2022).
Discretizing Equation 1 gives Equation 2:

u(k) = Kp [ e(k) + (T/Ti) Σ_{j=0}^{k} e(j) + (Td/T)(e(k) − e(k−1)) ] (2)

The position-type PID control algorithm is suitable for an actuator without an integral element, where the action position of the actuator corresponds one-to-one to its input signal. According to the deviation e(k) between the k-th sample of the controlled variable and the set value, the controller calculates the output control variable u(k) after the k-th sampling.

The disadvantage of the position PID algorithm is that the sampled output depends on every previous state rather than being an independent control quantity. When calculating, an accumulator must be used to sum all the e(j) terms, which is a large amount of computation. At the same time, the output u(k) of the control system corresponds to the actual position of the implementing equipment; once the computer breaks down, u(k) may change significantly, leading to dramatic changes in the equipment position (Liu et al., 2022).
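The position-form computation above can be sketched as follows. This is a generic implementation, not the paper's code: it folds the sampling period into combined discrete gains (Ki here plays the role of Kp·T/Ti and Kd of Kp·Td/T), and the gain values in the usage line are arbitrary examples.

```python
class PositionPID:
    """Position-form PID: u(k) = Kp*e(k) + Ki*sum(e) + Kd*(e(k) - e(k-1)),
    with Ki and Kd taken as combined discrete gains."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.err_sum = 0.0    # accumulator over ALL past errors (the drawback noted above)
        self.prev_err = 0.0

    def step(self, setpoint, measurement):
        err = setpoint - measurement          # e(k) = r(k) - y(k)
        self.err_sum += err                   # running sum for the integral term
        d_err = err - self.prev_err           # backward difference for the derivative
        self.prev_err = err
        return self.kp * err + self.ki * self.err_sum + self.kd * d_err

pid = PositionPID(1.0, 0.1, 0.05)   # arbitrary example gains
u = pid.step(1.0, 0.0)              # first control output for setpoint 1.0
```

Note how `err_sum` grows without bound in memory of all past samples; this is exactly the accumulation drawback discussed above.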

Incremental PID Algorithm
The incremental PID algorithm outputs the increment of the control quantity, represented by Δu(k). When this algorithm is applied, the output Δu(k) corresponds to the position increment of the implementing equipment for this step, not to its actual position. Therefore, the equipment must accumulate the control increments to realize control of the controlled system. The cumulative function can be realized by a hardware circuit or by software programming, for example by using the formula Δu(k) = u(k) − u(k − 1) (Mok & Ahmad, 2022).
Subtracting Equation 2 at step k − 1 from Equation 2 at step k gives the incremental form, Equation 3:

Δu(k) = Kp (e(k) − e(k−1)) + (Kp T/Ti) e(k) + (Kp Td/T)(e(k) − 2e(k−1) + e(k−2)) (3)

The advantages of the incremental PID algorithm are: there is no accumulation term in the formula, so a large amount of calculation is not needed; the control increment Δu(k) is related only to the latest three sampling values of the system, which makes it convenient to apply weighted processing and achieve a good control effect; and each computer output is only the control increment, that is, the change of the relative position of the execution equipment, so if the machine breaks down, the impact on the system is small and the production process is not seriously hindered (Carlucho et al., 2019).
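A matching sketch of the incremental form, again with combined discrete gains (Ki standing for Kp·T/Ti and Kd for Kp·Td/T) and illustrative values:

```python
class IncrementalPID:
    """Incremental PID: the controller outputs du(k); the actuator (or
    software) accumulates u(k) = u(k-1) + du(k)."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.e1 = 0.0  # e(k-1)
        self.e2 = 0.0  # e(k-2)

    def step(self, err):
        # du(k) = Kp*(e(k)-e(k-1)) + Ki*e(k) + Kd*(e(k)-2*e(k-1)+e(k-2))
        du = (self.kp * (err - self.e1)
              + self.ki * err
              + self.kd * (err - 2 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, err   # shift the error history
        return du

ipid = IncrementalPID(1.0, 0.1, 0.05)   # arbitrary example gains
du = ipid.step(1.0)
```

Only the last three errors are kept, so no accumulator is needed; the actuator side then simply applies `u = u + du` at each step.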

Genetic Algorithm
A genetic algorithm (GA) is a method for searching for the optimal solution. It simulates reproduction, mating, and mutation in natural selection and genetic evolution. It has the advantages of easy implementation and fast convergence speed when used in path planning.
GA performs well on simple maps but poorly on complex maps and maps with many obstacles, because GA can only search randomly in the solution space and does not use the given information on the map (Yichen et al., 2020).
The flow chart of GA is shown in Figure 2.
a) Generate a random population.
b) According to the strategy, judge whether each individual's fitness meets the optimization criterion. If it does, output the best individual and the optimal solution and end the program; otherwise, proceed to the next step.
c) Parents are selected according to their fitness: individuals with high fitness are selected with high probability, and individuals with low fitness are eliminated.
d) The parents' chromosomes are crossed to generate offspring according to specific methods.
e) The offspring chromosomes are mutated.
f) A new generation population is generated by crossover and mutation; return to step b) until the optimal solution is generated.
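The steps a)-f) above can be sketched as a toy real-coded GA. All operator choices and constants here (truncation selection, arithmetic crossover, Gaussian mutation, the test function) are illustrative assumptions, not the authors' implementation:

```python
import random

def ga_minimize(fitness, bounds, pop_size=20, generations=100,
                crossover_rate=0.8, mutation_rate=0.1):
    """Minimal real-coded GA following steps a)-f)."""
    lo, hi = bounds
    pop = [random.uniform(lo, hi) for _ in range(pop_size)]   # a) random population
    for _ in range(generations):
        pop.sort(key=fitness)                                 # b) evaluate fitness
        parents = pop[:pop_size // 2]                         # c) keep the fittest half
        children = []
        while len(children) < pop_size - len(parents):
            p1, p2 = random.sample(parents, 2)
            if random.random() < crossover_rate:              # d) arithmetic crossover
                a = random.random()
                child = a * p1 + (1 - a) * p2
            else:
                child = p1
            if random.random() < mutation_rate:               # e) Gaussian mutation
                child += random.gauss(0, 0.1 * (hi - lo))
            children.append(min(max(child, lo), hi))          # clamp to the bounds
        pop = parents + children                              # f) next generation
    return min(pop, key=fitness)

# minimize (x - 2)^2 over [-5, 5]; the optimum is x = 2
best = ga_minimize(lambda x: (x - 2) ** 2, (-5, 5))
```

Because the fittest parents survive each generation (elitism), the best solution found never degrades across iterations.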

Particle Swarm Optimization Algorithm
The algorithm was initially inspired by the regularity of birds' flocking activity, from which a simplified model was established using swarm intelligence. Based on observation of the behaviour of animal flocks, particle swarm optimization (PSO) uses information sharing among individuals to move the whole population from disorder to order in the problem-solving space to obtain the optimal solution.
The mathematical description of the PSO algorithm is as follows. Suppose there is a population of m particles; the position of each particle at any time is an n-dimensional vector in the decision space. At iteration t, the position of particle i can be recorded as x_i(t) = [x_i1(t), x_i2(t), ..., x_in(t)], (i = 1, 2, ..., m). The fitness of each particle is obtained by substituting x_i(t) into the objective function or fitness function, and the particle's fitness value determines the particle's quality. Comparing the current position of each particle with its positions in previous iterations, the best historical position of particle i up to iteration t is obtained; it is called the individual extremum and recorded as p_i(t) = [p_i1(t), p_i2(t), ..., p_in(t)]. The corresponding velocity of particle i in the t-th iteration is recorded as v_i(t) = [v_i1(t), v_i2(t), ..., v_in(t)], (i = 1, 2, ..., m). Each dimension of the individual extremum p_i(t) is updated in iteration t whenever the particle's new position improves its fitness.

The global extremum g(t) of all particles in the t-th iteration is selected as the individual extremum with the best fitness value (Equation 6). In iteration t + 1, each dimension of the velocity v_i(t + 1) and position x_i(t + 1) of particle i is updated as follows (Equation 7):

v_i(t + 1) = ω v_i(t) + c1 r1 (p_i(t) − x_i(t)) + c2 r2 (g(t) − x_i(t)) (7)
x_i(t + 1) = x_i(t) + v_i(t + 1)

where ω is the inertia weight, a constant; c1 and c2 are learning factors, non-negative constants; and r1 and r2 are random numbers between 0 and 1. From Equation 7, the velocity update formula can be divided into three parts. The first part, ω v_i(t), represents the inertia of the velocity v_i(t) of particle i in the t-th iteration, which lets the particle explore new areas and gives the algorithm global search capability; it is often used to balance the global and local search abilities of the particles. The second part represents the self-learning ability of the particles, giving them strong local search ability. The third part can be seen as the particles' ability to learn from other members of the population, reflecting the information sharing between particles.
The steps of the original particle swarm optimization algorithm are as follows (Pinto et al., 2013):
STEP 1: Initialize the particles randomly.
STEP 2: Calculate the fitness of each particle.
STEP 3: Update the individual extremum of each particle according to the formula.
STEP 4: Update the optimal global position according to the formula.
STEP 5: Update each particle's speed and position according to the formula.
STEP 6: If the termination condition (usually a preset maximum number of iterations or a preset fitness value) is not reached, return to STEP 2; otherwise, terminate the algorithm.
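The steps above can be sketched in Python as follows. This is a generic textbook PSO implementing the Equation 7 update, not the authors' code; the sphere test function and all constants are illustrative:

```python
import random

def pso_minimize(f, dim, bounds, n_particles=30, iters=200,
                 w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO: STEP 1-6 with the standard velocity/position update."""
    lo, hi = bounds
    # STEP 1: random initialization of positions and velocities
    x = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    v = [[0.0] * dim for _ in range(n_particles)]
    pbest = [xi[:] for xi in x]                  # individual extrema p_i
    pbest_val = [f(xi) for xi in x]              # STEP 2: fitness of each particle
    g = pbest[min(range(n_particles), key=lambda i: pbest_val[i])][:]  # global extremum
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # STEP 5: velocity update = inertia + self-learning + social term
                v[i][d] = (w * v[i][d]
                           + c1 * r1 * (pbest[i][d] - x[i][d])
                           + c2 * r2 * (g[d] - x[i][d]))
                x[i][d] = min(max(x[i][d] + v[i][d], lo), hi)
            val = f(x[i])
            if val < pbest_val[i]:               # STEP 3: update individual extremum
                pbest_val[i], pbest[i] = val, x[i][:]
                if val < f(g):                   # STEP 4: update global extremum
                    g = x[i][:]
    return g                                     # STEP 6 handled by the iteration cap

# minimize the 2-D sphere function; the optimum is the origin
best = pso_minimize(lambda p: sum(c * c for c in p), dim=2, bounds=(-5, 5))
```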

BPSO Algorithm
In this algorithm, the range of the initial population is set and divided into several equal parts, and the initial population is distributed equally among the blocks.
Take a function f(x) as an example (Equation 5), where x ∈ [−5, 5]; the number of initial particles is set to 12 and divided into four blocks (Figure 3).

Figure 3 Block of Initial Particles
The following process is the same as in the particle swarm optimization algorithm. Each particle is substituted into the fitness function to calculate its fitness value. After selection, the particle with the smallest function value is taken as the benchmark, and the other particles move towards it at a certain speed. Over many iterations, the particles finally gather near a point, giving the optimal global solution. The output parameters of the particles are Kp, Ki and Kd.
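A minimal one-dimensional BPSO sketch: the search range is split into equal blocks and the swarm is seeded evenly across them, after which the update is the standard PSO rule. The quadratic test function and all constants are illustrative assumptions, not the paper's Equation 5:

```python
import random

def bpso_minimize(f, lo, hi, n_particles=12, n_blocks=4, iters=100,
                  w=0.7, c1=1.5, c2=1.5):
    """BPSO sketch: blocked initialization + standard PSO update."""
    width = (hi - lo) / n_blocks
    x, v = [], []
    for b in range(n_blocks):                    # seed an equal share per block
        for _ in range(n_particles // n_blocks):
            x.append(random.uniform(lo + b * width, lo + (b + 1) * width))
            v.append(0.0)
    pbest = x[:]
    g = min(pbest, key=f)                        # particle with the smallest value
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            v[i] = w * v[i] + c1 * r1 * (pbest[i] - x[i]) + c2 * r2 * (g - x[i])
            x[i] = min(max(x[i] + v[i], lo), hi)
            if f(x[i]) < f(pbest[i]):
                pbest[i] = x[i]
                if f(x[i]) < f(g):
                    g = x[i]
    return g

# 12 particles over [-5, 5] in four blocks, minimizing (x - 1)^2
best = bpso_minimize(lambda x: (x - 1.0) ** 2, -5.0, 5.0)
```

The only difference from plain PSO is the initialization loop: every block of the range starts with some particles, so no region is left unexplored at the outset.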

Application
In this section, the wall-following robot is simulated in MATLAB. The robot measures its distance from the wall and uses the PID algorithm to adjust its moving direction to realize the wall-following process. The BPSO algorithm optimizes the three parameters of the PID algorithm.

Establishment of The Simulation Model
Firstly, we assume the wall-following robot is an equilateral triangle and establish the robot's pose coordinate system, as shown in Figure 4. The robot's heading is updated from the time step, the angular velocity and the current angular displacement.
All these parameters are represented in Figure 5 (Left). The movement and rotation of the robot are applied over short intervals, and the robot's trajectory is accumulated through these discrete points to obtain the movement trajectory over a period, as shown in Figure 5 (Right).
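The discrete pose accumulation can be illustrated with a unicycle-style update; this kinematic model is an assumption on my part, since Figures 4-5 are not reproduced here:

```python
import math

def update_pose(x, y, theta, v, omega, dt):
    """One discrete pose step over a short interval dt: the robot moves with
    linear speed v along its current heading theta and rotates with angular
    velocity omega."""
    x_new = x + v * math.cos(theta) * dt
    y_new = y + v * math.sin(theta) * dt
    theta_new = theta + omega * dt   # heading updated from dt, omega, and theta
    return x_new, y_new, theta_new

# accumulate a trajectory from discrete steps
pose = (0.0, 0.0, 0.0)
trajectory = [pose]
for _ in range(10):
    pose = update_pose(*pose, v=1.0, omega=0.1, dt=0.1)
    trajectory.append(pose)
```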

Initialization of Particles
Firstly, set the particle swarm size to 28 (a multiple of 4), set the three PID parameters Kp, Ki and Kd as independent variables, and define the particles' acceleration coefficients and inertia factor.
Because the wall-following robot optimizes the RMSE by reading its distance from the wall using the three PID parameters Kp, Ki and Kd, this is a multivariable optimization problem. The BPSO algorithm divides the range of each of these three parameters, from 0 to 0.6, into four equal blocks. The advantage of this method is that it avoids missing the optimal solution and reduces the number of iterations.
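A sketch of this initialization follows. The paper's simulation is in MATLAB and its code is not reproduced here, so this Python version and all its names are mine; it assumes each particle draws all three dimensions from the same block:

```python
import random

N_PARTICLES, N_BLOCKS, DIM = 28, 4, 3   # 28 is a multiple of 4; dims are Kp, Ki, Kd
LO, HI = 0.0, 0.6                       # parameter range from the text

def init_swarm():
    """Seed 7 particles in each of the four equal sub-ranges of [0, 0.6]."""
    width = (HI - LO) / N_BLOCKS
    swarm = []
    for b in range(N_BLOCKS):
        b_lo = LO + b * width
        for _ in range(N_PARTICLES // N_BLOCKS):
            swarm.append([random.uniform(b_lo, b_lo + width) for _ in range(DIM)])
    return swarm

swarm = init_swarm()
```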

Fitness Function
The fitness function is the root-mean-square error (RMSE) of the successive distance readings:

RMSE = sqrt( (1/n) Σ_{i=1}^{n} (d_{i+1} − d_i)^2 )

where n represents the number of robot motions and d_{i+1} − d_i represents the errors.
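One plausible reading of the fitness function (an RMSE over the differences of successive wall-distance readings; this interpretation is an assumption, since the printed formula is incomplete) can be sketched as:

```python
import math

def rmse_fitness(distances):
    """RMSE over successive distance readings d_1..d_{n+1}: the error of
    motion i is taken as d_{i+1} - d_i, and n is the number of motions."""
    errors = [distances[i + 1] - distances[i] for i in range(len(distances) - 1)]
    n = len(errors)
    return math.sqrt(sum(e * e for e in errors) / n)

# a steadier wall-following run (smaller changes in distance) scores lower
steady = rmse_fitness([1.0, 1.0, 1.0, 1.0])
wobbly = rmse_fitness([1.0, 1.1, 0.9, 1.0])
```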

Result and Discussion
Results
Figure 6 shows the particles' initial state and final positions. It can be seen that the particle swarm moves from its initial irregular, discrete state to gather, after many iterations, within a small range near one point. The calculated Kp, Ki and Kd values are obtained from the three coordinate values of the particles, as shown in Table 1 below. After many tests, we find that the PID values are not constant, because this is a multivariate optimization problem, even more complex than the travelling salesman problem solved by the ant colony algorithm: the coordinates of the three points can change, which leads to varying results for Kp, Ki and Kd. However, the value of the RMSE is stable between 9 and 10, which shows the reliability of the optimization method. Letting the two algorithms iterate 500 times under the same conditions, GA takes 148 seconds while BPSO takes 64 seconds. It can be seen from the fitness function that the BPSO algorithm significantly improves the operation speed and reduces the waiting time. Figure 7 also shows that BPSO needs only a few iterations to reach the optimal solution, while GA needs at least 400 iterations.