Like Genetic Algorithms and other Evolutionary Computation (EC) paradigms, Particle Swarm Optimization (PSO) is a population-based stochastic search algorithm that evolves a population of candidate solutions toward an optimal or near-optimal solution of an optimization problem. PSO differs from other EC methods in that it is inspired by the flocking behavior of birds searching for food. The algorithm mimics the social behavior of a flock in order to guide swarms of particles (i.e., candidate solutions) toward the most promising regions of the search space. It is grounded in the principle of exchanging information among the swarm particles (i.e., cooperation among alternative solutions) rather than in competition among the candidates. In this way, individuals interact with one another and benefit from the best performance of any member of the entire population while also learning from their own experience, and the population gradually moves into better regions of the problem space. PSO has been applied to many science and engineering optimization problems and has proven to be an efficient algorithm.

The search for an optimal solution using PSO starts with the generation of a population of candidate solutions. Each of these candidates is known as a particle. Assuming that a particle is defined in terms of *N* parameters (i.e., the search space is *N*-dimensional), the current position of particle *i* (the parameters assigned to that particle) is represented by *X*_{i} = (*x*_{i1}, *x*_{i2}, …, *x*_{iN}). The fitness value of each particle is then evaluated for the problem at hand and is used to compare the relative performance of the alternative solutions in the population (swarm). The best-performing particle is determined and its position is recorded. Let the index of the best particle among all particles in the swarm be denoted by *g*. The best previous position (i.e., the position giving the best fitness value) of the *i*th particle is represented as *P*_{i} = (*p*_{i1}, *p*_{i2}, …, *p*_{iN}). Each individual (particle) moves through the search space with a velocity vector that is influenced by particle *g*. At each iteration, the individual moves from one position to another. The rate of position change (velocity) of particle *i* is represented as *V*_{i} = (*v*_{i1}, *v*_{i2}, …, *v*_{iN}). The particles are manipulated according to the following equations:

*v*_{id} = *w* *v*_{id} + *c*_{1} *r*_{1} (*p*_{id} − *x*_{id}) + *c*_{2} *r*_{2} (*p*_{gd} − *x*_{id})

*x*_{id} = *x*_{id} + *v*_{id}
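The per-dimension update of a single particle can be sketched as follows. The function name and the default values for *w*, *c*_{1}, and *c*_{2} are illustrative assumptions, not values prescribed by the text:

```python
import random

def update_particle(x, v, p_best, g_best, w=0.7, c1=1.5, c2=1.5):
    """One PSO update step for a single particle (illustrative sketch).

    x, v       -- current position X_i and velocity V_i of particle i
    p_best     -- this particle's best previous position P_i
    g_best     -- the best position found by the swarm (particle g)
    w, c1, c2  -- inertia weight and positive acceleration constants
                  (assumed values; tune for the problem at hand)
    """
    new_v, new_x = [], []
    for d in range(len(x)):
        # r1, r2 are fresh random values in [0, 1] for each dimension d
        r1, r2 = random.random(), random.random()
        vd = w * v[d] + c1 * r1 * (p_best[d] - x[d]) + c2 * r2 * (g_best[d] - x[d])
        new_v.append(vd)
        new_x.append(x[d] + vd)   # the particle flies to its new position
    return new_x, new_v
```

Note that when a particle sits exactly at both its personal best and the swarm best with zero velocity, both attraction terms vanish and the particle stays put.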

where *d* = 1, 2, …, *N*; *i* = 1, 2, …, *M*; and *M* is the number of particles in the population; *w* is the inertia weight; *c*_{1} and *c*_{2} are positive constants; and *r*_{1} and *r*_{2} are random values in the range [0, 1]. The velocity update computes the particle's new velocity from its previous velocity, the distance of its current position from its own best previous position, and the distance from the group's best position. The particle then flies toward a new position by adding this velocity to its current position. The inertia weight *w* controls the impact of the previous history of velocities on the current velocity, and thus governs the trade-off between the global (wide-ranging) and local (nearby) exploration abilities of the flying points. A larger inertia weight *w* facilitates global exploration (searching new areas), while a smaller one facilitates local exploration; a suitable choice of *w* can balance global and local exploration abilities and thus reduce the number of iterations needed on average to find the optimum. Once the particles have moved to new positions, their fitness values are evaluated, the best-performing particle is determined, each particle's velocity vector is updated, and the particles are moved again. This process is repeated until the stopping criterion is met.
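The full iterate-until-stopping procedure described above can be sketched as a minimal PSO loop. Everything here is an assumption for illustration: the function name, the swarm size, the fixed iteration budget used as the stopping criterion, the parameter values, and the search bounds:

```python
import random

def pso_minimize(fitness, n_dims, n_particles=20, n_iters=100,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimal PSO sketch: minimize `fitness` over an n_dims-dimensional box.

    Parameter defaults are illustrative assumptions, not prescribed values.
    """
    lo, hi = bounds
    # Generate the initial swarm: random positions, zero velocities.
    xs = [[random.uniform(lo, hi) for _ in range(n_dims)]
          for _ in range(n_particles)]
    vs = [[0.0] * n_dims for _ in range(n_particles)]
    p_best = [x[:] for x in xs]          # each particle's best position P_i
    p_val = [fitness(x) for x in xs]
    g = min(range(n_particles), key=lambda i: p_val[i])  # index of best particle
    g_best, g_val = p_best[g][:], p_val[g]

    for _ in range(n_iters):             # stopping criterion: iteration budget
        for i in range(n_particles):
            for d in range(n_dims):
                r1, r2 = random.random(), random.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (p_best[i][d] - xs[i][d])
                            + c2 * r2 * (g_best[d] - xs[i][d]))
                xs[i][d] += vs[i][d]     # move the particle
            f = fitness(xs[i])
            if f < p_val[i]:             # update personal best P_i
                p_best[i], p_val[i] = xs[i][:], f
                if f < g_val:            # update swarm best (particle g)
                    g_best, g_val = xs[i][:], f
    return g_best, g_val

# Example usage: minimize the sphere function f(x) = sum(x_d^2).
best, val = pso_minimize(lambda x: sum(t * t for t in x), n_dims=2)
```

On a smooth unimodal function such as the sphere, the swarm typically contracts around the origin well within this iteration budget.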