# Optimization problems and algorithms


- 8 hours on-demand video
- 40 downloadable resources
- Full lifetime access
- Access on mobile and TV
- Certificate of Completion

- Identify, understand, formulate, and solve optimization problems
- Understand the concepts of stochastic optimization algorithms
- Analyse and adapt modern optimization algorithms

- You should have basic knowledge of programming
- You should be familiar with the Matlab programming language

This is an introductory course on stochastic **optimization problems and algorithms**, fundamental sub-fields of **Artificial Intelligence**. We will cover the most fundamental concepts in the field of optimization, including metaheuristics and swarm intelligence. By the end of this course, you will be able to identify and implement the main components of an optimization problem. Optimization problems differ, yet they mostly share similar challenges and difficulties, such as constraints, multiple objectives, discrete variables, and noise. This course will show you how to tackle each of these difficulties. Most of the lectures come with coding videos, in which the step-by-step process of implementing the optimization algorithms or problems is presented. There are also a number of **quizzes and exercises** to practice the theoretical knowledge covered in the lectures.

Here is the list of topics covered:

- History of optimization
- Optimization problems
- Single-objective optimization algorithms
- **Particle Swarm Optimization**
- Optimization of problems with **constraints**
- Optimization of problems with **binary and/or discrete variables**
- Optimization of problems with **multiple objectives**
- Optimization of problems with **uncertainties**

Particle Swarm Optimization will be the main algorithm: a search method that can be easily applied to different applications including **Machine Learning, Data Science, Neural Networks, and Deep Learning**.

I am proud of **200+ 5-star reviews.** Some of the reviews are as follows:

David said: "This course is **one of the best online course** I have ever taken. The instructor did an excellent job to very carefully prepare the contents, slides, videos, and explains the complicated code in a very careful way. Hope the instructor can develop much more courses to enrich the society. Thanks!"

Khaled said: "Dr. Seyedali is one of the greatest instructor that i had the privilege to take a course with. The **course was direct to the point** and the lessons are easy to understand and comprehensive. He is very helpful during and out of the course. i truly recommend this course to all who would like to learn optimization\PSO or those who would like to sharpen their understanding in optimization. best of luck to all and THANK YOU Dr. Seyedali."

Biswajit said: "This coursework has really been very helpful for me as I have to frequently deal with optimization. The most prominent feature of the course is the **emphasis given on coding and visualization of results**. Further, the support provided by Dr. Seyedali through personal interaction is top notch."

Boumaza said: "Good Course from Dr. Seyedali Mirjalili. It gives us clear picture of the algorithms used in optimization. It covers technical as well as practical aspects of optimization. **Step by step and very practical approach to optimization through well though and properly explained topics**, highly recommended course You really help me a lot. I hope, someday, I will be one of the players in this exciting field! Thanks to Dr. Seyedali Mirjalili."

Join 1000+ students and start your optimization journey with us. If you are in any way not satisfied, for any reason, you can **get a full refund from Udemy** within 30 days. No questions asked. But I am confident you won't need to. I stand behind this course 100% and am committed to helping you along the way.

- Anyone who wants to learn optimization
- Anyone who wants to solve an optimization problem

In this video, the structure of the course is discussed in detail. There are also some tips on how to use the Udemy video player.

In this lecture, we talk about optimization problems in general. We will cover the main components of optimization problems and the concepts of search space/landscape. There is also a very simple and intuitive example to help understand the theory covered in this lecture.

The learning outcomes are as given below:

- Understanding the difference between search space and search landscape
- Demonstrating the ability to identify the main components of an optimization problem (system)
- Demonstrating the ability to formulate single-objective optimization problems
- Understanding the most common difficulties when solving optimization problems
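For quick reference, a single-objective optimization problem of the kind formulated in this lecture is usually written in the following standard form (generic notation, not specific to this course):

```latex
\begin{aligned}
\min_{\mathbf{x}} \quad & f(\mathbf{x}) \\
\text{subject to} \quad & g_i(\mathbf{x}) \le 0, \quad i = 1, \dots, m \\
& h_j(\mathbf{x}) = 0, \quad j = 1, \dots, p \\
& lb_k \le x_k \le ub_k, \quad k = 1, \dots, n
\end{aligned}
```

Here $f$ is the objective function, $g_i$ and $h_j$ are inequality and equality constraints, and $lb_k$, $ub_k$ bound the search space.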

This lecture discusses the structure of single-objective optimization algorithms. All the concepts are discussed with an intuitive analogy.

The learning outcomes are as follows:

- Understanding the terminologies in the field of optimization: variables, objective value, objective function, constraint, global optimum, local optimum, search agents, algorithm, and iteration
- Understanding the main differences between conventional and modern optimization algorithms
- Understanding the differences between a deterministic algorithm and a stochastic algorithm
- Understanding the advantages and drawbacks of stochastic and deterministic algorithms
- Understanding the concepts of gradient and the structure of the gradient descent algorithm
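The course codes in Matlab; purely as an illustration of the gradient descent structure mentioned above, here is a minimal Python sketch (function names and parameters are hypothetical, not the course's code):

```python
def gradient_descent(grad, x0, lr=0.1, iters=100):
    # Deterministic, gradient-based optimization: repeatedly step in the
    # direction of the negative gradient, scaled by the learning rate lr.
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x
```

Note that, unlike the stochastic algorithms covered later, this method needs the gradient and can get trapped in local optima.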

This lecture covers the family of stochastic optimization algorithms.

The learning outcomes are as follows:

- Understanding the differences between individual-based and population-based algorithms
- Understanding the advantages and drawbacks of individual-based and population-based algorithms
- Understanding the concepts of function evaluations needed for an optimization algorithm
- Understanding the concepts of exploration (diversification) and exploitation (intensification)
- Understanding the No Free Lunch theorem

The lecture covers the most fundamental concepts for understanding the PSO algorithm as one of the most well-regarded stochastic population-based algorithms. We use the same analogy to understand the way that this algorithm searches for the global optimum of optimization problems.

The learning outcomes are as follows:

- Understanding the mathematical formulation of the PSO algorithm
- Understanding the main components of the velocity vector in PSO: inertial component, cognitive component, and social component
- Analyzing the performance of PSO in terms of exploration and exploitation
- Understanding the impact of inertia weight, c1, and c2 on the performance of PSO
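For reference, the velocity update with the inertial, cognitive, and social components mentioned above, followed by the position update, is commonly written as:

```latex
\begin{aligned}
v_i^{t+1} &= w\,v_i^{t} + c_1 r_1 \left(pbest_i^{t} - x_i^{t}\right) + c_2 r_2 \left(gbest^{t} - x_i^{t}\right) \\
x_i^{t+1} &= x_i^{t} + v_i^{t+1}
\end{aligned}
```

where $w$ is the inertia weight, $c_1$ and $c_2$ are the cognitive and social coefficients, and $r_1, r_2$ are random numbers in $[0, 1]$.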

This video is a step-by-step implementation of the PSO algorithm in Matlab.

The learning outcomes are as follows:

- Understanding the main logical steps in PSO for implementation
- Testing and analyzing the results of PSO
- Drawing the convergence curve of the PSO algorithm
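The coding video implements PSO in Matlab. As a language-neutral companion, here is a minimal Python sketch of the same logical steps (all names are illustrative, not the course's code), with the convergence curve kept as a list of best-so-far objective values:

```python
import random

def sphere(x):
    # Sphere test function: global optimum 0 at the origin
    return sum(v * v for v in x)

def pso(obj, dim=2, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5,
        lb=-10.0, ub=10.0, seed=0):
    rng = random.Random(seed)
    X = [[rng.uniform(lb, ub) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    pbest_val = [obj(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    curve = []  # convergence curve: best objective value per iteration
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                V[i][d] = (w * V[i][d]
                           + c1 * r1 * (pbest[i][d] - X[i][d])
                           + c2 * r2 * (gbest[d] - X[i][d]))
                X[i][d] += V[i][d]
            val = obj(X[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = X[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = X[i][:], val
        curve.append(gbest_val)
    return gbest, gbest_val, curve
```

Plotting `curve` against the iteration number (in Matlab, a simple `plot`) gives the convergence curve discussed in the lecture.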

In the PSO algorithm, the velocity vectors might incrementally grow bigger and bigger. As a consequence, the particles go outside the boundaries of the landscape, which is undesirable. In this video, we learn how to prevent the particles from leaving the landscape. In fact, we introduce two mechanisms to reduce the probability of particles overshooting and to re-initialize them when they do: velocity bounding and re-positioning.

The learning outcomes are as follows:

- Demonstrating an understanding of the need for limiting the velocity vector in PSO
- Implementing a mechanism to limit velocity
- Implementing a mechanism to re-initialize the particles that go outside the landscape
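The two mechanisms described above can be sketched as follows (an illustrative Python sketch, not the course's Matlab code; names are hypothetical):

```python
import random

def clamp_velocity(v, v_max):
    # Velocity bounding: keep each velocity component within [-v_max, v_max]
    return [max(-v_max, min(v_max, vd)) for vd in v]

def repair_position(x, lb, ub, rng=random):
    # Re-positioning: re-initialize any variable that left the search space
    return [xd if lb <= xd <= ub else rng.uniform(lb, ub) for xd in x]
```

Both functions would be applied inside the main PSO loop, right after the velocity and position updates.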

In this lecture, we solve the simple case study presented in the earlier lectures using the PSO algorithm. This lecture mainly demonstrates how to replace the objective function with a desirable one to solve it.

The learning outcomes are as follows:

- Understanding the steps of solving an optimization problem using the PSO algorithm
- Demonstrating the skills to tune the parameters of the PSO algorithm
- How to replace the default objective function with a desired objective function

This lecture introduces different types of constraints when solving optimization problems. It then covers a very simple technique to handle constraints of different types.

The learning outcomes are as follows:

- Demonstrating the ability to formulate a constrained optimization problem
- Understanding the difference between equality and inequality constraints
- Understanding the process of converting equality constraints to inequality constraints
- Understanding the concepts of penalty and barrier functions
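A simple penalty-based constraint-handling technique of the kind described above can be sketched as follows (illustrative Python, not the course's Matlab code; the penalty coefficient is an arbitrary choice):

```python
def penalized(obj, constraints, rho=1e6):
    # Static penalty: for each violated inequality constraint g(x) <= 0,
    # add rho times the amount of violation to the objective value.
    def f(x):
        violation = sum(max(0.0, g(x)) for g in constraints)
        return obj(x) + rho * violation
    return f
```

An equality constraint h(x) = 0 can first be converted to the inequality |h(x)| - eps <= 0 for a small tolerance eps, as discussed in the lecture.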

In this video, we will demonstrate how to employ a barrier function to handle constraints in the objective function without algorithm modification.

The learning outcomes are as follows:

- Demonstrating the ability to implement constraints in the objective function
- Demonstrating the ability to apply barrier functions in the objective function
- Understanding the process of using PSO to estimate the global optimum of a constrained problem
- Demonstrating the ability to visualize the landscape before and after applying the barrier function
- Understanding the impact of a barrier function on the shape of landscapes
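One common way to realize the idea described here, returning a very large objective value for infeasible solutions so the algorithm itself needs no modification, can be sketched as follows (an illustrative Python sketch, not the course's exact code):

```python
def with_barrier(obj, constraints, big=1e10):
    # Infeasible points (any g(x) > 0) receive a huge objective value,
    # so a minimizing swarm is driven away from them.
    def f(x):
        if any(g(x) > 0.0 for g in constraints):
            return big
        return obj(x)
    return f
```

This creates the sharp "wall" in the landscape whose impact is visualized in the lecture.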

Problems with discrete variables are very common. In this lecture, we learn how to solve such problems with stochastic optimization algorithms.

The learning outcomes are as follows:

- Demonstrating the ability to formulate problems with discrete variables
- Understanding the shape of the landscape of discrete optimization problems
- Demonstrating an understanding of the transfer function in Binary Particle Swarm Optimization
- Understanding the mathematical model of the BPSO algorithm
- Using multiple bits to choose one of the values in a set of discrete values

In this video, we write the code for a binary PSO. Several modifications will be made to the PSO to design the Binary PSO (BPSO) algorithm. A test function is also solved as an example of a binary optimization problem.

The learning outcomes are:

- Understanding the process of initializing the solutions for a binary problem
- Demonstrating the ability to define the upper and lower bounds for binary variables
- Demonstrating the ability to implement the sigmoid transfer function
- Understanding the process of using the sigmoid transfer function to generate probability values and update the position of particles in BPSO
- Demonstrating the ability to apply BPSO to binary problems
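The sigmoid transfer step described above can be sketched as follows (an illustrative Python sketch of the standard BPSO position update, not the course's Matlab code):

```python
import math, random

def sigmoid(v):
    # Transfer function: maps a velocity component to a probability in (0, 1)
    return 1.0 / (1.0 + math.exp(-v))

def update_binary_position(v, rng=random):
    # Each bit is set to 1 with probability sigmoid(v_d), and 0 otherwise
    return [1 if rng.random() < sigmoid(vd) else 0 for vd in v]
```

The velocity update itself is unchanged from continuous PSO; only the position update is replaced by this probabilistic bit flip.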

Since the BPSO algorithm cannot solve discrete problems with multiple discrete values for each parameter, a mechanism is implemented in this video to choose from more than two values in a given set of discrete values. Also, the process of solving problems with discrete variables is given. A simple case study is solved to demonstrate the application of BPSO.

The learning outcomes are:

- Understanding the method of increasing the number of variables in each particle to choose from more than two values in a given set of discrete values
- Demonstrating the ability to find the correct number of bits for each variable of a particle to choose one of N discrete values
- Demonstrating the ability to use the variables of particles in BPSO to choose discrete values
- Demonstrating the ability to implement a discrete version of BPSO
- Demonstrating the ability to apply BPSO to problems with discrete values
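The bits-to-value mapping described above can be sketched as follows (an illustrative Python sketch; the wrap-around for unused bit patterns is one possible design choice, not necessarily the course's):

```python
import math

def bits_needed(n_values):
    # Number of bits required to index one of n_values discrete options
    return max(1, math.ceil(math.log2(n_values)))

def decode(bits, values):
    # Interpret the bit string as an integer index; wrap around with a
    # modulo so every bit pattern maps to some value in the set.
    index = int("".join(str(b) for b in bits), 2)
    return values[index % len(values)]
```

For example, choosing one of 5 discrete values needs 3 bits, and the three unused patterns (5, 6, 7) wrap back onto valid indices.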

Problems with more than one objective are very common in both science and industry. In this lecture, we learn the most fundamental concepts of such problems. The formulation of multi-objective problems is also covered.

The learning outcomes are as follows:

- Understanding the main components of a multi-objective problem/system
- Demonstrating the ability to formulate multi-objective optimization problems
- Understanding the concepts of Pareto optimal solutions, Pareto optimal dominance, and Pareto optimal front
- Demonstrating the ability to analyze Pareto optimal sets and Pareto optimal fronts
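The Pareto dominance concept above can be sketched as follows (an illustrative Python sketch for minimization, not the course's Matlab code):

```python
def dominates(a, b):
    # a dominates b (minimization) if a is no worse in every objective
    # and strictly better in at least one.
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    # Keep only the non-dominated objective vectors
    return [p for p in points if not any(dominates(q, p) for q in points)]
</(wrong)
```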

In this video, three main classes of methods to solve multi-objective optimization problems using multi-objective stochastic algorithms are covered. The Multi-objective Particle Swarm Optimization algorithm is discussed as one of the most well-regarded algorithms as well.

The learning outcomes are:

- Understanding different types of multi-objective optimization: a posteriori, a priori, and interactive methods
- Demonstrating the ability to convert a multi-objective problem into a single-objective problem
- Understanding the general framework of a posteriori methods
- Understanding the concepts of convergence and coverage in a posteriori methods
- Understanding the components required for MOPSO to solve multi-objective problems
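The a priori conversion of a multi-objective problem into a single-objective one, mentioned above, is often done with a weighted sum; here is an illustrative Python sketch (names are hypothetical, not the course's code):

```python
def weighted_sum(objectives, weights):
    # A priori method: collapse several objectives into one scalar
    # objective using preference weights chosen before the run.
    def f(x):
        return sum(w * obj(x) for w, obj in zip(weights, objectives))
    return f
```

A posteriori methods such as MOPSO instead keep an archive of non-dominated solutions and approximate the whole Pareto front in a single run.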

In this video, we are going to focus on robust optimization using stochastic optimization algorithms. The lecture starts by discussing the main types of uncertainties: in operating conditions, inputs, outputs, and constraints. Then, two methods are covered to handle uncertainties in the inputs only, as the most common type of error during manufacturing processes. Since sampling is a main part of robust optimization, three sampling methods are covered as well.

The learning outcomes are as follows:

- Understanding different types of uncertainties in an optimization problem/system
- Demonstrating the ability to formulate a problem with uncertainties in the variables (inputs)
- Understanding the purpose of an expectation measure
- Understanding the purpose of a variance measure
- Demonstrating the ability to formulate an optimization problem when using an expectation or a variance measure
- Understanding the limitations of an expectation measure calculated with an integral
- Understanding the process of using the Monte Carlo technique and sampled points to calculate an expected objective value or a variance measure
- Understanding the differences between random, Latin Hypercube, and Orthogonal sampling techniques
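Of the sampling techniques listed above, Latin Hypercube sampling is the least obvious to implement; here is an illustrative Python sketch for the unit hypercube (names are hypothetical, not the course's Matlab code):

```python
import random

def latin_hypercube(n, dim, rng=None):
    # Latin Hypercube sampling: each dimension is split into n equal-width
    # strata, exactly one sample falls in each stratum, and the strata are
    # shuffled independently per dimension.
    rng = rng or random.Random()
    cols = []
    for _ in range(dim):
        strata = list(range(n))
        rng.shuffle(strata)
        cols.append([(s + rng.random()) / n for s in strata])
    return [tuple(col[i] for col in cols) for i in range(n)]
```

Plain random sampling can leave regions uncovered; this stratification guarantees coverage along each axis with the same number of samples.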

In this video, an expectation measure is implemented in Matlab. The PSO algorithm is then used to find the robust optimum for a given test function.

The learning outcomes are as follows:

- Demonstrating the ability to replace an objective function with an expectation measure
- Demonstrating the ability to calculate an approximation of the expected objective value using the Monte Carlo technique
- Understanding the steps of finding a robust solution for an optimization problem
- Understanding the impact of a variance measure on the shape of landscape
- Understanding the mechanism of random sampling
- Demonstrating the ability to implement an expectation measure
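A Monte Carlo expectation measure of the kind described above can be sketched as follows (an illustrative Python sketch, not the course's Matlab code; the noise range and sample count are arbitrary choices):

```python
import random

def expectation_measure(obj, x, delta=0.5, n_samples=200, rng=None):
    # Approximate E[f(x + noise)] by averaging the objective over random
    # perturbations within +/- delta of each variable (Monte Carlo).
    rng = rng or random.Random()
    total = 0.0
    for _ in range(n_samples):
        noisy = [xi + rng.uniform(-delta, delta) for xi in x]
        total += obj(noisy)
    return total / n_samples
```

To search for a robust optimum, PSO simply minimizes this measure instead of the original objective function.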

This video implements a variance measure in Matlab and employs it to find the robust solutions for a given optimization problem. The PSO algorithm is used as the main algorithm and the objective function is changed to simulate a variance measure.

The learning outcomes are as follows:

- Understanding the process of implementing a variance measure
- Demonstrating the ability to use a variance measure as a constraint
- Understanding the impact of a variance measure on the landscape
- Demonstrating the ability to calculate an approximation of the variance measure using the Monte Carlo technique
- Demonstrating the ability to implement a variance measure
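The variance measure can be approximated in the same Monte Carlo fashion (an illustrative Python sketch, not the course's Matlab code):

```python
import random

def variance_measure(obj, x, delta=0.5, n_samples=200, rng=None):
    # Approximate Var[f(x + noise)] over random perturbations of x;
    # a robust solution keeps this value small.
    rng = rng or random.Random()
    vals = []
    for _ in range(n_samples):
        noisy = [xi + rng.uniform(-delta, delta) for xi in x]
        vals.append(obj(noisy))
    mean = sum(vals) / n_samples
    return sum((v - mean) ** 2 for v in vals) / n_samples
```

As in the lecture, this measure can either replace the objective function or act as a constraint alongside the expectation measure.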

This lecture demonstrates how to observe the exploratory and exploitative behaviour of a particle by looking at the fluctuations in one of the variables of the first particle in the PSO algorithm. You can use this technique to see the changes in any of the variables of any particle.

This video shows how to update the GBEST in the PSO algorithm more frequently. The PSO that has been developed in this course updates GBEST at the beginning of each iteration, and all particles use it to update their positions. However, we might want to update GBEST as soon as a better solution is found. The steps to do so are discussed in this video.
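The immediate-update idea can be sketched as follows (an illustrative Python sketch; the helper name is hypothetical, not the course's code):

```python
def maybe_update_gbest(candidate, candidate_val, gbest, gbest_val):
    # Called right after evaluating each particle, so that later particles
    # in the same iteration already use the improved GBEST (minimization).
    if candidate_val < gbest_val:
        return list(candidate), candidate_val
    return gbest, gbest_val
```

Calling this inside the per-particle loop, instead of once per iteration, lets the swarm react to improvements without waiting for the next iteration.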