Graphic LP optimizer
In the weighting method, a weighted sum of the objective functions is optimized. The simplest way to proceed is to take each objective function, associate a weight with it, and then take the weighted sum of the objective functions; a new, single objective function is thus obtained. The goal of the weighted sum is to transform the problem into a mono-objective optimization problem, for which various methods of solution exist. The weighting factors are assigned a priori and are then modified to obtain the Pareto front, with all nondominated solutions (or satisfactory solutions). The method can be used only when the feasible space of objective-function values is convex. Its major problem is the variation of the weighting factors, which often leads to Pareto fronts with a low density of solutions (Hernandez-Rodriguez, 2011). A further drawback is that the decision maker never sees the whole picture (the set of efficient solutions); hence, the most preferred solution is "most preferred" only in relation to what the decision maker has seen for comparison so far (Mavrotas, 2007, 2009). Among the methods of this kind, the weighted sum, goal programming, and lexicographic methods (among others) can be mentioned (Collette and Siarry, 2003).
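The following is a minimal sketch of the weight sweep described above, assuming two illustrative convex objectives f1 and f2, box bounds, and SciPy's SLSQP solver; these specifics are assumptions, not taken from the text.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2        # illustrative objective 1

def f2(x):
    return x[0] ** 2 + (x[1] - 2.0) ** 2        # illustrative objective 2

pareto_points = []
for w in np.linspace(0.0, 1.0, 21):             # a priori weighting factors
    # Aggregate both objectives into one scalar objective.
    scalarized = lambda x, w=w: w * f1(x) + (1.0 - w) * f2(x)
    res = minimize(scalarized, np.zeros(2), method="SLSQP",
                   bounds=[(-5.0, 5.0), (-5.0, 5.0)])
    pareto_points.append((f1(res.x), f2(res.x)))
# Each weight yields one nondominated point; sweeping the weights
# traces out the Pareto front.
```

Note that evenly spaced weights do not produce evenly spaced points on the front, which is exactly the low-density problem mentioned above.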
The aggregative methods belong to this family (the objective functions are gathered into one objective function). With these methods, the decision maker defines the tradeoff to be applied (the preferences) before running the optimization method.

In the lexicographic method, the objective functions are treated one at a time, in order of preference. Here i represents a function's position in the preferred sequence, and f_j(x_j*) represents the minimum value for the jth objective function, found in the jth optimization problem. The ith problem in the sequence is solved subject to the additional constraints

$$f_j(\mathbf{x}) \le f_j(\mathbf{x}_j^*), \quad j = 1, \dots, i-1; \quad i > 1; \quad i = 1, \dots, k \tag{18.13}$$

Note that after the first iteration (j > 1), f_j(x_j*) is not necessarily the same as the independent minimum of f_j(x), because new constraints are introduced for each problem. When there are more objective functions, the constraint set therefore becomes large toward the end of the solution process. The algorithm terminates once a unique optimum is determined; generally, this is indicated when two consecutive optimization problems yield the same solution point. However, determining whether a solution is unique (within the feasible design space S) can be difficult, especially with local gradient-based optimization engines.

As an example, for the second optimization problem we found the solution at point B, where f2 = 5.47, while f1 is unchanged, satisfying the constraint function defined in Eq. 5.19j, which is the additional constraint obtained from the result of the first optimization. Because the solution is unique, the process stops. Note that f3 was not even considered in the solution process, because it is the least important objective among the three. If we change the importance order of the objective functions, we will most likely reach a different solution.

The advantages of the method include that it offers a unique approach to specifying preferences, it does not require that the objective functions be normalized, and it always provides a Pareto optimal solution. The disadvantages include that it can require the solution of many single-objective problems to obtain just one solution point, and that additional constraints must be imposed. Note that this method is classified as a vector multi-objective optimization method because each objective is treated independently. A sketch of the sequential procedure follows.
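Below is a minimal sketch of that sequence, reusing the same assumed objectives and bounds as the earlier snippet and SciPy's dictionary-style inequality constraints; the problem data are illustrative, not from the text.

```python
import numpy as np
from scipy.optimize import minimize

def f1(x):
    return (x[0] - 1.0) ** 2 + x[1] ** 2        # most preferred objective

def f2(x):
    return x[0] ** 2 + (x[1] - 2.0) ** 2        # second objective

objectives = [f1, f2]                           # preferred sequence
bounds = [(-5.0, 5.0), (-5.0, 5.0)]
constraints = []                                # grows per Eq. (18.13)
x_star = np.zeros(2)

for fi in objectives:
    # Minimize the ith objective subject to f_j(x) <= f_j(x_j*), j < i.
    res = minimize(fi, x_star, method="SLSQP",
                   bounds=bounds, constraints=constraints)
    x_star = res.x
    # res.fun is f_i(x_i*) under the constraints accumulated so far,
    # not necessarily the independent minimum of f_i.
    constraints.append({"type": "ineq",
                        "fun": lambda x, fj=fi, ub=res.fun: ub - fj(x)})

print("lexicographic solution:", x_star)
```

In practice a small slack is often added to each bound (ub plus a tolerance) so that the later problems retain a nonempty interior, which helps gradient-based engines such as SLSQP.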
In any case, the solution is, theoretically, always Pareto optimal; thus, it is best to use a global optimization engine with this approach. Because determining whether a solution is unique (within the feasible design space S) is difficult with local gradient-based optimization engines, often with continuous problems this approach terminates after simply finding the optimum of the first objective f1(x).
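A minimal sketch of the consecutive-solution termination test described above, with an assumed tolerance tol that the text does not prescribe:

```python
import numpy as np

def same_point(x_prev, x_curr, tol=1e-8):
    # Two consecutive problems yielding (numerically) the same solution
    # point indicate a unique optimum, so the sequence can stop early.
    return np.linalg.norm(np.asarray(x_curr) - np.asarray(x_prev)) < tol
```

This check would be called between iterations of the lexicographic loop sketched earlier; with a global engine it is a reliable stopping rule, whereas with a local engine the indication is weaker, as noted above.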