# Optimizer - Settings

## Home: Simulation > Setup Solver > Optimizer

### On this property page you can select the algorithm type, choose the parameters for optimization, and define the limits of these parameters.

#### Simulation type and general controls

See the Optimizer help page.

Algorithm

Choose between seven algorithm types. The Trust Region Framework is the most modern of the implemented algorithms. It uses local linear models on primary data and can exploit sensitivity information if the solver provides it. The Interpolated Quasi Newton algorithm makes use of approximated gradient information to achieve fast convergence rates. The Powell optimizer applies a line search for each parameter. However, these algorithms are sensitive to the choice of the starting point in the parameter space. If the starting point is close to the desired optimum, or the (unknown) goal function is sufficiently smooth, the local algorithms will converge quickly. The Interpolated Quasi Newton optimizer is fast because it supports interpolation of primary data, but in some cases it may not be as accurate as the slower Classic Powell optimizer. The Trust Region Framework is the most robust of the algorithms, because the trust region approach always ensures convergence to a stationary point. It is also very efficient, avoiding many solver runs by interpolating primary data without sacrificing accuracy. The globalized version of the algorithm will most likely use more function evaluations than the local approach, but it will also be more efficient than general global optimization methods in many cases. Especially if the system is modeled such that sensitivity information can be exploited, it achieves the best convergence rates of all the algorithms.

The Nelder Mead Simplex Algorithm generates a set of starting points and does not need gradient information to determine its search direction. This becomes an advantage over the local algorithms as the number of variables grows. It is also less dependent on the chosen starting point because it starts with a set of points distributed in the parameter space, which, compared with the other local algorithms, is an advantage if you have a bad starting point but a disadvantage if your starting point already lies close to the desired optimum.

Another advantage of using neither gradient information nor an interpolation approach to avoid some evaluations is that this algorithm can continue the optimization even if the model cannot be evaluated for some parameter settings. A parameter combination for which results cannot be produced is called infeasible. The CMA Evolutionary Strategy, the Genetic Algorithm and the Particle Swarm Optimization also continue the optimization despite infeasible points, but only if the interpolation feature is not switched on.

If a non-smooth goal function is expected, the starting point is far away from the optimum, or a large parameter space is to be explored, a global algorithm should be preferred. For the featured global optimizers a maximal number of iterations can be specified. Therefore the maximal number of goal function evaluations, and thus the optimization time, can be determined a priori. Another advantage of the global optimizers is that the number of evaluations is independent of the number of parameters. Therefore choosing a global optimizer over a local one can pay off if the optimization problem has a large number of parameters. The CMA Evolutionary Strategy, the most sophisticated of the implemented global optimizers, uses a statistical model in combination with a step size parameter. In addition, the history of successful optimization steps is exploited. This improves the algorithm's performance without losing its global optimization properties.

Trust Region Framework: Selects an optimization technique embedded in a trust region framework. Depending on the settings, the algorithm can be a local or a global optimization method. The algorithm starts by building a linear model on primary data in a "trust" region around the starting point. When building this model, sensitivity information of the primary data is exploited if provided. Fast optimizations are performed on this local model to obtain a candidate for a new solver evaluation. The new point is accepted if it is superior to the anchors of the model. If the model is not accurate enough, the radius of the trust region is decreased and a model on the new trust region is created. The local version of the algorithm stops once the trust region radius or the distance to the next predicted optimum becomes smaller than the specified domain accuracy.

If the method is set up as a global optimization, the linear model is only exploited if the expected improvement exceeds a certain threshold. If this is not the case, the algorithm creates a new linear model at a different place in the parameter space. The global method stops optimizing once it satisfies all defined goals or cannot find another place for a new linear model. It is always possible to bound the number of function evaluations directly by setting a maximal number of evaluations.
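As an illustration of the general idea only (not the product's implementation), a one-dimensional trust-region step can be sketched in Python: build a linear model around the current point, minimize it within the trust radius, and either accept the candidate or shrink the region depending on whether the true goal function actually improved.

```python
def trust_region_1d(f, x, radius=1.0, tol=1e-6, max_eval=100):
    """Minimal 1D trust-region sketch using a finite-difference linear model.

    Illustrative only: the step sizes, shrink factor and stopping rule
    are assumptions, not the documented algorithm's actual parameters.
    """
    fx = f(x)
    evals = 1
    while radius > tol and evals < max_eval:
        h = radius * 1e-3
        slope = (f(x + h) - fx) / h              # local linear model of f around x
        evals += 1
        step = -radius if slope > 0 else radius  # minimize the model on [x-radius, x+radius]
        candidate = x + step
        fc = f(candidate)
        evals += 1
        if fc < fx:          # candidate superior: accept it, keep the radius
            x, fx = candidate, fc
        else:                # model inaccurate here: shrink the trust region
            radius *= 0.5
    return x, fx
```

For example, minimizing `(x - 3)**2` from a starting point of 0 walks toward 3 in full-radius steps, then shrinks the region until the stopping tolerance is reached.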

Nelder Mead Simplex Algorithm: Selects the local Simplex optimization algorithm by Nelder and Mead. This method is a local optimization technique. If N is the number of parameters, it starts with N+1 points distributed in the parameter space.
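The initial point set can be pictured with a short Python sketch (purely illustrative; the distribution actually used by the product may differ): for N parameters, N+1 points are drawn from the parameter bounds.

```python
import random

def initial_simplex(bounds, seed=0):
    """Generate N+1 starting points for N parameters, each coordinate
    drawn uniformly from its (min, max) bounds.  Illustration only;
    the uniform distribution is an assumption."""
    rng = random.Random(seed)
    n = len(bounds)
    return [[rng.uniform(lo, hi) for (lo, hi) in bounds] for _ in range(n + 1)]
```

With two parameters, this yields three starting points, each inside the given bounds.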

CMA Evolutionary Strategy: Selects the global covariance matrix adaptation evolutionary strategy.

Genetic Algorithm: Selects the global genetic optimizer.

Particle Swarm Optimization: Selects the global particle swarm optimizer.
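For illustration, a minimal particle-swarm loop in Python with textbook coefficients (not the product's implementation): each particle tracks its personal best position, and the swarm-wide best steers all velocities.

```python
import random

def particle_swarm(f, bounds, n_particles=20, iters=50, seed=0):
    """Minimal particle-swarm sketch; coefficients are common textbook
    defaults (assumptions), not the documented optimizer's settings."""
    rng = random.Random(seed)
    dim = len(bounds)
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                   # inertia, cognitive, social weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = bounds[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

On a smooth two-parameter test function the swarm typically closes in on the minimum within a few dozen iterations.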

Interpolated Quasi Newton: Selects the local optimizer supporting interpolation of primary data. This optimizer is fast in comparison to the Classic Powell optimizer but may be less accurate. In addition, you can set the number N of optimizer passes (1 to 10) for this optimizer type. A number N greater than 1 forces the optimizer to start over (N-1) times. Within each optimizer pass the minimum and maximum settings of the parameters are changed, approaching the optimal parameter setting. Increase the number of passes to values greater than 1 (e.g., 2 or 3) to obtain more accurate results. For the most common EM optimizations it is recommended not to increase the number beyond 3, but rather to increase the number of samples in the parameter list if the results are not suitable. The corresponding numerical solver for the optimization is only evaluated for the defined samples. All other parameter combinations are evaluated using the interpolation of primary data. At the end of each optimization pass the optimum predicted by this approach is verified by another evaluation of the numerical solver.
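The pass mechanism can be pictured as shrinking the parameter bounds around the best point found so far after each pass. The following Python sketch is a simplified illustration; the shrink factor of 0.5 is an assumption, not the product's documented behavior.

```python
def shrink_bounds(bounds, best, factor=0.5):
    """After a pass, centre new (min, max) bounds on the best point found,
    with each range reduced by `factor` and clipped to the old bounds.
    Hypothetical helper for illustration only."""
    new = []
    for (lo, hi), b in zip(bounds, best):
        half = (hi - lo) * factor / 2.0
        new.append((max(lo, b - half), min(hi, b + half)))
    return new
```

For example, a parameter bounded by (0, 4) whose best value so far is 1 would get the tighter bounds (0, 2) for the next pass.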

Classic Powell: Selects the local optimizer without interpolation of primary data. In addition, it is necessary to set the accuracy, which affects the accuracy of the optimal parameter settings and the time of termination of the optimization process. For optimizations with more than one parameter, the Trust Region Framework, the Interpolated Quasi Newton or the Nelder Mead Simplex Algorithm should be preferred to this technique.

Reset min/max

Reset the minimum and maximum values of each parameter to the entered percentage of the initial value. If an initial value is 0, the minimum/maximum value is set to -/+ (percentage / 100).
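The reset rule can be expressed as a small calculation; this hypothetical Python helper mirrors the description above:

```python
def reset_min_max(initial, percentage):
    """Compute parameter bounds from an initial value and a percentage.

    For a non-zero initial value, the bounds are the initial value
    -/+ `percentage` percent of it.  If the initial value is 0, the
    bounds are simply -/+ (percentage / 100), as described above.
    """
    if initial == 0:
        delta = percentage / 100.0
    else:
        delta = abs(initial) * percentage / 100.0
    return initial - delta, initial + delta
```

For example, an initial value of 2.0 with 10% yields bounds of roughly (1.8, 2.2), while an initial value of 0 with 25% yields (-0.25, 0.25).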

Use current as initial/anchor values

Activate this checkbox to initialize the optimizer with the current values. This allows you to continue the optimization process, starting the solver with the previously achieved parameter results. However, if you want to run the optimizer several times with the same initial parameter conditions, you have to disable this checkbox.

If a global optimization technique is used, the algorithm needs a set of distributed starting points. In this case the checkbox has no effect.

Use data of previous calculations

Activate this checkbox to import previously calculated results into new optimizations and speed up the optimization process. If the result templates on which the optimizer goals are based have already been evaluated, and the corresponding parameter combinations lie in the defined parameter space, the results may be imported without the need for recalculation. For the local algorithms the initial point may be replaced if a more suitable point is found in advance. For the algorithms that use a set of initial points, multiple initial points will be replaced if suitable data is found. Points are replaced by previously calculated ones if the parameter combinations are very close or if the corresponding goal values are superior to previously calculated parameters in the neighborhood. This may disturb the selected distribution type of the initial point set, but the algorithm will find a good compromise between points with good goal values and a well-distributed set of starting points in the parameter space. Keep in mind that this feature makes optimizations harder to reproduce, because after an optimization there will be more potential imports available than before.

Parameter list

Checkbox: Within the parameter list you can select the parameters to be varied during the optimization run.

Parameter: Shows the name of the parameters (read only).

Min/Max: You can set the minimum and maximum boundaries of the parameters selected for the optimization process either manually or using the Reset min/max button as described above. The minimum/maximum values must be less/greater than the initial parameter value.

Samples: If you use the Interpolated Quasi Newton, the Genetic or the Particle Swarm optimizer, you have to set a value for the number of samples (minimum 3). The number of samples defines the parameter values that are used to calculate exact 3D solutions with the currently selected solver. Please note that a high number N of samples does not automatically mean that N 3D solver simulations will be performed. The sample value rather defines the step width for the locally searching optimizer. For a larger parameter range, a higher sample value may lead to more accurate results. If the Genetic or the Particle Swarm optimizer is used and the interpolation is switched off, this setting has no effect.
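The relation between the sample count and the resulting step width can be sketched as follows (a hypothetical Python helper, assuming evenly spaced samples between the bounds):

```python
def sample_values(lo, hi, samples):
    """Evenly spaced sample positions between lo and hi (inclusive).
    At least 3 samples are required, matching the dialog's minimum.
    Even spacing is an assumption for illustration."""
    if samples < 3:
        raise ValueError("at least 3 samples are required")
    step = (hi - lo) / (samples - 1)
    return [lo + i * step for i in range(samples)]
```

For example, 3 samples on the range (0, 1) give the values 0.0, 0.5 and 1.0; more samples give a finer step width.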

Initial/Anchor: You can modify the initial/anchor parameter settings here.

Current: Shows the parameter values of the current model.

Best: Shows the best parameter combination the optimizer has found so far.

#### The following settings are available depending on the chosen algorithm type:

Properties

If a global algorithm or the Nelder Mead Simplex Algorithm is selected, this button opens the properties dialog for the corresponding optimizer.

Use interpolation

This check box is only available for the Genetic Algorithm and the Particle Swarm Optimization. Checking this box activates the interpolation and disables the sample values in the parameter list.

For both global optimizers it is possible to switch on the Interpolation of Primary Data. If the interpolation is applied, the only true solver runs performed are those for the evaluation of the specified anchors and a final solver run for the estimated best parameters. All other goal function evaluations are interpolated.

Please note that global optimization algorithms are likely to explore most of the parameter space. Thus it is most likely that all or nearly all anchor points will actually be evaluated. Keep in mind that the number of solver runs needed for interpolation depends on the number of parameters, whereas the number of solver runs needed by the two global optimization algorithms is independent of the number of parameters. Because of this, using the interpolation feature will only pay off if the parameter space is not too high dimensional or a large number of iterations is planned.

Since the goal functions that can be defined always have non-negative values, the optimization is automatically stopped if one of the anchor evaluations yields a goal value equal to zero.

Include anchor in initial point set

This check box is only available for the Nelder Mead Simplex Algorithm. If this feature is switched on, the point defined as the anchor point in the parameter list is included in the initial data set of the algorithm. If the current parameter settings are already quite good, it makes sense to include this point in the starting set. After the set of initial points is generated, the closest point from the automatically generated set is substituted with the predefined point. However, if the current point was created by a previous optimization run of a local optimizer and a second optimization is planned on a reduced parameter space, this setting should be turned off, because it increases the risk that the second optimization converges to the same local optimum as before. In this case the second optimization will not yield any improvement.
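The substitution of the closest generated point by the anchor can be illustrated with a short Python sketch (hypothetical helper; the product's distance measure is not documented here, so Euclidean distance is an assumption):

```python
def include_anchor(points, anchor):
    """Replace the generated point closest (squared Euclidean distance)
    to the anchor with the anchor itself.  Illustration only."""
    def dist2(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q))
    i = min(range(len(points)), key=lambda i: dist2(points[i], anchor))
    out = [list(p) for p in points]
    out[i] = list(anchor)
    return out
```

For example, with generated points (0, 0) and (5, 5) and anchor (1, 1), the point (0, 0) is the closer one and is replaced by the anchor.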

Optimizer passes

This setting is only available for the Interpolated Quasi Newton optimizer. Set the number of optimizer passes (1 to 10), as described above.

Accuracy

This setting is only available for the Classic Powell optimizer. Set the accuracy, which affects the accuracy of the optimal parameter settings and the time of termination of the optimization process.

General Properties...

### Some general settings can be configured via this separate dialog.

Result storage settings:

Parametric results

Alters the storage settings for the course of the optimization. The parametric results setting can be set to "None", "All" or "Automatic". "Automatic" uses the settings that were made for the Parametric Results; "All" and "None" overrule them during the optimization process. This feature is used to save resources (such as disk space) by avoiding the accumulation of parametric data.

Mesh settings:

Move mesh on parameter change if possible

Check this box to enable the "move mesh" feature for the optimization process. If enabled, this setting overrules the setting of the same name from the Tetrahedral Special Mesh Properties for the duration of the optimization.

The basic principle of this approach is to fit the existing mesh to a parametrically altered model geometry without losing the mesh's topology. Doing this reduces the noise that would otherwise be caused by re-meshing the new geometry and therefore speeds up the optimization's convergence. Another performance gain can be achieved because the operation of moving the mesh usually outperforms the process of adaptive mesh refinement. If (e.g., for a large parametric variation) the existing mesh cannot be mapped to the altered geometry, a new mesh is created according to the current mesh settings.

Please note that by moving (or morphing) the mesh instead of re-meshing, the new mesh depends not only on the mesh settings and structure information but also on the previous mesh. In this way the mesh acquires a history of parametric changes and the corresponding moving/morphing steps, which needs to be taken into account for result reproduction.

As the results produced on a moved/morphed mesh naturally differ slightly from the results of a newly created mesh, it is recommended to verify the optimum after a successful optimization by recalculating the best parameter combination. In some cases the recalculated results may not completely satisfy the expected goals, but usually they are very close, and a second optimization will converge very quickly to the desired result.