There are three optimization methods implemented: the classical gradient method (steepest descent), 'rapid thermal annealing' (also called 'Monte Carlo') optimization, and, last but not least, exhaustive search.
The optimization adjusts parameters defined by the user.
At the beginning of an optimization, the user has to place an optimization
box in the schematic, on the top level of the hierarchy.
It is a prerequisite that all devices which will be optimized
have parameter properties instead of real value properties:
{res} {optres}
instead of
{res} 100k
to optimize a resistor value.
A circuit optimization can optimize small-signal, dc-transfer or transient simulation results. Also, group delay optimization is possible.
To determine the error between simulated and desired result, the user has to
create a so-called goal-file.
This file has four columns. It looks like this:
3.600000e+07 2.6e-7 2.6e-7 9.847908e-01
3.608081e+07 2.6e-7 2.6e-7 9.872174e-01
3.616162e+07 2.6e-7 2.6e-7 9.888112e-01
3.624242e+07 2.6e-7 2.6e-7 9.897776e-01
3.632323e+07 2.6e-7 2.6e-7 9.902827e-01
3.640404e+07 2.6e-7 2.6e-7 9.904594e-01
3.648485e+07 2.6e-7 2.6e-7 9.904125e-01
.
.
.
4.335354e+07 2.6e-7 2.6e-7 9.947936e-01
4.343434e+07 2.6e-7 2.6e-7 9.956859e-01
4.351515e+07 2.6e-7 2.6e-7 9.963593e-01
4.359596e+07 2.6e-7 2.6e-7 9.966800e-01
4.367677e+07 2.6e-7 2.6e-7 9.964780e-01
4.375758e+07 2.6e-7 2.6e-7 9.955412e-01
4.383838e+07 2.6e-7 2.6e-7 9.936096e-01
4.391919e+07 2.6e-7 2.6e-7 9.903714e-01
4.400000e+07 2.6e-7 2.6e-7 9.854618e-01
The first column is the independent variable: time, frequency, or sweep value. The next two columns are the lower and upper bounds of the desired result. This result must be a voltage; currents are not supported. If the simulated results lie within these bounds, they do not contribute any error. 'Error' here refers to the cost function which measures the difference between the achieved result and the desired result: the classical quadratic error function, i.e. the sum of the squared differences between achieved and desired result.
Upper and lower bound of the goal definition inside the goal file may be identical. The fourth column of the goal file is a weighting factor by which the local error contribution is multiplied (weighted).
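The bounds-and-weights error measure described above can be sketched in a few lines of Python. This is an illustrative reading of the manual text, not SPICECAD's actual code; the function names are assumptions.

```python
# Hypothetical sketch of the goal-file error measure described above.
# Points inside [lower, upper] contribute no error; points outside
# contribute the weighted squared distance to the violated bound.

def load_goal(path):
    """Read the four-column goal file: x, lower, upper, weight."""
    goals = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) == 4:
                x, lo, hi, w = map(float, fields)
                goals.append((x, lo, hi, w))
    return goals

def quadratic_error(simulated, goals):
    """Sum of weighted squared differences to the nearest bound."""
    err = 0.0
    for v, (_, lo, hi, w) in zip(simulated, goals):
        if v < lo:
            err += w * (lo - v) ** 2
        elif v > hi:
            err += w * (v - hi) ** 2
    return err
```

A point exactly inside the band adds nothing, so identical lower and upper bounds turn the band into a single target value.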
The optimization algorithm is selected in the 'method' line of the property box.
Valid methods are 'lse', which means the gradient method, and 'monte', which means 'rapid thermal annealing'. When placing an 'exhaustive search' optimization box, the method is 'exhaustive'.
Next, the simulation type has to be specified. 'tran', 'acm', 'acph' or 'dc' are valid.
Next, a reference node has to be specified. The voltage at this node will be used by the error function to derive the quadratic error. NOTE: Don't forget to edit the alias name of the desired net with 'edit net props' before you place the optimization box.
Next, the maximum number of iterations ('max. Iterations') and the minimum quadratic error ('max. lse diff.') have to be specified. These parameters end the optimization iterations either on success or, when the allowed number of iterations is exceeded, without success.
A little preprocessing of the simulation and goal results is possible by specifying the 'weighting lin/log' parameter. 'lin' is the default; 'log' should be chosen for small-signal optimizations.
Lastly, the parameters which shall be changed during optimization have to be specified. A maximum of 20 parameters is allowed within the iteration loop.
The simplest method is the steepest descent or 'gradient search', where a set of new coefficients is derived from the first derivatives of the squared error with respect to the optimization parameters.
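The gradient step above can be sketched as follows. This is a minimal stand-in, not SPICECAD's implementation: it estimates the derivatives by finite differences, and the step size, perturbation, and function names are assumptions.

```python
# Minimal steepest-descent sketch: estimate the gradient of the
# squared error by finite differences, then step downhill. In the real
# tool, error_fn would involve a full circuit simulation per call.

def steepest_descent(error_fn, params, step=0.1, h=1e-6, iters=100):
    params = list(params)
    for _ in range(iters):
        e0 = error_fn(params)
        grad = []
        for i in range(len(params)):
            p = list(params)
            p[i] += h                      # perturb one parameter
            grad.append((error_fn(p) - e0) / h)
        # move each parameter against its error gradient
        params = [p - step * g for p, g in zip(params, grad)]
    return params
```

On a smooth, single-minimum error surface this converges quickly; its weakness, addressed by the annealing method below, is that it stops at the first local minimum it reaches.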
A more difficult method to handle is the 'rapid thermal annealing' or
'Monte Carlo' method where new sets of coefficients are determined
by using a random number generator. After applying these random numbers
to the initial optimization parameters, the squared error is determined
after a new simulation. If the squared error is smaller than before, the
new coefficient set is accepted. If the result is worse, then, depending on
another random variable, the likelihood that this set of new
coefficients is accepted anyway is greater than zero but smaller than 1.0.
This way, there
is a certain chance that the optimizer, when trapped in a local minimum,
can still find the global minimum if it exists. The likelihood with which
a 'bad' set of coefficients is accepted is (1-exp(-T)), which grows with
the temperature T. When T is very high, it is very likely
that a bad set of coefficients is accepted. Whether it is accepted or not is
determined by the equation
rand() < (1-exp(-T))
rand() is the outcome of a random variable which lies between
0 and +1. When T approaches zero,
rand() < (1-exp(-T)) is almost always false,
and bad coefficient sets are very unlikely to get accepted.
Thus, T has to be set by the
user to an appropriate positive value between 0.01 and 10.
It is always difficult to say
which is a useful starting value, so it is highly recommended
to play with this type of optimization before actually
using it. If T is too high, too many bad coefficient sets are
accepted, and it can even be possible that convergence is not achieved.
If T is too low, the optimizer can be trapped in a local minimum.
After a certain number of successful reductions of the optimization error, T is reduced step by step, so that the achieved results are 'frozen'. This reduction factor 'reduction temp' can also be defined by the user. Again, it must not be too high or too low. The default value is 0.9, which means that the temperature is reduced by 10% after each sequence of successful parameter updates.
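Putting the pieces together, the annealing loop described above can be sketched like this. The acceptance test rand() < (1-exp(-T)) and the 'reduction temp' factor follow the text; the perturbation spread, the success count per temperature step, and all names are illustrative assumptions, not SPICECAD's actual code.

```python
import math
import random

# Hedged sketch of the 'rapid thermal annealing' loop: perturb the
# coefficients randomly, always keep improvements, sometimes keep
# worse sets (probability 1 - exp(-T)), and lower T after a run of
# successful error reductions so the result gradually 'freezes'.

def anneal(error_fn, params, T=1.0, reduction=0.9,
           successes_per_step=5, iters=1000, spread=0.1):
    current = list(params)
    current_err = error_fn(current)
    best, best_err = list(current), current_err
    successes = 0
    for _ in range(iters):
        # derive a new coefficient set with a random number generator
        trial = [p + random.uniform(-spread, spread) for p in current]
        err = error_fn(trial)
        if err < current_err:
            current, current_err = trial, err
            successes += 1
            # after a sequence of successful updates, reduce T by 10%
            if successes % successes_per_step == 0:
                T *= reduction
        elif random.random() < (1.0 - math.exp(-T)):
            # a worse set is still accepted with probability 1 - exp(-T)
            current, current_err = trial, err
        if current_err < best_err:
            best, best_err = list(current), current_err
    return best, best_err
```

Raising the initial T or the reduction factor keeps the search 'hot' longer, which matches the trade-off described above: more chances to escape local minima, but slower or possibly failing convergence.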
The last optimization method is exhaustive search. Here, up to 10 parameters can be optimized. By specifying the lower and upper bounds of an n-dimensional orthogonal search space, SPICECAD will loop over all dimensions between the lower and upper bounds and determine the squared error. This method is very primitive, but using today's available Pentium III PCs or SUN ULTRA II workstations, results are achievable overnight or over the weekend.
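The exhaustive loop over an n-dimensional orthogonal search space can be sketched as below. The grid resolution and all names are assumptions for illustration; SPICECAD's own stepping may differ.

```python
import itertools

# Illustrative exhaustive-search sketch: sample each parameter axis on
# a uniform grid between its lower and upper bound, visit every grid
# point, and keep the coefficient set with the smallest squared error.

def exhaustive_search(error_fn, lower, upper, steps=10):
    axes = [[lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
            for lo, hi in zip(lower, upper)]
    best, best_err = None, float('inf')
    for point in itertools.product(*axes):
        err = error_fn(list(point))
        if err < best_err:
            best, best_err = list(point), err
    return best, best_err
```

The cost is steps**n simulations for n parameters, which is why this brute-force method only becomes practical when the runs can be left unattended overnight or over a weekend.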