The empirical cumulative distribution function (ECDF) can be calculated from several runs of an optimization algorithm. Several targets are defined in the objective function and, from each run's first-hit graph, it is measured when each target is hit by the algorithm. The proportion of all these hits forms the distribution.
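Below is a minimal sketch of that computation, assuming a simple data layout in which each run is represented by a dictionary mapping a target level to the evaluation count at which it was first hit (missing key = never hit); the names and types are illustrative only and not the actual HeuristicLab API.

```csharp
using System;
using System.Collections.Generic;

public static class EcdfSketch {
  // firstHits: one dictionary per run, target level -> evaluations at first hit.
  // targets:   the target levels defined on the objective function.
  // budgets:   the evaluation budgets at which the ECDF is sampled.
  public static double[] Ecdf(IList<Dictionary<double, long>> firstHits,
                              double[] targets, long[] budgets) {
    int total = firstHits.Count * targets.Length;      // all (run, target) pairs
    var ecdf = new double[budgets.Length];
    for (int i = 0; i < budgets.Length; i++) {
      long hits = 0;
      foreach (var run in firstHits)
        foreach (var target in targets) {
          long firstHit;
          // a pair counts as a hit if the target was reached within the budget
          if (run.TryGetValue(target, out firstHit) && firstHit <= budgets[i])
            hits++;
        }
      ecdf[i] = total > 0 ? (double)hits / total : 0.0; // proportion of hits
    }
    return ecdf;
  }
}
```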
Necessary tasks:
~~1. Create analyzers that calculate the quality progress with respect to function evaluations and also with respect to wall-clock time~~
~~2. Create a run collection view that will calculate and compare the ECDF in one chart~~

TODO:
* ECDF comparison view:
  * ~~Minimization is assumed when comparing against levels -> extract Maximization parameter from run~~
  * ~~Logscaling sometimes fails, breaking the chart~~
  * ~~Test with view open and new runs arriving~~
  * ~~Add feature to calculate fixed-cost and fixed-target values and add them to the runs~~ (a sketch of the two notions follows this list)
* IRRestarter:
  * ~~Count number of restarts~~
  * ~~Difficult to abort the Algorithm earlier as the Analyzers may not have run~~
* ~~Move-based algorithms often output EvaluatedMoves, we should probably have all algorithms output EvaluatedSolutions~~
  * Instead, I added a cost factor for moves
* Adding EvaluatedMoves and EvaluatedSolutions is complicated by different cost factors
  * Some algorithms/operators count evaluated solution-equivalents instead
  * Integer is sometimes too small to hold the number of evaluated moves for long runs / big problems
* Analyzers depend on BestAverageWorstQualityAnalyzer being run before them
  * Arguably, best-so-far quality and execution time are results that the algorithm itself should provide, not an analyzer
* Quality progress chart should be calculated in a separate analyzer
* ISingleObjectiveHeuristicOptimizationProblem specifies MaximizationParameter as an IParameter instead of an IValueParameter<BoolValue>
* Which parameters in a run characterize the algorithm instance?
  * E.g., Seed does not characterize the instance
  * Analyzers do not characterize the instance, but some "analyzers" may modify penalties
* ResultCollection contains Result objects, but RunCollection.Results contains only the items
  * The Result objects are always in the way; they should be removed
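Regarding the fixed-cost and fixed-target values mentioned in the list above, the following sketch illustrates the two notions on a single run's quality-progress trace (minimization assumed); the data layout and method names are hypothetical and do not reflect the actual implementation.

```csharp
using System;
using System.Collections.Generic;

public static class FixedCostFixedTargetSketch {
  // trace: (evaluations, best-so-far quality) points of one run, ordered by
  // evaluations; a minimization problem is assumed.

  // Fixed-cost value: the best quality reached within a given evaluation budget.
  public static double FixedCost(IList<Tuple<long, double>> trace, long budget) {
    double best = double.PositiveInfinity;
    foreach (var point in trace)
      if (point.Item1 <= budget && point.Item2 < best)
        best = point.Item2;
    return best;
  }

  // Fixed-target value: the number of evaluations needed to first reach a given
  // quality target, or -1 if the run never reached it.
  public static long FixedTarget(IList<Tuple<long, double>> trace, double target) {
    foreach (var point in trace)
      if (point.Item2 <= target)
        return point.Item1;
    return -1;
  }
}
```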