Hi,

I was running a long-running optimization (~5 days; I need to get a faster computer!).

Relevant excerpts from the code:

```
// LEAVE THESE OPTIONS COMMENTED OUT UNLESS YOU FIRST READ
// https://www.amibroker.com/guide/h_optimization.html
// However, running one of these non-exhaustive optimizations may allow
// optimization of more parameters in combination instead of in small groups.
OptimizerSetEngine("trib");
OptimizerSetOption("Runs", 5); // default value
OptimizerSetOption("MaxEval", 20000); // max number of evaluations
```

and

```
// ROC Lookback
ROCShortLookback = Optimize("ROCShort", 2, 1, 12, 1);
ROCLongLookback = Optimize("ROCLong", 28, 8, 52, 1);
```

and

```
ROCShort = ROC(C,ROCShortLookback);
ROCLong = ROC(C,ROCLongLookback);
```

This is the error message from the optimization window:

Column 19 of line 265 points at the Close (C) array in the ROCShortLookback line. I guess that implies Close was Null or negative?

What is the best way to trap for this error? Should I wrap Close and ROCShortLookback in the Nz function? It would be cool if there were a setting that said "ignore errors" during optimization, a bit like PowerShell's ErrorAction = Ignore or SilentlyContinue. Or else Try/Catch/Except functionality in AFL.
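Something like this is what I had in mind (just a sketch of my guess at a fix, not tested on real data; it assumes the usual AFL semantics of Nz() turning Null into 0 and Max() clamping, and it masks bad values rather than fixing whatever is wrong with the underlying quotes):

```
// Defensive version: clamp the lookbacks to a valid range and
// Null-trap both the input array and the ROC output.
ROCShortLookback = Max( 1, Nz( Optimize("ROCShort", 2, 1, 12, 1) ) );
ROCLongLookback  = Max( 1, Nz( Optimize("ROCLong", 28, 8, 52, 1) ) );

ROCShort = Nz( ROC( Nz(C), ROCShortLookback ) ); // Null closes/results become 0
ROCLong  = Nz( ROC( Nz(C), ROCLongLookback ) );
```

Is that roughly the right approach, or is there a cleaner idiom for this?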

I also get "weird" results sometimes, such as NetProfit = -298409998935111.38. I assume that's just from dividing by some really small number? I just ignore those results, since I don't have that much money to lose.

Finally, I would love to have a better understanding of how the non-exhaustive optimization engines (cmae, spso, and trib) work. I have read https://www.amibroker.com/guide/h_optimization.html as well as the linked pages. In layman's terms, my guess is that these engines sample a random set of parameter combinations from the optimization space, run the backtests, and then use statistics/machine learning/etc. to pick the next set of candidates, gradually zeroing in on the objective function (say CAR/MDD or NetProfit).

My empirical evidence: I get better results running a non-exhaustive optimization over the full parameter space than running an exhaustive optimization "in chunks". IOW, even though the chunked exhaustive optimization tests every combination within each chunk, optimizing all the parameters together with a non-exhaustive engine gives me better results. I also seem to get better results with trib. Note I'm not favoring trib over the other engines, just stating my personal experience.

But of course I could be completely wrong on this, thus my questions!

Thanks for your help.