Help with coding a 3-step Walkforward routine

Hi guys, I need some help coding a special 3-step walk-forward routine. Instead of the normal method of walk-forward testing used in AmiBroker (https://www.amibroker.com/guide/h_walkforward.html), which is a 2-step process:

  1. optimizing with an optimization target on the In-Sample (IS) data set, then taking whichever set of parameters gives the best result for that target, and
  2. running a backtest with those parameters on Out-Of-Sample (OOS) data, then repeat, repeat, repeat,

I want to do a 3-step process:

  1. optimizing with an optimization target on In Sample data set

  2. For each backtest performed in step 1, run another backtest on a short OOS data set called the "Paper Trade" period and record those results (not used as the optimization target). For example, use the 2 weeks of data following the In-Sample data as the "Paper Trade" period. AFTER the optimization step is completely finished, but BEFORE the OOS backtesting, sort the optimization results by how well they perform in the "Paper Trade" period, and

  3. Use the best result from step 2 to run a backtest on the subsequent OOS data set (which starts after the end of the "Paper Trade" period).

Can anyone figure out a way to do this in AmiBroker? It doesn't seem to be possible with the built-in walk-forward mode.

Thank you in advance for any help with this! I've used this feature on a different piece of trading software and it DRAMATICALLY (by a factor of 10 or more) improves the OOS results of walkforward testing. I would love to duplicate this feature in Amibroker as it is far better software on all other counts aside from lacking this feature.

Thank you very much!

To put it more simply: whether or not you want to run the "Paper Trading" step on data separate from the In-Sample optimization data probably doesn't matter a whole lot. The important part is a second fitness metric used to sort your optimization results before running the OOS backtest. I think it would be more appropriate to call this second step "SELECTION" (or "verification", or something like that).

So, 3 steps:

  1. OPTIMIZATION (In-Sample Data): Optimize strategy parameters on In-Sample data using "Fitness Metric 1" as the optimization target, generating a list of backtest results and their parameters.

  2. SELECTION (Paper Trading Data): Use a different "Fitness Metric 2" to select the best of these optimization results based on THIS metric, rather than just choosing the best one based on "Fitness Metric 1" (which is what happens currently). The Paper Trading data could be the ending period of the IS data, an interval between the IS and OOS sets, or the entire IS period - whatever the user defines.

  3. BACKTEST (Out-of-Sample Data): Run a backtest using the best "Fitness Metric 2" parameters on OOS data (not seen by either step 1 or step 2).

Repeat, repeat, repeat until the walk-forward test is complete.

Would greatly appreciate any assistance in accomplishing this.
Thanks!


I provided a suggestion for you as a response to your previous post: Custom backtest metric (per-symbol profit/loss) for a specific date range

I did not realize that you had created a new topic to restate the question.

Thank you, will take a look. Yeah I figured the title of this post was more to the point about what actually needs to be accomplished, and it wasn't possible to change the title of the old thread.

Thanks for the reply - sounds like that would be useful, but there's still the problem of having one target for the optimization and then selecting a different solution than the best "optimized" one to use for the walk-forward OOS backtest... I think it would have to be coded as a plugin, or use the batch routine in some complicated and tedious way.

I don't think you fully understood my point. You're not going to use Fitness Metric 1 or Fitness Metric 2 as your Optimization Target. You're going to track those yourself, and create an artificial Optimization Target Value that can be provided to AmiBroker so that AB will know which is your "best" variation to run for OOS.

Aha ok, well yeah that sounds good!

Right now I'm running walk-forward tests using this custom metric, which I eventually figured out how to create after reading through other forum posts and user guides:

// Window Profit: gives % profit for the last "windowlength" days of an optimization.
SetCustomBacktestProc( "" );
if ( Status( "action" ) == actionPortfolio )
{
    // find profit for the window at the end of the backtest range
    windowlength = 60;                               // window length in days

    range_end = Status( "rangetodate" );             // end of range, in DateNum format
    range_end = DateTimeConvert( 2, range_end );     // convert DateNum format to DateTime format
    start = DateTimeAdd( range_end, -windowlength ); // subtract windowlength days from the end date

    bo = GetBacktesterObject();
    bo.Backtest();                                   // run default backtest procedure
    st = bo.GetPerformanceStats( 0 );                // get stats for all trades

    // mode -1 = nearest earlier bar, in case "start" falls on a non-trading day
    start_eq = Lookup( bo.EquityArray, start, -1 );
    end_eq = bo.Equity;
    window_gain = Nz( 100 * end_eq / start_eq );
    window_profit = Nz( 100 * ( end_eq - start_eq ) / start_eq );
    carmdd = Nz( st.GetValue( "CAR/MDD" ) );
    cargain = IIf( carmdd > 0, Nz( carmdd * window_gain ), 0 );
    cargain2 = Nz( cargain * window_gain );
    carwindow = IIf( carmdd > 0 AND window_profit > 0, carmdd * window_profit, 0 );
    carwindow2 = carwindow * window_profit;

    bo.AddCustomMetric( "Window % Profit", window_profit, 2 );
    bo.AddCustomMetric( "CAR Window", carwindow, 2 );
    bo.AddCustomMetric( "CAR Window2", carwindow2, 2 );
    bo.AddCustomMetric( "CAR Gain", cargain, 2 );
    bo.AddCustomMetric( "CAR Gain2", cargain2, 2 );
    bo.AddCustomMetric( "Window Length (Days)", windowlength );
}

I'm setting the optimization target to "CAR Gain2" for one walk-forward test and running another identical walk-forward with "CAR/MDD" as the optimization target, and the "CAR Gain2" one is winning by a landslide so far. I'm using 10 months for my in-sample range, 90 days for the "window length", and a 1-month OOS step size, with 14 months total OOS.

This still isn't quite what I originally wanted to do, since the "Selection" metric (the profit during the "Window") is used as part of the "Optimization target" (I wanted to keep them separate). But it's kicking @$$ so far, so maybe doing it this way is fine.

It would be awesome to implement your solution and compare!


It wouldn't be that hard to calculate CAR and MDD (on an EOD basis) for the period of time from the beginning of the IS period to the beginning of the Window, i.e. the true IS period. However, you're still missing the ability to only select the Top N variations of the true IS period for evaluation during the Window. Right now you're just overweighting the returns from the Window portion of the "IS + Window" period.
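
A rough sketch of that calculation inside the custom backtester, assuming EOD bars (roughly 252 per year); the is_start and window_start dates are hypothetical user inputs:

// Sketch: CAR and max drawdown restricted to the "true IS" period
// (from IS start up to the start of the Window), computed from the
// portfolio equity array. is_start / window_start are hypothetical
// user inputs; ~252 EOD bars per year is assumed.
SetCustomBacktestProc( "" );
if ( Status( "action" ) == actionPortfolio )
{
    bo = GetBacktesterObject();
    bo.Backtest();

    is_start     = _DT( "2018-01-01" );  // hypothetical start of true IS period
    window_start = _DT( "2018-10-01" );  // hypothetical start of the Window

    eq = bo.EquityArray;
    dt = DateTime();
    inIS = dt >= is_start AND dt < window_start;

    start_eq = Lookup( eq, is_start, 1 );       // nearest bar at/after IS start
    end_eq   = Lookup( eq, window_start, -1 );  // nearest bar at/before Window start

    is_years = LastValue( Cum( inIS ) ) / 252;  // IS length in years (EOD assumed)
    is_car   = 100 * ( ( end_eq / start_eq ) ^ ( 1 / is_years ) - 1 );

    peak   = Highest( IIf( inIS, eq, 0 ) );            // running equity peak inside IS
    dd     = IIf( inIS, 100 * ( eq / peak - 1 ), 0 );  // % drawdown inside IS only
    is_mdd = -LastValue( Lowest( dd ) );

    bo.AddCustomMetric( "True IS CAR", is_car, 2 );
    bo.AddCustomMetric( "True IS MDD", is_mdd, 2 );
    bo.AddCustomMetric( "True IS CAR/MDD", IIf( is_mdd > 0, is_car / is_mdd, 0 ), 2 );
}

Lookup modes 1 and -1 snap to the nearest trading bar in case either date falls on a weekend or holiday.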

This is the third and final time that I will say this:

  1. Do NOT use Fitness Metric 1 as your optimization target
  2. Do NOT use Fitness Metric 2 as your optimization target
  3. DO use the two Fitness Metrics to track your own results, and to create an artificial optimization target that can be used by the optimization engine
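
Here is a minimal sketch of that approach, assuming CAR/MDD as Fitness Metric 1 and paper-trade window profit as Fitness Metric 2. The fm1_floor threshold is a hypothetical user input standing in for a true top-N cut, since a single optimization pass cannot see the other variations' results:

// Sketch: track FM1 and FM2 separately, then publish an artificial target.
// fm1_floor is a hypothetical user threshold standing in for a top-N cut,
// because one optimization pass cannot see the other variations' results.
SetCustomBacktestProc( "" );
if ( Status( "action" ) == actionPortfolio )
{
    windowlength = 14;   // "Paper Trade" window, in days
    bo = GetBacktesterObject();
    bo.Backtest();
    st = bo.GetPerformanceStats( 0 );

    range_end = DateTimeConvert( 2, Status( "rangetodate" ) );
    win_start = DateTimeAdd( range_end, -windowlength );

    fm1 = Nz( st.GetValue( "CAR/MDD" ) );                   // Fitness Metric 1
    start_eq = Lookup( bo.EquityArray, win_start, -1 );
    fm2 = Nz( 100 * ( bo.Equity - start_eq ) / start_eq );  // Fitness Metric 2

    fm1_floor = 1.0;   // hypothetical minimum acceptable FM1
    target = IIf( fm1 >= fm1_floor, fm2, -1e9 );  // rank by FM2 among FM1 "passers"

    bo.AddCustomMetric( "FM1 (CAR/MDD)", fm1, 2 );
    bo.AddCustomMetric( "FM2 (Window %)", fm2, 2 );
    bo.AddCustomMetric( "WF Target", target, 2 );  // use this as the optimization target
}

Setting "WF Target" as the walk-forward optimization target then makes AmiBroker pick the variation that passes the FM1 gate and looks best on FM2, while both metrics remain visible as separate result-list columns.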

Here is an idea for how to use this 2nd step "filtering" that would be nice to have in walkforward testing and optimization, if anyone knows how this could be coded:

Example:

  1. Optimize strategy using CAR/MDD (or any Fitness Metric #1) as optimization target using 1 year (user specified) of In-Sample data

  2. Calculate Ulcer Index (or any Fitness Metric #2) for the last 120 (user specified) days of In-Sample data for each backtest during the optimization

  3. Select top 10% (user specified) of optimization results based on Fitness Metric #1 then sort them by Fitness metric #2

  4. For walkforward tests - Backtest the Out of Sample data using this "Filtered" result

@Tomasz - is there any way to do this?

I see basically 2 parts:

  1. Run 2 backtests for each variation during the optimization stage - one on the selected date range for the optimization, a second on a separate date range - and generate all the backtester stats for the 2nd range as well.

  2. Keep the top X% of results by the optimization target, Fitness Metric 1 (CAR/MDD), and sort them by performance on Fitness Metric 2 (UPI of the last 4 months).

I know from previous experience that a 2-stage filtration process like this can come up with more stable results than just choosing the solution with the best result for the optimization target to run live. Taking the top 10% of results and then choosing the one with the least drawdown, or the highest UPI, or whatever, would most likely give a more stable solution that performs better on OOS data.
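
Fitness Metric #2 from step 2 above can at least be computed inside the custom backtester. Here is a rough sketch of an Ulcer Index over the last 120 days, derived from the equity array (EOD bars and the ui_days name are assumptions):

// Sketch: Ulcer Index of the last "ui_days" days, from the equity array.
// UI = sqrt( mean( squared %-drawdown ) ); EOD bars assumed.
SetCustomBacktestProc( "" );
if ( Status( "action" ) == actionPortfolio )
{
    ui_days = 120;   // user-specified window length
    bo = GetBacktesterObject();
    bo.Backtest();

    range_end = DateTimeConvert( 2, Status( "rangetodate" ) );
    win_start = DateTimeAdd( range_end, -ui_days );

    eq = bo.EquityArray;
    inWin = DateTime() >= win_start;

    peak = Highest( IIf( inWin, eq, 0 ) );               // running peak inside window
    ddsq = IIf( inWin, ( 100 * ( eq / peak - 1 ) ) ^ 2, 0 );
    n = LastValue( Cum( inWin ) );                       // bars in the window
    ui = sqrt( LastValue( Cum( ddsq ) ) / Max( n, 1 ) );

    bo.AddCustomMetric( "Window Ulcer Index", ui, 2 );
}

The top-X% cut itself still can't happen within a single run; the threshold-gate trick sketched earlier in the thread is the closest per-run substitute.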

Thank you for the help!


True. Well, it sounds like a nice solution until @Tomasz adds this functionality as a built-in feature in the paid version of AmiBroker :wink: Coding this would be way over my head, but if you think it would be a useful feature and feel like giving it a crack, that would be awesome!

Being able to run the built-in backtester on a second date range in a custom metric would be optimal; then we would have access to all the built-in metrics in AmiBroker without hard-coding all the calculations.

I got a response by email:

Hello,

Thank you very much for your e-mail.

As I wrote before - this "Paper Trading" is not part of walk-forward and I see no benefits of adding it.

You can however do whatever you want and I explained on the forum that you can change
the way WF works.

Start/End date is stored in project file (.APX).

So you can either:

a) store multiple APX files with different start/end dates and load them as you wish
or
b) modify APX file to change start/end date.

APX is a plain text file (XML) and you can edit it or process programmatically.

Analysis can be automated as explained in the manual
http://www.amibroker.com/guide/objects.html


(scroll down for the examples - there are examples showing how to run the Analysis and optimization too)

Best regards,
Tomasz Janeczko
amibroker.com

Thank you for all of your hard work here. I know exactly what you are talking about, and it would likely be beneficial for picking the very best sets of parameters in a walk-forward type test. This is the same process used in machine learning, and it is different from walk-forward testing.

I mention the same idea in my thread here.

I would summarize the idea in this way:

  • Three sets of data:
    1. Training - built-in/standard optimization.
    2. Validation - testing only the best results from Training; compare these results to Training.
    3. Unseen - the best results from Training + Validation are used here.
  • The best results from step 3, Unseen (or a combination of all 3), would be used for actual trading.

This is a common approach to optimizing machine learning algorithms - More info here.

I think @mradtke gave the best answer above. I will work on a solution to that end.
