Optimizing with a Large Watchlist


When I want to optimize a formula against a large watchlist like the Russell 3000, it can take a long time to test all my parameters against several years of data for thousands of equities. I can speed it up by taking a sample of those equities and testing against only every nth one. To do that, I export my watchlist, bring the list into Excel, and use an Excel formula to keep, for example, every 10th equity.
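
For illustration, the same every-nth sampling can also be done with a small exploration inside AmiBroker instead of the Excel round trip; this is only a sketch, with the step of 10 taken from the example above:

// Hypothetical exploration: keep every 10th symbol from the watchlist
// selected in the Analysis "Apply to" filter (the step of 10 is just the example above)
step = 10;
Filter = Status( "lastbarinrange" ) AND ( Status( "stocknum" ) % step == 0 );
AddColumn( Close, "Last Close" );
// run as an Exploration, then select the result rows and add them to a new watchlist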


I then bring the list back into AmiBroker to make a new watchlist with 10% of the original equities, and I can optimize against it in a fraction of the time. When I'm done, I'll run a backtest against the shorter list and compare it to a backtest against the complete list to check that the results come close.

Is this how other people would do it? Is there a better or more efficient way, short of buying a supercomputer?


@Marcel Try not to optimize too many parameters at the same time. Limit yourself to two parameters at a time, so you can use the 3D optimization chart to look for "peaks and plateaus" (it is only available when optimizing exactly two parameters).

Another option: don't use small increments. For example, in a moving-average optimization from 50 to 200 days, don't step by a single bar; try a 2- or 3-bar increment instead. That cuts the number of steps from 150 to 75 or 50 in this example.
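
A minimal sketch of both ideas in AFL; the moving-average rule, ranges, and step sizes below are placeholder values for illustration only:

// Hypothetical example combining both tips: only two parameters, coarser steps
maLen   = Optimize( "MA length", 100, 50, 200, 3 );  // 3-bar step: 51 values instead of 151
stopPct = Optimize( "Stop %", 5, 2, 10, 1 );         // second parameter, 9 values
Buy  = Cross( Close, MA( Close, maLen ) );
Sell = Cross( MA( Close, maLen ), Close );
ApplyStop( stopTypeLoss, stopModePercent, stopPct );

With exactly two Optimize() calls, the 3D optimization chart mentioned above remains available.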

Perhaps use a shorter time period to test. Don't test 3,000 stocks over 20 years (roughly 5,000 bars). Consider optimizing over 10, 7, or 5 years instead.

Lastly, for the Russell 3000, you mentioned breaking it up into smaller groups. The S&P 500, S&P 400, and S&P 600 are all smaller groups within the Russell 3000. Since they are large-, mid-, and small-cap indices they may behave differently from each other, but that too could be useful information.


@Portfoliobuilder Thanks for the tips; I do all that already. Typically I'll use two parameters at a time, larger increments, and reasonable time periods. I don't like to let optimizations run over five minutes, especially if I have a dozen or so to run for one formula. I don't want to spend a whole day just on optimizations; I'm not that young anymore, as you might relate.


Another useful approach is to prefilter the watchlist based on some static set of criteria. For example, if your strategy requires a minimum price of $5/share and volume greater than 1M shares, then you can write a small exploration to find only those stocks that have met these criteria at least once during the period over which you intend to test. The output of this exploration can be saved into a new, smaller watchlist which will be the input to your optimization.
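
A rough sketch of such an exploration, assuming the $5 price and 1,000,000-share volume thresholds from the example:

// Hypothetical pre-filter exploration: one output row per symbol that met
// the criteria at least once within the selected Analysis date range
inRange = Status( "barinrange" );
met = Cum( inRange AND Close >= 5 AND Volume > 1000000 ) > 0;
Filter = Status( "lastbarinrange" ) AND met;
AddColumn( Close, "Last Close" );
AddColumn( Volume, "Last Volume", 1.0 );
// when the Exploration finishes, select the result rows and add them to a new watchlist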


If you have lots of optimization parameters (more than 10K combinations), use a smart optimizer (CMA-ES/PSO).
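
For example, switching AmiBroker to one of its non-exhaustive engines is a one-line change at the top of the formula; the option name and all values below are assumptions for illustration only:

// Hypothetical setup: use the CMA-ES engine instead of exhaustive search
OptimizerSetEngine( "cmae" );           // "spso" and "trib" are other built-in engines
OptimizerSetOption( "MaxEval", 1000 );  // assumed option name/value; check the engine docs
period = Optimize( "Period", 20, 5, 200, 1 );
level  = Optimize( "Level", 30, 10, 50, 1 );
Buy  = Cross( RSI( period ), level );
Sell = Cross( level, RSI( period ) );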


I've been following all of the above recommendations, but one thing that has really helped me is upgrading my computer's CPU. For $50 on eBay, I found an 8-core/16-thread CPU to replace my 4-core/8-thread one, and it has doubled the speed of my optimizations. I see that Tomasz has a topic with recommendations for new users that includes advice on choosing a CPU: Recommendations for new AmiBroker users


There are some more relevant tips on the Norgate website that might speed up optimizations as well: https://norgatedata.com/beta-testing-program/amibroker-faq.php#missingwatchlists

How can I increase the speed of AmiBroker Scans/Explorations/Backtests?

1. Put your data onto a secondary SSD drive (or purchase an SSD to replace your main system drive). Solid State Drives increase performance significantly.
2. Upgrade to the Professional Edition of AmiBroker, which can utilize all of the processing cores on your CPU (backtesting with the Standard Edition of AmiBroker is limited to two simultaneous threads).
3. Install a 64-bit version of Windows so that you can use the 64-bit version of AmiBroker Professional Edition. 64-bit CPU operations are significantly quicker than 32-bit, and the 64-bit Professional Edition can also make better use of all available RAM.
4. Prevent your antivirus scanner from doing real-time scanning on your data folders (i.e. exclude both the NDU database and your AmiBroker database from real-time scanning). Since the data folders do not contain any executable files that need constant monitoring, this significantly decreases CPU load by bypassing virus checking on these large folders. You can determine your NDU database location by starting NDU and clicking the Database tab, and your AmiBroker database location by starting NDU and clicking Integration -> AmiBroker.
5. Change the AmiBroker setting for "In-memory cache size", found under Tools > Preferences > Data. If you have at least 2 GB of RAM, you could increase "Max. MegaBytes" to 1000. The 64-bit version of AmiBroker allows even higher settings; for instance, you could use a setting like 6000 MB with something like 16 GB of RAM. As long as all of the data can fit into RAM, the second run (and all subsequent runs) will go very fast. You can monitor AmiBroker's cache usage by clicking Tools > Performance Monitor. The setting for "Max. Symbols" may also be increased to the limit of 20,000. However, a run should never be performed on more symbols than is necessary (see below).
6. Where applicable, use "Filter" to restrict your runs to a pre-defined universe of symbols. For instance, when using the NorgateIndexConstituentTimeSeries function to run a backtest against the constituents of a particular index, use "Filter" to select the relevant watch list for that index. This will provide a faster result than running the test against all symbols (see the sketch after this list).
7. Where applicable, when creating a database, set the "number of bars" to the lowest possible value. For instance, if a scan or exploration only requires 12 months of data, change the "number of bars" accordingly (File > Database Settings); 12 months is approximately 250 bars. For backtesting you will probably want a higher number of bars to test against. Note: once initially set, AmiBroker will not reduce the number of bars in its cache, so lowering the number will not subsequently help. It may be easier to have one database for scans and one for backtesting.
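
As a loose illustration of point 6, a backtest formula might combine historical index membership with the Analysis "Filter" set to the matching Norgate watchlist; the index name, watchlist name, and crossover rule below are assumptions rather than anything taken from the FAQ:

// Hypothetical backtest restricted to historical Russell 3000 members;
// in the Analysis window, set Filter to the corresponding Norgate watchlist
// (e.g. "Russell 3000 Current & Past"); check the exact index identifier in NDU
inIndex = NorgateIndexConstituentTimeSeries( "Russell 3000" );
Buy  = inIndex AND Cross( MA( Close, 50 ), MA( Close, 200 ) );
Sell = Cross( MA( Close, 200 ), MA( Close, 50 ) );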