I'm conducting a study of the impact of MaxOpenPositions on various trading performance metrics. The method used is to generate buy signals for many stocks and randomly choose k stocks from the pool of stocks having the buy signal at each bar. I use the optimizer to repeat the experiment for many trials. Then the optimization results are exported to a csv file and analyzed outside of AmiBroker to determine the mean performance metrics across all trials for each value of MaxOpenPositions.

One unexpected finding is the inverse relationship between MaxOpenPositions and Win% when the position score is random. I have observed this pattern with multiple trading strategies.

I would expect Win% to diminish with increased MaxOpenPositions when the PositionScore is positively correlated with Win%, but I'm puzzled by the result when the PositionScore is random. What am I missing here? What is causing the decreased Win% as MaxOpenPositions is increased?

A sample strategy is below. This code snippet deliberately peeks into the future -- it is only an example to generate buy/sell signals. The important part of this code is the random PositionScore. Optimizing this code with the "S&P 500 Current & Past" watchlist generated the data plotted in the chart above.

Normally, I would say commissions: the more positions you open, the more your transaction costs grow. But your formula suggests that you have set commissions to zero.

I can only tell you where my thoughts are leaning on this. Because the win percentage deteriorates so slightly with more positions, roughly -1%...

Maybe it's like picking from a jar of red and green marbles (green representing symbols that would be winning trades, red representing losers). Because the positions can be concurrent (open at the same time), it might be something like picking marbles without replacement: at any one time, the S&P contains a limited number of winners, and each winner you pick removes one from the jar, leaving slightly fewer for the next pick, and so on. Therefore, the higher MaxOpenPositions is, the more a run might inadvertently throw off the balance, sometimes leaving fewer winners, so subsequent picks have a lower chance of being winners. Try positions (marbles) with replacement?

Then again, the opposite might happen, leaving fewer losers to pick from. I guess you could pick a time frame where the S&P made a big advance, do the same for a bad market environment, and see how they compare. Also think a bit about using a non-market-capitalization index and its constituents. Maybe limit yourself only to stocks in a certain price range and see what you get, etc.

It is disconcerting -- I would like to fully understand the behavior of a strategy before trading it. As you mentioned, Sean, it is something like picking red and green marbles from a jar, without replacement, where the contents of the jar change at every bar. I'm not a statistician, but I think the problem can be modeled by a hypergeometric distribution (https://en.wikipedia.org/wiki/Hypergeometric_distribution), with a different hypergeometric distribution at every bar.

Continuing with your jar of marbles analogy, there are two important cases to consider: 1) a single marble drawn from the jar and 2) all the marbles drawn from the jar. Suppose the jar has 60 green and 40 red marbles.
Case 1: If we choose one marble, the probability it will be green is .6. If we repeat the sampling experiment for many trials we expect 60% of the trials will result in a green marble.
Case 2: If we choose all the marbles, by definition 60% will be green. We only need one trial.

My reasoning is that regardless of the number of marbles chosen (k between 1 and 100) from the jar, if we repeat the experiment for many trials, the expected value should be 60% green.
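The intuition in Cases 1 and 2 can be checked with a quick Monte Carlo sketch (plain Python, a hypothetical simulation, not AmiBroker code): draw k marbles without replacement from a jar of 60 green and 40 red, and average the green fraction over many trials. Since the mean of the hypergeometric distribution is k*K/N, the expected green fraction is K/N = 0.6 for every k.

```python
import random

def mean_green_fraction(k, n_green=60, n_red=40, trials=20000, seed=1):
    """Average fraction of green marbles when drawing k without replacement."""
    rng = random.Random(seed)
    jar = ["G"] * n_green + ["R"] * n_red
    total = 0.0
    for _ in range(trials):
        draw = rng.sample(jar, k)          # sampling without replacement
        total += draw.count("G") / k
    return total / trials

# The expected green fraction is 0.6 no matter how many marbles we draw.
for k in (1, 10, 50, 100):
    print(k, round(mean_green_fraction(k), 3))
```

The per-trial variance shrinks as k grows (and is exactly zero at k = 100, where every draw empties the jar), but the expected value stays at 0.6 throughout.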

That analogy only describes a single backtest bar. As I mentioned, each bar has a different mix of potential winners and losers. I may be mistaken, but I think the concept should generalize to a backtest with many bars, and the number of open positions should not have any effect on the outcome. But my optimization results do not confirm my thinking.
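The many-bar generalization can also be sketched in plain Python (again a hypothetical simulation, not the actual backtest): give each bar its own jar with a different winner fraction, draw k symbols without replacement at each bar, and accumulate the overall win rate. Under these idealized assumptions the expected win% equals the average winner fraction across bars, independent of k.

```python
import random

def portfolio_win_rate(k, winner_fracs, pool=100, trials=2000, seed=7):
    """Draw k symbols per bar; bar b's jar holds winner_fracs[b]*pool winners."""
    rng = random.Random(seed)
    wins = picks = 0
    for _ in range(trials):
        for frac in winner_fracs:
            n_win = int(frac * pool)
            jar = [1] * n_win + [0] * (pool - n_win)
            draw = rng.sample(jar, k)      # without replacement within the bar
            wins += sum(draw)
            picks += k
    return wins / picks

bars = [0.4, 0.5, 0.6, 0.7]                # a different jar mix at each bar
for k in (1, 5, 20, 50):
    print(k, round(portfolio_win_rate(k, bars), 3))
```

One thing this model deliberately omits is that real positions are held across bars, so consecutive draws in a backtest are not independent; that coupling is one place where the simple model and the actual backtest could diverge.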

What does your CBT do? Since I don't have your CBT, I ran your code without one, and I do not see the declining Win Rate that you reported. The table below is simply an Excel pivot table generated from the optimization output when the AFL is executed against the current S&P 100 for all of 2020.

Yes, exactly, or at least that's where my thoughts were leading me. So if the logic is to buy randomly and hold for 5 days, then I would scan from the buy day out to 5 days, get the population of symbols that were up versus down, and see what the total population (jar) looks like. If the up symbols don't outnumber the down symbols by at least your MaxOpenPositions number for that optimization step, that could be the smoking gun.
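That scan could be sketched roughly like this (hypothetical Python working on a plain dict of closing-price lists, rather than an AmiBroker Exploration): for a given entry bar, count how many symbols are up versus down 5 bars later, which is exactly the jar composition for that bar.

```python
def jar_composition(closes, entry_bar, hold=5):
    """Count symbols up vs down `hold` bars after `entry_bar`.

    closes: dict mapping symbol -> list of closing prices.
    Returns (n_up, n_down); symbols without enough future data are skipped.
    """
    n_up = n_down = 0
    for sym, prices in closes.items():
        exit_bar = entry_bar + hold
        if exit_bar >= len(prices):
            continue                       # not enough future bars
        if prices[exit_bar] > prices[entry_bar]:
            n_up += 1
        else:
            n_down += 1
    return n_up, n_down

# Toy data: two symbols up after the 5-bar hold, one down.
data = {
    "AAA": [10, 10, 10, 10, 10, 10, 12],
    "BBB": [20, 20, 20, 20, 20, 20, 25],
    "CCC": [30, 30, 30, 30, 30, 30, 28],
}
print(jar_composition(data, 1))            # entry at bar 1, exit at bar 6
```

Running this at every entry bar would show whether the jar regularly holds fewer winners than MaxOpenPositions, the "smoking gun" described above.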

@Matt, thanks for weighing in on this. My CBT just assigns a value to a custom metric. I always use a metric called "Fitness" whenever I run an optimization.

// Custom Backtester: run the default portfolio backtest, then expose the
// Ulcer Performance Index as a custom "Fitness" metric for the optimizer.
if (Status("Action") == actionPortfolio)
{
    bo = GetBacktesterObject();
    bo.Backtest();                       // run the default backtest procedure
    st = bo.GetPerformanceStats(0);      // 0 = stats for all trades
    fitness = st.GetValue("UlcerPerformanceIndex");
    bo.AddCustomMetric("Fitness", fitness);
}

I repeated your experiment as closely as I could: S&P 100, no CBT, 2020 only, 500 trials. My results are reproducible -- I ran the optimization twice with almost identical results. See below for a chart of my results alongside yours. For calendar year 2020 we see a different pattern -- increasing MaxOpenPositions also increases Win%. I still find this puzzling. Why isn't the Win% constant for all values of MaxOpenPositions?

Why would you expect them to be constant? With a larger traded universe you are likely to get different results. The law of large numbers works for large numbers; 10-50 isn't that large.

If you look at the dice-roll picture on Wikipedia, you will observe some deviation from the average, especially with a small number of rolls (<600).
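The dice-roll point is easy to reproduce with a short simulation (plain Python, hypothetical): the sample mean of die rolls wanders well away from the true mean of 3.5 for small n, and the spread across independent runs tightens as n grows.

```python
import random

def dice_mean(n_rolls, seed):
    """Average of n_rolls fair six-sided die rolls."""
    rng = random.Random(seed)
    return sum(rng.randint(1, 6) for _ in range(n_rolls)) / n_rolls

def mean_spread(n_rolls, runs=200):
    """Max minus min sample mean across independent runs."""
    means = [dice_mean(n_rolls, seed) for seed in range(runs)]
    return max(means) - min(means)

# The spread of the sample mean shrinks as the number of rolls grows.
for n in (10, 100, 600, 10000):
    print(n, round(mean_spread(n), 3))
```

The same logic applies to Win%: with only a few dozen trades per backtest, sizeable deviations from the "true" win rate are expected, and they shrink only slowly as the trade count grows.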