Architecture for taking AmiBroker/IB live to a large universe of stocks on intraday data?

Hi, I was curious if anyone out there has good architectural solutions for adapting AmiBroker to automatically scan and trade all ~6,000 NYSE/NASDAQ-listed stocks - and how to run multiple trading systems on top of that? It seems like continually scanning and placing orders would run into several different types of limits. I've got a few thoughts, but I was curious whether the community (Bandy, Tomasz) had good solutions.

In the past I've tried to port systems developed in AmiBroker to third-party software, and it just creates too many points of failure, so I'd like to keep everything contained.

You would need a data source that allows you to stream 6,000 symbols at once. That is a major obstacle because, for example, the highest eSignal subscription is 2,000 symbols.


Yeah, that’s one obstacle. I would need a data feed that either supported that or breaking it down into multiple data feeds. Then there is the processing power of the server it’s running on and how much data AB can handle at any given time. And if AB should be broken down into multiple instances / multiple databases. Then there is the monitoring and balancing of said instances - and what to do when the market is under stress and the number of transactions increases. There is also the obstacle of maintaining symbol lists for new listings and delistings.

In short, I know there are a lot of issues that need to be worked out. I guess I was curious whether anyone else had done anything like this and wanted to chime in. Maybe there is another direction to go in that I hadn't already thought about? I know this subforum isn't too active yet.

If anyone else cares to respond or has questions, just respond to this and I’ll get an email notification and respond.

AmiBroker does not need to be "broken into instances". AmiBroker is a multi-threaded application and does not need any special procedures to take advantage of modern hardware. If you use the New Analysis window, it will use ALL available resources.

Scanning 6000 symbols is not a problem for AmiBroker. People are scanning and backtesting larger universes than that.

Sure, it's not an issue of the number of symbols; it is an issue of the messages per second coming from the data feed. I'm coming from direct feeds, so I'm sure the consolidated tape that gets pushed through eSignal is far less intensive. Still, there will be times when the market is under stress and the message rate quadruples. Of course, pinpointing whether eSignal is lagging or AB is lagging would be another matter altogether.

The other issue is the number and complexity of the calculations being done in real time. Maybe the limiting factor will be the quality of the server running AB, but I would think AB breaks before the computer pegs.

Ehh, we’ll see. It’s all just conjecture on my part!

Again, assumptions. Assumptions are not facts.
You don't understand how AmiBroker works. AmiBroker, contrary to all other applications, has an asynchronous design, and your formulas do not live in the GUI thread. No matter how much computing is done in your AFL, it does not affect the GUI thread or the responsiveness of the application. If there are lots of computations in AFL, the scans will execute less frequently, but this is not lag.



I have cast my vote on this thread, because I'm also very interested in this topic and would like to know if there are any useful tips and suggestions regarding this matter. But I can't agree with some of your statements:

First of all, AmiBroker is installed on users' computers - it is not a server-based solution. I'm not sure what you meant when writing about a server, but if this server is just one of your own computers, then it's OK.

I don't know why you expect AmiBroker to break (even if it is strained), or why you want to break it down into multiple instances to avoid that. Tomasz has just written about its unique design and explained why that is not a good idea...

It is probably the fastest and lightest program of its kind on the market. If you have the Professional Edition, each Analysis window can utilize up to 32 threads, so by simply running 4 Analysis windows (without any charts) you can utilize up to 128 threads! If the program breaks, it is almost always not because it is faulty (or has bugs) but because the formula is faulty. I have seen a lot of poorly written code which, after a couple of hours or even minutes, caused a crash - for example, by creating hundreds of thousands of static variables (without ever removing them), which would inevitably crash any software. Many users blame @Tomasz for that, rather than their own poor coding skills.
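To make the static-variable point concrete, here is a small AFL sketch (the variable name is made up for illustration). The crash scenario above is simply allocating persistent variables forever without ever releasing them:

```
// Sketch: cache one value per symbol in a persistent static variable.
// Each StaticVarSet() call allocates memory that survives between runs,
// so across thousands of symbols and repeated scans it must be released.
varName = "myCache_" + Name();
StaticVarSet( varName, LastValue( Close ) );

// ... use the cached value elsewhere ...

// When the cached values are no longer needed, remove them.
// StaticVarRemove() accepts wildcards, so one call can release
// the whole family of per-symbol variables:
StaticVarRemove( "myCache_*" );
```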

If I were to find an analogy, I would say it is like someone who has bought a Ferrari and now complains that reaching 300 km/h usually ends in a crash. Can the Ferrari be blamed for that? Is the car faulty (or does it have bugs :slight_smile: )? In 99% of cases, poor driving skills are to blame. The fact that someone can afford to buy a Ferrari doesn't make him a good driver. Drivers tend to think they don't need to improve their driving skills, because all the fancy systems on board will take care of everything. The same applies to AmiBroker. Many users think that if they install it on an expensive machine, they don't need to worry about performance issues because their powerful rig will crunch everything in nanoseconds. In some cases that is true, but only to some extent. Eventually such an attitude may lead to many problems and crashes. Learning to code on a powerful machine can form bad habits: such users tend to forget about the really important matters. A quote from Tomasz's post in another thread:

I think I know why people are so quick to assume bugs in programs: probably because all other programs, including Windows 10 itself, are filled with bugs. But that is not the case for AmiBroker, as it is painstakingly crafted the 'old' way, when programmers were not only "coders" but had experience starting from digital TTL logic, and every bit of memory and every single CPU cycle was precious.

You should read Tomasz's article about AmiBroker's performance:

Let me give you another example regarding the topic of this thread. A couple of years ago I was scheduled to be admitted to hospital, and I knew I would spend at least one month there. There were no such powerful smartphones back then, and the only way to stay up to date with the current situation (not only on the financial markets) was to have my own computer with me. Because I was told there was a high probability that my laptop might be stolen or broken there, I decided to buy a cheap netbook (netbook, not notebook) so it wouldn't be a big issue if it went missing; I wouldn't even have considered buying it in other circumstances. It was an ASUS Eee PC 1215B with a Brazos E-350 dual-core processor (2 × 1.6 GHz, without hyperthreading - only 2 cores and 2 threads). I bought it brand new for no more than $500. The only thing I changed was upgrading it to 8 GB of RAM; then I installed AmiBroker and a bunch of other programs. It was a very nice-looking netbook :slight_smile: with a decent specification for the amount of money I paid for it:

As you can see, it is hardly the fastest machine on the market :wink: The processor is so weak that browsing the net was definitely not a great experience (because of Flash etc.). But when it comes to AmiBroker, everything runs smoothly. I was able (and still am able) to live-stream and scan/explore all shares listed on one of the European stock exchanges (about 1,000 symbols) on that machine, and at the same time I could have many charts open with live quotes. One remark - I always try to do my best to optimize my code to make it light and efficient, and making such live scans/explorations possible on this machine required some effort. The limiting factor is not whether AmiBroker can handle it (it can - you can easily scan/explore/backtest many, many thousands of issues when working offline) but whether your data provider can handle it (probably he can't).

If I was able to perform this demanding task even on an E-350 processor, imagine what you can do on a good mainstream Intel 7700K processor, which is some 17 times faster than the Brazos according to the multi-core UserBenchmark score:

Now think what you can achieve with AmiBroker installed on a system equipped with a high-end processor like the Intel 6950X. This CPU might be as much as 36 times faster than the E-350 in a multithreaded environment. If you pair it with lots of RAM and fast SSD drives, you get really insane crunching power...

If someone asked me what has changed since the time I was using the Brazos - can I still run the latest AmiBroker 6.25 on it? - that would be a fair question for most competing products, which sometimes get the hiccups if only a couple of live charts are open, but it is the wrong question for AmiBroker. Not only does AmiBroker 6.25 still run on the Brazos (2 × 1.6 GHz), its performance is significantly better than it was a couple of years ago. Thanks to Tomasz's outstanding skills, each new release (in spite of being packed with new features) keeps getting faster.

Summing up - if your formula is properly and efficiently written, it should run smoothly even on an average system. You don't need 128 threads to perform live scans/explorations on a 6,000-stock universe - believe me - I know that :slight_smile: The only thing you should really worry about is your data provider, who probably won't let you do that easily.

Sorry for such a long reply, but I wanted to show you (using my own example) that AmiBroker's performance is usually the last thing to worry about :slight_smile:



… one remark. When I wrote that I was able to stream live data for all stocks on that netbook, I meant I was able to perform scans/explorations on all symbols every x seconds or x minutes during the session. It usually took only a couple of seconds for a simple exploration to complete.


Thanks Milosz.

You’re absolutely right that efficient code has a big impact. That was something I didn’t state, but I do recognize.

Everything I am concerned about is theoretical (assumptions). The reason why I posted here was to get clarity from anyone actually doing what I propose (scanning entire market, running calcs, and placing orders in real time). This way I can learn from them without making mistakes on my own. If I don’t have to break amibroker up into instances, then fantastic.

If I have to have three accounts to pull data on 6k stocks, OK. Then I'm still back to probably needing 3 instances of AmiBroker, since each universe is only 2k stocks (unless AB supports multiple logins for data, but AFAIK it doesn't). I still run into issues with new listings and delistings... etc.

Again, I am trying to understand the various limitations and how others have created solutions for the inherent problems. I'm not here to blame a Ferrari.

I think you're not getting the message. The limitation is in the data feed, not AmiBroker. Stick all 6000 into a watchlist and Explore it in Analysis. No problemo. The problem is a data feed that can deliver ticks for all 6000 symbols in a reasonable amount of time, and plugin support for that data feed. About half of the 6000 will also be penny-stock crap and a pointless waste of resources. An argument could be made for running a separate instance just for OTC.
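For anyone wondering what that looks like in practice, here is a minimal AFL Exploration sketch you would run against the 6000-symbol watchlist in Analysis (the breakout condition is invented purely for illustration):

```
// Minimal Exploration sketch: flag symbols making a new 20-bar high
// on above-average volume. Run it on the watchlist holding the full
// universe, with Periodicity set to your trading timeframe.
newHigh   = High > Ref( HHV( High, 20 ), -1 );
bigVolume = Volume > 2 * MA( Volume, 50 );

Filter = newHigh AND bigVolume;        // rows that appear in the output
AddColumn( Close,  "Close",  1.2 );
AddColumn( Volume, "Volume", 1.0 );
```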

However, I should point out that AmiBroker doesn't really support anything more sophisticated than treating each tick as an OHLC bar, since the data plugins are functionally limited and there is no AFL access to the data shown in the Time & Sales window. If you have sophisticated needs such as parsing bid and ask volume or the order book, you'll have to look at the competition, which is far more advanced in that domain.

So for anyone interested who reads this thread at a future date, here are some limitations/issues you should know about.

  1. As already covered, you need a retail subscription to the data - DTN is the best bet as of this writing - with 2,500 symbols per account. There are other options that will let you pull the entire market for $500-$700/month, but I would advise against them for the reasons below.

  2. It's also been discussed in the KB: the auto-scan doesn't necessarily run at exactly :00. You can set up a batch process in the latest build, but you're still going to be slightly off, so be cognizant of anything time-sensitive.

  3. The history in the database and the timeframe you’re scanning on has a huge impact on scan times.
    a. Scanning ~4400 stocks on the most recent bar (with approximately 6 years of history in the database) takes slightly over 5 minutes on a 1-minute timeframe. Scanning on a 5-minute interval takes about 3:20.
    b. Scanning ~4400 stocks on a database with only 15 months of history takes slightly over 1 minute on a 1-minute timeframe. It's about 50 seconds on a 5-minute timeframe and 45 seconds on a 15-minute timeframe.
    c. So depending on the strategies you're running, if you're trying to run 1-minute data on anything with more than a year of history, you're going to be hard-pressed because it takes over a minute to scan, so you're constantly behind. Plus, if you're 30 or 45 seconds into a 1-minute bar, chances are you're way off from your backtest anyway.
    d. As an aside, it was also mentioned previously that you're obviously just using OHLC data and don't have bid/ask data. So the thinner the stock, the less chance you have of being filled, or the more slippage you'll experience. I already knew that going in - but this is just for anyone else reading.

  4. The above was running one strategy with 50-75 lines of code. I’m not sure how much extra time would be needed if I was trying to execute multiple strategies.

  5. More than likely, I'll try to limit an instance of AB to 500-1k stocks to ensure quick scan times. Even running a scan on a 15-minute timeframe and having it take 45 seconds to complete - firing 45 seconds into the next bar - would probably drastically alter backtest-to-live results.

  6. I’m going to write a script for comparing backtest results to live AT results. But, that would be a nice feature on the wishlist for future releases.

I hope this was helpful in providing quantitative numbers for what I was seeing. It was run on AmiBroker 6.25 on a Core i7 5820K with 16 GB of RAM.


@Term, thanks for providing that information.

… but there are many ways in which you can significantly improve your results:

  1. You can make your scans/explorations run much, much faster if you use QuickAFL in the Analysis window. Why use 1,000,000 bars if you only need 200? I wrote about it (and about using SetBarsRequired() at the end of a formula) in detail in this thread:
  2. Nevertheless, in your case you should generally try to make your database as compact as possible - the fewer bars, the better. It makes no sense to scan the live market every n seconds or n minutes using a database with 6 years of history. When performing such scans/explorations you are interested in the latest changes, not those from a year ago, and you overwhelm AmiBroker with lots of unnecessary data. Besides, if you perform such a scan/exploration on a live market and subscribe to all those issues (with such a long history), you will probably run out of memory.

  3. If you need values that require a long history to calculate, there is also a solution: you can import values from other charts/explorations/scans running in a different timeframe:

If you set Periodicity (in the Analysis window's Backtester settings) to 1 minute and want to calculate a 60-day average volume (and compare it to today's volume) or a 90-day ROC, you will need many thousands of 1-minute bars just to perform this single calculation - even if you use TimeFrameSet() or TimeFrameGetPrice() - because those values will be calculated from 1-minute bars, and your formula will become slow. However, you still don't have to use that many bars. You can calculate the things you need in another chart/exploration/scan (running in a Daily timeframe), save those values to static variables, and then easily import the required values into the first scan/exploration/indicator. As soon as I started doing this, my intraday explorations became much, much lighter and faster.
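A hedged sketch of that technique, assuming two Explorations (the variable names and the threshold are made up): one running with Daily periodicity to produce the heavy values, and one running on 1-minute bars that only reads them:

```
// --- Formula 1: Exploration with Periodicity = Daily ---
// Compute the long-history values once per symbol in the Daily
// timeframe and park the latest value in a static variable.
StaticVarSet( "avgVol60_" + Name(), LastValue( MA( Volume, 60 ) ) );
StaticVarSet( "roc90_"    + Name(), LastValue( ROC( Close, 90 ) ) );
Filter = 1; // output every symbol (the variables are written regardless)
AddColumn( Close, "Close" );

// --- Formula 2: Exploration with Periodicity = 1 Minute ---
// Read the precomputed Daily values instead of dragging months of
// 1-minute bars through the intraday formula.
avgVol60 = Nz( StaticVarGet( "avgVol60_" + Name() ) );
roc90    = Nz( StaticVarGet( "roc90_"    + Name() ) );

Filter = roc90 > 10; // example condition using the imported Daily value
AddColumn( avgVol60, "60-day avg volume", 1.0 );
```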

Some more appealing examples:

To calculate a daily rate of change for EUR/USD using 1-minute data, you need 1440 1-minute bars. In a Daily timeframe - only 1.

To calculate a yearly rate of change for EUR/USD using 1-minute data, you need hundreds of thousands of 1-minute bars. In a Daily timeframe - exactly 1440 times fewer!

Think about the speed gains you can achieve using 1440 times fewer bars!

… and that’s not all. You can achieve the same result using just 1 Yearly bar … :slight_smile:
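And the QuickAFL/SetBarsRequired() tip from point 1 can be sketched in AFL like this (the 200-bar budget is arbitrary; pick whatever your longest lookback actually needs):

```
// Sketch: a formula whose longest lookback is 50 bars, so there is
// no reason for QuickAFL to pull the entire history.
fast = MA( Close, 20 );
slow = MA( Close, 50 );

Buy  = Cross( fast, slow );
Sell = Cross( slow, fast );

// Placed at the END of the formula: request at most 200 past bars
// and no future bars on subsequent runs.
SetBarsRequired( 200, 0 );
```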



All is covered already in the manual:

The difference between efficient use of AmiBroker and incorrect use of AmiBroker can be 1000-fold. In the same way, using the same C++ compiler, a proficient programmer who has not only practical but also theoretical background in computer science can produce code 1000x faster than someone who has no idea what he is doing. Even very simple stuff can be coded wrong. A simple example: iterative vs recursive implementation of the Fibonacci series. The first has O(n) complexity; the naive recursive one has exponential complexity.
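As a sketch of that Fibonacci example in AFL: the iterative version does n additions, while a naive recursive version recomputes the same subproblems over and over, so its cost grows exponentially with n.

```
// Iterative Fibonacci: one loop, O(n) additions.
function FibIter( n )
{
    local a, b, t, i;
    a = 0;
    b = 1;
    for( i = 0; i < n; i++ )
    {
        t = a + b;
        a = b;
        b = t;
    }
    return a; // a = Fib(n); e.g. Fib(10) = 55
}

// A naive recursive version, FibRec(n) = FibRec(n-1) + FibRec(n-2),
// would call itself roughly 2^n times to get the same answer.
printf( "Fib(10) = %g", FibIter( 10 ) );
```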


DTN NxCore might be interesting.

Or perhaps some CQG service, like the Continuum client. In the end, providers make it hard to monitor every tradeable instrument, and the difficulty increases with the data resolution. Don't forget the issue of limited margin/cash, either: a backtest allocates trades in an A-Z manner, but in real life fills come in in any order, taking up your available cash/margin. A best practice is to filter down to a smaller tradeable universe, by some "ideal" on the slower timeframe, which can then be monitored at the higher frequency. This would reduce processing and hardware strain as well.

As far as SetBarsRequired goes, it would be nice if we could set active symbols in AFL and then call SetBarsRequired() for EOD and separately for other intraday resolutions, just for those active symbols, in a top-down, "whittle down" manner. Perhaps that is already possible via watchlists, but I'm thinking more of feeding just those symbols in real time and having AB use its resources only for them, not the whole DB. That way each day can be different. Just thinking. Be easy on me. :thinking:


I agree, that's probably the best solution ...

I think there's a misunderstanding here.

  1. AmiBroker doesn't refresh/feed all symbols in a database by default. It refreshes in real time only those symbols which are displayed on visible charts (according to the Tools/Preferences/Intraday/"Realtime chart refresh interval" setting:

(If we enter 0 in this field, the chart will be refreshed with every new tick - up to 10 times per second.)

... and those symbols which have been added to the Real-time quote window.

So if you have a database with 6,000 symbols, 3 charts with different issues open, and no symbols in the Real-time quote window, only those 3 symbols will be refreshed in real time. So AB's resources are used efficiently.

Only you can make AmiBroker refresh all symbols in a database. You can do it, for example, by running a scan/exploration on a watchlist containing all symbols (when you run the initial scan/exploration, AmiBroker will backfill any data holes). But those symbols will not be automatically refreshed during the session (if they are not added to the Real-time quote window). They will be refreshed only when the next scan(s) run --> on demand (when you click the Scan/Explore button) or automatically every x seconds/minutes/hours...

You already decide which symbols are refreshed in real time, and you can already have different SetBarsRequired() settings for each timeframe, using either charts/indicators or scans/explorations. For example, you can have 3 explorations running in different timeframes (different Periodicity in the Analysis window's Backtester settings) and specify different SetBarsRequired() settings for each Analysis window.


A similar topic with additional information provided by Tomasz:

There is a big difference between refreshing and feeding. Every symbol that is, for example, in the real-time window list, or was accessed at any time and stays within the RT subscription limit, is FED in real time constantly. Feeding means streaming RT updates. These updates are received from the external RT source and used to produce bar data, and AB gets notified with every new tick on every subscribed symbol. This happens on the PLUGIN side.

These notifications
a) mark symbols as "dirty", so AmiBroker knows it needs to get NEW data from the plugin for any such symbol when it wants to display updated charts or when any Analysis is run
b) cause real-time window to display new data
c) cause Time&Sales to display new data
d) trigger Alerts (EasyAlerts)

Chart refresh is a different story. It can be triggered by many events, including new data, scrolling, resizing, timed refresh, mouse clicks, symbol changes, etc. On arrival of new data, charts are refreshed immediately if the time since the last refresh was >0.1 sec.


Yes, of course I am aware of that. I should have used the word "feeding" instead of "refreshing" in most cases. I am sorry for not being precise - English is not my native language. I do my best, but sometimes it is not enough... :worried:

The good side is, however, that your replies are usually an excellent opportunity to learn something new about how AmiBroker works internally - I really appreciate it. And this is such a case :slight_smile:

  1. How can this RT subscription limit be determined if the data provider doesn't specify it?
  2. So if I understand correctly, "if the symbol was accessed at any time and stays within the RT subscription limit (but is neither displayed on a chart nor present in the real-time quote window), it is FED in real time constantly" means that the symbol is marked as "dirty" when there is new data available, but only when it is requested by (for example) a chart or Analysis window will the newest quotes be downloaded/updated. Simply put: as long as the symbol is not "requested", AmiBroker knows that it lacks some of the newest data, but doesn't download it until that data is really needed. Is that how it works internally?

    And as I recall, the symbols within the RT subscription limit are added/removed according to the FIFO (first in, first out) rule.

  3. What is the relationship between the in-memory cache size and the number of symbols which are fed in real time constantly? Only the most obvious one - that the cache size should be big enough to fit all those symbols' quotes?

Thank you.


No. Nothing is downloaded. The quotes of streaming symbols are already there in the plugin, ready to use immediately without downloading anything. Real-time data sources like IQFeed or eSignal stream every tick of a streaming-subscribed symbol, and the plugin holds that data for immediate access.

This is nothing new - everything is described/documented in the ADK (plugin development kit).

The RT subscription limit is known. IB, eSignal and IQFeed have known limits, and IQFeed even has methods to query for subscription limits.

Cache size and RT subscription limits have nothing in common.


I completely agree! This is great stuff, and frankly I wish there was more of it. Knowledge of how the "gears" work helps in going from playing with AB and conducting research to real-time application. I would even add that it helps ground strategy research in the practicalities of how a system might actually be implemented.

My personal angle with AB, as it were, was to maybe use it as a front end for graphics and data management/storage (historical and real-time), and give my engines - which generate my orders and manage my systems - access to that. Originally I thought that through the ADK I could simply pass a pointer to the data held in AB as history, yet updated constantly via a real-time feed. Then it would be nice for my engines to push orders, trades and risk-management metrics back out to AB for viewing, as well as labeling trades or orders on the applicable chart - that kind of stuff. However, the fact that not all symbols in AB get auto-appended changes things a bit as I map this out in my head. From a marketing perspective, this type of ADK functionality (if it doesn't exist now) might be a good idea, since most quants build their stuff in anything from R, Python, MATLAB, C++, C#, Java, etc., and would love a front end to tie into.

With regard to feeding, perhaps you might simply add an on/off toggle for watchlists and other category assignments, to either leave them as they are (off) or have them fed (on). You might even allow setting an interval or timer to rotate the active symbols if the list exceeds the feed's limits: if the limit is 2,000 symbols, set a timer for 2 minutes, then swap the most recent 2,000 with the older 2,000.

Just some thoughts.
Be Sweet.