Hello,
I think you may have faced the same issue as a beginner. I have set up the client code, but I am not able to get past the first step itself. My basic question is: what is the right way to ADD a new symbol for the first time? Should I add the symbol directly in AmiBroker, or somewhere else?
Reading the documentation, it seems I should add it first in the Plug-in DB, but I don't see the steps for doing so. How do I add a new symbol in the Plug-in DB?
I tried sending the addsym command to the Relay Server, but it doesn't add the symbol to the Plug-in DB. I was expecting the plugin colour in AB to turn dark green once a new symbol is available in the DB for import into AB.
Please ignore this if I have missed an obvious point; I am still trying to understand the initial setup process for this plugin.
The plugin is a pass-through, a bridge between your client app and AB.
There is NO "add" in that sense for the plugin: you just send json-rtd or json-hist, the plugin stores it, and Notify will turn DARK GREEN.
Then go to Configure and RETRIEVE.
IF the SYMBOL is already in AB, go to that symbol directly from a chart or AA. No "add"-ing is required.
Also: spend time reading this whole thread and the docs carefully, because this is not plug-n-play.
OK, now it is clearer to me. I will add the symbol directly in AB and open the chart to initiate the process the first time. I have gone through the entire thread here; it is very helpful and has given me a good start.
There is probably some problem with the client itself, which may not be returning data. I will debug more and get it working.
Use sample_Server.py, it is fully functional.
I came across this plugin recently - it looks like a nice idea with a lot of effort put in by yourself. I have read the docs and forum posts and skimmed the Python, but have a few questions. I don't know Python, so apologies if the answers are all in the code. Firstly, in the Relay Server & Client App context, is this summary of the message protocol correct?
Message protocol
================
Relay Server and Client App
All messages are in JSON format except the initial comms handshake.
Establish comms:-
Client App opens websocket
Client App sends string "rolesend" (not JSON)
Plugin/Relay response ????

From Plugin to Client App
-------------------------
Code          ?     Triggered by    Client Response
cmd:bfauto    None  Plugin          Send history data (array) for symbol from given date
cmd:bffull    None  Plugin          Send all history data (array) for symbol
cmd:bfsym     None  Plugin UI       Send all history data (array) for symbol
cmd:bfall     None  Plugin UI       Send all history data (separate arrays) for all symbols
cmd:addsym    None  Plugin UI/AFL   Echo cmd with code 200/400 (OK/Fail) for symbol subscribe
cmd:remsym    None  Plugin UI/AFL   Echo cmd with code 200/400 (OK/Fail) for symbol unsubscribe
cmd:cping     None  Plugin/AFL      Echo cmd with code 200/400 (remote connected/not connected)
ed            None  AFL             Send extra data fields data for symbol

From Client App to Plugin
-------------------------
Code           ?     Triggered by   Plugin Response
cmd:dbremsym   300   Client App     Ack with code 200/400 (OK/Fail) for symbol delete from DB
cmd:dbgetlist  300   Client App     Ack with code 200/400 (OK/Fail), return list of DB symbols
cmd:dbgetbase  300   Client App     Ack with code 200/400 (OK/Fail), return base time in secs
cmd:dbstatus   300   Client App     Ack with code 200/400 (OK/Fail), return DB status info
cmd:cping      None  Client App     ????
info           None  Client App     No response - update specified symbol fields in AB
There are some question marks in the above:-
What is the plugin/relay response after getting "rolesend"?
What is the plugin/relay response to client cping?
I have some other queries but will post separately.
Cping is an optional heartbeat packet that both the client app and the plugin emit. The client app can choose some interval, like every 1 min, and the plugin emits one around the same time.
It just tells the other side "I am active". The client app receives an ACK from the plugin when it queries with cping.
For roles, see the relay.py code:
A new connection to the relay should send this packet first; it is not broadcast, but tells the relay on which side to put the connection.
The relay broadcasts all packets from "rolesend" connections to the "roleauth" side, and vice versa.
If it is not "rolesend", i.e. client app(s), it defaults to receiver, i.e. plugin or external DB or anything that consumes the data.
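To make the role split concrete, here is a rough sketch in Python. The strings "rolesend"/"roleauth" and cmd:cping come from the posts above; the function names, data structures and exact JSON shape are assumptions, not the real relay.py.

```python
import json

senders = set()    # connections that announced "rolesend"
receivers = set()  # the default side: anything that consumes the data

def register(conn, first_packet):
    """Place a new connection on the relay side its first packet implies."""
    if first_packet == "rolesend":
        senders.add(conn)
        return "sender"
    receivers.add(conn)   # anything else defaults to the receiver side
    return "receiver"

def make_cping():
    """Optional heartbeat packet; the exact JSON shape is an assumption."""
    return json.dumps({"cmd": "cping"})

print(register("ws-1", "rolesend"))   # sender
print(register("ws-2", "{...}"))      # receiver
```

The relay would then forward packets from the `senders` set to the `receivers` set, and vice versa.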
@nsm51, could you please add me to the testing group of the WS_RTD plugin?
Respect for your work on this! I am currently using the ODBC connection, but speed is a limiting factor.
Your help is much appreciated!
Good day @chrisnoy, how do you implement
GetintraDayChart()
Is it a custom function you made?
I wonder how you configured it
without getting/hitting "DosCritHit 301". I am "bfauto"-ing 1-min history on 500+ symbols. Or perhaps @nsm51 has suggestions to avoid it?
I have not seen the code; which repo are you referring to?
In any case, you need to throttle the requests to the server.
Vendors define the rate limits.
Something like this - it is not tested, but we can rate-limit the requests:
# --- near top of file ---
import asyncio
import time

BACKFILL_MIN_INTERVAL = 0.4        # minimum seconds between backfill sends
_backfill_lock = asyncio.Lock()
_last_backfill_send = 0.0

async def throttled_backfill_send(ws, payload):
    global _last_backfill_send
    async with _backfill_lock:
        now = time.monotonic()
        elapsed = now - _last_backfill_send
        if elapsed < BACKFILL_MIN_INTERVAL:
            await asyncio.sleep(BACKFILL_MIN_INTERVAL - elapsed)
        await ws.send(payload)
        _last_backfill_send = time.monotonic()

# --- modify to fit this part ---
if cmd in ("bfauto", "bffull", "bfall", "bfsym"):
    if not info_triggered.is_set():
        info_triggered.set()
    log(f"[relay] backfill request → {cmd} {msg.get('arg','')}")
    await throttled_backfill_send(feed_ws, json.dumps(msg))
Hello @nsm51,
I would like to test your WsRtd data plugin, how can I get the WsRTD.dll?
I recently integrated AngelOne’s API with the WSRTD Plugin using your documentation and forum support. It took me over three days to fully understand the data structure and implement it, and the learning experience was invaluable. I must say, the plugin is exceptionally well-built — efficient, powerful, and clearly the result of significant effort and expertise.
I have a few feature requests for future releases. One is an auto-retrieve option, eliminating the need to manually click Retrieve in the database settings. I would also mention my interest in tick-level data support, although I understand that implementing true tick-by-tick streaming may require substantial work, considering the current base interval is 1 minute.
On my side, the client is functioning well, but I am noticing a slight delay (around a second) in updating data in AmiBroker. I believe this may be due to client-side optimizations required on my end, but I would appreciate your thoughts if there could be any other reason.
I also have a clarification regarding backfill — when RTD data is flowing into AmiBroker, is the incoming data stored in the local database automatically? After closing and reopening, I noticed past RTD data isn’t retained unless I backfill historical data up to the current time, after which RTD continues from that point onward, and the historical data gets stored. Please confirm if this is the expected behavior.
Additionally, I feel a Remove All Symbols (dbremsyms) button within the WSRTD UI would be a useful enhancement. I went through your latest GitHub commit and was glad to see that some of the features I wished for are already in progress.
Note: Symbols cannot be automatically added or StockInfo updated without clicking RETRIEVE; this is by AB design (specifically clarified by Tomasz).
I would love this too, but technically we can't randomly add/remove symbols while charts/AA windows are using them.
The maximum delay we can visually have is 300 ms, which is the Timer Interval in settings.
Check your client app and use an asynchronous technique to send data.
In testing so far, it has not been an issue. Also, check using sample_server.py (there shouldn't be a delay).
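As a sketch of what "asynchronous sending" could look like on the client side (the names and structure here are mine, not from the plugin docs): a feed handler pushes quotes onto a queue, and a dedicated task drains it, so a slow websocket send never blocks quote collection.

```python
import asyncio
import json

async def quote_sender(queue, send):
    """Drain queued quotes in a dedicated task so a slow send
    never blocks the code receiving quotes from the broker feed."""
    while True:
        item = await queue.get()
        if item is None:          # sentinel: shut down cleanly
            break
        await send(json.dumps(item))

async def demo():
    sent = []
    async def fake_send(payload):   # stand-in for ws.send(...)
        sent.append(payload)
    queue = asyncio.Queue()
    task = asyncio.create_task(quote_sender(queue, fake_send))
    await queue.put({"sym": "ABC", "ltp": 101.5})  # hypothetical quote shape
    await queue.put(None)
    await task
    return sent

print(asyncio.run(demo()))   # one JSON payload was sent
```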
This is doable; the later version comes with an LRU Manager too - Least Recently Used symbols are auto-dropped.
The cpp code of how it works has been added to GitHub; I am testing an 8-min rolling window instead of tighter ones.
Rtd_Ws_AB_plugin/src/WsRTD/LRUManager.cpp at main · ideepcoder/Rtd_Ws_AB_plugin
Updating the index at every QuoteEx() call is CPU wastage imho.
If enough requests come, maybe Tomasz can notify the plugin on UI symbol delete - I have already requested this for a future version. Then it would be seamless.
This is discussed in the FAQ and in this thread many times: AB will request data only if a symbol is used. To do that, you run AA on all your required symbols.
Until then, only 1 backfill cache is stored in the plugin.
See FAQ 7, and backfill in the manual.
We can only cache the most recent json-hist payload in the plugin.
Again, in the later plugin version there is a state-persistence option that partially mitigates the issue.
But the bottom line is that AB NEEDS to call the symbol to populate the local DB.
Last but not least, I am really thankful for all your wishes, and I hope to get sponsors to continue development in the same spirit.
A few questions:-
Re Config:
Time ms - what does this control?
Max no. symbol quotes - per symbol or total?
Retrieve - it is not clear why Retrieve needs to be in the config UI. What underlying method does clicking Retrieve in the UI perform that cannot be achieved in plugin code? OLE?
Re Comms
The comms protocol between app and relay seems to have no flow control. What happens when the circular buffer used for RTD gets full? Similarly, it is mentioned that only a single history packet is processed at a time - how does the app know when it can send the next history packet without data loss?
Re Backfill
Is the sequence of sending always history first and RTD after, or can they be interspersed (and the data stitched/merged by the plugin)?
The data plugin runs on a Timer in the main thread; this controls how often it is called to process the received data.
This is the RTD cache. That many bars, in their base time interval, are stored as a ring buffer per symbol. So 200 in a 1-min TF means 200 minutes; you need to call the symbol from AB to ensure the cache is copied to the AB DB.
So if you run an AA Exploration every minute, then even 10 is sufficient. A higher value uses more RAM, 40KB per bar.
OLE is NOT allowed inside the plugin, as it runs on the main thread. It will deadlock.
Retrieve, or any other name, is by AB design the only way the plugin gets the m_pSite pointer.
Symbols cannot be randomly added/removed, hence the blocking Dialog.
Look at the QT plugin Sample in the ADK.
Elaborate on what you mean by flow control.
Ring buffer/circular buffer by name means the oldest quote is dropped if a new one arrives when the buffer is full.
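That drop-oldest behaviour can be illustrated in two lines of plain Python (this is just an illustration, not the plugin's C++ ring):

```python
from collections import deque

ring = deque(maxlen=3)              # a 3-slot ring buffer
for quote in [101, 102, 103, 104]:
    ring.append(quote)              # the 4th append silently drops the oldest

print(list(ring))                   # [102, 103, 104] - quote 101 is gone
```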
The app should stitch the entire history and send the full backfill period.
If you want to paginate, as explained: run ALL symbols in an AA scan/explore, and then, with a larger window, AA-repeat and send the newer data.
There is no sequence; you can have a look at the BlendQuotesArray() cpp code in the QT Sample of the ADK.
WsRtd uses a somewhat more complex method, but the data has to be aligned from AB-DB + Backfill + RTD.
That is why only 1 backfill, the most recent one, is cached.
If you send a backfill, AB does not call that symbol, and a new backfill arrives, the older one is dropped.
Processing multiple backfills was an idea, but it is more complex than you think.
A very good example is ASCII Import, which does it, but you can see it lags the UI - NOT because it is badly coded, but because of the sheer complexity of sorting.
I have implemented many things, like adaptive sleep on the socket thread, adaptive yield on the main thread, etc., but a data plugin should not be doing this for edge cases. It is better if the client app handles this.
That is why BfAuto sends the last bar timestamp.
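For example, the client app could use that last-bar timestamp to resend only the missing tail of history (a sketch; the bar/field names are assumptions, not the documented json-hist format):

```python
def bars_since(all_bars, last_ts):
    """Return only the bars newer than the plugin's last-bar timestamp."""
    return [bar for bar in all_bars if bar["ts"] > last_ts]

# hypothetical stitched history held by the client app
history = [
    {"ts": 1000, "close": 10.0},
    {"ts": 1060, "close": 10.2},
    {"ts": 1120, "close": 10.1},
]
print(bars_since(history, 1000))  # only the two newer bars get resent
```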
PLUGINAPI int Configure( LPCTSTR pszPath, InfoSite* pSite );
Elaborate on what you mean by flow control
Flow control (or back pressure) is required to handle the situation where a producer process (the client app here) sends data faster than the consumer process (plugin/relay here) can process it. There are various strategies that can be used - a web search will show lots of techniques. Without flow control, data can be unknowingly lost, as you mentioned for the RTD ring buffer.
You could use a simple ACK/NAK protocol on RTD & Hist packets that would enable the client app to adjust the rate at which it sends data. For Hist data it could simply wait until it gets an ACK before sending the next Hist packet. For RTD data the client could additionally adjust its aggregation period to avoid overwhelming the plugin.
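A minimal stop-and-wait version of that suggestion might look like this (a hypothetical protocol sketch, not something the plugin implements; `transport` and the "ACK"/"NAK" replies are assumptions):

```python
def send_history_stop_and_wait(packets, transport):
    """Send each Hist packet only after the previous one was ACKed.

    `transport(packet)` stands in for a blocking send+receive and
    must return "ACK" or "NAK" (an assumed reply format).
    """
    delivered = []
    for packet in packets:
        reply = transport(packet)
        if reply != "ACK":
            raise RuntimeError("packet rejected: %r" % (packet,))
        delivered.append(packet)
    return delivered

# toy transport that always acknowledges
print(send_history_stop_and_wait(["hist-1", "hist-2"], lambda p: "ACK"))
```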
OLE is NOT allowed inside the plugin, as it runs on the main thread. It will deadlock.
Retrieve, or any other name, is by AB design the only way the plugin gets the m_pSite pointer.
Symbols cannot be randomly added/removed, hence the blocking Dialog.
Look at the QT plugin Sample in the ADK.
I meant the plugin could trigger add-symbol via an external OLE process/script.
Many things are not practical when considered holistically.
AB is not a headless server; it needs UI intervention.
There is no real throttling because we are on localhost; the physical hardware limits most things.
No data will go from the plugin to the AB DB without a UI trigger. Most apps/plugins may not even cache what I cache; they are pure pass-through.
So as a user, if you have 500 symbols and a dummy AFL scan/explore completes under 5 seconds, then you throttle from the client app accordingly.
And RTD ACKs are impossible.
All this is solved with one analysis running during market hours. You can schedule and batch all of this as well, with an AFL run guard.
The idea is to keep the plugin as fast and light as possible.
Don't take my answers as rude; the forum is a place to discuss.
Just as an example, a forum member requested Android/iOS AB versions OR a Cloud DB.
It's fine, they can request, but given AB's architecture and resource-intensive capabilities at full power, I don't see that happening, because of the time required to build it. It is not that Tomasz is not capable; it's a matter of priority.
You might as well run AB as it is on a VPS and remote-login from any device you want.
From the client app,
you can run the analysis and also get the status of completion, for example (JScript via OLE; this assumes AB is already running):

var oAB  = new ActiveXObject( "Broker.Application" ); // attach to AmiBroker via OLE
var NewC = oAB.AnalysisDocs.Count;                    // get total analysis windows
if ( NewC >= 1 )
{
    var NewA = oAB.AnalysisDocs.Item( 0 );            // 1st analysis starts with 0
    NewA.Run( 1 );
    while ( NewA.IsBusy ) WScript.Sleep( 1000 );      // check IsBusy every 1 second
}

You can probe whether it has completed directly, and use existing functionality to achieve the goal.
If data is in the plugin and AB requests QuotesEx(), it is 99.999% guaranteed that the memcpy has completed.
Don't take my answers as rude, forum is a place to discuss
Nothing taken as rude; discussion is always good.
My understanding is that the purpose of this plugin is to enable a client app to feed an RT data source into AB via the plugin, using a websocket. The advantage of this approach is that it eliminates the need for the user to code a data plugin (C++) and instead code a much simpler client app (Python etc.).
Basic RT operation: the plugin internally buffers the RT quotes it receives from the client, then alerts AB that it has data by periodically sending the WM_USER_STREAMING_UPDATE message. Once AB responds by calling the GetQuotesEx function in the plugin, any buffered data for the symbol is pushed into the AB data array. To feed multiple symbols (not just those with a visible chart), a repetitive AB scan/explore is required (causing AB to call GetQuotesEx for each symbol in turn).
I am puzzled though by some of your previous statements:-
Any data will not go from plugin to AB-DB without UI trigger
My understanding was that the plugin feeds RT data supplied by the client app continuously into AB with no user interaction?
There is no real throttling because we are localhost, the physical hardware limits most of the things
So as a user, if you have 500 symbols and a dummy AFL scan/explore completes under 5 seconds, then you throttle from client-app accordingly.
Apart from hardware, there are many other factors which will determine whether client throttling (flow control) is required. These include configurable things you can control (AB config (no. of symbols, history length), plugin config, client app config, etc.) but also non-controllable factors like the Windows OS, AB itself and any other apps/processes. Additionally, if using an RT data feed to autotrade, there will be trading-logic AFLs, autotrade libraries/plugins/bridges etc. consuming CPU.
The client app has no way of knowing if it needs to "throttle" itself (enlarge its snapshot interval), and I do not see how simply timing a dummy AFL scan and/or checking for script completion is going to reliably ensure all quotes are processed without data loss.
Without flow control, as a minimum I would suggest the plugin logs all circular-buffer overflow events (i.e. data loss) - at least then the user is alerted and can take preventive measures.
Above are just my thoughts.
You have clearly put a lot of effort into this project so I wish you success.
I said UI trigger, not user interaction. The symbol has to be in some active context; otherwise SendMessage/WM_USER_STREAMING_UPDATE won't randomly call GetQuotesEx.
Plugin: "Hey AB, I have new data." AB: "That symbol is not in use and the user may never load it, so I don't need to request GetQuotesEx()."
Try to think deterministically.
I have given the cache size a default of 200; the range is 10-1000. So if you have a 1-min base time interval, you know very well that 200 minutes after the last dummy scan you will overflow in an open market.
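That back-of-the-envelope check is easy to automate on the client side (a sketch; the 200-bar default and 10-1000 range come from the post above):

```python
def minutes_until_overflow(cache_bars, base_interval_min=1):
    """One bar arrives per base interval, so a ring of cache_bars bars
    starts overwriting after cache_bars * base_interval_min minutes."""
    return cache_bars * base_interval_min

# default cache of 200 bars on a 1-min base interval:
print(minutes_until_overflow(200))       # 200 -> schedule the dummy scan well before that
print(minutes_until_overflow(1000, 1))   # 1000 -> the maximum cache setting
```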
There is no point baking in logic if I can only notify. So instead of me notifying, you set up bullet-proof acting logic.
Inside the plugin I don't know when/how AB will call GetQuotesEx(), but from AB I know exactly when/how it will be called.
AB's plugin-driven database setting requires the user to define max_bars. Even AB doesn't notify on overflow, because when the array is full, every new bar is an overflow. Users should understand their own requirements and fine-tune.