Awesome!! I think I can work around that, thank you!
Thanks a lot for your kindness. Let me put it to use. Your work is marvellous for the AB community!
This is working well! Thanks!
What are your thoughts on migrating from dtohlcvi to uohlcvi? Can we mix and match within the same database, or will that break stuff? I am thinking of migrating my multitude of CSV files to an SQLite database.
Sending the "since" parameter in ccxt will require less parsing if it is already an integer, and not having to turn the whole CSV file into a pandas DataFrame will make bulk imports quicker; I can simply check the last value in the list (rough sketch at the end of this post).
Am I better off simply starting over with a fresh database?
Also, in uohlcvi, will the same caveat apply regarding not mixing base time intervals?
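Here is roughly what I mean on the import side (a sketch only; the SQLite table and column names are hypothetical, but fetch_ohlcv and its integer "since" parameter are standard ccxt):

import ccxt
import sqlite3

# Hypothetical schema: a "bars" table with columns
# (symbol, ts, o, h, l, c, v), keyed on (symbol, ts),
# where ts is an integer (milliseconds since epoch).
con = sqlite3.connect("ohlcv.db")
last_ts = con.execute(
    "SELECT MAX(ts) FROM bars WHERE symbol = ?", ("BTC/USDT",)
).fetchone()[0]

ex = ccxt.binance()
# "since" is a plain integer, so no datetime parsing is needed;
# resume from the bar after the last one already stored.
candles = ex.fetch_ohlcv(
    "BTC/USDT", timeframe="1h", since=(last_ts + 1) if last_ts else None
)
for ts, o, h, l, c, v in candles:
    con.execute(
        "INSERT OR REPLACE INTO bars VALUES (?, ?, ?, ?, ?, ?, ?)",
        ("BTC/USDT", ts, o, h, l, c, v),
    )
con.commit()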
This format is only to specify the timestamp format of the JSON packet. Each packet is individually checked.
The plugin DB and the AB DB store timestamps in their own ways.
So, yes, u and dt can be mixed across different JSON packets.
This is the AB spec. Mixing did work, but it is not meant to be done that way, because the plugin cannot check the timeframe. AB trusts the plugin, and the plugin trusts the data source.
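So, if I understand correctly, two packets like these could coexist in the same session (the uohlcvi bar layout below is my guess from the field names, with a single unix timestamp replacing the date/time pair):

{"hist":"SYM1","format":"dtohlcvi","bars":[[20240601,100000,22.1,22.5,22.1,22.8,120,0]]}
{"hist":"SYM2","format":"uohlcvi","bars":[[1717236000,22.1,22.5,22.1,22.8,120,0]]}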
Brilliant. This is working so well, I'm really happy with what you have done.
Can non-OHLCV data be sent through the plugin? I am imagining, with respect to the JSON: if I have them stored, can I send the fields found in the AFL Function Reference - GETFNDATA?
E.g. {"hist":…,"OperatingCashFlow":100000,"bars":[…]} etc.?
You're the first one to request this. I will need to implement it. Let's see what I can do.
It will require a separate format.
Maybe you can list the required fields here.
Apologies. I didn't intend to request this. You've already done so much. I was simply trying to understand a background process.
I don't currently use those fields, but it's in my nature to play with such things. Please don't implement them on my behalf.
Out of interest, have you decided whether the plugin is going to be open source in the AKD, or private?
The idea for a test release was to expand the plugin capabilities so I don't mind adding features.
Going open source will depend on the response and contributions down the line. People need to open-source their various vendor API wrappers and get more people on board.
There is no community benefit if they are going to use the plugin only for their private needs.
We are in the early days, so I don't expect anything, but we will see green shoots when people who don't code are able to use RTD from their choice of broker/vendor.
I had some personal issues so I lost some time, but I'm waiting for users like you to catch up so I can get input on BrokerWS, which is the order interface.
I'm certainly up for contributing code. I forked the repository last week and have been sorting out a vendor client for ccxt. I built a class which has been working OK, and I renamed a few variables so I could keep track of them more easily.
There are a few issues I need to resolve, and backfilling from my old CSVs is clunky, so I'm planning further testing and playing around with PostgreSQL, which will mean that users will be able to import all their old data (both crypto and non-crypto).
Once I've got it nicely set up, would you like me to submit a pull request on GitHub?
Anything that is comfortable for you on GitHub.
For the DB, just have a look at ArcticDB; it's something unique.
I've invested some time into learning Postgres for now, but I'll certainly keep that in mind for the future.
I notice that in RTD we have a list of JSON objects:
[{"n":"SYM1","t":101500,"d":20241130,"c":3,"o":6,"h":9,"l":1,"v":256,"oi":0,"bp":4,"ap":5,"s":12045,"bs":1,"as":1,"pc":3,"do":3,"dh":9,"dl":1},{"n":"SYM2","t":101500,"d":20241130,"c":3,"o":6,"h":9,"l":1,"v":256,"oi":0,"bp":4,"ap":5,"s":12045,"bs":1,"as":1,"pc":3,"do":3,"dh":9,"dl":1},{"n":"SYM3","t":101500,"d":20241130,"c":3,"o":6,"h":9,"l":1,"v":256,"oi":0,"bp":4,"ap":5,"s":12045,"bs":1,"as":1,"pc":3,"do":3,"dh":9,"dl":1}]
but historical is:
{"hist":"SYM1","format":"dtohlcvi","bars":[[20240601,100000,22.1,22.5,22.1,22.8,120,0],
[20240602,110000,22.8,22.7,22.3,22.4,180,0],[20240603,120000,22.8,22.7,22.3,22.4,180,0]]}
Can we put multiple historical packets for backfilling inside a list, as one JSON string, such as:
[
{"hist":"SYM1","format":"dtohlcvi","bars":[[20240601,100000,22.1,22.5,22.1,22.8,120,0],
[20240602,110000,22.8,22.7,22.3,22.4,180,0],[20240603,120000,22.8,22.7,22.3,22.4,180,0]]},
{"hist":"SYM2","format":"dtohlcvi","bars":[[20240601,100000,22.1,22.5,22.1,22.8,120,0],
[20240602,110000,22.8,22.7,22.3,22.4,180,0],[20240603,120000,22.8,22.7,22.3,22.4,180,0]]}
]
Array improves performance for RTD quotes, as we are snapshotting multiple symbols in real-time.
There is no performance benefit for an array of historical quotes of symbols; it only adds unnecessary complexity.
The JSON packet is parsed in situ (for memory performance); if there is a problem in parsing, all quotes will be dropped, so I believe the current design is the better choice.
You wouldn't believe how many performance factors have been considered. And because it is C++, the developer has the power to implement things in a particular way.
So in this case, from the time the JSON packet arrives to the time all data is sent to AB, nothing is really copied. In most other languages, there would have been quite a lot of copying in memory.
Also, for a truly asynchronous design, backfill should be symbol-wise, because most people will not fetch from a DB but from a vendor's remote server. So it is best that the client-app keeps sending data as it arrives from the server (see the sketch below).
Building an array of historical quotes increases the packet size vastly, and edge cases might exceed the buffer sizes.
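On the client side, that symbol-wise loop would look something like this (a sketch only; send_to_plugin stands in for whatever transport your client-app uses to reach the plugin, and the ccxt calls are just my example data source):

import ccxt

def send_to_plugin(packet: dict) -> None:
    # Placeholder: serialize and push the JSON over whatever
    # transport your client-app uses to talk to the plugin.
    raise NotImplementedError

ex = ccxt.binance()
for symbol in ["BTC/USDT", "ETH/USDT"]:
    candles = ex.fetch_ohlcv(symbol, timeframe="1d")
    # One packet per symbol, sent as soon as its data arrives,
    # rather than accumulating everything into one big array.
    # ccxt timestamps are in milliseconds; dividing by 1000 assumes
    # the uohlcvi format expects unix seconds.
    bars = [[ts // 1000, o, h, l, c, v, 0] for ts, o, h, l, c, v in candles]
    send_to_plugin({"hist": symbol, "format": "uohlcvi", "bars": bars})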
Is it possible to batch-add a load of symbols to the plugin database? I have an SQL table with 1400 symbols.
It seems like a catch-22 at the moment.
I can't add them to AmiBroker by importing a .tls watchlist file because they don't exist in the database yet, but they can't exist in the database until I call some data for them, and I can only call data for them once they exist in the database.
I'm handling the creation of exchange watchlists on the SQL side by creating separate tables and I am able to retrieve those lists in python.
Is there a JSON I should send to batch-import symbols, or is there a different procedure I should have read about?
Ok, I created a pull request. I am hoping that what I have done will be useful.
Observe the plugin status color. It is in the manual; click the RETRIEVE button.
I understand getting it into AmiBroker, but to get a symbol into the plugin, do I just send a load of JSONs in the format {"cmd":"addsym","arg"…}, or is that specifically for use when subscribing to RTD?
The plugin does not have any restrictions.
Any JSON, either RTD or Hist, is automatically added to the plugin DB.
No cmd is required.
Just remember, if AB does not request data and new historical data arrives in the plugin, it overwrites the old data.
There is a limit on symbols; change the settings if you have more than 1000.
addsym and remsym are sent "by" the plugin or the AB UI to subscribe/unsubscribe in the Client-App.
Ah, OK, so I could actually perhaps broadcast a load of JSONs to the plugin, with the symbol name but empty bars, just to get the symbols into the plugin, and then retrieve them in AmiBroker when the banner is dark green?
Not empty, I said some data, either historical or RTD. There should be a successful parse.
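Got it. So something like this for each of my 1400 symbols (a sketch; the symbols table is my own, and send_to_plugin again stands in for the client-app's actual transport to the plugin):

import sqlite3

def send_to_plugin(packet: dict) -> None:
    # Placeholder for the client-app's actual transport to the plugin.
    raise NotImplementedError

con = sqlite3.connect("symbols.db")
# Hypothetical table holding the 1400 symbol names.
for (symbol,) in con.execute("SELECT name FROM symbols"):
    # One real bar per symbol so the packet parses successfully
    # and the symbol lands in the plugin DB.
    seed_bar = [20240601, 100000, 1.0, 1.0, 1.0, 1.0, 0, 0]
    send_to_plugin({"hist": symbol, "format": "dtohlcvi", "bars": [seed_bar]})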
I'm migrating my data from my Postgres, which was slow, over to ArcticDB on your advice. I'll let you know how it goes.