Better way to get to information

In order not to spam the next big release wishlist thread, I'd like to continue here, @Tomasz, about this one.

For what it is worth, the solution is in fact already there and has been working for about 1.5 years, but apparently you have not tried to use it yet.

There is an indexed vector database with all AmiBroker knowledge, and GPT-4o (yes, state of the art, currently the best in the world) uses this database to perform Retrieval Augmented Generation.
The system is called KATE (Knowledge-based Artificial intelligence TEchnical support assistant) and has been working for about 1.5 years.
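For readers unfamiliar with the mechanics, Retrieval Augmented Generation boils down to: embed the question, find the most similar knowledge-base passages, and stuff them into the prompt. Here is a minimal sketch. The "embeddings" are crude bag-of-words vectors and the knowledge-base snippets are made up for illustration; a real system like the one described would use learned embeddings and a proper vector database.

```python
import math
import re
from collections import Counter

# Toy stand-in for a real embedding model: a bag-of-words vector.
def embed(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

# Hypothetical knowledge-base snippets (not real AmiBroker docs).
kb = [
    "Use the Plot function to draw a price chart in an indicator window.",
    "The Optimize function defines a parameter range for optimization runs.",
    "AddColumn adds a column to the exploration output table.",
]

def retrieve(question, k=2):
    # Rank KB snippets by similarity to the question, keep the top k.
    q = embed(question)
    return sorted(kb, key=lambda doc: cosine(q, embed(doc)), reverse=True)[:k]

def build_prompt(question):
    # The retrieved context is pasted into the prompt ahead of the question.
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"
```

The key point is that the model never "remembers" the knowledge base; the relevant slice of it has to fit into every prompt, which is also why prompt length becomes the bottleneck later in this thread.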

If you just send an EMAIL TO SUPPORT and get the answer from "Kate", then it comes from the AI knowledge-based assistant.
KATE responds with all that "dear valued customer..." blue-sky small talk that some people apparently need so much for some reason.

The problems with that are:

  1. I still need to manually review the answer generated by the AI and accept it for reply, because AI, even state of the art, sometimes produces nonsense (the hallucination problem). Manual review takes time and HUMAN action (therefore you don't get the answer immediately)
  2. Even with all the knowledge, it is still limited in understanding how AFL works (since its training is so heavily polluted with Python / JavaScript)
  3. It can't solve "all" problems, as probably nobody can
  4. It still cannot figure out the information that the user did NOT give. Some people apparently believe that others know what is in their heads or on their computer screens. No, we don't have that knowledge unless the user provides screenshots, explanations and so on. And some people get very angry when asked for details, for some reason.

I had no clue about KATE, thanks for the info! I did send one email to support, though, and there was a mention of AISA if I recall right, but the link was broken. I think I also deleted the email, as it was a trivial matter, so I can't vouch for that, nor recall who signed it.

  • Does that mean that I can send an email to support with a piece of my code, background info/context and all that, and KATE, overseen by you, should respond? Can the conversation be continued?

  • In regards to point 1, is your input being used in the next response? Meaning, does the model learn from you? If so, that's great for all of us (but tiring for you).

  • About point 2, even with JavaScript it sometimes fails miserably, and with AFL I think the issue is that the available material is too narrow. There is the manual with certain examples, but that's it. With JS and Python it has seen tons of examples from tons of repos where people used the language in tons of ways, so it can extrapolate from those. Maybe ChatGPT can never be good at AFL, or maybe it could be good if it were a model that had no idea about coding at all and the ONLY language it knew was AFL.

  • For point 4, one solution is to use templates! Can you use templates in the forum on certain categories, or can a support form be built that enforces templates? There are certain open-source projects on GitHub, for instance, that force you to abide by a markup template, meaning you have to include information like: What version of the software do you use? What is the issue? What is happening? What do you expect to happen? Include a snippet of code. If you don't fill in the whole template, you cannot post your issue! I think this is a viable solution for issue 4.

  • I also think what would help is to have ChatGPT behave like a LIBRARIAN, not like a TEACHER. Maybe flipping the paradigm would work here: instead of having it generate AFL code or review code, it could just be trained on where info about a certain function or feature can be found. This could include the manual, examples, the forum, etc. So instead of giving a solution, it could point to answers that already exist. Maybe it could respond to a question like "I would like to build a range breakout system. Where do I start?" with links to existing systems from the forum or the library, etc.

  • In regards to what @beppe said here, I totally agree that the quality of the answer is related to the quality of the prompt, and I think, as you might know better than me @Tomasz, these models send back and forth the whole conversation, including its own answers, so the prompt effectively improves over time. Does KATE continue the conversations? Or is it potentially better to have another tool/agent that works directly with the user, similar to ChatGPT?
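The GitHub-style enforced template from point 4 above is easy to prototype. A minimal sketch, assuming hypothetical section names (a real forum such as Discourse would implement this in its own plugin system, not in user code):

```python
# Hypothetical required sections of a support-question template.
REQUIRED_SECTIONS = [
    "### Version",
    "### What happened",
    "### What you expected",
    "### Code snippet",
]

def validate_post(body: str):
    """Return the list of required sections missing from a post."""
    return [s for s in REQUIRED_SECTIONS if s not in body]

# A post that fills in only half the template...
post = """### Version
6.40
### What happened
Backtest hangs.
"""
# ...would be rejected until the missing sections are filled in.
missing = validate_post(post)
```

The forum would refuse to submit the post while `missing` is non-empty, which is exactly the "you cannot post your issue" behavior described above.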

On another note, I'd like to thank @beppe for the tip about bookmarks in the forum. I was using them but didn't know I could set a title on them and hence catalog them.

I hope this helps
Cheers

As a newbie user I would love to have the following:

  1. Plotting directly from AFL code, the same way it is done in open-source tools, where you run code like Plot(...); AddPane(...); AddAnotherPane(); AddIndicator(SecretSauceIndicator, Pane1); and the chart window gets opened for you automatically, with the main part of the plot and sub-panels with all the fancy stuff you added to the chart. No need to drag and drop or save code in special folders.
  2. Filter GUI functionality in Backtest and Optimization results, so I don't need to copy and paste the output somewhere else where I can do the filtering (simultaneously over several columns).
  3. A premium, very expensive support option where all questions are answered without being thrown an RTFM and getting the impression you are stupid for posting a basic question. I am happy to pay a lot of money to get an answer to any stupid question without being RTFM-ed at. So, if I pay for premium support and I ask "How much is 1+1?", I get an answer like "Dear valued customer. Thank you for your question. 1+1 equals 2. Hope this helps. Looking forward to your next question. Sincerely, Read The Fabulous Manual Team."
  1. That won't happen, because it would prevent multi-threading across multiple panes (it would mean that you have ONE formula for all panes). Currently each pane runs in a separate thread, giving super parallelism and performance.

  2. That might happen (the idea is OK, but a fast implementation is tricky with tens of millions of rows)

  3. Interesting. Would you really like to pay for a 1+1 type of question? Especially in the age of ChatGPT? And how much would you be willing to pay for this "very expensive support option"?

With regards to 3), I think the main problem is not the 1+1 type of question. The problem is "ALL" questions. You simply can't address "ALL" questions in any other way than per-hour pricing, something like $200 per hour, since "ALL" may also mean super-complex questions requiring hours of development.

Frankly, I don't get why referring people to the manual is such a problem. From my perspective it is a tremendous waste of energy and time to have to retype information that was already typed into the documentation. Thousands of people may have the same question, and that means retyping the same thing 1000 times. Would you like to do such a job?

Also, referring to the documentation, IMHO, has the additional benefit of teaching the SKILL of finding relevant information in the guide. It's like the saying: give a man a fishing rod instead of giving him a fish. Referring to the manual is like handing out a fishing rod; it helps you find the knowledge you need.

Is there really a market need to pay $200 per hour for a service where all questions are answered without referring to the manual?
Are you willing to pay that much?


:joy:
RTFM sucks! Not having the money for the 'very expensive support option' also sucks! ChatGPT sucks as well, at least when it comes to AFL! Searching in the forum also sucks big time! I always use a search engine to find stuff in the forum, land on it, then get the title to open it in the Discourse PWA, by searching for the title.

An option that would suck less and would not waste energy (only the electrical kind) is to index the whole manual, docs, forum and member-area examples and give it to an LLM to answer from. Of course, only for paying customers!

A model that could synthesize the response for any AFL question, and ideally include the URLs to a few docs or blog posts, would be awesome. (At the end it could even add: you didn't RTFM, you cheeky bastard, did you? :laughing:)

Model-wise, there are readily available ones like Llama, Mistral, etc. I don't know what the license is regarding commercial use, but they do exist. Even a plugin for ChatGPT could work for this (maybe activated by a code from AB).

Long story short, the simplest way this can be tried at home is with gpt4all. No coding required: just download a model, or use an online one like ChatGPT. You have a folder of text files with the info you want answers from, which the app indexes first. Then you just prompt away about your information.

gpt4all is high level and simple to use, but there are all sorts of libraries for all sorts of languages that do exactly this.
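Under the hood, those "chat with your docs" tools do little more than this rough sketch: index a folder of text files, then pull the most relevant passage into the prompt. The file names, contents, and the crude word-overlap scoring below are all made up for illustration; real tools use proper embeddings and chunking.

```python
import re
import tempfile
from collections import Counter
from pathlib import Path

def tokens(text):
    # Lowercase word counts serve as a primitive relevance signal.
    return Counter(re.findall(r"[a-z]+", text.lower()))

def best_passage(folder, question):
    # Score every .txt file by how many question words it shares.
    q = tokens(question)
    scored = []
    for path in Path(folder).glob("*.txt"):
        text = path.read_text()
        overlap = sum((tokens(text) & q).values())
        scored.append((overlap, path.name, text))
    return max(scored)  # (score, filename, passage)

# Demo with a throwaway "docs folder" of two tiny files.
with tempfile.TemporaryDirectory() as d:
    Path(d, "plotting.txt").write_text("Plot draws a chart in a pane.")
    Path(d, "backtest.txt").write_text("The backtester simulates trades.")
    score, name, passage = best_passage(d, "how do I plot a chart")
```

The winning passage would then be prepended to the user's prompt, exactly as the gpt4all-style apps described above do automatically.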

Cheers

Did you actually read @Tomasz's response?

Hi @TrendSurfer, I did read it and it's spot on!
Did you read my reply though?

It does not contradict or disagree with anything Tomasz said! It only potentially adds a way to find answers to our questions in the pool of information. And it also potentially solves a problem the majority of established users have in the forum: getting pissed off at noob questions.

The manual describes 'tools', but isn't it easier when you see someone use the tool one way or another? Humans learn like this! Have you seen kids learn? They only imitate. That's us humans. So sure, the manual, the forum, the info are great if you need to go in depth, but with the huge amount of data related to AB and AFL, a lot of the time it's hard to see how these 'tools' fit together and are being used.

To put what I wrote before slightly differently: 'Have a better AI-assisted search for the manual, docs, examples, forum, etc.!'

I'd bet this would solve a lot of the noob questions for people who actually want to learn, and also give trigger-happy RTFM users less chance to slap it at the noobs.

I also want to point out a huge cognitive dissonance in the community between noobs and established users. Noobs feel lost, and established users feel annoyed by the noobs' entry-level, million-times-answered questions. Neither group seems to get the other. The first group wants a direction most of the time, and the second group is hung up on precise details.
Unfortunately, I get both @kktrader and Tomasz, and sadly I'm in the noob group, with questions unanswered.

Wouldn't it be great for these groups to reconcile in an amicable way, @TrendSurfer?

Just my 2 cents in the era of AI and data overload
Thanks for your time


It is also important to note that the quality/relevance of LLM responses is directly related to the prompt quality.

Going back to @DBV's previous post, a tip to speed up your AFL learning: every time you read a post with a solution or a snippet of something new, create a bookmark with a descriptive title that you will find easy to remember when you search among them later.
I have hundreds of them and revisit them from time to time to keep my memory fresh; from your own summary page, by clicking on "bookmarks", you will reach the web page with the list of them.


Apparently, RTFM has hijacked this thread :grin:

No, it doesn't keep "conversations", because prompt length is limited and you have to pull the content of the KB into the prompt, so there is no way to keep very long conversations in the prompt.
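The trade-off described here can be sketched in a few lines: with a fixed context window, the retrieved KB text and the conversation history compete for the same space, so older turns have to be dropped. All the numbers below are made up for illustration (real systems count model tokens, not words).

```python
CONTEXT_WINDOW = 100   # total "token" budget (words, for simplicity)
KB_BUDGET = 60         # reserved for retrieved knowledge-base passages

def fit_history(turns, budget=CONTEXT_WINDOW - KB_BUDGET):
    """Keep only the most recent turns that fit in the remaining budget."""
    kept, used = [], 0
    for turn in reversed(turns):      # walk newest-first
        cost = len(turn.split())
        if used + cost > budget:
            break                     # older turns are silently dropped
        kept.append(turn)
        used += cost
    return list(reversed(kept))       # restore chronological order

# Four turns of 30, 15, 10 and 10 words: the oldest no longer fits.
history = ["w " * 30, "x " * 15, "y " * 10, "z " * 10]
trimmed = fit_history(history)
```

With 60 of the 100 slots reserved for KB content, only 40 remain for the chat, which is why a long back-and-forth conversation cannot be sustained in such a system.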

Not unless I specifically add (manually) a given response to the database. If all responses were added, it would pollute the database with trivial issues (like "I lost my key file", "I can't find my login", "how do I reinstall", "send me a demo", "call me", etc.).

It is not that the model just uses the examples it had in training. LLMs are "compression" algorithms, and in order to compress the knowledge they need to generalize. This generalization allows them to generate code that was never seen in the examples.
The trouble is that this generalization, while already impressive, is not quite ready yet to make a proficient coder.
I am not sure why people want the machine to be better than them. I would rather be better than the machine. You need to keep your brain active and busy. If you delegate every task to the machine, what would be left for you?
The same happens with muscles. Since the modern world does not require muscles as much as in the past, society is obese and unfit. The same will happen with brains if you delegate all the thinking to the machines.

Templates are used in Discourse for "canned replies", and I use that for "how to ask a good question". Still, some people for some reason perceive any request to read anything as "bad service". No, it is not bad service. If the service asks you for details, it means they want to help, but the information is required to help you. Failing to understand such a basic thing is childish.

Example template (already used many many times):

Unfortunately your question isn't clear enough and does not provide all necessary details to give you an answer. Please follow this advice: How to ask a good question

That is basically what AISA is (our other, earlier AI-based system). AISA stands for Artificial Intelligence Support Assistant, and it is a semantic AI search system that responds with pre-written answers. Such answers are written by a human and guaranteed not to contain nonsense; however, due to the way semantic search (based on document embeddings) works, sometimes the answer given might not really answer the question asked, but some other (similar) question.
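The AISA-style approach, including its failure mode, can be sketched as follows: pick the canned answer whose stored question is most similar to the one asked, and fall back to "no match" below a similarity threshold. The questions, answers, threshold and bag-of-words "embeddings" below are all invented for illustration; a real system uses learned document embeddings.

```python
import math
import re
from collections import Counter

# Hypothetical human-written canned answers, keyed by their question.
CANNED = {
    "How do I reset my password?": "Use the password reset link on the login page.",
    "How do I reinstall the program?": "Download the installer and run it again.",
}

def vec(text):
    return Counter(re.findall(r"[a-z]+", text.lower()))

def cosine(a, b):
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def answer(question, threshold=0.5):
    # Pick the stored question most similar to the one asked.
    q = vec(question)
    best_q = max(CANNED, key=lambda k: cosine(q, vec(k)))
    score = cosine(q, vec(best_q))
    # Below the threshold the best match likely answers a *similar*
    # question rather than the one asked -- the failure mode noted above.
    return CANNED[best_q] if score >= threshold else None
```

The answers themselves are guaranteed nonsense-free because a human wrote them; the only thing that can go wrong is the matching step, which is exactly the limitation described in the post.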

As I wrote earlier - no - due to the limitations of prompt length and the necessity to use the prompt for knowledge retrieved from the database. Also because conversations pollute answers, and that way they are less and less influenced by the knowledge base. For long "chats" you'd better use ChatGPT.