Yes, it is possible, but even with a syntax check, AI-generated code might be completely wrong on all other levels. Current state-of-the-art GPTs still don't have any kind of actual understanding of what they write. Hallucinations are still very common.
AI is a great tool in the hands of an experienced human who adds the missing understanding. That is why programmers can use AI assistants easily: they spot logical errors quickly.
Dr. Tomasz,
I would like to know what could be achieved if a person like you, using the content in this Community, in the User Manual, in the Knowledge Base and in other similar places, decided to create an AI for AmiBroker/AFL.
I would pay to have such a tool as a teacher 24/7/365, no doubt about it.
Such a thing already exists Better way to get to information - #7 by Tomasz but due to the limitations of LLMs (hallucinations) it requires human approval/moderation. LLMs are simply too unreliable, even when they are provided with the right context.
None of these work any better than ChatGPT. In fact, they can be illegal, because using our copyrighted content (User's Guide, AFL reference) without our permission is a violation of our intellectual property rights.
Ok, I understand.
I saw that my post asking whether a specialized AFL GPT I found worked well has been deleted; I understand that you deleted it for this reason. Sorry, I didn't notice!
I'm still trying to create a GPT to help me with AFL. I'm learning about prompts to see if I can create something useful.
Can prompts be shared here?
I already explained: no amount of prompting will change the fact that GPTs don't understand a thing. A GPT is just a "text generator", not something that actually understands anything. The amount of hallucination is ridiculous. Even when asked the simplest thing, this GPT hallucinates like hell. It comes up with function names that don't exist. Without a knowledgeable human checking what this thing writes, the generated code can't be trusted. GPTs can be fun when you ask them to write a POEM, because in poetry you don't have rules. GPTs are good at TEXT tasks like translation (that was the original purpose of Transformers) and summarization, maybe writing a novel. Programming is another story. Programming requires not only fluency in language syntax, but also an understanding of how abstract concepts work.
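As an illustration of the kind of error described above (the hallucinated function name below is invented for this example, not taken from an actual GPT transcript): a GPT may confidently emit a plausible-looking call that AmiBroker simply does not have, while the real AFL idiom looks quite different.

// Hypothetical hallucinated output: PlotBuyArrows() does NOT exist in AFL,
// AmiBroker would reject it with an error
// PlotBuyArrows( Buy, colorGreen );

// Real AFL: mark buy signals with the built-in PlotShapes() function
Buy = Cross( MA( Close, 10 ), MA( Close, 50 ) );
Plot( Close, "Close", colorDefault, styleCandle );
PlotShapes( Buy * shapeUpArrow, colorGreen, 0, Low );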
ChatGPT is simply stupid. Ask it to "Write MA crossover in AFL" and it will even write correct code:
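The code snippet from the original post is not preserved in this thread; a minimal sketch of what ChatGPT typically produces for that prompt (the 10/50 periods are arbitrary choices for illustration) would be:

// Simple moving-average crossover system in AFL
FastMA = MA( Close, 10 );        // 10-bar simple moving average
SlowMA = MA( Close, 50 );        // 50-bar simple moving average
Buy  = Cross( FastMA, SlowMA );  // fast MA crosses above slow MA
Sell = Cross( SlowMA, FastMA );  // fast MA crosses back below slow MA

It works here only because MA-crossover examples are abundant in the training data; the moment a request strays from such well-trodden examples, the hallucinations start.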
I understand, Dr. Tomasz, thank you for your explanations. I see that it may or may not be useful and that you have to be very careful with hallucinations.
But with some very well-made prompts it could become better, right?
That is, instead of "tricking" it as in the MA example, giving it correct data and also correcting it when it makes mistakes.
Well, it offers hope for newbies like me to learn AFL, there is no doubt about that. Although we will see whether we lose that hope or not.
Yes, you can provide it with correct information. The problem is that GPTs don't know what is true and what is not. They lack meta-knowledge, or indeed any knowledge at all. They don't know what they know, and they don't know what they don't know.
GPTs are statistically based text generators. They produce "nice-sounding text" that could statistically have been produced by a human in response to a given question. It is a play of the law of large numbers: if there are lots of examples in the training set that closely resemble your question, you are likely to get something that is statistically correct.
Just a heads up. Over the last few days, I was playing with the Google Gemini 2.5 Pro Experimental model that was released about two weeks ago. It is available basically for free from Google's AI Studio.
And I have one thing to say: it is much, much better than OpenAI's offerings. Try it for yourself; you will be surprised.