AI setup instructions (preliminary)

In preparation for the upcoming version 7 release, I am providing early instructions for those who are not familiar with AI API keys, local LLMs and so on.


:brain: Part 1: Setup AI Brain

Option A: Use Cloud services

PRIVACY NOTE: by using cloud AI services like OpenAI/Google Gemini, you will be sending your questions/requests and the formula that you edit to the OpenAI/Gemini large language model for inference. Privacy-wise, this is no different than using the OpenAI/Google web site directly. The transmission is secure and encrypted (via SSL/HTTPS), but obviously OpenAI and Google can see what you are sending to them.

PERFORMANCE NOTE: Using a cloud-based LLM gives you access to state-of-the-art models that are much faster and much more capable than any local/open-source model can be. The tradeoff is privacy.

Obtaining Your API Keys

:key: OpenAI API Key

  1. Visit OpenAI's API platform and sign in or create an account.
  2. Click on your profile icon in the top-right corner and select "View API keys".
  3. Click "Create new secret key".
  4. Copy the generated key immediately and store it securely.

:warning: Note: You won't be able to view this key again once you navigate away from the page.
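If you want to verify your new key works before configuring AmiBroker, you can send a request to the same chat-completions endpoint AmiBroker uses. The sketch below (Python, standard library only, with a hypothetical helper name `build_openai_request`) only constructs the request; actually sending it requires a valid key and network access.

```python
import json
import urllib.request

def build_openai_request(api_key, prompt, model="gpt-4.1-mini"):
    """Build an OpenAI chat-completions request (same endpoint AmiBroker uses)."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.openai.com/v1/chat/completions",
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": "Bearer " + api_key,  # paste your secret key here
        },
    )

# Example (requires a real key and network access):
# req = build_openai_request("sk-...", "Say hello")
# with urllib.request.urlopen(req) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

If the key is invalid or missing, the server responds with an HTTP 401 error, which is a quick way to tell a key problem from a configuration problem.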


:key: Google Gemini API Key

  1. Go to Google AI Studio and sign in with your Google account.
  2. Navigate to the "Get API key" section.
  3. Click "Create API key".
  4. Choose an existing Google Cloud project or create a new one.
  5. Copy the generated API key and store it securely.

:warning: Note: This key is essential for authenticating API requests.
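Similarly, you can sanity-check a Gemini key against the REST API before entering it into AmiBroker. This is a minimal sketch (Python, standard library; the helper name `build_gemini_request` is illustrative) using the `generateContent` method of the endpoint AmiBroker points at, with the key passed as a query parameter:

```python
import json
import urllib.parse
import urllib.request

def build_gemini_request(api_key, prompt, model="gemini-2.5-flash"):
    """Build a Gemini generateContent request against the public REST endpoint."""
    url = (
        "https://generativelanguage.googleapis.com/v1beta/models/"
        + model + ":generateContent?key=" + urllib.parse.quote(api_key)
    )
    body = json.dumps({
        "contents": [{"parts": [{"text": prompt}]}]
    }).encode("utf-8")
    return urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )

# Example (requires a real key and network access):
# with urllib.request.urlopen(build_gemini_request("YOUR_KEY", "Say hello")) as resp:
#     print(json.load(resp))
```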


Option B: Use Local Private LLM

PRIVACY NOTE: by using a local private LLM, your requests and your code do NOT leave your machine. Everything is processed locally and nothing is sent anywhere.

PERFORMANCE NOTE: Using a local private LLM gives you maximum privacy but limits you to open-weight models that are typically smaller and much less capable than the state-of-the-art offered by cloud-based services. Typically they are also much (10x) slower when run on consumer-grade graphics cards. To get any kind of tolerable performance you need a dedicated graphics card (NVidia or AMD) with at least 8GB of VRAM (12GB or more preferred) to accelerate inference.

:hammer_and_pick: Setting Up a Local LLM with LM Studio

If you prefer not to rely on external APIs, you can run a local LLM using LM Studio.

:white_check_mark: System Requirements

  • Operating System: Windows 10/11, macOS 10.15+, or a modern Linux distribution.
  • RAM: At least 16 GB.
  • Disk Space: SSD with at least 50 GB available.
  • GPU: 8GB VRAM minimum, 12GB or more recommended (RTX3060 or higher)

:puzzle_piece: Installation and Setup

  1. Download LM Studio from the official website.
  2. Install the application following the on-screen instructions.
  3. Launch LM Studio.
  4. Navigate to the "Developer" tab.
  5. Enable the "Local Server" option.
  6. Download a model:
  • Go to the "Discover" tab.
  • Search for models like Qwen2.5 coder, or Gemma.
  • Click "Download" for the desired model.
  7. Load the model:
  • Go to the "Chat" tab.
  • Click "Load Model" and select the downloaded model.
  8. Go to the "Developer" tab and configure the local server:
  • Set the host to 127.0.0.1.
  • Set the port to 1234.
  • Change Status to "Running"

For detailed instructions, refer to the LM Studio documentation.
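Once the local server is running, you can verify it responds before pointing AmiBroker at it. LM Studio exposes an OpenAI-compatible API, so a GET request to /v1/models should list the loaded models. Below is a minimal probe (Python, standard library; the function name `probe_lm_studio` is just for illustration), assuming the host/port settings from the steps above:

```python
import json
import urllib.error
import urllib.request

def probe_lm_studio(host="127.0.0.1", port=1234, timeout=3):
    """Return the list of model ids served by LM Studio, or None if unreachable."""
    url = "http://%s:%d/v1/models" % (host, port)
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            data = json.load(resp)
        return [m["id"] for m in data.get("data", [])]
    except (urllib.error.URLError, OSError):
        return None  # server not running, or wrong host/port

models = probe_lm_studio()
if models is None:
    print("LM Studio server not reachable - check the Developer tab settings")
else:
    print("Models available:", models)
```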


:electric_plug: Part 2: Configuring AmiBroker

  1. Open AmiBroker.
  2. Navigate to Tools → Preferences → AI tab.
  3. For OpenAI:
  • Choose API Type: "OpenAI API"
  • Model: gpt-4.1-mini (we recommend this one for balance between cost and performance, but you can use any other)
  • Endpoint URL: https://api.openai.com/v1/chat/completions
  • Paste your OpenAI API key into the API key field.

    It should look like this:
  4. For Google Gemini:
  • Choose API Type: "Google/Gemini API"
  • Model: gemini-2.5-flash (we recommend this one for balance between cost and performance, but you can use any other)
  • Endpoint URL: https://generativelanguage.googleapis.com/v1beta/models/
  • Paste your Google Gemini API key into the API key field.

    It should look like this:
  5. For Local LLM:
  • Choose API Type: "OpenAI API"
  • Model: qwen/qwen2.5-coder-14b (this is the model we have been testing with; it works relatively well on a 12GB RTX3060 or higher)
  • Endpoint URL: http://127.0.0.1:1234/v1/chat/completions
  • You do NOT need API key for local LLM

    It should look like this:

Part 3: Final steps

At the end you need to decide whether you want to use "Short" (about 4K tokens) or "Long" (about 32K tokens) context length. The difference is that with the "Long" context, significantly more information is fed into the LLM (including the full function reference), which enables much better responses. The tradeoff is speed: the long context means slower responses, and it also means more VRAM use in the case of a local LLM. During our own testing we found that using the Long context with a local LLM is practically impossible. On the other hand, the Long context works absolutely great with fast and capable cloud-based models like gemini-2.5-flash.

After you have configured everything as above, it is time to test whether it works. Press the Test LLM button. If everything is set up properly, you should see a message similar to this (text may vary depending on the LLM used):

If something goes wrong, you will see an error message with an explanation of what the error is.

The final step is entering AFL Code Wizard License Key in the respective field:

Anyone who has AFL Code Wizard already installed and registered will have the license key prefilled. If you don't have the key at hand, you will find it either in the "AFL Code Wizard registration" email or, if you purchased the AmiBroker ULTIMATE PACK, in the "AmiBroker registration" email. Note that the AFL Code Wizard license is separate from AmiBroker (unless you purchased the Ultimate Pack).

If you don't have an AFL Code Wizard license and don't want to purchase one, the ability to use the AI Assistant in the AFL Code Editor will be limited to 5 completions/responses per run.

Part 4: Costs

Depending on which route you choose, there might be extra costs involved in using AI features. Running a local LLM gives you a cost-free solution (not counting electricity bills and the need for a machine with a dedicated graphics card). Running cloud-based AI typically involves paying inference fees to the AI/LLM vendor. Pricing depends on the model that you choose and the number of tokens used. AmiBroker doesn't use many tokens, as it does NOT accumulate chat history; it only sends the current request/question plus the content of the formula plus the system prompt (user-selectable, either 4K tokens or 32K tokens). To simplify things, a token can be seen as more or less a "word". OpenAI pricing is here: https://openai.com/api/pricing/ ; Gemini pricing is on the "Gemini Developer API Pricing" page at Google AI for Developers. Pricing is typically per million tokens, so OpenAI 4.1 mini at $0.40/1M tokens would cost something like $0.02 per single request/completion using the long context.
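The back-of-the-envelope estimate above is simple arithmetic. Assuming roughly 32K input tokens for the Long context at $0.40 per million tokens (output tokens are billed separately and add a little on top, which is why the total lands near $0.02):

```python
def request_cost_usd(tokens, price_per_million_usd):
    """Cost of a single request given a token count and a per-million-token price."""
    return tokens / 1_000_000 * price_per_million_usd

# Long context (~32K tokens) with a $0.40/1M-token model: ~$0.0128 input cost
long_ctx = request_cost_usd(32_000, 0.40)
# Short context (~4K tokens) is about 8x cheaper: ~$0.0016
short_ctx = request_cost_usd(4_000, 0.40)
print(f"long: ${long_ctx:.4f}, short: ${short_ctx:.4f}")
```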

Google is even better, as it has an absolutely great "free tier" that costs nothing and allows, for example with the Gemini 2.5 Flash model, up to 10 requests per minute, 250,000 tokens per minute, and 250 requests per day.

Part 5: Performance

There are huge differences in performance between various models. We have seen very good results and fast completions with Gemini 2.5 Flash and the "Long" context. Still, it can take even 30 seconds if Google servers are particularly busy and the task is complex.
Using "Short" prompt makes things ~3x faster, but at the expense of quality of response.
While state-of-the-art models from Google and OpenAI work quite well, certain open source models (especially smaller than 12B params) struggle to follow instructions and system prompt and will NOT work at all or work poorly.
As far as private LLMs are concerned, we had average success with qwen2.5-coder-14b, an open-weight model specifically trained for coding. Your mileage with open-weight models will vary a lot, from completely unusable to average. Currently open-weight models are nowhere near state of the art when it comes to AFL writing capabilities.

We have done months of extensive prompt tweaking and experiments to steer these models to cooperate and understand non-obvious aspects of AFL. Please treat these new AI features as work-in-progress: as models evolve, our prompting techniques evolve too, and we can expect improvements in the future.

Last, you might wonder which model I like the most. I can definitely say that the Google Gemini 2.5 family of models is currently the best in both performance and coding ability.


By following these steps, you can choose between using external APIs or running a local LLM to integrate with AmiBroker.

If you need assistance with other integrations or have further questions, feel free to ask!


Part 6: Quick tutorial on using new features

The Assistant is available as a side window in the AFL Editor (you can show/hide the Assistant window using the Window -> AFL Code Assistant menu).

The Assistant window is divided into two parts

  • Upper pane - showing history of actions / requests
  • Lower pane - with edit field where you enter your question / request

You don't need to be verbose in your request; the Assistant is smart enough to produce useful results from minimal prompts like "write channel breakout". The entire code that can be seen on the screenshot was generated in response to just this single prompt.

There are 3 basic actions that you can do with the Assistant, represented by buttons at the bottom of the Assistant window:

  1. Ask - you just ask a question or request things from the Assistant (this is the default mode). Pressing the RETURN/ENTER key in the edit field triggers the "Ask" action without the need to press the button.
  2. Fix - fixes any errors in the formula. Note that it first runs the AFL parser to detect errors; if errors are found, the AI gets the list and descriptions of those errors so it can fix them for you automatically. If the AFL parser doesn't detect any errors, this action will not make any changes.
  3. Explain - this feature explains in plain English what the formula is doing. It is useful for beginners trying to understand code written by somebody else

Of course you can also ask any other question / give any other request. The Assistant may either generate code for you and offer changes, or it may generate textual response (like in "Explain" case) that is displayed in the upper pane.

When the Assistant proposes changes to the code, it displays them directly in the editor: new (added) parts are shown with a green background and italic font, while lines to be deleted are struck through with a red line as shown in the picture. It is then the user's task to review and accept or reject the changes using the "Accept" or "Reject" buttons. Note that any manual edits made to the formula while the wizard waits for you to decide will result in automatic rejection of the changes.


Suggestion: Require Explicit User Agreement Before Enabling Upcoming AI Features in AmiBroker

@Tomasz, as a user and supporter of AmiBroker, I’m both excited and cautious about the upcoming integration of AI-powered assistants (such as those based on OpenAI, Gemini, or local LLMs) in future versions of the platform. I believe this functionality has enormous potential — especially for idea generation, code writing, and debugging — but it also comes with important caveats that should not be overlooked.

Specifically, I strongly suggest that access to this AI-based feature should be explicitly gated by user consent, through an agreement that:

  1. Clearly states the limitations of AI models, including their potential to generate misleading or subtly incorrect code;
  2. Highlights that any use of AI-generated trading systems or strategies is entirely at the user's own risk;
  3. Fully disclaims any liability on the part of AmiBroker or the AI provider (whether OpenAI, Google, or otherwise) for any financial losses incurred by users as a result of relying on AI-assisted outputs;
  4. Includes a reminder that backtest results, especially those based on AI-generated logic, may be deceptive and require careful scrutiny.

This is especially important given that such features will likely appeal also to less experienced users — either in programming or in trading itself — who may not be fully aware of how easily overfitting, logical flaws, or hidden assumptions can invalidate backtest results.

It may also be prudent to consult with a qualified legal advisor to draft the necessary language and reduce the risk of future disputes. As powerful as AI can be, it should be treated like any other tool: useful, but fallible — and never a substitute for critical thinking or trading discipline.

Thanks for your attention and for continuing to evolve such an excellent platform with both power and responsibility in mind.

Disclaimer: I shared my point of view with ChatGPT, which helped me draft this post.


hi. To make use of this new AI utility one needs the "Ultimate Edition" if I understand correctly. I currently have the "Professional Edition".

Found the answer to my question :grinning_face:


Ed,

Unregistered AI Assistant in AFL Code Editor will be limited to 5 completions/responses per run.

Moderator comment: added "Unregistered", as the limitation is true ONLY if you don't have AFL Code Wizard license. If you have AFL Code Wizard license or Ultimate Pack, then there are NO LIMITS.

The trial version is limited as described above, for evaluation purposes.


To clarify:

The new AI-based AFL Code Assistant embedded in the AFL Editor will use the existing AFL Code Wizard license (as it is meant as a hugely improved replacement/upgrade of the text-to-code functionality).

AFL Code Wizard license is available in two ways:

  • as separate purchase
  • as part of AmiBroker Ultimate Pack

For everyone who purchased either AFL Code Wizard license standalone or as a part of AmiBroker Ultimate Pack, this new functionality will be available without limits for free.

If you don't have that license, you will be able to use a limited version (up to 5 completions per run).

To get rid of that limitation, simply purchase an AFL Code Wizard license (which will probably be renamed to AFL Code Assistant).


Update

A Quick tutorial on using new features was added as Part 6 to the FIRST post


Very cool. Although I have yet to play with and investigate the features, I would suggest, if not already possible, having the option to set up both, so we can choose to run the AI locally or in the cloud, and then be able to pick which one on a per-AFL-file basis. Because some AFL files we may care about while others not so much. Just a thought.

Yes I think it is good idea.


Of course both decent cards I have only have 8GB of VRAM, not 12. :squinting_face_with_tongue:

Any indication if there is a difference in local AI compute between consumer/gaming graphics cards versus Pro Workstation variants?

You can use any card with LM Studio. It automatically adjusts to available VRAM, so it will just work, only a bit slower. 8GB VRAM is still OK.


Tomasz, is it your experience now that the major AI models are no longer hallucinating nearly as much when generating AFL?

The secret is in the prompting. You can’t rely on LLM “knowledge”, but you can rely on LLMs ability to understand the prompt. If you feed the knowledge in the prompt, it will produce much better results than “on its own”.

AmiBroker uses very fine tuned prompting (especially in “long” prompt) version to achieve results that are impossible without it.

Hi, @Tomasz. Thank you for such wonderful work. I've personally used several LLMs recently for coding assistance. I moved away from OpenAI and found solid results with Claude AI; Claude Opus 4 is the LLM that I used. Just wanted to pass this along to you and the AmiBroker users...


Do you use Claude with AmiBroker's AFL Assistant?

No, I'm currently using AmiBroker Version 6.93.0.

Using any LLM within AmiBroker's AI assistant yields much better results than standalone.


Thanks, Tomasz. I'll upgrade and give it a try...

Hi Tomasz,
Why is it better to use an LLM inside AmiBroker’s AI assistant rather than as a standalone tool? Is the main advantage the ease of coding (since everything is on the same screen), or does it actually improve the quality of the code generated?

@simple


hi, As I type into the assistant, I noticed that I cannot paste any text into the prompt, nor can I copy any text from the conversations. Has anyone else noticed this?