ChatGPT is utterly amazing and scary at the same time

I don't quite get why you did all of that (reinventing the wheel).
The same is available out-of-the-box if you place the cursor on a given function and press F1 in AmiBroker.

Why? Because, looking at the screenshot, @mordrax is writing a plugin using Rust, and evidently prefers to have the help/documentation directly in the editor he's using (looks like VS Code).
Is it worth it? "AI posteri l'ardua sentenza!" (transl.: posterity will judge)

1 Like

@beppe is correct, I'm writing an AmiBroker plugin in Rust using VS Code. There's an extension that shows function documentation in a side window, and I'm calling AmiBroker functions through the site interface. Writing the function documentation in the source makes it easier to see what the params are.

This is all beside the point, though; the main demonstration is that ChatGPT is good at quickly learning a behaviour and copying it.

Adding markdown formatting manually is time-consuming, but accuracy isn't essential, so this is a great task for ChatGPT to perform.

I noticed something new today with regard to ChatGPT generating code. Now it seems to prepend some of the generated code with a copyright notice:

[screenshot: generated code with an OpenAI copyright notice prepended]

The funny thing is that they ignore the copyrights of the people who wrote the code their system was trained on, while claiming copyright on something they did not really create or own. The weights of a neural network are not really the work of the person who trained the network; they are the result of the input data, and the input data come from web sites owned by various people who were never asked for permission to train neural networks on their content.

Funnier still, OpenAI states that they will not claim copyright over generated code:
https://help.openai.com/en/articles/5008634-will-openai-claim-copyright-over-what-outputs-i-generate-with-the-api

but they do now.

UPDATE: I rechecked it and it seems quite random. Even the same prompt produces code once with and once without the OpenAI copyright notice.

1 Like

Now both ChatGPT and Bing Chat produce reasonable AFL code, and it is getting better day by day. I guess it is getting smarter at AmiBroker coding as the days progress!

2 Likes

So here we are: GPT-4 was just presented in a demo stream. Now it understands not only text but images too. And apparently it can use a much longer context (32k tokens?), which would be extremely helpful in building expert systems.


An in-house developed AI has been helping me manage the support channel for over 2 years now. In that time, about 30% of support issues were answered automatically by the AI. The advantage of the system is that the user gets a reply in about one minute (24/7/365), provided the AI is able to answer with high confidence.
This system is NOT based on a generative transformer like GPT, because at the time I developed it (2+ years ago, during the pandemic) GPT was too weak.
Instead I used my own hybrid system: transformers produce sentence embeddings, regular expressions handle some edge cases, and a semantic search finds the relevant answer in a database of replies prepared by a human. This approach isn't as flexible as GPT, but it has one big advantage: 100% protection against the hallucination typical of GPT, since all answers were written by a human and are guaranteed to be correct.
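For readers who want to see what that retrieval-style approach looks like, here is a minimal sketch of embedding-based semantic search over a fixed answer database. It uses the open-source sentence-transformers library, and the answers and confidence threshold are made-up examples, not the actual support system:

```python
# Minimal sketch of retrieval-based support (illustrative only).
# Assumes: pip install sentence-transformers numpy
import numpy as np
from sentence_transformers import SentenceTransformer

# Human-written canonical answers (hypothetical examples).
answers = [
    "Plugin DLLs go into the 'Plugins' subfolder of the AmiBroker directory.",
    "Use the Formula Editor's 'Verify Syntax' button to locate AFL syntax errors.",
    "Database settings can be changed via File -> Database Settings.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
answer_vecs = model.encode(answers, normalize_embeddings=True)

def reply(question: str, min_confidence: float = 0.6):
    """Return the best stored answer, or None when confidence is too low
    (so a human takes over and the bot never has to make anything up)."""
    q_vec = model.encode([question], normalize_embeddings=True)[0]
    scores = answer_vecs @ q_vec          # cosine similarity (vectors are normalized)
    best = int(np.argmax(scores))
    return answers[best] if scores[best] >= min_confidence else None

print(reply("Where do I put my plugin DLL?"))
```

Because every possible reply is pre-written, the worst case is "no answer" rather than a confidently wrong one.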

Nowadays I do experiment with GPT-3.5, not for AmiBroker itself, but rather to augment/improve the existing support system.

And by the way: I just checked GPT-4 with one of my "tricky" prompts and it failed miserably, the same way as the previous version, so things are not as rosy as OpenAI tries to picture them.

8 Likes

There is a new AI startup, Anthropic, that just released a ChatGPT competitor, the Claude LLM.

Anthropic, founded by ex-OpenAI executives, is backed by Alphabet, Inc.

They have different design principles and they try to address the biggest flaw of OpenAI's GPT: hallucination.

UPDATE:
I just checked Anthropic's CLAUDE model (it is available via poe.com) and indeed it brings a new "quality"... it is arrogant.
It assumes it has all the knowledge, makes wrong assumptions, and furiously defends those assumptions, saying the user is mistaken. Only after being shown evidence does it finally admit:

You're right that I should not make such broad claims without verifying the facts.

What a time we live in :slight_smile:

6 Likes

I got the same kind of answer yesterday from ChatGPT-4.

ChatSonic lets you choose the personality

Yes, this "personality" is just a system prompt. If you use gpt-3.5-turbo (OpenAI) via the API, it gives you the ability to specify the "system" prompt. In the system prompt you can tell it to behave the way you want; you can make it act like anyone. But this is different from what CLAUDE was doing.
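To illustrate, here is a minimal sketch of setting a system prompt through the Chat Completions endpoint, using the openai Python package in its 0.x form (the one current when gpt-3.5-turbo shipped); the persona text is just an example:

```python
# Minimal sketch: giving gpt-3.5-turbo a "personality" via the system prompt.
# Assumes: pip install openai (0.x API) and OPENAI_API_KEY set in the environment.
import openai

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    messages=[
        # The system message sets the persona/behaviour for the whole conversation.
        {"role": "system",
         "content": "You are a terse assistant that only answers questions about AFL coding."},
        {"role": "user", "content": "How do I plot a 20-period moving average?"},
    ],
)
print(response.choices[0].message["content"])
```

The user-visible "personalities" in tools like ChatSonic are, in effect, pre-written system messages like the one above.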

Good one :slight_smile:

The "system card" document published by OpenAI is particularly interesting. Here is some nice stuff:

2.13 Overreliance (page 19)
As noted above in 2.2, despite GPT-4’s capabilities, it maintains a tendency to make up facts, to double-down on incorrect information, and to perform tasks incorrectly. Further, it often exhibits these tendencies in ways that are more convincing and believable than earlier GPT models (e.g., due to authoritative tone or to being presented in the context of highly detailed information that is accurate), increasing the risk of overreliance.

The more I play with it, the more I am convinced that this tech is really just a "convincing-sounding text generator", and that its reasoning ability, even given lots of context, is very weak.

The epic fail can be observed with a trivial math/logic task that an 8-year-old can solve, like this:

There were 5 identical candles A, B, C, D, E, each 10 cm high, that were lit at the same time. The burn rate is the same. They were stopped manually at random times. Could you tell which candle was stopped first? Data:
Final heights of candles:
A: 5cm
B: 6cm
C: 4cm
D: 7cm
E: 6cm

Even the latest and greatest models insist that the shortest candle burned for the shortest time.
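For reference, the correct reasoning is a single subtraction: burn time is proportional to how much wax is gone (10 cm minus the final height), so the candle that was stopped first is the tallest remaining one (D), not the shortest (C). A quick check of that arithmetic:

```python
# The candle stopped FIRST burned the LEAST, so it is the TALLEST remaining one.
heights = {"A": 5, "B": 6, "C": 4, "D": 7, "E": 6}   # final heights in cm
initial = 10                                          # all candles started at 10 cm

burned = {name: initial - h for name, h in heights.items()}  # cm burned ~ burn time
first_stopped = min(burned, key=burned.get)                  # least burned => stopped first
print(first_stopped, burned)  # -> D {'A': 5, 'B': 4, 'C': 6, 'D': 3, 'E': 4}
```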

3 Likes

@Tomasz, the real question is:
How do we know you're not an AI?

Wouldn't an AI create its own AI to throw real humans off its trail? :nerd_face:

:slight_smile: Those who were in Houston and Las Vegas at the AmiBroker conferences back in 2008 could see me and talk to me, so I guess I predate GPT :slight_smile:

3 Likes

Those who were in Houston and Las Vegas at the AmiBroker conferences back in 2008...

OT - Several years have passed since 2008, and today conferences are mostly held online...

@Tomasz, have you ever thought about organizing one so that all AmiBroker users can meet virtually and learn from you and other AmiBroker experts (and/or certified third-party product suppliers)?

6 Likes


https://twitter.com/daniel_eckler/status/1636362970581336064?t=YWtgjbaecjbuSGqaCYbrBg&s=08

Looks like someone learned something new overnight :smiley:

1 Like

Yeah, maybe they really do use the feedback, because I have sent them a lot of feedback about wrong answers.