Well, are you saying I’m preaching to the choir?
Apparently you know this, although much of the hype about AI is completely detached from the reality of AI.
As for me, once upon a time I was a computer science major, back when they were still teaching Fortran 77 and IBM 360/390 assembly language. I did well enough in my classes, although due to certain adversities I never completed the degree. Since then I've been limited to indulging in some C coding, and lately some Python, to serve my passing interests.
I coded an MNIST classifier just for fun. With some tweaking, that classifier achieved 93% accuracy with only 39 neurons. The YouTube video that motivated the effort achieved only 86% accuracy with 29 neurons.
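For anyone curious what a classifier that small looks like, here is a minimal sketch of the general shape: a single hidden layer of 39 ReLU neurons feeding a 10-way softmax, trained with plain gradient descent. The layer sizes are the only thing taken from what I described above; everything else (the random stand-in data, learning rate, initialization) is just illustrative, since real MNIST isn't bundled here.

```python
import numpy as np

rng = np.random.default_rng(0)

n_in, n_hidden, n_out = 784, 39, 10   # MNIST-shaped: 28*28 inputs, 10 digits
X = rng.standard_normal((256, n_in))  # stand-in data; swap in real MNIST images
y = rng.integers(0, n_out, size=256)
Y = np.eye(n_out)[y]                  # one-hot targets

W1 = rng.standard_normal((n_in, n_hidden)) * 0.05
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_out)) * 0.05
b2 = np.zeros(n_out)

def forward(X):
    """ReLU hidden layer followed by a softmax output layer."""
    h = np.maximum(0.0, X @ W1 + b1)
    logits = h @ W2 + b2
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return h, e / e.sum(axis=1, keepdims=True)

lr, losses = 0.1, []
for step in range(200):
    h, p = forward(X)
    losses.append(-np.log(p[np.arange(len(X)), y] + 1e-12).mean())
    g_logits = (p - Y) / len(X)        # softmax cross-entropy gradient
    gW2, gb2 = h.T @ g_logits, g_logits.sum(axis=0)
    g_h = (g_logits @ W2.T) * (h > 0)  # backprop through the ReLU
    gW1, gb1 = X.T @ g_h, g_h.sum(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

_, p = forward(X)
train_acc = (p.argmax(axis=1) == y).mean()
```

On real MNIST data a network of this shape can do surprisingly well; on the random stand-in data above it can only memorize, but the loss still falls, which is enough to see the mechanics.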
I’ve gone on to study the general structure and training of LLMs, transformers in particular, although I have yet to delve into any code with them, mostly for lack of sufficient hardware or funds for extensive cloud use.
I remember, back around 2000, looking into playing with neural nets; with the hardware available at the time, a network of four neurons took hours to train. It was like watching paint dry. So I left all that alone until just recently.
Thinking about your reply a little more, in regard to this:
“They are not creativity machines, they are statistical models based on the characteristics of some latent space which is trained on a whole load of existing data.”
They are deterministic, although it has been demonstrated, or at least claimed, that they can apply higher-level concepts to new contexts and synthesize information that is absent from their training data. I see this as just part of the pattern-recognition function of a neural network. Even though artificial neural networks are so much simpler than biological ones, it’s amazing that they function at all.
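To make the determinism point concrete: the network itself maps the same input to the same output distribution every time; any apparent randomness comes from how a token is picked from that distribution. A toy sketch (the vocabulary and logits here are made up purely for illustration):

```python
import numpy as np

# Made-up vocabulary and output logits standing in for one decoding step.
vocab = ["the", "cat", "sat", "mat"]
logits = np.array([2.0, 1.0, 0.5, 0.1])

def softmax(z, temperature=1.0):
    """Convert logits to a probability distribution over tokens."""
    z = z / temperature
    e = np.exp(z - z.max())
    return e / e.sum()

# Greedy decoding: always pick the highest-scoring token.
# Same logits in, same token out, every single run.
greedy = [vocab[int(np.argmax(logits))] for _ in range(5)]

# Temperature sampling: draw from the softmax distribution instead,
# so repeated runs can produce different tokens from the same logits.
rng = np.random.default_rng(42)
sampled = [vocab[rng.choice(len(vocab), p=softmax(logits))]
           for _ in range(5)]
```

So the "creativity" knob lives in the sampling step, not in the forward pass of the network, which is the deterministic part.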