AI (Artificial Intelligence)

This AI could also have access to a better computer, or be able to parallelize by starting from different seeds and then comparing the results to present us with the best one… perhaps they were already doing this, since they said the process takes 24h to return the results.
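That "many seeds, keep the best" idea is easy to sketch. Everything below is hypothetical: `route_attempt` is a stand-in for one autorouter run, and the random score is just a placeholder for a real quality metric such as completed airwires or total track length.

```python
import random
from concurrent.futures import ThreadPoolExecutor

def route_attempt(seed: int) -> tuple[int, float]:
    """Stand-in for one autorouter run: returns (seed, score).
    A real run would invoke the router with this seed and measure
    a quality metric; here we just draw a deterministic number."""
    rng = random.Random(seed)
    score = rng.random()  # placeholder metric, higher is better
    return seed, score

def best_of(seeds):
    # Launch one independent attempt per seed and keep the best result.
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(route_attempt, seeds))
    return max(results, key=lambda r: r[1])

if __name__ == "__main__":
    best_seed, best_score = best_of(range(8))
    print(f"best seed: {best_seed} (score {best_score:.3f})")
```

Since each attempt is independent, the wall-clock time stays close to one run while the odds of a good result go up with the number of seeds.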

AI advertises on Airtasker for someone in a cheap country to do the work. :rofl:


I don’t think that is high up on the list of most KiCad developers or forum users (yet?). AI will probably eventually be more successful at that than (conceited) engineers (not all, of course).

24h for 250 airwires is not really impressive; an experienced PCB designer should be able to do this in a lot less time (talking about working hours here). The only advantage I see for now is that the tool does not need to take breaks :upside_down_face:

Apparently recently added some AI (copilot):

Closed topic:

If you want to pay AI to actually create the board for you, there’s solutions like this:

But when routing dictates your placement, I’m not sure how helpful that will actually be.

Just for fun, I tried ChatGPT 3.5 to get a symbol for a 74LVC1G04, partly because a guy at work was suggesting that AI will put engineers out of jobs, and also the recent buzz around OpenAI having made a “breakthrough”.

After many wrong answers, I gave up. Apart from being wrong, ChatGPT is highly overconfident.

The best I got was this, which got only 1/5 pins right:

  VCC |1   5| NC
   A  |     |
  GND |2   4| Y
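For comparison, my reading of the SOT-23-5 pinout (do verify against the manufacturer's datasheet before drawing a symbol) can be captured in a few lines, which also shows why only the Y pin matched:

```python
# 74LVC1G04 (single inverter) pinout for the SOT-23-5 package,
# as I read it from the datasheet -- always check the manufacturer's
# current datasheet before trusting this.
PINOUT_74LVC1G04_SOT23_5 = {
    1: "NC",   # no internal connection
    2: "A",    # inverter input
    3: "GND",
    4: "Y",    # inverter output (Y = NOT A)
    5: "VCC",
}

def count_correct(claimed: dict) -> int:
    """Count how many pin assignments in `claimed` match the datasheet."""
    return sum(claimed.get(pin) == name
               for pin, name in PINOUT_74LVC1G04_SOT23_5.items())

# ChatGPT's answer quoted above, as a dict (pin A had no number at all):
chatgpt_answer = {1: "VCC", 2: "GND", 4: "Y", 5: "NC"}
print(count_correct(chatgpt_answer))  # 1 of 5 pins correct
```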

Me: Please rate your accuracy as a percentage


ChatGPT: Given the general nature of the information I provide and the reliance on up-to-date details available until my last update in January 2022, I’d rate my accuracy at about 90%. However, when it comes to specific, current, or highly detailed information like the exact pin assignments for a particular component’s package, relying on the most recent datasheet from the manufacturer is always the best practice for complete accuracy.

Interesting to see that the Dunning-Kruger effect also applies to AI chat bots!

It’s not just the inaccuracy of the answers, but the confident way they are presented, that makes it pretty useless. Also, there’s the inability to explain its answers or where it got the information.

The impression I had previously was that the AI buzz is largely hype, and useful AGI is still decades off.

I appreciate that ChatGPT is not the latest, and there is a certain skill in “prompt engineering”, but it doesn’t look like I will be out of a job any time soon.

It’s more like dung and kludges.

Nah, you can twiddle that knob too.

Me: briefly show me a monostable circuit using a 555 in an unconfident style

ChatGPT: Uh, okay, so there’s this 555 timer thing, and, uh, you can use it for a monostable circuit, I think. You put a resistor and a capacitor in there, and when, you know, it gets triggered, it kind of reluctantly gives out a single pulse. The capacitor charges through the resistor, and after a bit, the 555 timer reluctantly switches its output to low. So, like, it’s a one-shot pulse thing, I guess. People use it when they need, um, a short, defined signal? But, you know, I might be a bit off, so, um, maybe double-check that.
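Strip the hedging out of that answer and the quantitative part is one classic datasheet formula: the output pulse width of a 555 monostable is roughly t = 1.1·R·C. A quick sketch (the component values below are just examples):

```python
def monostable_pulse_width(r_ohms: float, c_farads: float) -> float:
    """Pulse width of a 555 monostable circuit, in seconds.
    The 1.1 factor comes from the capacitor charging to 2/3 Vcc
    through R: t = R*C*ln(3) ~= 1.0986*R*C, rounded to 1.1."""
    return 1.1 * r_ohms * c_farads

# Example: 10 kΩ and 100 µF give a pulse of roughly 1.1 seconds.
print(monostable_pulse_width(10e3, 100e-6))
```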

Entertaining for a while, but I’ll go back to my datasheets now, thanks.


One of my friends was told recently: “You won’t be beaten by AI. You’ll be beaten by someone using AI.”

Well, having spent 30 minutes with ChatGPT, I can now put “AI expert” on my resume. :slight_smile:


At one time there was similar hype around CNC machines… that a turner would no longer be needed, the machine would do everything itself. Many years have passed, and we see the result…

One of my friends was told recently: “You won’t be beaten by AI. You’ll be beaten by someone using AI.”

Well, no. You’ll be beaten by someone selling AI.

And I don’t know why people keep wasting time with this ChatGPT thing for anything engineering; it’s not even “AI”, only language generation. It’s not made for that. Even OpenAI have put very clear disclaimers that its output is not to be trusted for any particular purpose, and still people try and still they expect. That’s almost the definition of craziness. :mask:


Although I heard that one of my former colleagues who is not retired uses a licensed version in the company he works in, training it on very specific datasets and they have good results.

But when the public model is trained on the Internet, which we know is full of junk, then you can expect junk out sometimes. Already it seems some junk has been re-ingested by LLM models, thus amplifying the junkitude.

I have yet to see any intelligence.
To date, all I have seen is another method for opening the same old filing cabinet.

I have a different opinion. No use debating it. In five years, we’ll probably have a definite answer as to whether it’s valuable or not.

(Note: Back in 94/95, I didn’t think the Internet would amount to all that much. We tend to overestimate how much effect something should have in the short term, and then get completely fooled by how much effect it has in the long term due to compounding.)


Sometimes. But most of the time it is exactly the opposite: something gets hyped up way more than it should, can’t live up to any expectations, and in the meantime does more harm than good. See blockchain tech or the dot-com bubble.

That’s why I stick with judging the status quo, and on that basis publicly available AI is not suitable to support engineering work. In special-purpose fields with very constrained training it can sometimes work, but I would still argue that the effort put into training these networks and sorting out their hallucinations amounts to more than having a human do the task from the beginning.

If it’s able to only fix “it’s” and “its” in English text then it already has some intelligence… :grin:


Once it can distinguish between “their”, “there” and “they’re”, world domination is only a step away!


To fix the two, its and it’s, is too easy. They’re there in their grammar usage files. :smiley: