AI (Artificial Intelligence)

I picked up a lot of details and cleared up my confusion about using ChatGPT by reading different articles on chatgptopen.net.

I hope it will deliver whole new solutions to the problems in question.

ChatGPT Generate KiCad Circuit Components

I can’t tell if you’re using GPT3.5 or GPT4. If 3.5, you might find a lot of improvement by moving to 4. Still pretty impressive that it was able to mimic the symbol format/syntax what with balancing parentheses and nesting. I’m not surprised it had trouble drawing the symbol and placing the pins in the correct spot/orientation.

You may be able to develop a system prompt that puts GPT into a frame where it can be more accurate, but I’m not sure what that prompt would be. I’ll get scared when it can parse a part datasheet and partition the symbol into functional units.

It’s about as reliable as Wikipedia, Google, your newspaper, the internet, or the back of cereal boxes, since that’s where ChatGPT got its training data. And you should trust it the same amount, which means you verify from multiple, independent sources.

The reliability also depends upon you, since you can lead the AI towards the kind of answer you want. So if you start off with “Tell me about the mayor’s bribery conviction,” the AI may just assume you are reliable and concoct a story that fits this new data. This is actually one of the powerful features of GPT: you can pass it new information in the prompt to help it correct itself. That feature was shown in the video above, where the guy was prompting ChatGPT to correct errors in the 555 symbol it was building.

One simple way is to ask the AI “Are you sure?” (or some variant). Many times the AI will come back with an apology for making a mistake and a correction. Although why you should trust that new response any more than the original is open to question.

Another way is to screen out incorrect information from the training and make the AI give more weight to information from “authoritative sources”. Good luck with that! We can’t even decide what is and is not true in many cases, and our authoritative sources have repeatedly shown they are anything but.

The practical answer is that the transmitter can’t ever be trusted and the onus for getting correct information is on the receiver to verify it to the level they need to. If we all learned that, then AI would have provided at least one tangible benefit.

As for the mayor in your story, he wouldn’t have much of a case in the USA. According to ChatGPT:

In some cases, such as lawsuits that involve public figures or officials, the plaintiff may need to show that the defendant made the defamatory statement with actual malice, which means that the defendant knew the statement was false or acted with reckless disregard for the truth.

So it would have to be shown that ChatGPT actually intended to lie. That brings up a bunch of questions as to whether the AI has actual agency and can set its own agenda. And trying to pin it on Microsoft would mean proving that they intentionally placed incorrect information about this particular mayor into the training set. Both of those are hard hills to climb.

I have no idea what Australian law has to say on the matter…

I don’t know if it’s as prevalent in the US since I don’t watch or read much of the mainstream press. But I think everyday people will figure it out, just like they did with the internet (mostly). It’s been less than six months!

I think the people that are really worried are on the other end of the spectrum: the politicians. They want the people to believe what they say without question. But they don’t want AI to be believed the same way because AI might not be aligned with their political motives. But they can’t teach people to practice “information hygiene” on AI because then they’ll use the same techniques on what their leaders tell them. So they have to push to make AI “responsible” and keep it in line with their objectives. I think that’s partly where the 6-month AI pause letter came from.

This off-topic subject is starting to drift into politics.

Yeah, we can continue discussing topics somehow related to solving problems that resemble or are related to PCB design, but opinions about worldviews etc. lead to controversial topics and arguments.

I have drifted that way with my posts.
My bad for using a way, way off-topic example.

I decided to try to create KiCad symbols with GPT4. But instead of generating the raw KiCad symbol file directly, I had it write the pin information as CSV that I could pass to kipart. So GPT would handle the collection and formatting of part information, and kipart would do the mechanistic placement of pin symbols and drawing the box:
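For reference, the CSV looked roughly like this (reconstructed here from the standard 555 pinout using kipart’s Pin/Name/Type/Side columns rather than copied verbatim, so the actual GPT output may have differed in the details):

    555
    Pin,Name,Type,Side
    1,GND,power_in,left
    2,TRIG,input,left
    3,OUT,output,right
    4,RESET,input,left
    5,CTRL,input,left
    6,THRES,input,left
    7,DISCH,output,right
    8,VCC,power_in,right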

Then I store the CSV in 555.csv and pass that file to kipart:

kipart 555.csv -s num -w -o 555.lib
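(Here -s num sorts the pins by pin number, -w allows overwriting an existing library file, and -o names the output file.)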

And here’s the result along with a pinout from the internet:

Next, I tried to make a symbol for the 74LS244:

That’s just completely wrong. Power and ground are on the wrong pins, etc.
The '244 may be well-known to us, but it probably occurs much less in the GPT training set than the 555. So I cut the pin information from a table in the PDF datasheet and pasted it into the prompt:

This is the CSV generated by GPT4:

It’s not quite right: the pins aren’t evenly divided between the left/right sides of the symbol. So I asked it to correct itself:
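The corrected CSV came out essentially like this (sketched here from the standard ’244 pinout, with my guesses for the type and side assignments; the real outputs are three-state, but plain outputs are fine for a symbol sketch):

    74LS244
    Pin,Name,Type,Side
    1,1OE,input,left
    2,1A1,input,left
    4,1A2,input,left
    6,1A3,input,left
    8,1A4,input,left
    19,2OE,input,left
    11,2A1,input,left
    13,2A2,input,left
    15,2A3,input,left
    17,2A4,input,left
    18,1Y1,output,right
    16,1Y2,output,right
    14,1Y3,output,right
    12,1Y4,output,right
    9,2Y1,output,right
    7,2Y2,output,right
    5,2Y3,output,right
    3,2Y4,output,right
    20,VCC,power_in,top
    10,GND,power_in,bottom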

And this is the resulting symbol after passing the CSV through kipart:

Finally, I asked it to create the '244, but as a multi-unit symbol:
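For kipart, a multi-unit part just needs an extra Unit column in the CSV. Only a few representative rows are shown here, and the split into two buffer banks plus a shared power unit is my assumption about a reasonable partitioning:

    74LS244
    Pin,Unit,Name,Type,Side
    1,A,1OE,input,left
    2,A,1A1,input,left
    18,A,1Y1,output,right
    19,B,2OE,input,left
    11,B,2A1,input,left
    9,B,2Y1,output,right
    20,PWR,VCC,power_in,top
    10,PWR,GND,power_in,bottom

The remaining buffer pins follow the same pattern.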

And these are the separate units:

Is this a viable way to make symbols?

Maybe. If I had a new 100-pin microcontroller or small FPGA, then I might try to cut-and-paste its pin information from the datasheet into GPT4 to see what I could get. I would still have to verify the symbol, but that’s needed whether I make it with GPT or by hand.

It would be more difficult to do this with parts having hundreds of pins. But the manufacturer usually provides a ready-made spreadsheet or CSV with the pin table for those. That’s how I made these Xilinx and Lattice FPGA libraries.
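In fact, for those vendor-supplied pin tables you can skip GPT entirely and script the conversion. Here’s a minimal Python sketch, assuming a vendor CSV with PIN and PIN_NAME columns (every manufacturer’s spreadsheet is different, so the column names and the type/side heuristics would need adjusting):

    import csv

    # Convert a hypothetical vendor pin table into kipart's CSV format.
    # Assumed vendor columns: PIN, PIN_NAME (adjust for your datasheet).
    with open("vendor_pins.csv", newline="") as src, \
         open("fpga_kipart.csv", "w", newline="") as dst:
        writer = csv.writer(dst)
        writer.writerow(["MyFPGA"])  # hypothetical part name; first row names the part
        writer.writerow(["Pin", "Name", "Type", "Side"])
        for row in csv.DictReader(src):
            name = row["PIN_NAME"]
            # Crude heuristics; verify every pin against the datasheet.
            if name.startswith(("VCC", "GND")):
                pin_type, side = "power_in", "bottom"
            else:
                pin_type, side = "bidirectional", "left"
            writer.writerow([row["PIN"], name, pin_type, side])

The result then goes through kipart exactly like the hand-made CSVs above.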

Not everyone is optimistic:
https://www.cnn.com/2023/05/01/tech/geoffrey-hinton-leaves-google-ai-fears/index.html

Things get really funny if AI references published but unverified gibberish from itself or other AI sources.
Positive feedback with a negative effect.

Fully automated self-driving will work in two months. Since 2014 (or so).

I think AI will struggle to spot errors in datasheets and the often very misleading presentation of data. You need to look at graphs and the small-print notes very carefully.

I also don’t see how AI will avoid the biases of many designers, which have been regurgitated in so many books and articles. Things like single-point grounding, Intel vs. Apple, ferrous component leads, etc.

I read this interesting article about how the major impacts resulting from the Gutenberg printing press didn’t happen until hundreds of years later. And the author mentions that the internet is only a few decades old, so we haven’t even begun to see its impact. But an early one might be the recent surge in AI, due to the internet making large amounts of textual/image data available for training.

So we’re only in the fetal stage of something that sprang from the infantile stage of the internet. Right now, any attempt to say what will be possible or impossible to do is just a guess, and anybody who guesses right will just be lucky.

These guys route KiCad PCBs with AI.

Check out the video available there.