A custom GPT: KiCad Guider

For fun, I created a custom GPT as a KiCad helper: KiCad Guider. I added some recent KiCad experience and temporary workarounds for known issues to the GPT's knowledge base. Most of the knowledge comes from KiCad experience and frequently asked questions. The source is in Chinese, but I think machine translation does quite a good job.
The GPT is not always right, and I am still struggling to force it to prioritize the information in the knowledge base. Still, it's an option if you want a quick answer without googling or searching forums.


Looks like it requires ChatGPT Plus ($20/month).

Google translation of the more detailed introduction post:

The cost of a human programmer helping AI :rofl:


I think the idea is not bad. With solutions like these (the live demo there is for math), it could soon become cost-effective to have a forum search/chat trained on KiCad docs and forum datasets. It would also solve the problem some regulars here have with non-English languages.


I have already seen people in other forums posting questions like “ChatGPT generated this for me, can anyone fix it?”.

It seems people are willing to pay for AI to generate bad solutions, then get unpaid humans to fix them.


Yeah, but this is not (yet) about generating KiCad data but about answering questions about KiCad usage… I wonder why the off-topic comments get likes, but not the OP, who actually produced something useful.


I am also talking about answers to questions about KiCad usage.

It remains to be seen whether AI-generated answers are useful or not. I suspect it will continue the trend where most users have less and less knowledge or understanding of the tool they are using, and have to rely on human experts when they get stuck. If people are not learning basic skills, then there will be no experts.

I have already encountered this, with people bringing stuff written by ChatGPT that they have no real understanding of. Trying to explain how to fix it doesn't help, because they don't know how it was produced in the first place. All I can do is give them back a working solution.

Like the old adage of "teach a man to fish…", the AI tools are basically giving us rotten fish. "Here, eat that!" What might be better is AI teachers that teach people how to learn, rather than handing them (bad) solutions on a plate.


IF indeed they are solutions of any kind and not just vague generalities.

I looked at doing this six months ago, but you actually got something working. Good for you!

I do have something from the work I did that may or may not help you: discosuck. This is a small utility that downloads posts from a Discourse forum and places the topic threads into a JSON file. You can post-process that file to create a knowledge base for ChatGPT.

You install discosuck like this:

pip install discosuck

As an example, you can get five pages of topic posts from this forum and store them in a file called kicad_info.json using the command:

discosuck  -n 5 -o kicad_info.json forum.kicad.info

Post-processing the forum dialogs to make them useful to ChatGPT is probably the real trick. For some inspiration about this, take a look at ChipNeMo. Nvidia gathered all their chip design expertise and encoded it into a set of LLMs that act as advisors/design assistants to their engineers.
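As a starting point for that post-processing step, here is a minimal sketch that turns a forum dump into plain-text documents (one per topic) suitable for uploading as a knowledge base. The JSON layout is an assumption for illustration (a list of topics, each with a "title" and a "posts" list of objects with a "text" field); adjust the field names to whatever discosuck actually emits.

```python
import json

def topics_to_documents(json_text):
    """Turn each forum topic into one plain-text document:
    the topic title as a question, followed by the thread's posts.
    NOTE: the input schema here is assumed, not discosuck's actual one."""
    topics = json.loads(json_text)
    docs = []
    for topic in topics:
        parts = [f"Q: {topic['title']}"]
        for post in topic.get("posts", []):
            parts.append(post["text"].strip())
        docs.append("\n\n".join(parts))
    return docs

# Tiny stand-in for a real dump, just to show the shape of the output:
sample = json.dumps([
    {"title": "KiCad freezes with Microsoft Pinyin",
     "posts": [{"text": "Workaround: switch to another input method."}]}
])
for doc in topics_to_documents(sample):
    print(doc)
```

In practice you would also want to filter out low-quality replies and keep only threads with an accepted or verified answer, since the whole point of the knowledge base is curation.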

Thanks Dave, discosuck is useful in certain scenarios. I fully agree with you that collecting data is just the first step; feeding the correct data into the custom GPT's knowledge base (post-processing) is the real challenge.

@bobc At this moment, the idea of the custom GPT is not just to provide answers to general KiCad questions, but more to provide verified, correct answers to some frequently asked questions. For example, temporary workarounds for issues such as "Microsoft Pinyin causes KiCad to freeze" or "macOS cannot switch KiCad's localization to Chinese or Japanese". Answers to these questions are not easy to find through Google or the forums (as they are language-related), but with the verified knowledge base they are more accurate and easier to get.


A post was merged into an existing topic: AI (Artificial Intelligence)

So do you have plans to prevent the AI from hallucinating? Because every LLM to date does.

Probably similar mechanisms to when a KiCad user or developer says something that's not correct… Interestingly, while LLMs can (and do) provide incorrect answers, some are to a degree able to verify and correct their answers when instructed to.


This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.