I keep hearing and seeing this, and it is just too generalized a comment to add value to the conversation. Yes, there are many models out there that are poor at generating code. However, the top models like Grok 3 or 4, Claude Sonnet 3.5, 3.7, or 4.0, and Gemini 2.5 Pro are all excellent at coding. But they are tools like any other. If I buy a cheap hammer and use it to smash screws into wood, I’m using a cheap tool for the wrong purpose, and the result might lead someone to say “Hammers are useless”. Even if I buy a high-quality hammer to smash screws into wood, I would reach the same conclusion. However, when I use a high-quality hammer to drive the proper nails into the proper material, I can derive a lifetime of benefit from it.
I take this point, and I agree; however, there is already a plethora of no/low-code tools that generate useful software tools and applications for completely inexperienced coders and programmers. The pace of development around LLMs is the fastest of any technology I’ve ever seen. Our perception of the tools from even 6 months ago quickly becomes invalidated as the models are refined.
Excellent comment, I think you captured a lot of nuance. There is massive potential for the LLM to accelerate development of basic non-critical circuits, but the experienced designer will still need to spend plenty of time in the calculation and simulation phase to get that mission-critical design on paper.
Happy to answer, and I always aim to be truthful . . . I have not used any in any way, shape, or form. How did I come to this conclusion? Maybe from ignorance; I’ve not got a problem admitting that. But my understanding is that AI does not have the creative ability to create something new and novel, so in my opinion it is not creating code, hence “AI cannot code”. I’m not saying it can’t be useful and a great benefit in some/many ways . . . perhaps this is all semantics . . . but it’s important to be understood when we communicate.
I’m not a Software Engineer, though I do some LabVIEW (occasionally an awful lot of LabVIEW) and some Node-RED. I’m not an Electronics Engineer, though I’m laying out a 4-layer PCB right now with a microcontroller, ADC, etc. And I’m not a Mechanical Design Engineer, although I do a fair bit of SolidWorks CAD now and then . . . I’ve had my moments when I’ve “written” something in LabVIEW that I considered to be elegant in its approach. I think this is a creative process that I don’t think AI will ever achieve.
Time will tell if I’m wrong.
One final comment, you said . . .
. . . don’t you think that comment says a lot about who is doing the coding? You, or the AI assisting you?
Agreed, I fall into semantics traps all the time, so I’m sure I am failing to properly communicate what’s in my head. I think I agree with you that, if we get down to brass tacks, an LLM is not necessarily “creating” something new. Thanks for adding some context for my understanding.
Personally, the LLM is doing 90% of the “text on the page” coding. On one other project, I created the core functionality of the code, then leveraged the AI to do 90%+ of any additional error handling or wrappers for UI. I make no claims that I am a programmer either! I am just a lowly hardware engineer who sometimes needs some Python, C, or PowerShell automation to help me validate my designs or automate mundane parts of my workflow. Even though the LLM itself is not deterministic, thankfully the code it outputs is, so it is testable and I can eventually iterate down to something that is reliable far quicker than I would be able to on my own.
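To make the “non-deterministic LLM, deterministic code” point concrete, here’s a tiny illustrative sketch (the helper and its behavior are hypothetical, not from any real project): treat an LLM-drafted function like any other untested code and pin it down with plain assertions, which pass or fail the same way on every run.

```python
# Hypothetical LLM-drafted helper: convert an instrument reading
# like "3.3V" or "3300mV" into volts.
def parse_voltage(reading: str) -> float:
    s = reading.strip().upper()
    if s.endswith("MV"):
        return float(s[:-2]) / 1000.0
    if s.endswith("V"):
        return float(s[:-1])
    raise ValueError(f"unrecognized reading: {reading!r}")

# Deterministic checks: however the code was produced, these either
# pass or fail identically on every run, so you can iterate on the
# LLM's output until the whole suite is green.
assert parse_voltage("3.3V") == 3.3
assert parse_voltage("3300mV") == 3.3
```

Once a test like this exists, you can keep regenerating or patching the function until it passes, which is the iteration loop described above.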
Do you have plans to integrate with local AI models, running on Ollama perhaps? Claude Code is obviously very powerful, but I know that I and other engineers are going to be hesitant about detailed design data making its way across the internet, no matter how much Anthropic claims that the data is secure. I’m not 100% familiar with their data privacy policies and procedures, but I know that having an option to collaborate with a local LLM is going to be preferable for some.
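For what it’s worth, Ollama exposes a simple REST API on localhost, so a minimal local integration could look something like the sketch below (hedged: the model name is an assumption and must already be pulled locally; this is just Ollama’s generic `/api/generate` endpoint, not anything Circuit-Synth-specific):

```python
import json
import urllib.request

# Ollama's default local endpoint.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> bytes:
    """Build the JSON body for Ollama's /api/generate endpoint."""
    return json.dumps({
        "model": model,    # e.g. "llama3" -- assumed to be pulled already
        "prompt": prompt,
        "stream": False,   # ask for one complete JSON reply, not a token stream
    }).encode("utf-8")

def ask_local_llm(model: str, prompt: str) -> str:
    """Send a prompt to a locally running Ollama server."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=build_request(model, prompt),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```

Since the server runs on localhost, detailed design data never crosses the internet, which is the whole point of the request above.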
On a related note, do you have the capability or plans to fine-tune gpt-oss-20b or some of the other high-performing open-source models? They could potentially be great candidates for a fine-tuned model that runs on a consumer-grade GPU and rivals the performance of larger general-use models.
This is such a silly hill to die on though. NOBODY in the SWE world gives a shit whether it’s semantically valid to say that an LLM is “coding”. Coding is not even considered a high-value activity! What matters is whether LLMs can produce RESULTS that are USEFUL.
And they absolutely can, TODAY, although they are still most productive in the hands of an experienced coder.
Many corporations have been encouraging people to become coders to drive down costs for a while. Now they have pulled the rug out from under those who heeded their advice.
There is a huge difference between copying an application note circuit, which has been published to be utilised, and taking circuits from service manuals or even proprietary information sitting on a company’s cloud server. AI does not seem to be good at seeing that boundary.
Good call-out @speedy_leopard , local LLM will be critical for many companies.
In previous versions of the logic I used Gemini 2.0 Flash with Google ADK, with good success at creating valid Circuit-Synth Python code as well as evaluating Python <> KiCad updates. So I don’t think it will take too much horsepower to run Circuit-Synth effectively on a local machine.
I moved to Claude Code because it’s the best coding agent and it’s easier to create workflows in than Google ADK. Claude Code is also more transparent to users, since the .claude/ folder has plain-text descriptions of agents/commands. One aspect that is important to me with Circuit-Synth is transparency for users, so they trust the tool, which, judging from the exchange around AI in this discussion, I think people will value.
Appreciate all the discussion going on here. It’s helped motivate me to build more effective guard rails and review processes before releasing new versions of the code and documentation. Here are some example commands that I’ve made to help with reliable development using Claude Code:
Thank you for the thoughtful reply @johnbeard . Lots of great ideas to chew on!
Choosing suitable parts is a big issue I agree. Other projects have made pre-verified circuits, which I think is a great start and might even be an effective real solution long term if you can make enough circuits for commonly used parts. Sonnet 4 is decent at searching Digikey/jlcpcb and constructing a basic circuit (esp32/stm32 + sensor/basic IC) from web search + digikey/jlcpcb parts search.
As far as people caring about circuits as code: I agree people don’t care, so I tried to make Circuit-Synth as transparent as possible and easily discarded. But I think there is real value in having circuits represented as code because code is the preferred language for deterministic LLM behavior.
An example from my work right now of the value of circuits as code: I make many revisions of boards, and it’s hard to keep track of what changed between v0.5 and v0.12 (or even v0.11 vs v0.12). Previously, I had to look through Jira tickets and schematics to see what I changed, but with Circuit-Synth’s kicad-to-python function I can quickly convert both designs to Python, have Claude compare the circuits, and spit out a circuit diff. So maybe all that logic gets wrapped up in one command and becomes an easy and useful circuit-diff tool.
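The mechanical half of that diff workflow doesn’t even need an LLM. Assuming the two board revisions have already been converted to Python source (the filenames below are illustrative, and this is a hypothetical sketch, not Circuit-Synth’s actual command), a plain text diff gets you most of the way:

```python
import difflib

def circuit_diff(old_src: str, new_src: str,
                 old_name: str = "v0.11.py", new_name: str = "v0.12.py") -> str:
    """Return a unified diff between two generated circuit scripts."""
    return "".join(difflib.unified_diff(
        old_src.splitlines(keepends=True),
        new_src.splitlines(keepends=True),
        fromfile=old_name,
        tofile=new_name,
    ))
```

An LLM pass on top of this raw diff could then summarize the changes in circuit terms (“R1 changed from 10k to 4.7k”), which is where having the circuit as code really pays off.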
Great advice around finding a target user. I’m not sure who the users are yet, but I know there is interest. And I’m not sure there’s any money to be made with this project, but I love it and think code will be involved in circuit design in some capacity; it’s just too powerful not to be, especially with LLMs.
My own skepticism about AI being involved with circuits comes from the electronics industry having been offshored and outsourced three times over in the western world. It’s an industry that’s older than the “software tech” industry, which is currently experiencing mass layoffs and being entirely offshored, with AI serving as both an excuse and a crutch to smooth it over with cheaper labor overseas.
The design work left for us, at least in the US, is restricted to very specific skilled niches. They are niches where AI won’t really help, because nobody discusses the secret sauce. The secret sauce is the only thing keeping suppliers afloat against offshore competition. Heck, you can’t publicly obtain datasheets for the high-voltage chipsets I use in certain applications from my semiconductor suppliers, and they will absolutely throw their legal department at protecting them. They remind us often in conference calls: “You’ll never be able to train LLMs on this kind of stuff.”