I’ve been working on a side project called galvano.ai, an AI-based schematic review tool. I hope it can be useful for anyone designing circuit boards who wants another pair of eyes on the schematic before building it.
I’m not an electronics veteran, but I’ve designed a few dozen PCBs. The largest was a 16x STM32 board emulating a spiking neural net in one of my previous research positions. I mistakenly tied the STM32 QFN’s exposed pads to VSS instead of GND — had to heat-gun them out, cover the pads with stickers, and re-solder. A fun and painful lesson. Mistakes like that are humbling, so I built galvano specifically to catch these kinds of issues early.
The idea is simple: you upload your schematic/netlist, galvano fetches the relevant datasheets (or you can upload your own), and then you can:
- run a datasheet-aware automatic schematic review, node by node, or
- simply chat with your circuit to troubleshoot or get design advice.
The automatic review checks for common mistakes (missing pull-ups, wrong pin connections, risky power setups, and a slew of other possible issues), then returns a “risk score” per node with explanations that always reference the datasheets. The goal isn’t to replace design reviews, but to catch the obvious or not-so-obvious stuff early so you don’t waste board spins or debugging hours; it should complement the usual review stage of the board development process.
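To make that concrete, here is a toy sketch (deliberately simplified, not the real pipeline) of the kind of node-by-node rule the review runs, assuming a netlist represented as a plain dict of net names to attached pins:

```python
# Toy sketch of one node-by-node rule: flag I2C nets with no pull-up.
# Netlist representation assumed here: {net_name: [(component_ref, pin), ...]}

def check_i2c_pullups(netlist, i2c_nets=("SDA", "SCL")):
    """Flag I2C nets that have no pull-up resistor attached."""
    findings = []
    for net in i2c_nets:
        pins = netlist.get(net, [])
        # Crude heuristic: any component whose reference starts with "R"
        has_pullup = any(ref.startswith("R") for ref, _pin in pins)
        if pins and not has_pullup:
            findings.append({
                "net": net,
                "risk": "high",  # an open-drain bus with no pull-up never idles high
                "note": f"{net} has no resistor attached; I2C needs pull-ups to VDD.",
            })
    return findings

netlist = {
    "SDA": [("U1", "PB7"), ("U2", "6")],               # MCU + sensor, no resistor
    "SCL": [("U1", "PB6"), ("U2", "4"), ("R3", "1")],  # pull-up present
}
print(check_i2c_pullups(netlist))  # flags SDA only
```

The real checks are datasheet-aware rather than purely structural like this, but the per-node flag-and-explain shape is the same.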
Here is an example of an automatic schematic review:
Currently the service works well with KiCad schematics, single sheet for now. If you design in another EDA, galvano also accepts netlists in SPICE format, if your tool can export one.
I know AI tools can sometimes feel overhyped, but I genuinely think there’s value to be added in the design review space — especially when dealing with hundreds of pages of datasheets where mistakes are easy to miss. If anyone is curious, I’d be really grateful if you could try it on one of your circuits and share your thoughts, good or bad. Even just feedback on what kinds of errors you’d want a tool like this to catch would be super helpful.
Thanks a lot, and happy reviewing!
Disclaimer:
- It cannot review PCB layouts yet, but the review does include layout recommendations when the datasheets specify them.
- It cannot fetch component alternatives yet; that’s planned.
I manage the SKiDL discussion list. SKiDL is a Python-based method of designing circuitry. There have been several projects discussed there for AI creation and analysis of circuitry. You may find others there with similar interests (such as me).
Thank you for directing me to this. I’ve looked into SKiDL before and was considering using it for my project! I’ll contribute to the discussion there.
I certainly think there’s room for this sort of tool. As a profession we ought to drive out the reliance on the “hero engineer” who spots the one mistake that would have derailed the project. Automatic checks are a fine complement.
But yes, I’ve grown very weary of LLM-powered tools for this sort of assurance work. They’re not particularly encouraging when it comes to providing consistent, reliable outputs.
I was excited to follow CADY, which seemed to be pursuing the expert-system approach to this task, as opposed to the probabilistic LLM approach. They were using AI only where it works well: to OCR datasheets and fill out investor pitch decks. But I think they too might have been swept up by the very short-term gratification cycle that LLMs bring.
But I like what you’ve done here - the referencing is great, and the summaries are really clear and technically accurate. Checking for voltage ranges and alternate pin functions is great. Like I said, I’m still wary (and weary) of this scaling past a single sheet with a very common circuit. Already there is a lot to wade through, and when you’re expecting a human to work as a signal-to-noise filter, there has to be a lot of trust that there’s value in the signal.
I think this is why static analysis has been successful in the software world - even though a typical static analysis tool will generate a lot of noise, you can be absolutely confident that it will catch every violation of each rule it checks for. So the effort you put into tuning the noise filter will be rewarded.
Thanks for the feedback. There’s still a lot of room for improvement.
One thing I aimed for is to have all the LLM’s reasoning grounded in data from the datasheets, with quick visualisation of the references. This should alleviate some of the worry about hallucinations and give more confidence in the service.
I would be interested in creating a dataset to benchmark this task (electronic schematic verification) and see how different pipelines, LLMs and prompting techniques improve some “mistake flagging” metric.
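As a rough sketch of what that metric could look like: each benchmark circuit would carry a set of known seeded mistakes as ground truth, and a pipeline under test would return the set of issues it flagged. Standard precision/recall/F1 over those sets is one obvious starting point (the issue-label strings below are made up for illustration):

```python
# Sketch of a "mistake flagging" metric for a schematic-review benchmark.
# Ground truth and flagged issues are sets of issue labels per circuit.

def flagging_metrics(ground_truth: set, flagged: set):
    """Precision/recall/F1 over flagged mistakes for one circuit."""
    tp = len(ground_truth & flagged)               # correctly flagged mistakes
    precision = tp / len(flagged) if flagged else 0.0
    recall = tp / len(ground_truth) if ground_truth else 1.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return {"precision": precision, "recall": recall, "f1": f1}

truth = {"missing_pullup:SDA", "floating_input:U1.EN", "vio_mismatch:U2"}
flagged = {"missing_pullup:SDA", "vio_mismatch:U2", "spurious:C4"}
print(flagging_metrics(truth, flagged))  # precision and recall are both 2/3 here
```

A real benchmark would also have to decide when two differently worded findings count as the same mistake, which is the harder part.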
Replace the hero engineer with hero software that is non-deterministic?
I’ve recently been using the latest models like GPT-5 for some grad school work. I don’t use them to write the papers (because that absolutely removes all ability to learn), but I do have them help summarize papers after I’ve read them, and even check some drafts. Basically like a group partner. Boy, do these models make up things that don’t exist, even when they literally have the source documents provided.
I think the biggest problem is probably some baked-in instruction that tells them to be nice instead of assholes like they should be lol. I fear they have too much training data from corporate speak and other forms of “sensitivity training”. I want my collaborative AI partner to be an asshole just like the real ones hahaha.
I haven’t used them that much, but I was surprised by an article I read giving advice on how to tell the AI to respond. You can tell it to take on a role, but I don’t know if it understands what it would mean to summarize like an asshole.
We could have a discussion about how LLMs are actually all deterministic, like any other computational graph; it’s just that in operation they have a probabilistic sampling step at the output to make them feel more natural and not repeat the same response given the same input.
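That split is easy to show with a toy decoder (made-up logits, not a real model): the forward pass always yields the same distribution, and only the final sampling step introduces randomness. With temperature 0 (greedy decoding) even that step is fully deterministic:

```python
import math
import random

def sample_token(logits, temperature, rng):
    """Pick a token index from raw logits; temperature 0 means greedy argmax."""
    if temperature == 0:
        # Greedy decoding: deterministic, same input -> same token every time.
        return max(range(len(logits)), key=lambda i: logits[i])
    # Softmax over temperature-scaled logits, then sample.
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]
    return rng.choices(range(len(logits)), weights=probs)[0]

logits = [2.0, 1.0, 0.5]  # the "model output" is identical on every call
greedy = [sample_token(logits, 0, random.Random(i)) for i in range(5)]
print(greedy)  # always token 0, regardless of the RNG seed
```

At nonzero temperature the same logits can yield different tokens across calls, which is the only nondeterminism in this picture (real serving stacks add a little more, e.g. from batching and floating-point reduction order).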
Similarly, a hero engineer in a good mood vs. a hungry hero engineer whose wife just left him might produce totally different outputs. Defective boards still get built all the time.
That’s a real problem. But that’s the issue with a broad consumer product: you have to make it nice to everyone.
There is a lot of content and literature about how to prompt these models, and you can make them act in very different ways: tell them what not to do and what to avoid. They can be very steerable, but this works better through the API, where you have access to the underlying commands that set the model’s strong “developer” instructions. Through the ChatGPT website you only get the chat-tuned version, so whatever you tell it via messages may conflict with the original “developer” instruction it already has, the one you don’t see, which tells it to be nice to you.
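For context, this is roughly what that role separation looks like in a chat-style API request (the schema follows the common OpenAI-style message format; the model name is a placeholder):

```python
# Sketch of the role hierarchy in a chat-style API payload (OpenAI-style
# schema). The "system" message sets standing instructions that take
# precedence over whatever the user later types in "user" messages.
payload = {
    "model": "gpt-4o",  # placeholder model name
    "messages": [
        {"role": "system",
         "content": "Be blunt. Point out flaws directly; skip pleasantries."},
        {"role": "user",
         "content": "Check this draft summary against the attached paper."},
    ],
}
# In the ChatGPT web UI you never see or set the equivalent of this
# "system" message, so your in-chat instructions can conflict with it.
print(payload["messages"][0]["role"])
```

That is why the same "act like X" instruction tends to stick better via the API than typed mid-conversation into the website.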