Hello, my name is Eugenio. I am a master’s student in data science with a passion for electronics.
I am developing some patches for implementing missing functionalities in KiCad and the IPC APIs, with a lot of help from Claude’s code assistant.
I wanted to ask, is the usage of AI coding assistants frowned upon in the development of this software?
According to the Hackaday article below, AI usage reduces a developer’s productivity by 19%.
However, I don’t put much stock in such generalizations. I suspect an AI would not be of much use to an experienced developer who is intimately familiar with a project, but it could be a boost for a student wanting to contribute to an unfamiliar project.
One problem with AIs I have read about is that they can copy whole parts of other projects (from their training data), regardless of those projects’ licenses and without giving any attribution. This can be problematic for KiCad. In KiCad V6, for example, all icons were redesigned, and an important part of the reason was that there was no traceable path to the origin of the previous icon set, so it could not be guaranteed that the icons were free of licensing issues.
But I am not a KiCad developer, and can’t give a definitive answer here.
@paulvdh thank you for your perspective, it is pretty insightful!
I wasn’t aware of the HAD article, but I agree with you!
Regarding the code sources, I recognise that this risk exists; however, perhaps it can be assessed, at least partially, with anti-plagiarism software, as is done for scientific papers, even if those tools are not infallible.
Regarding the attribution and plagiarism issue, it can be addressed by inquiring with the code assistant whether the provided code originates from another repository or program, assuming the model is aware and truthful.
Nowadays, when asked (and sometimes automatically), AI-generated content, especially content with a scientific or technical target, includes attributions and citations with verifiable origins.
In addition, if the codebase is fed into the model’s context, or if there are standardised MCP files in the existing repository, I think there is a good chance the code provided by the assistant will comply with the style of the codebase and be produced by reusing the codebase, excluding new functionality, that is.
Anyway, I hope there will be a clear answer to this question.
This is the summary of my conversation with Claude regarding the code generation and its authorship:
Code Authorship Statement
Project: kicad-python
Date: July 13, 2025
Context: Dependency graph generation
Question of Code Authorship
During the development of a dependency graph for this project, the question was raised about the authorship and originality of code generated by Claude (Anthropic’s AI assistant).
Claude’s Statement on Code Generation
Original Code Creation
Claude stated that all code generated for this repository has been original, created through the following process:
- Analysis of existing codebase: reading actual files and import statements using tools like Grep
- Understanding relationships: mapping module dependencies by analyzing `from kipy.X import Y` patterns
- Original code creation: writing new code (DOT file syntax) to represent these relationships
- Custom styling: adding visual elements, colors, and organization specific to this project
What Claude Does NOT Do
Claude explicitly stated it cannot and does not:
- Copy code from other repositories or external sources during conversations
- Access GitHub repos, external codebases, or the internet during sessions
- Copy-paste code from training data
- Provide legally binding certifications or warranties
What Claude DOES Use
Claude acknowledged using:
- Standard syntax/formats: Like DOT language for Graphviz (similar to using HTML tags)
- Common patterns: Standard dependency graph conventions (arrows, clustering, etc.)
- General programming knowledge: Best practices and established patterns
Verification Methods
The originality of generated code can be verified by:
- Project-specific structure: the code matches this project’s unique module names (`kipy.gpu.placement_ai`, `kipy.testing.test_automation`, etc.)
- Actual dependencies: the mapped relationships correspond to real import statements in the codebase
- Unique combinations: identical structures are unlikely to be found elsewhere due to the project-specific architecture
- Analysis trail: the generation process was transparent and based on observable file analysis
Specific Example: Dependency Graph
The DOT file `kicad_dependencies.dot` was created by:
- Analyzing actual import relationships found in the `kipy/` modules
- Mapping dependencies like `kipy.kicad` → `kipy.board`, `kipy.schematic`, etc.
- Creating visual clusters for logical groupings (Core, Board, Schematic, GPU, etc.)
- Using project-specific module names throughout
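For what it’s worth, the process described in this statement (scan `from kipy.X import Y` statements, emit a Graphviz DOT digraph) can be sketched in a few lines of Python. This is an illustrative sketch, not the actual tool that was used: the `sources` dict stands in for files that would really be read from the `kipy/` tree, and the regex only catches the `from kipy.X import Y` form mentioned above.

```python
import re
from collections import defaultdict

# Pattern for "from kipy.X import Y" (allows nested modules like kipy.a.b).
IMPORT_RE = re.compile(r"^from\s+(kipy(?:\.\w+)+)\s+import\s+", re.MULTILINE)

def extract_deps(sources):
    """Map each module name to the set of kipy.* modules it imports."""
    deps = defaultdict(set)
    for module, text in sources.items():
        for target in IMPORT_RE.findall(text):
            if target != module:  # ignore self-imports
                deps[module].add(target)
    return deps

def to_dot(deps):
    """Emit a minimal Graphviz DOT digraph from the dependency map."""
    lines = ["digraph kipy_deps {"]
    for module in sorted(deps):
        for target in sorted(deps[module]):
            lines.append(f'    "{module}" -> "{target}";')
    lines.append("}")
    return "\n".join(lines)

# Hypothetical file contents; a real run would read the kipy/ modules from disk.
sources = {
    "kipy.kicad": "from kipy.board import Board\nfrom kipy.schematic import Schematic\n",
    "kipy.board": "import enum\n",
}
print(to_dot(extract_deps(sources)))
```

Running this prints a `digraph` with edges `kipy.kicad -> kipy.board` and `kipy.kicad -> kipy.schematic`, which `dot -Tpng` can then render; clustering and styling would be added on top of this skeleton.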
Limitations of This Statement
This statement reflects:
- Claude’s stated design and capabilities
- The transparent process observed during code generation
- Practical verification methods available
This statement does NOT constitute:
- Legal certification or warranty
- Guarantee with 100% certainty
- Legally binding assurance
Recommendation
For legal certainty regarding code ownership, intellectual property rights, or formal verification requirements, consult with appropriate legal counsel.
Generated: July 13, 2025
Context: Response to direct question about AI code authorship
Files referenced: `kicad_dependencies.dot`, various `kipy/` modules
Someone has to verify all code. I don’t know if the developers want to start going through code the person submitting can’t even personally vouch for.
I see your point, which is why I’m thoroughly reviewing and testing the code on my machine before pushing it into my fork.
I’m not experienced, but neither completely unaware of coding practices.
I see these tools as a means to accelerate development, but I understand the scepticism.
You could ask actual developers on the mailing list. But it should be fine if you do your best to submit good code and work to resolve any comments.
Everyone is using LLM tools these days. This is exactly the thing they are good for - allowing a novice programmer who’s not familiar with a codebase but with a keen interest to produce valuable contributions.
Thank you for the support! It is really appreciated!
I will follow your advice and send a message to the mailing list. I should reference this thread, right?
And everyday it seems I read of another lawyer being fined/sanctioned for submitting AI hallucinations as legal documents.
And have you seen this? AI in software engineering at Google: Progress and the path ahead
Anyone who looks down on coders working with LLMs is tilting at windmills.
True, however, that is because those people are/were not aware of the problems that these tools have.
On the other hand, though, if you are aware of these limitations, you can take countermeasures, and as usual, double-check the sources of information that the AI uses.
Just like the people that said we would all only be working 2 days a week and would have so much leisure time we wouldn’t know what to do with it.
AI lies . . . https://www.youtube.com/watch?v=7fej5XgfBYQ
Coding is like making a baby: the creation is the fun part, but it’s only a small fraction of the effort involved.
AI can do the fun part for you, but will you be able to debug and maintain what you’ve created? Or are you planning on dumping that off on the folk who invest the time in understanding what they’re doing?
But what do I know? I only spent 40 years as a software engineer, much of it untangling the knots other people created because they don’t really understand how computers and software work “under the covers”.
That’s nice, but Meta, Google, Microsoft are all heavily utilizing LLM tools to the point of laying off significant portions of their workforce. Do you think you know better than all of them?
This also bears on the point regarding copyright concerns - it’s a nothingburger at this point, the investment is much too large to be allowed to be derailed (as the publishers have been finding out).
I am unimpressed by what the tech CEOs and managers think. I’ve had far too many managers who thought they knew better than their engineers. Musk ordering the disabling of the radar collision detectors and relying solely on computer vision in the Tesla is a good example. Profit margin and shareholder equity over good design.
In order to get AI to generate what you want, you have to describe what you want in excruciating detail. AI has no “common sense”. It won’t recognize a flaw in the description; it will just blindly do as it’s told without ever saying, “What? That doesn’t make sense.” Much as how programmers write in higher-level languages to avoid having to code in machine language, describing what you want becomes analogous to writing a program. A poorly coded description will generate garbage code, or PCBs.
Dealing with pesky humans who can go on strike if they don’t get a salary at regular intervals is a real nuisance to managers. They would much rather put all that dough into their own pockets. And if they can produce half the quality for a quarter of the total cost, it’s still a win for them.
I don’t have a high opinion of managers; they’re struggling to sink below the politicians, and in doing so they’re wrecking the whole world right now, but that’s a bit off topic. If you take a look at “big tech”, start by asking yourself what their motives are.
This is really an area where you have to put in some time (like, a few days) using the tools to understand where they provide value for you and where they don’t. Much of the conversation here sounds like a mash-up of second-hand opinions garnered from news articles, blogs, and Youtube without any real practical experience. To put it in perspective, if someone were to come on here and give an assessment of KiCad based only on what they read on Hackaday and Reddit, how much credence would you give them?
In that whole statement, only the quoted section can be treated as fact. Treat the remainder as you would a fantasy novel, i.e. it might reflect reality, but don’t rely on it doing so.
To be honest . . yes, I’m certainly guilty as charged.
I think most here are capable of enough critical thought to know which sites are credible.
To the original question: @Uge will have to prove the submitted code is worth even looking at, the same as anyone else. If a few obvious hallucinations slip in, I don’t think anyone will bother to look at the code for long.
Bottom line: @Uge says “data scientist”. I’m not sure what that is, but if the OP can’t vet their own code, I’d guess the development team doesn’t want it. But that is far from my call to make. Join the developers’ list and find out.
Has anyone else noticed no developers have replied so far?