SnapEDA UltraLibrarian Automated Import

Given the SnapEDA plugin's apparent lack of KiCad v6 support, the lack of Linux support for the Desktop App, and the absence of any automated integration from UltraLibrarian, I forked an old GitHub repo that claimed to automate the import of SnapEDA and UltraLibrarian zip files. After a few days of rework, it seems to be working fairly well.
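For the curious, the core of the automation can be sketched in a few lines. This is a simplified illustration of the idea only; the suffix mapping and destination layout below are my assumptions for this sketch, not the forked repo's actual behavior:

```python
import zipfile
from pathlib import Path

# Hypothetical mapping from file suffix to destination library folder.
DEST_BY_SUFFIX = {
    ".kicad_sym": "symbols",
    ".kicad_mod": "footprints.pretty",
    ".step": "3dmodels",
    ".stp": "3dmodels",
}

def import_zip(zip_path, lib_root):
    """Unpack a vendor zip, sorting library files into per-type folders."""
    lib_root = Path(lib_root)
    imported = []
    with zipfile.ZipFile(zip_path) as zf:
        for info in zf.infolist():
            name = Path(info.filename).name
            dest_dir = DEST_BY_SUFFIX.get(Path(name).suffix.lower())
            if not name or dest_dir is None:
                continue  # skip datasheets, readmes, other EDA formats
            out_dir = lib_root / dest_dir
            out_dir.mkdir(parents=True, exist_ok=True)
            (out_dir / name).write_bytes(zf.read(info))
            imported.append(name)
    return imported
```

The real tool also has to register the footprint library in the fp-lib-table and patch 3D-model paths inside the footprint, which is where most of the rework went.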

With this post I want to probe the community on:

1.) Is anyone willing to help me make this better (current enhancement ideas are in the GitHub issues), particularly if you are a FreeCAD or KiCad Python API wizard?
2.) Was I completely wrong, and does such a tool already exist somewhere I simply haven’t found yet?

Does the SnapEDA or UltraLibrarian license actually allow you to do this?

Generally those services require a login because they are trying to “monetize” you. If you look in the files you get, those files themselves have a restrictive license.

Personally, I don’t find things like SnapEDA to be that useful. The footprints and symbols tend to have significant errors if you start looking. Generally, it’s faster for me to construct a footprint/symbol rather than try to hunt down the errors from SnapEDA.

About the only time I grab something from SnapEDA is when it is the only source for a 3D model. However, if I hit that, I tend to go searching for a different component from a competent company that actually pays real engineers to generate a 3D model and put it on their website.

This whole trying to “lock-in” people with library management is a pox upon the industry. It’s what finally drove me away from Altium and onto KiCad completely.

Does the SnapEDA or UltraLibrarian license actually allow you to do this?

Generally those services require a login because they are trying to “monetize” you. If you look in the files you get, those files themselves have a restrictive license.

This still requires you to log in and download the zip files yourself, so I don’t think that would be an issue.

Personally, I don’t find things like SnapEDA to be that useful. The footprints and symbols tend to have significant errors if you start looking. Generally, it’s faster for me to construct a footprint/symbol rather than try to hunt down the errors from SnapEDA.

I haven’t noticed any “errors” so to say yet, but I often have to rework symbols because they’re unnecessarily split into two parts and things like that. What do you typically find to be wrong? Symbol creation can be quite quick, but I find footprint creation to be a very tedious task (though I haven’t tried it in v6 yet, so maybe it’s more efficient?). I found getting dimensionally accurate footprints in KiCad < v5 to be a nightmare. Has this gotten significantly better?
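Incidentally, for simple parts some of that tedium can be scripted away. Here is a minimal, hypothetical sketch (my own, not tied to any plugin) that emits a two-pad SMD footprint in KiCad v6's s-expression format straight from datasheet dimensions; the grammar shown is simplified, since real .kicad_mod files carry extra fields (courtyard, silkscreen, attributes):

```python
# Sketch: generate a two-pad SMD chip footprint (e.g. 0603) as
# KiCad v6 s-expression text. Dimensions are in millimetres.
def chip_footprint(name, pad_w, pad_h, pitch):
    """Return simplified .kicad_mod text for a two-pad chip part."""
    pads = []
    for num, x in ((1, -pitch / 2), (2, pitch / 2)):
        pads.append(
            f'  (pad "{num}" smd rect (at {x:.3f} 0) '
            f'(size {pad_w} {pad_h}) (layers "F.Cu" "F.Paste" "F.Mask"))'
        )
    return (
        f'(footprint "{name}" (version 20211014) (layer "F.Cu")\n'
        + "\n".join(pads)
        + "\n)"
    )

print(chip_footprint("R_0603", 0.8, 0.95, 1.65))
```

Once the land-pattern math is parameterized like this, a whole family of chip sizes falls out of one function, which is roughly what the various footprint-wizard scripts do.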

Also, I’m really surprised that there isn’t a bigger community-driven library management push. The idea that every engineer is reproducing the same footprint, symbol, and model for a part is grossly inefficient and much more prone to errors than if we were all checking each other’s work. I’ve seen a few efforts on GitHub and GitLab to move toward a community-managed library, but I haven’t seen anything as far-reaching (in part count) as SnapEDA and UL.

This whole trying to “lock-in” people with library management is a pox upon the industry. It’s what finally drove me away from Altium and onto KiCad completely.

Same here.

Every 3D model I’ve seen from SnapEDA has been a block approximately the size of the part. When you are looking for a 3D model of a connector, you want reality, so your board will fit into your enclosure!

I’m not sure whether SnapEDA’s 3D models are worse than their component symbols. Depends on which I’m looking at.

Checking footprints is tedious. The reason you see “a few efforts” is that every so often someone says, “We need a community footprint resource!”, a GitHub project is created, parts are added, and then … reality sets in.

I think the best option for a community library project would be for interested persons to join the KiCad library maintainers and contribute to the “official” libraries. That already exists, and they’re always asking for help, so why duplicate the effort?

The reality is that there’s no quality checking and people will abandon the library because they can’t trust it.

But hey, says someone, why not add a built-in crowdsourced quality check, a 5-star system! That won’t work, because you can’t trust the stars. Any random user can give 5 stars if it happens to work for them (or even without any actual checking or testing). This would need a trusted team who inspects the contributions, and the contributions should obey some rules. And here we come to the KLC and the official libraries.

The problem with the official library project is the lack of inspectors. Finding would-be contributors wouldn’t be a problem, but it’s quite tedious to learn and follow the KLC rules.

This really is a ‘trust but verify’ situation, and that goes for every part of your design. That’s the reality here. Having a starting point that turns out to be correct is really nice; then the only time spent is the time you used to verify.

Well, if we’re being fair, the real problem is that the bloody manufacturers can’t even get together and come up with a datasheet standard that would let us extract footprints programmatically. There is no money to be made in it, though, so nobody ever complies, and then we’re back at the original problem again.

However, even if that existed, there would still be some issues. Just like BOMs, footprints tend to be “special snowflakes”. Your process can handle .40/.20 vias, while mine only handles .65/.40 and her process only handles .75/.50 vias. The thermal vias of each of our footprints are going to look very different.

Thanks everyone for your comments, but I’m sensing an overwhelming amount of pessimism here :D. I’ll keep trying to feed some motivation:

Yep, I have maintained my own branch of that for a few years now. I haven’t tried to PR anything though, but I really should learn the rules there, as this is definitely a good path forward. I’m curious how much of the SnapEDA/UL content could be added there (with slight modifications/fixes) without breaking any such license terms mentioned by @buzmeg, as these are the kinds of things I tend to add on my branches but don’t push.

Hmm, although I can appreciate the concerns, I think that gets a little too close to a larger argument against open-source software in general, which sounds quite counter to the idea of the entire KiCad platform. I sincerely don’t believe that any “trusted team” exists. Everyone makes mistakes; I don’t know how many datasheet corrections I’ve submitted to manufacturers, and they are the ones that are supposed to be the “trusted team”, but without user contributions highlighting their mistakes, these errors would remain in the datasheets for MUCH longer, if not indefinitely. Again, the more eyes on something, the more likely someone will notice that something is wrong.

Besides, I don’t think you need QA on the contributions as rigorous as you suggest. If a new contributor makes a PR without any reasonable commit message(s), it’s pretty easy to throw out quickly, whereas if they’ve explained in detail what was changed and why, it becomes pretty fast to verify the accuracy of the change. I can’t say I’m too familiar with the hierarchy of GitLab roles for PR approval, but I believe the more eyes on a particular piece of code, the better/more accurate it will become given enough time. If one crappy PR/commit slips through, someone else can make a reverting commit just as easily.

Also, I’m not sure if GitLab has the same functionality, but GitHub has features to alleviate this kind of “trust/talent” bottleneck: instead of depending on one of a handful of “trusted team members” to approve a PR, you can require 3 or 5 lower-level team members to approve it (the number and rank of these reviewers can be set arbitrarily). In this way, it becomes much harder for someone to merge something erroneous. Just saying, there are plenty of advancements in git collaboration tools that make this MUCH easier than it is sounding here.
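For concreteness, GitHub exposes this through its branch protection REST API (PUT /repos/{owner}/{repo}/branches/{branch}/protection). Here is a sketch of the JSON body that would require, say, 3 approving reviews before a merge; the count and the permissive values for the other fields are arbitrary examples, and you would send this with any HTTP client using an admin-scoped token:

```python
import json

# Sketch: build the body for GitHub's branch-protection endpoint so that
# N approving reviews are required before merging into the branch.
def protection_payload(required_approvals=3):
    """Return the JSON-serializable branch-protection settings."""
    return {
        "required_pull_request_reviews": {
            # any N collaborators with write access can approve;
            # no single gatekeeper is needed
            "required_approving_review_count": required_approvals,
            "dismiss_stale_reviews": True,
        },
        # remaining required fields of the endpoint, left permissive here
        "required_status_checks": None,
        "enforce_admins": True,
        "restrictions": None,
    }

print(json.dumps(protection_payload(3), indent=2))
```

With `dismiss_stale_reviews` set, new pushes invalidate earlier approvals, which closes the obvious loophole of approving one thing and merging another.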

I can’t claim to know, but isn’t this what SnapEDA and UL are trying to do? I’d guess that rather than pulling from datasheets, they’re getting the design library files (Altium/OrCAD/etc.) from the manufacturers, with the incentive that if their parts are on SnapEDA/UL, more people will use them since they’ll be easier to integrate into designs. I think there is indeed money in that.

I can only speak for myself, but prior to all the supply chain issues, my first filter on Digi-Key after “Active” and “In Stock” was “EDA/CAD Models”, because I couldn’t be bothered to use a part without them when so many parts had them. Surely the manufacturers have library files and aren’t drawing these things in Inkscape :D, and this is how they generate the datasheet entries, so why try to scrape it from a PDF when you can use the raw file?

As much crap as SnapEDA and others take for monetizing our usage, I think it’s a great solution to what everyone else seems to be warning against (lack of funding/manpower to centralize all this data); they’ve found a way to make the process sustainable. I’m quite curious whether anyone on the KiCad library team has made any efforts to get this data directly from manufacturers as well?

I realize you are just playing devil’s advocate here, but I think this is a bit of a logical fallacy: if we can’t solve everything, don’t try to solve anything? Sure, you should still have the ability to customize these things, and the more complex the part, the more necessary that will be for the reasons you mentioned. But for any low- to mid-complexity part, I don’t want to be bothered to draw the same 10-200 lines/rectangles that someone already drew two days ago. It’s reinventing the wheel, 10s to 100s of times a day. As an engineer, I simply can’t get behind such inefficiency. Yeah, I could (theoretically :smiley: ) build an op-amp out of single transistors every time I needed one, but come on, there are more efficient ways of doing this.

Thanks again for everyone’s comments!

Maybe we are talking about somewhat different things. Footprints must work; they must have their pads in the correct places and with the correct dimensions. No library can be perfect and everything must be checked, but a library that takes random contributions without control would mean more work than creating one yourself.

This has nothing to do with Open Source. It’s the same for any “library” for any software. The problems are of course different and an end user can’t always choose to create things themselves. Take for example software app centers. Unfortunately I can’t trust the starring systems or anything.

By “trusted” I mean that I can know it has gone through some real inspection or verification where the responsible persons know what they are doing, and some rules or standards are obeyed. This happens with the official KiCad libraries. They are not perfect, but much better than anything that accepts all contributions without restrictions (and relies on crowdsourced verification).

Anyone who really wants to help with high-quality KiCad libraries should start to inspect existing pending contributions to the official libraries against datasheets and KLC.

A lot of my thoughts from some time ago were presented in a few comments in this topic. I liked that, thank you all.

Libraries and related material are hosted here: 9 repos in total, of which only 4 are installed with KiCad.

The KLC is here, and that rule set must be applied, though not every rule can always apply to everything.

The best way (IMHO) to take a step toward helping with library management is to search through older closed or merged PRs/MRs with a number of comments in them (including the ones that still exist in the previous GitHub repos). Some of those comments are a great source of information about the decisions taken in particular situations.

Getting comfortable with Git will help a lot, but Git is not actually required just to review an MR and add some comments to it. When the MR is ready, you can ping someone to review and merge.

Maybe try to contribute something small for a start and see how things roll.
Note that a reply might come only a few months after your comment/MR (I know this is not a good thing, but it is the current situation).
Please be patient and try not to feel discouraged if that happens.

efforts to get this data directly from manufacturers

There are some discussions on GitLab regarding the automation of a few things.

Any kind of help is valuable.

This isn’t “Devil’s Advocacy”.

Money is riding on whether I can trust your part or not. My reputation is riding on whether I can trust your part or not.

It simply isn’t that difficult to create a footprint for a semi-standard part. And generally I can trust the generator as it gets used and tested far more often than some obscure part.

The parts that take me a long time are the ones that have some sort of coupling between the electrical and the physical. Where is pin 1 relative to the polarization notch? Which pins are tip and ring, and what is their correspondence to left/right? Why does that 3D model have such a screwball origin, and does it really intersect the edge of the board like that?

I’m simply never going to trust a library on those parts. I will have to expend as much time checking those parts as building them myself. So, a “parts library” doesn’t save me any time.

This is, of course, completely different if I’m working for a company that has a library steward whose job it is to make sure that those libraries work. Now, it’s somebody else who takes the money and reputation hit if something goes wrong. I can “trust” that library because there is someone with a strong vested interest in making sure it is correct and even still those libraries regularly have issues.

As far as I understand, this is not true. These companies do not have some magical access to design files at the manufacturing companies (not that these would be any more accurate!). They are using the same datasheets as the rest of us.

Someone will certainly correct me if I am wrong.

I think we’re getting a bit off topic for this particular post, but of course you are welcome to choose individually what you trust and what you don’t. I’m just not seeing the difference between a footprint and software with regard to open source. If the ERC code in KiCad has bugs, you can’t trust that either. In the end you have to place your trust somewhere, and I personally place it in the code/footprints that have had more eyes on them. And by trust I mean what @hermit said: “trust but verify”.

Yep, you’re right, but I think we’ve veered a fair bit off topic on this post :smiley: . I’m all for supporting official KiCad library development, but SnapEDA and UL have a LOT more parts, and making use of that data somehow was the original intent of this post. On one hand, the KiCad libs allow everyone to maintain and improve them, but the part count is still relatively low. On the flip side, SnapEDA and UL have a TON of parts, but we don’t have the ability to update/manage them. Has anyone here submitted error reports to SnapEDA or UL and seen quick/positive progress on them? Again, I’m curious how much we can “borrow” from SnapEDA/UL without getting into any trouble.

Hmm, I think you are just continuing the “if you can’t solve everything, you can’t solve anything” fallacy here. Of course more complex parts will require further attention to detail, and of course, in the end, all fault lies with the designer. If you were using a paid alternative to SnapEDA (I’ve never used one, but I know they exist), then you could blame them, but anything free or open-source is always user-beware. If you find that checking existing libraries takes more time, then you must be very efficient at making new ones, and if that’s what suits you, by all means. For me personally, though, this seems like a giant waste of time that can be engineered out of the workflow.

I don’t think anything magical is necessary here. As an example, TI’s eval boards usually come with Altium design files, which include footprints, symbols, and models for the individual components. TI locks down its library vault (you can’t openly access anything and everything), but you can make local copies of the libraries for everything inside the eval project. The libraries have metadata showing which vault they came from, and these point to a TI-managed vault. So TI, at least, is managing at least some of the libraries for their parts in an Altium vault. I don’t see it being too far-fetched that TI gives SnapEDA read access to this vault.

SnapEDA even has a web interface for manufacturers to upload this very data:

After reading that, and some more FAQs on SnapEDA, it looks like another aspect of their business plan is to have manufacturers pay for library creation in exchange for higher visibility, so TI and others are likely paying SnapEDA to create new universal (any CAD platform SnapEDA supports) versions of their Altium-managed libs.