KiCad: The case for database-driven design

What craftyjon said -

A database library link does not need to replace the existing way libraries are developed or managed, although I can imagine that its implementation may affect the current way parts libraries are handled.

Also, only a database link is needed. Actually packaging a database with KiCad would be gross bloat for a feature that some may not use. And yes, the link should be agnostic of what database is out there (or at least configurable for several different types of databases). I do believe that this feature, once people start to understand it, adds value far beyond just businesses or large teams. I found it invaluable as a one-man band.

Last, my post was not intended to berate developers or the development process, but to keep this topic alive. Not just with developers, but with the community. The more dialog on it, the better the implementation needs will be understood and shaped (although it seems like craftyjon knows exactly what the need is, with respect to its implementation in other layout packages), and the better understood it will be by the community if/when it is implemented.

That's fine. I would suggest releasing documentation on this as soon as it might be practical in order to allow people/organizations to start some of the work that needs to be done at their end.

I have worked at companies with a team dedicated to part creation and library management. In other words, EEs don't make parts; they ask the library team to create, qualify and add them to the library. In my current business every EE creates parts for the library (I use the term interchangeably with database); we don't have a dedicated library team. In both cases, the transition to a DB-based library for KiCad will require a bunch of work. This could take months to complete. The earlier they can get started the better.

I do realize this is FOSS. The work being done is nothing less than amazing. It will take however long it takes. And that's the way it is.

The documentation won't be released until the code is done, at a minimum 🙂

There will probably be outlying cases, but I donā€™t actually anticipate that in the average case it will be too much work. I will be doing some tests to see that this is the case, though.

Drifting a bit off topic: MS Access is in many ways less capable than SQLite, and it is definitely single-user, with very non-standard Visual Basic functions.

For the record, we were just going to implement ODBC and everyone can then use whatever flavor of database they want. Even DB2 on a mainframe if you want.
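
To make that concrete, here is a minimal sketch (in Python with pyodbc) of the kind of part lookup an ODBC-based link could perform. The DSN name, table, and column names are invented for illustration; the point is only that the actual engine behind the DSN (SQLite, PostgreSQL, MS SQL, or indeed DB2) is invisible to the client.

```python
# Hedged sketch: a part-metadata lookup over ODBC. The DSN "parts" and the
# "resistors" table/columns are hypothetical, not anything KiCad defines.
import pyodbc  # pip install pyodbc

# The ODBC driver manager resolves the DSN; the client code never knows
# which database engine is actually answering.
conn = pyodbc.connect("DSN=parts;UID=readonly;PWD=readonly")
cur = conn.cursor()

cur.execute(
    "SELECT part_number, value, symbol, footprint, datasheet "
    "FROM resistors WHERE part_number = ?",
    ("RES-0001",),
)
row = cur.fetchone()
if row:
    print(dict(zip([d[0] for d in cur.description], row)))
```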

Fantastic - I think this (ODBC) is how other platforms do it. Looking forward to seeing this in the future!!

Nice. Can't wait.

Is there any kind of discussion on what the approach will be, at a high level perhaps?

For example, direct placement from the DB into schematic/PCB, or will we export from DB to conventional libraries and then place from there? In other words, the "supermarket" model. The DB is the supermarket. The library is your shopping basket.

We are likely to implement this supermarket model ourselves in the next few months. The idea being to use a single symbol and schematic library as bridges from the database into the design.

One advantage of this approach (given how KiCad works today) is that these automatically created libraries are stored with the project. Yes, I know that the schematic stores components. However, I like the idea of having design-specific libraries available as well, and I don't think there's currently a way to export/import from schematic/PCB to stand-alone libraries.

The other interesting thing is that the selection set from the database will be kept in an Excel file, which means that this will be 99% of the way towards a clean BOM. The only thing missing might be the quantity per component. It also makes it so that recreating the stand-alone libraries from scratch is as easy as re-running the DB-to-Libraries utility using the Excel file as the source.
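
As a rough illustration of the BOM half of that workflow, here is a small Python sketch that reads a selection set (a plain CSV stands in for the Excel file, to keep the example dependency-free), looks each part up in a hypothetical SQLite parts database, tallies the missing quantity-per-component, and writes a BOM. Every table, column, and file name here is an assumption, not part of any existing tool.

```python
# Hedged sketch of a "DB-to-BOM" step: the selection set is read from a CSV
# (standing in for the Excel file), parts are looked up in a hypothetical
# SQLite database, and quantities per part number are tallied into a BOM.
import csv
import sqlite3
from collections import Counter

def build_bom(selection_csv: str, parts_db: str, bom_csv: str) -> None:
    # Quantity per component = how often each part number appears in the selection.
    with open(selection_csv, newline="") as f:
        quantities = Counter(row["part_number"] for row in csv.DictReader(f))

    conn = sqlite3.connect(parts_db)
    conn.row_factory = sqlite3.Row

    with open(bom_csv, "w", newline="") as f:
        out = csv.writer(f)
        out.writerow(["part_number", "description", "manufacturer", "qty"])
        for part_number, qty in sorted(quantities.items()):
            part = conn.execute(
                "SELECT description, manufacturer FROM parts WHERE part_number = ?",
                (part_number,),
            ).fetchone()
            if part is None:
                continue  # not in the database; a real tool should flag this
            out.writerow([part_number, part["description"], part["manufacturer"], qty])

build_bom("selection.csv", "parts.sqlite", "bom.csv")
```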

Not a perfect set of ideas, but something I have used in the past with EDA tools that, say, 20 years ago, were at about the same data management level as KiCad is today (in rough strokes).

I am not sure I follow this, but there is not going to be an "export" – a database library will be another type of library you can configure and enable in your system library configuration.

It does not really make much sense to support project-local database libraries, as actually talking to a database will in general require additional software besides just KiCad. But, I think we are talking about different things: the database libraries feature is not about automatic generation of KiCad library files – it is about pulling part metadata from a database in conjunction with symbol/footprint data from existing KiCad library files, at the time when you place a part onto a schematic.
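
To make that split concrete, here is a purely illustrative parts table in which each record carries metadata plus references to symbols and footprints that still live in ordinary KiCad libraries. The schema below is an assumption for the sake of the example, not KiCad's actual format; only the 'Device:R' and 'Resistor_SMD:R_0603_1608Metric' identifiers are real library entries.

```python
# Purely illustrative schema: metadata lives in the database; symbols and
# footprints stay in normal KiCad libraries and are only *referenced* here.
import sqlite3

conn = sqlite3.connect("parts.sqlite")
conn.executescript(
    """
    CREATE TABLE IF NOT EXISTS parts (
        part_number  TEXT PRIMARY KEY,
        description  TEXT,
        value        TEXT,
        symbol       TEXT,   -- e.g. 'Device:R', resolved via the symbol library table
        footprint    TEXT,   -- e.g. 'Resistor_SMD:R_0603_1608Metric'
        datasheet    TEXT,
        manufacturer TEXT
    );
    INSERT OR REPLACE INTO parts VALUES (
        'RES-0001', 'Chip resistor, 10k, 1%, 0603', '10k',
        'Device:R', 'Resistor_SMD:R_0603_1608Metric',
        'https://example.com/datasheet.pdf', 'Generic'
    );
    """
)
conn.commit()
```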

craftyjon -

I only had time to quickly scan it but your spec doc looks fantastic!

My confidence in seeing this implemented is way up seeing that there's actually quite a bit of thought and understanding put into it!

Thank you for the link!

ODBC is definitely the way to do it: no DB-specific implementations, only standard interfaces.

@craftyjon

Just had a chance to go through the document you provided. I need some time to think about it. On the surface it looks like a good path forward.

The one part that caught my eye was:

The actual symbol and footprint data comes from existing libraries that must be present in the library tables just like today.

I understand why, of course, I just haven't thought through what this might mean.

You are linking to a symbol, which currently contains four mandatory fields. Does this mean KiCad will actually use these fields or will it only grab the symbol graphics? I would suggest all you want is the graphics.

What comes to mind first is that this would establish a situation where cloning the environment would require full copies of the relevant symbol and footprint libraries as well as access to the database and perhaps even some version control.

What happens if someone edits a symbol or footprint in a linked library? Now the database pulls graphics that are different from when the part was created.

It might be best to actually import the graphics definition into the database record and make the database the sole repository of truth in the design. Changes to the symbol or footprint libraries would not break the database definitions.

If the symbol and footprint editors are able to work with a definition passed to them without having to access a library, one could actually consider editing and maintaining the graphics directly in the database without having to touch conventional libraries. Since everything is coded in the form of s-expressions (plain text), inclusion in the database as a field, as well as passing it back and forth to the symbol and footprint editors, should be doable.
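
A minimal sketch of that idea, assuming a hypothetical SQLite table and a symbol file on disk: because the symbol definition is just s-expression text, it can be stored in a TEXT column when the part is qualified and handed back out later, untouched by subsequent library edits.

```python
# Hedged sketch: freeze the symbol graphics (s-expression text) into the
# database at qualification time. File, table, and column names are invented.
import sqlite3

conn = sqlite3.connect("parts.sqlite")
conn.execute(
    "CREATE TABLE IF NOT EXISTS symbol_graphics ("
    " part_number TEXT PRIMARY KEY,"
    " symbol_sexpr TEXT)"  # full s-expression body of the symbol
)

# Store the symbol text as it exists the moment the part is qualified.
with open("MyPart.kicad_sym", encoding="utf-8") as f:
    conn.execute(
        "INSERT OR REPLACE INTO symbol_graphics VALUES (?, ?)",
        ("RES-0001", f.read()),
    )
conn.commit()

# Later, pull the frozen definition back out, untouched by library edits.
(sexpr,) = conn.execute(
    "SELECT symbol_sexpr FROM symbol_graphics WHERE part_number = ?",
    ("RES-0001",),
).fetchone()
```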

Just my initial thought. I'll come back to this if something else pops up.

Correct

Correct, just like the existing library system. But just like the existing library system, the points at which a schematic or board is updated to incorporate changes from the library are chosen by the user as manual operations. It is up to the user or organization to develop any kind of review process / workflow they want to use.

This is not the approach we're taking at the moment. We could consider this as an option in the future, though (it would always have to be an option, because there are some workflows where the separation of the metadata from the graphics is important).

In a strict sense, once a component is fully defined and becomes "golden" it should be very difficult to explicitly modify it. Agreed in that some of this is an organizational problem and not KiCad's. I get it. I also understand that the schematic will store a snapshot of the component definition, which means that, for the most part, it is safe from a mistake made at the library/DB level.

The scenario I am looking at is one where a change is made to a linked symbol or footprint and that part is used in a new design. Again, it is probably fair to say that this is an organizational problem. I am just highlighting a potential issue.

One real-world example I have is what we've done over the years with some large pin-count components, like FPGAs.

Rather than having a one-and-only symbol, the symbol changes for every design. The arrangement of pins, based on how the banks might be used, is best optimized so that the schematic can be cleaner and easier to read. When you are dealing with 500 to over 1000 pins, things can get messy very quickly.

And so, we have the same component with different schematic "views", if you will. I won't get into how we are dealing with this (it's a long discussion), just providing it as an example.

Another one would be using the same microcontroller in different designs. It is often useful to create new symbols with pin layouts that fit the intended application better and allow for cleaner in/out flow of signals, etc.

Not a simple problem.

I have also experienced all of those scenarios before. Currently, adding software features to enforce process workflows (e.g. the "golden" part) is not part of the roadmap, but it could be considered in the future. In either case, this would be orthogonal to whether or not you are using a database to store metadata.

I have a strong dislike for software trying to enforce a certain workflow.

Currently, the way microcontrollers just have their generic pins defined, and can be edited both simply and quickly, is quite nice, and that should never disappear from KiCad.

I am not sure what you have in mind by making it difficult to change the "golden component". It certainly is not your intention to make life difficult for the people who maintain the database.

I don't know how the matching of database back-end info and KiCad symbol graphics is going to be connected. Maybe it should be a separate tool, loosely derived from the "rescue dialog", with a comparison of both the graphics and metadata, which can then copy parts (or all) of that data from one side to the other.
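
For illustration, here is a toy version of such a comparison step: diff the metadata coming from the database against the fields stored in the placed symbol and report what could be copied in either direction. The field names and example values are hypothetical.

```python
# Toy "rescue-style" comparison: report every field where the database and
# the placed symbol disagree. Field names and values are hypothetical.
def diff_fields(db_record: dict, symbol_fields: dict) -> dict:
    """Return {field: (database_value, symbol_value)} for every mismatch."""
    mismatches = {}
    for field in sorted(set(db_record) | set(symbol_fields)):
        if db_record.get(field) != symbol_fields.get(field):
            mismatches[field] = (db_record.get(field), symbol_fields.get(field))
    return mismatches

db_record = {
    "Value": "10k",
    "Footprint": "Resistor_SMD:R_0603_1608Metric",
    "Datasheet": "https://example.com/r10k.pdf",
}
symbol_fields = {
    "Value": "10k",
    "Footprint": "Resistor_SMD:R_0805_2012Metric",
}

for field, (db_val, sym_val) in diff_fields(db_record, symbol_fields).items():
    print(f"{field}: database={db_val!r} symbol={sym_val!r}")
```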

Picture this:

You are working at a company with a few dozen or a few hundred engineers. You may or may not have a library management person or team. You also hire lots of new engineers and a bunch of interns. Like it or not, things do get away from you. There's a lot of "culture" to communicate.

In this environment it is crucially important to have a reasonable lock on parts that are certified or "golden", whatever that means. The meaning of "golden" changes from application to application.

One example of this is to have certification that buying the same part number, by the same manufacturer, will actually result in buying exactly the same product. Sometimes manufacturers make changes to the die or process. For most applications this isn't an issue. However, in some domains (medical, aerospace, military) this is very important. In these cases you sometimes work with the manufacturer to arrive at a custom ordering code that guarantees you are actually buying apples when you ask for apples.

Sometimes golden components only make it on the list after a non-trivial amount of internal testing. Examples of this might be RF emissions and susceptibility, lead content, vibration, thermal cycling, etc.

So, you don't have a lock on your golden component database, and a new engineer or an intern comes in and decides to make what looks like an innocent change. And, six months later, your $500K assembly fails environmental and susceptibility testing because of it. Or worse, it isn't detected and it fails in the field.

Part of designing reliable electronics is properly sourcing and qualifying components. This is no different in consumer land. In other words, this isn't an aerospace thing. If you design a product that will be manufactured in the tens of thousands or more, you could walk into a horrible nightmare if the components are not carefully specified or qualified.

I have personally made mistakes like that early in my journey, where, all of a sudden, you experience a 30% failure rate in the field and don't know why… until you bring enough units back to have a look and discover that a stupid component substitution to save a few cents (a capacitor is just a capacitor, right?) is what killed 30% of the units.

And so, there are at least two worlds in the design of electronic devices. One where you have a lot of freedom and the design can be approached almost without serious consideration for component qualification (buy any 10K 1% resistor, no big deal), and another where you do not even dare consider using anything that has not met some level of qualification. An example of the first case is anything that is hobby, personal projects or non-critical small-batch products. Examples of the second scenario are areas like automotive and industrial product engineering.

How to implement this lock on component libraries is a matter that has to be discussed in the appropriate context. None of this is necessary at all for hobby electronics.

I'll use an unrelated example to illustrate further. We have a full CNC shop with Haas milling machines. When I design a part to be CNC machined and later create a program using a CAM tool, that program goes through a series of tests and qualification before being considered golden. Machining is an art and a science. You can't just cut along a straight line and expect accurate results. It doesn't work that way. Which means that a program and the machine setup are usually optimized after much trial and error. Once you get to a program that is vetted, it is stored in a repository of golden programs with write protection. The program can represent dozens of hours in qualification and testing. You don't want anyone mucking with it. At a minimum it could result in bad parts. At worst someone could get hurt, or it could cause damage to a machine costing hundreds of thousands of dollars.

This concept of qualified processes and elements permeates professional engineering at different levels. Not everyone taps into all aspects of it, but you'd be hard pressed to find high-quality shops that don't have reasonable versions of the above in place for all engineering disciplines.

In such a case there must be a librarian, as otherwise it will be an incompatible mess.

I too am of the opinion that this is a company process issue, not something KiCad should be controlling. If you are worried about a rogue intern messing things up, simply deny them write access to the server where the libraries are held - then any changes have to go through whatever gatekeeping and review system has been set up.
In the company I'm currently working for, the libraries (Altium) are held on a server that gets updated from an SVN repository. Changes to that library are via SVN, and tagged to a JIRA ticket (so you have to outline what you are doing and why). Any changes also must be reviewed by the guy in charge of the libraries before they can be added to the library.
Altium has no control over any of this - it's all company process. As it should be.
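
For anyone wanting to replicate that kind of gatekeeping, here is a hedged sketch of an SVN pre-commit hook (written in Python rather than shell) that rejects commits whose log message does not reference a JIRA ticket. The project key "LIB" is an assumption; the svnlook invocation and hook arguments are standard Subversion.

```python
#!/usr/bin/env python3
# Hedged sketch of an SVN pre-commit hook: reject library commits whose log
# message does not reference a JIRA ticket. "LIB" is a made-up project key.
import re
import subprocess
import sys

def main() -> int:
    repo, txn = sys.argv[1], sys.argv[2]  # arguments Subversion passes to pre-commit hooks
    log = subprocess.run(
        ["svnlook", "log", "-t", txn, repo],
        capture_output=True, text=True, check=True,
    ).stdout
    if not re.search(r"\bLIB-\d+\b", log):
        sys.stderr.write("Library changes must reference a JIRA ticket (e.g. LIB-123).\n")
        return 1  # non-zero exit rejects the commit
    return 0

if __name__ == "__main__":
    sys.exit(main())
```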

A few thoughts about a possible future…
Once a database mechanism is implemented in KiCad, it becomes a lot more attractive for medium to big companies. I've read quite a lot of requests for this feature here on the forum, on GitLab, and also on EEVblog. For quite a lot of people it is the sole reason they do not use KiCad yet.

Currently KiCad has a lack of librarians. Quite logical, as maintaining libraries is relatively boring work and (almost?) all people working on KiCad are volunteers.

KiCad is also Open Source; people and companies can use it for free, for as long as they want, but donations are of course always welcome. But donations do not have to be monetary. I spend quite a lot of time on this forum answering questions as best as I can.

If companies who need the database integration start using KiCad, they will of course create such a database for in-house use, and such in-company databases are usually thoroughly vetted and tested. It is likely that a lot of those companies are willing to share (most of) their database with the KiCad project. This could lead to quick growth of the KiCad libraries in a relatively short time (maybe 2 or 3 years after KiCad V7 is released?). I think it's a good idea, during the database-driven work for KiCad V7, to think about implementing a method that makes it easy to exchange data between databases, and about how KiCad keeps its own libraries/databases consistent when parts data comes from companies with different backgrounds and design ideologies.

I wouldn't be too sure about companies sharing their databases. Often purchasing information is proprietary, and when it isn't, it is highly custom, based on the purchasing contracts the company has with its vendors.
