How much could go on a chip if you could populate your own silicon?

I wondered a while back how soon circuit boards might disappear in favor of customized silicon. Would KiCad someday have to take this into consideration and even incorporate chip design?


I wonder how long it will take until it’s affordable for non-professionals. Interesting theme, though.
In the past there were already quite a few questions related to KiCad for IC/chip/silicon design.
So the train seems to be rolling already.


I’d think it would go the way of PCB manufacturing equipment. As the big guys upgrade to smaller die sizes they sell off the old equipment, and soon the lower-tier players have it.

I like your logic. But I think that diffusion and the other IC processing steps have always been much slower: weeks or months instead of the minutes or hours it takes to etch a 2-layer board. And the price might be in the tens of thousands of dollars rather than tens of dollars (??). Could you buy 5 or 10 chips on one wafer, and who would “bond it out” for you? With my dumb ideas, I think I need to wafer a while. :wink:


Let me share some thoughts on this topic.
Will PCBs disappear? I believe not. Even if silicon chips become much larger and/or more powerful, there will still be interfaces that require at least a power supply and a few control lines to external actuators or switches. That calls for a PCB or some other board with a minimum of routing and processing.
Customized silicon? I had a glance at the open-silicon initiative with Google. It is an interesting approach to encourage more people to “think big” and do a customized IC instead of a PCB design with a handful of chips. However, I noticed in the past that people who were perfectly familiar with PCB design were afraid to transfer their ideas into silicon. Back then you had to place each transistor yourself, so you needed a lot of verification that your design would actually work. The referenced approach picks up the trend toward IP blocks, and the community is encouraged to generate many such blocks for open use. In the end the workflow resembles PCB layout: place your IP cores, arrange them, and wire the blocks together. At first glance the work is similar to PCB layout, just at different dimensions.
From a cost point of view it will be more expensive than a PCB design, so for private persons it is not likely an alternative, but small companies or groups can benefit from it.
The most critical point is testing. When you have your silicon in hand you can cross your fingers that it works; however, I have never seen a design that worked the very first time. At least one redesign is necessary. On the way to that redesign you have to test and analyze your design, and that makes the big difference. For PCB layouts everything can be done at a macroscopic level; for silicon you usually don’t have the equipment.
In order to minimize this risk you have to check your design carefully in advance: DRC checks, LVS verification, and intensive simulation are all needed.
What does that mean for KiCad? Can KiCad cope with such trends? I would say it depends.
It depends on the improvements and extensions that become available in the future. As a key feature, KiCad needs to handle nested designs in a similar manner to nested schematics. I have seen several approaches explaining how this can be done with current versions, but in my impression it is still too complicated.
The support for simulation must be improved a lot. The concept of a test bench would be helpful: you define different test setups for your circuits and subcircuits and link your schematics to these definitions. In principle this can be done with nested schematics too, but you currently have only one master sheet.
Furthermore, the simulator must be able to import models other than SPICE and its derivatives; high-level description languages must be supported.
And last but not least, we need to take care of parasitic elements. As with high-frequency or switching designs, you have to follow design guidelines to minimize the performance loss caused by the layout. Chip suppliers often provide reference layouts for critical pins; you cannot expect that for silicon design, so the design tool must be able to extract the parasitic resistances, capacitances, and inductances from the layout and feed the critical ones back into the simulation to check the more realistic behaviour. This kind of tooling is essential for silicon design and is currently completely missing in KiCad.
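To illustrate why the extracted parasitics have to flow back into simulation, here is a minimal sketch of a first-order Elmore delay estimate for an on-chip wire modeled as an RC ladder. All component values are illustrative; in a real flow they would come from the extraction tool, not be typed in by hand.

```python
# Elmore delay estimate for an RC interconnect ladder.
# All values below are illustrative, not from any real process.

def elmore_delay(resistances, capacitances):
    """First-order delay of an RC ladder: sum over each node of the
    capacitance at that node times the total resistance upstream of it."""
    delay = 0.0
    r_upstream = 0.0
    for r, c in zip(resistances, capacitances):
        r_upstream += r
        delay += r_upstream * c
    return delay

# A wire split into 4 segments: 50 ohm and 20 fF per segment.
segments = 4
r_seg = [50.0] * segments      # ohms per segment
c_seg = [20e-15] * segments    # farads per segment

print(f"Elmore delay: {elmore_delay(r_seg, c_seg) * 1e12:.1f} ps")
```

Even this toy model shows the point: the delay depends on values that exist only after layout, which is why a silicon-capable tool cannot treat extraction as optional.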
So from my point of view, KiCad can evolve to support such trends. For now, however, dedicated open-source tools will be preferred. A few major improvements are necessary before KiCad becomes a candidate for silicon design. But even if they are never used for silicon design, they could significantly improve the quality of PCB designs too, and are therefore worth thinking about.
To come back to the headline of the topic: how much could you get on a chip if you could populate your own silicon? The answer is the same as for the question “How much could you get on a PCB if you can populate your own board?” Theoretically there is no limit, but practically you have to cope with the complexity and the parasitic effects, both of which grow with the part count. The better your tools are, the lower the risk of failure, but you have to apply the full set of checks. And in both cases the required experience is proportional to the size of the targeted design, which sets the practical upper limit.


Nothing would really change for PCBs. Separation of concerns is simply solid engineering. Even today, I don’t push my microcontroller into my FPGA; I put a microcontroller on my PCB and an FPGA on my PCB.

Chip design as Google imagines it is just a supercharged FPGA. If you can respin a chip for about $5K, you could kill most of the FPGA market.

What might change is that some people could do clever things with small numbers of transistors that currently can’t be done–think analog CCD delay lines in guitar pedals, for example.

What prevents chip design from being done, though, is design parasitic extraction. Unlike PCBs where parasitics mostly don’t matter, in VLSI parasitics tend to matter everywhere. And, you will note, that when parasitics in PCBs do matter, prices for tools and boards start to skyrocket.

(To this point: what you see Google and other initiatives like this doing is giving you a set of pre-extracted blocks that you are allowed to wire together so that the parasitics problem is bounded. If you want to do transistor level design, that isn’t really supported.)


I think buzmeg answers the question I thought I was asking: more a question of *what* could move to the chip if you could.

Some years ago I did a schematic level chip design for a voltage controlled current source for LED driving - we had some friendly engineers at Panavision with time on their hands happy to do the silicon layout and make sample wafers. When you get to chip level analog CMOS stuff there’s all sorts of things you can do that are very difficult to do in discretes, but equally there are things that are very difficult to do on silicon. Having 50 identical MOSFETs makes for some very interesting circuit possibilities. And being able to design your own MOSFET by changing the channel length and width is also rather tasty.
But the real kicker is cost - we were going to use an older 0.18um CMOS process, but you’re still looking at tens of thousands for a mask set. Even with the volumes we would have used it turned out cheaper to stick with a discrete (if less functional) solution.
I still have one of the engineering wafers as a souvenir.
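The “design your own MOSFET by changing W and L” point above can be sketched with the textbook square-law model: drain current in saturation scales directly with the W/L ratio you draw. The process parameters below are made-up illustrative numbers, not from any real PDK.

```python
# Square-law MOSFET drain current in saturation: picking W and L yourself
# is exactly what custom silicon buys you over discrete parts.
# k_prime and vt are illustrative, not from any real process.

def drain_current(w_um, l_um, vgs, vt=0.5, k_prime=100e-6):
    """I_D = 0.5 * k' * (W/L) * (Vgs - Vt)^2, valid in saturation."""
    if vgs <= vt:
        return 0.0  # below threshold: ignore subthreshold conduction
    return 0.5 * k_prime * (w_um / l_um) * (vgs - vt) ** 2

# Doubling W/L doubles the current at the same overdrive voltage.
i1 = drain_current(w_um=1.0, l_um=0.18, vgs=1.0)
i2 = drain_current(w_um=2.0, l_um=0.18, vgs=1.0)
print(f"current ratio: {i2 / i1:.1f}")
```

This linear-in-W/L scaling is also why 50 identical drawn devices match so well: they share the same geometry and the same process corner.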

We did another custom chip with ST ( a line camera) - they were only interested in the project if we committed to buying a couple million chips. As I recall, the mask sets were in excess of $100k. The first version was a complete balls-up and didn’t work at all (parasitics!) , so ST wore the cost for that one, but it took two more revisions to get it right.

I guess my point is that custom silicon is a costly endeavor (both in time and money), which is why FPGAs exist.

@buzmeg: Do you really believe that parasitics are not an issue in PCB design? Looking at a two-layer board with three transistors, I agree with you; I used to do those in the past even without CAD support. But as soon as the complexity and the frequencies increase, you have to deal with them. Signal integrity, distortion, and noise can all be heavily impacted by a poor design with high parasitic contributions. And I’m not even speaking about thermal management, which becomes quite important in dense layouts. Therefore I maintain my statement: with increasing complexity, parasitics become more critical and must be handled by proper tools within the CAD program.
I see your statement about negligible parasitics in the wiring of pre-extracted blocks, as in the Google initiative, the same way. Once again, I can agree as long as you use three predefined blocks and do some wiring; beyond that, the same argument as above applies.

In the end, which kind of solution gets selected is a financial question: return on investment. For a PCB you have low initial cost but high cost per part. For a silicon solution you have high initial setup costs (mask costs, even when shared on multi-project wafers) and low cost per part. The FPGA approach is a tradeoff that sits between these two extremes.
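The low-NRE/high-unit-cost versus high-NRE/low-unit-cost tradeoff reduces to a simple break-even calculation. The numbers below are purely illustrative placeholders, not real quotes:

```python
# Back-of-envelope break-even between a discrete PCB solution
# (low NRE, high unit cost) and a custom ASIC (high NRE, low unit cost).
# All cost figures are illustrative placeholders.

def total_cost(nre, unit_cost, quantity):
    """Total cost = one-time setup cost + per-unit cost * quantity."""
    return nre + unit_cost * quantity

pcb = dict(nre=500.0, unit_cost=12.0)       # discrete board solution
asic = dict(nre=30_000.0, unit_cost=0.80)   # shared-wafer custom chip

# Break-even: nre_asic + u_a * q = nre_pcb + u_p * q
q_break = (asic["nre"] - pcb["nre"]) / (pcb["unit_cost"] - asic["unit_cost"])
print(f"break-even at ~{q_break:.0f} units")
```

Below the break-even quantity the PCB wins; above it the silicon wins. This is exactly why the line camera project above only made sense to ST at a commitment of millions of units.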

Honestly–yeah. To a first order approximation I do believe that parasitics aren’t relevant for PCB design.

JLCPCB is manufacturing a ton of boards and those boards are rarely designed with any tool capable of considering parasitics and they are just fine.

Most PCB designs I have seen over the past 30 years have been digital and under 33MHz. Even most low-power RF stuff ignores parasitics until you’re ready to go to the FCC, at which point you sit down and match it with measurements on equipment rather than doing a PCB analysis.

The only parasitic consideration on these kinds of boards is to put a 50 Ohm resistor on your digital clock lines. Not because of the board parasitics, but because the stupid edge that comes flying out of the silicon has gotten so fast that you have to knock it down or it will double-clock your system.
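A common rule of thumb captures when those fast edges start to matter: a trace behaves as a transmission line roughly once its one-way propagation delay exceeds about one sixth of the edge's rise time. A quick sketch, assuming a typical FR-4 microstrip delay of about 6.7 ps/mm (an assumed ballpark figure, not a measured value):

```python
# Rule-of-thumb check for when a digital edge needs termination:
# a trace is "electrically long" roughly when its one-way propagation
# delay exceeds rise_time / 6. The 6.7 ps/mm FR-4 figure is a typical
# ballpark assumption, not a measured board parameter.

def critical_length_mm(rise_time_ns, prop_delay_ps_per_mm=6.7):
    """Longest trace (mm) still treatable as electrically short."""
    rise_time_ps = rise_time_ns * 1000.0
    return (rise_time_ps / 6.0) / prop_delay_ps_per_mm

for tr in (10.0, 1.0, 0.5):  # rise times in ns
    print(f"tr = {tr:4.1f} ns -> critical length ~ {critical_length_mm(tr):6.1f} mm")
```

With a leisurely 10 ns edge almost any trace is fine; with a sub-nanosecond edge even a few centimeters of clock trace needs that series resistor, which is exactly the "knock the edge down" fix described above.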

Probably the biggest issue in days of yore that hit even slow things was crosstalk between wide parallel buses. Well, everybody got rid of wide parallel buses so nobody cares anymore. Even something like USB 2.0 is remarkably forgiving–you can really screw up D+/D- matching and things will still work. Ethernet up to 100Mbps is also pretty forgiving while 10Mbps ethernet works almost in spite of your efforts to stop it.

I do recognize that computer motherboards and graphics cards, for example, need to account for parasitics. High speed ethernet (and anything similar–HDMI, USB-C, Thunderbolt, etc.) certainly requires care. However, those designs are GHz+ and are generally done on expensive PCB programs that start approaching VLSI tools in price.

Parasitics on PCBs are very real. It’s just that we take care of most of them automatically through good layout practice. That is the big difference between an experienced layout engineer and one who is new to the game.
I’ve seen plenty of examples of parasitic coupling on PCBs. You say USB 2.0 is remarkably forgiving, and it usually is, until you connect a proper USB test set and see just how little operating margin you have in the eye diagrams because you got the ground plane too close, or mismatched the trace lengths, or ran it too close to an SMPS. USB 2.0 has harmonics in the GHz range, and they matter.

In my intro class, the first time we used an oscilloscope it was a problem. We were given a little envelope with parts, wires, and a breadboard, and told we might have to hold or nudge things to get a good trace.

It’s kind of a pet peeve of mine when people dismiss this. There is a reason these practices evolved: sometimes you can get away with something, until you can’t.

So, let’s redefine the question based on some of the input here. Say the One Laptop Per Child folks were still in business; I’d think this is the kind of project that would get green-lit. But would it be useful to them?

Is there more that could/should be moved to a chip or is this totally about moving existing stuff into the open source realm?

Having worked in a group doing custom VLSI design, I think the biggest hurdle for individuals doing chip design is not the design tools but the cost of the entire spin. Even if you can get your design onto a piece of a “group project” wafer, no matter how well you do verification there’s almost always something wrong on the first spin. Debugging what’s wrong requires tens of thousands of dollars in specialized microscopes and wafer-probing equipment to “look inside” at internal nodes and measure signals. It also requires special wafers (pulled before the top glass passivation layer is added) so that you can actually probe the metal lines underneath. A single spin can cost many thousands of dollars. Most individuals can’t afford it unless there is a good financial return involved.

Meh, you can have a semiconductor fab in your garage these days.

In his latest video he etched 1200 transistors on a piece of wafer, and he states (very briefly, @00:59) that he designed the masks in Photoshop, though it’s also a simple repetitive test pattern.
Feature size is on the order of 10um (@00:14): "Z2" - Upgraded Homemade Silicon Chips - YouTube

I think if you were to look in a typical laptop you’d find that everything that can be on a chip already is on a chip. As far as reducing the number of chips goes, you run into a couple of issues. First: as complexity goes up, yield typically goes down, so price goes up. Second: different circuits require different process technology; you would not use the same process for an op-amp as for a DRAM or a switch-mode controller.
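The yield-versus-complexity point can be sketched with the classic Poisson defect model, where yield falls exponentially with die area. The defect density below is an illustrative assumption; real fab numbers are closely guarded.

```python
import math

# Classic Poisson defect model for die yield: Y = exp(-A * D0),
# where A is die area and D0 is defect density per unit area.
# D0 = 0.002 defects/mm^2 is an illustrative assumption.

def die_yield(area_mm2, defects_per_mm2=0.002):
    """Fraction of dies expected to be defect-free."""
    return math.exp(-area_mm2 * defects_per_mm2)

# Growing a die doesn't just add cost linearly: yield decays exponentially.
for area in (50, 100, 200, 400):  # die areas in mm^2
    print(f"{area:3d} mm^2 -> yield {die_yield(area):.1%}")
```

This exponential penalty is one reason splitting a large die into several smaller chiplets (as in the AMD example below) can be cheaper than one monolithic die, even after paying for the fancy substrate that reconnects them.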

Interestingly, you can see this in AMD’s latest processors, where they’ve changed from having one giant die to multiple smaller dies on what is essentially a fancy PCB, with the CPU dies on a smaller node than the IO die (I think it’s 5 nm for the CPU and 14 nm for the IO?).


I think there is also a trend contradicting this idea: diversification of substrates. Sure, for a mostly digital circuit, moving everything onto one chip could work in theory. But even there, I would argue you run into problems with the necessary analog parts (power supply etc.), because you probably won’t use the same FET geometries as for the digital elements.

But the bigger problem is the popularity of different kinds of substrates. Sure, SOI is not a big deal to migrate to, but if you have GaAs elements in your circuit, there is no way to get nearly the same performance from a single silicon-based chip. And with the upcoming integration of optical elements into “electronics”, a purely silicon solution becomes completely impossible.

And it is bloody difficult to rewire with bodge wires. :wink:

