Discussion regarding dropping 32-bit builds of KiCad

There cannot be many PCs around with 32-bit hardware.
For some reason some corporations have hung on to 32-bit Windows due to compatibility fears with old custom software.

But I bet a significant proportion of the 6% downloading 32-bit could actually run 64-bit.

My old machine is capable of x64, but I had to install the 32-bit version of Windows. Everything in x64 requires twice the bandwidth and memory to do the same thing as in x32, so an old machine is just like garbage with 64-bit Windows 10.

That’s not how 64-bit works at all.

Even software compiled for 64-bit still defaults to 32-bit variables where possible. There may be some minor benefit from forcing the OS to go purely 32-bit, but the difference in performance is going to be so minor.

In either case, Windows 32-bit support basically has an undefined EOL for KiCad. Our upstream toolchain is phasing out support slowly and that’s really the driver. We don’t have the manpower or interest to support it ourselves. Even Linux distros are dropping 32-bit x86 support if they haven’t already.

2 Likes

Thanks for telling me that what I experienced with my own hands isn’t correct :slight_smile: .

64-bit does use more resources: the pointers are twice as large. On a system that has been upgraded enough this becomes moot, as the benefits 64-bit brings are larger: the new registers, the wide-integer maths, etc.

THIS is exactly why there was an x32 mode in the Linux kernel: the CPU would be booted into 64-bit mode so all the benefits of x86_64 could be used, BUT it used 32-bit pointers and thus removed the large overhead 64-bit pointers bring.
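
For illustration, a minimal C++ check (my own sketch, not something from this thread) shows what the x32 ABI actually changes: the full x86_64 instruction set, but half-width pointers:

```cpp
// Sketch: how pointer width differs between data models.
// Built as ordinary 64-bit Linux (LP64, e.g. g++ file.cpp) this prints 8
// for pointers and long; built with the x32 ABI (g++ -mx32 file.cpp,
// where the toolchain supports it) it prints 4, because x32 keeps the
// x86_64 instruction set but uses 32-bit pointers.
#include <cstdio>

int main() {
    std::printf("sizeof(void *) = %zu\n", sizeof(void *));
    std::printf("sizeof(long)   = %zu\n", sizeof(long));
    std::printf("sizeof(int)    = %zu\n", sizeof(int));  // 4 on all of these
}
```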

1 Like

Please y’all keep this thread clean and continue the technical discussion about bitness in a new thread.

2 Likes

I would think the best approach is supplying the next stable version and then calling it done there.
Never leave a job unfinished.

Windows 10 1909 was the last 32-bit release and has a 30-month servicing lifetime, so EOL is April 2022.

I had a short peek at:


Which is apparently a whole family of processors that have been capable of 64-bit since 2004.

When the Raspberry Pi started in 2012 I was surprised they were running on 32 bits. The rest of the world had already mostly moved on to 64-bit operating systems, but they found a batch of (even back then) obsolete Broadcom chips that were cheap, and it became the basis of their product.
Even back then I thought: “oh shit, they’re going to be the last ones that keep on dragging this legacy stuff into the future”.

Now they’re in some trouble, because they think they’re not big enough to maintain both 32- and 64-bit versions, don’t want to abandon their first-generation users, and the latest versions have up to 8 GB of RAM. (Apparently 64-bit Linux distros for the Raspberry Pi have been emerging for some time now; I don’t keep a close eye on it.)

I think there is some parallel with Python 2 and Python 3. Python 3 is now over 10 years old, and Python 2 has been obsolete for many years, but there were far too many people who shrugged their shoulders at progress and thought: “It still works for me, why should I upgrade?”.

Until recently I had been running 64-bit Linux on a 12-year-old PC and it ran just fine.
After I got a more modern and faster PC for free, I wondered why I kept running that old beast for so long. I guess I was just used to its slowness. And even my current PC is an 8-year-old thing.

In the end I’d think the best way to determine if and when to drop the 32-bit version is the amount of maintenance required to keep it going. If it’s just a compiler switch, then it’s easy to keep it dragging on for a few more years. If it takes a lot of extra testing and complicates the code to maintain 32-bit compatibility, then it’s time to drop it soon. KiCad development is accelerating, but there are still a lot of quite simple things “missing” in KiCad, and there are many ideas for new functionality and extensions of existing functions, with a limited number of developers to work on all these wonderful features. I’d much rather have KiCad development focus on the future instead of the past.

3 Likes
  1. Running a 64-bit VM on a 12-core machine is still much slower than a 32-bit one, while it consumes more than twice the energy.
  2. If we scale this 64-bit-only approach to the whole world, the world would need to spend at least twice as much energy as a 32-bit world, because every bit flip consumes power.

Speaking of the future, I’m not sure. But I see that this monopolizing will cause unfortunate scaling problems. So just be aware of this inflexibility, if you will.

The question is which virtualization tool you use and what you do with it. In practice, virtualization is very efficient as long as you don’t need to emulate much, e.g. GPU pass-through instead of GPU virtualization, and the same instruction set as the underlying hardware architecture (even then, good tools can JIT-compile to your target architecture). With virtio+KVM I never had problems with virtualization on my 4-core machine. With VirtualBox, on the other hand, it was regularly slow.

No, it does not. Pointers are only a portion of program execution, and even then, the CPU requests whole cache lines from memory, not just the bytes you asked for. Assuming a program which is not optimized for cache efficiency, roughly the same amount of memory will be transferred to the CPU. However, if you iterate over an array of pointers, you will double the cache misses in that array. But when you use pointers stored in an array, you need to access the pointed-to objects as well, which can each cause an additional cache miss anyway.
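
To make the array-of-pointers point concrete, here is a minimal C++ sketch (my own illustration, not KiCad code): summing values directly reads one contiguous stream of cache lines, while summing through pointers has to fetch the pointer array and the pointed-to data.

```cpp
// Sketch of the cache argument above (illustration only).
// The direct sum touches one contiguous range of memory; the indirect
// sum must load the pointer array *and* the pointed-to values, so the
// traffic at least doubles, and 8-byte pointers halve how many fit in
// a cache line compared to 4-byte ones.
#include <cstddef>
#include <iostream>
#include <numeric>
#include <vector>

int main() {
    constexpr std::size_t N = 1'000'000;
    std::vector<int> values(N, 1);

    std::vector<int*> pointers(N);
    for (std::size_t i = 0; i < N; ++i)
        pointers[i] = &values[i];  // one indirection per element

    long long direct = std::accumulate(values.begin(), values.end(), 0LL);

    long long indirect = 0;
    for (int* p : pointers)
        indirect += *p;

    std::cout << direct << ' ' << indirect << '\n';  // both print 1000000
}
```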

Data requires the same amount of memory, and you can use more optimized instructions from x86_64, which can reduce the memory required for the code section (and speed up execution). After all, x86 is a variable-length instruction set, so your instructions are not doubled in size just because of 64-bit.

And if you run a 32-bit OS on a 64-bit CPU, you will likely not save energy. I don’t think CPUs deactivate portions of their implementation for 32-bit; that would in fact slow them down due to the additional gates required in the critical path of the pipeline.

2 Likes

Best to avoid commenting on technical topics you have no clue about.

I kind of see the potential for this discussion to escalate a bit, hence the reminder to please keep this civil and constructive. Avoid personal attacks and remember that not everyone is a native speaker (so some things might be lost in translation and people might interpret things slightly differently).

3 Likes

I don’t think hardware ramblings are very relevant. Hardware that can only run 32-bit is almost nonexistent, and the old stuff also tends to be more power hungry. On top of that, PCs are idling 95%+ of the time while running applications like KiCad, except for short bursts of 3D rendering or milliseconds of zooming and scrolling, or for very complex designs where you buy the hardware needed to run the software.

The questions remaining seem to be:
How many people are using the 32-bit version of KiCad for some valid reason?
How many people are using the 32-bit version of KiCad out of ignorance, like: Huh, does a PC need bits?

How much extra effort is it to maintain a 32-bit KiCad?

2 Likes

From a developer’s standpoint, I would like to see a move to 64-bit for internal units. This would still leave a 32-bit build possible, but the arithmetic would be way more costly on such an architecture.

A bit of background: right now, KiCad includes a common code section which also handles geometric stuff. However, there are different internal units between eeschema and pcbnew, which requires compiling the same code multiple times, for example.

The reason is simple: we use 32-bit integers at nanometer scale for pcbnew, but a different, more sparse internal unit for eeschema (also 32-bit integers), because schematic sheets can be way bigger than PCBs.

The easiest and most consistent fix would be to go to 64-bit at nanometer scale for all geometric objects.
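
To put rough numbers on the ranges involved (my own worked example, not figures from the KiCad source): a signed 32-bit coordinate at 1 nm tops out around ±2.1 m, which is comfortable for a PCB but tight for a large schematic sheet, while 64-bit is effectively unbounded.

```cpp
// Coordinate ranges at 1 nm resolution (illustration, not KiCad code).
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    constexpr std::int64_t NM_PER_MM = 1'000'000;  // 1 mm = 1e6 nm

    // Signed 32-bit: +/-2147483647 nm, i.e. about +/-2147 mm (+/-2.1 m).
    std::cout << "32-bit range: +/-"
              << std::numeric_limits<std::int32_t>::max() / NM_PER_MM
              << " mm\n";

    // Signed 64-bit: about +/-9.2e12 mm (+/-9.2 billion km).
    std::cout << "64-bit range: +/-"
              << std::numeric_limits<std::int64_t>::max() / NM_PER_MM
              << " mm\n";
}
```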

4 Likes

I guess the many cool layers of abstraction magically make E = ½ · C · V² · f · n_bits no longer correct.

And worse, a bunch of conditional compilation and other build hackery. It would be really nice to punt it all…

I’m wondering: why not write 32-bit code all the way, even on a 64-bit machine? There are many tricks to get 2x speed on a 64-bit machine by doing 32-bit calculations instead of 64-bit ones. It would also be about 2x faster if we used GPU calculation with tricks (due to transferring two 32-bit numbers in the space of one 64-bit one).
Do we need everything in 64 bits in KiCad just for nanometer scale? Is KiCad intended to design ICs now?
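
As a rough way to probe that 2x claim, here is a minimal C++ micro-benchmark (my own sketch; the element count is arbitrary and memory bandwidth will dominate the result): the same elementwise addition over 32-bit and 64-bit integers.

```cpp
// Rough micro-benchmark: 32-bit vs 64-bit integer addition (sketch only).
// With auto-vectorization (e.g. -O2), a SIMD register holds twice as many
// 32-bit lanes as 64-bit lanes, and the 32-bit arrays move half the bytes.
#include <chrono>
#include <cstdint>
#include <iostream>
#include <vector>

template <typename T>
double sum_time_ms(std::size_t n) {
    std::vector<T> a(n, 1), b(n, 2);
    auto t0 = std::chrono::steady_clock::now();
    for (std::size_t i = 0; i < n; ++i)
        a[i] += b[i];
    auto t1 = std::chrono::steady_clock::now();
    std::cout << "check " << a[0] << "  ";  // keep the loop from being elided
    return std::chrono::duration<double, std::milli>(t1 - t0).count();
}

int main() {
    constexpr std::size_t N = 50'000'000;
    std::cout << "int32: " << sum_time_ms<std::int32_t>(N) << " ms\n";
    std::cout << "int64: " << sum_time_ms<std::int64_t>(N) << " ms\n";
}
```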

1 Like

Forgot to mention: sqrt-style math algorithms that I know of on an ALU also need about 2x longer on a 64-bit number, regardless of whether they run on a 32-bit or 64-bit machine/OS. The same may apply to sin, cos, etc.

KiCad internal units are not going to change in a hurry now.
The use of 1 nm was chosen to allow both metric and imperial measurements to always be integers.
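
To spell out the arithmetic: 1 mm is exactly 1 000 000 nm and 1 mil is exactly 25 400 nm, so common metric and imperial grids both land on exact integers. A small worked example (mine, not KiCad code):

```cpp
// Why 1 nm suits both unit systems (worked example).
#include <cstdint>
#include <iostream>

int main() {
    constexpr std::int64_t NM_PER_MM  = 1'000'000;  // 1 mm  = 1e6 nm, exact
    constexpr std::int64_t NM_PER_MIL = 25'400;     // 1 mil = 0.0254 mm, exact

    std::cout << "0.1 mm grid: " << NM_PER_MM / 10 << " nm\n";  // 100000, integer
    std::cout << "5 mil grid:  " << 5 * NM_PER_MIL << " nm\n";  // 127000, integer
}
```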

To get back on topic, the issue with a 32-bit OS is that Microsoft is retiring 32-bit Windows 10 in about 2 years, most Linux distributions are now 64-bit only, and Macs are already 64-bit only.
Third-party software frameworks used by KiCad are also dropping 32-bit support.
In other words, KiCad 6 will find it difficult to keep a 32-bit version over its lifecycle.

5 Likes