Discussion regarding dropping 32-bit builds of KiCad

I think it boils down to simply that. The last machine I dealt with that wasn’t implicitly a 64-bit system by default was the Pi, and even it’s (finally) looking to move to 64-bit. Between many modern OSes moving to 64-bit only and the amount of RAM required by OSes in general creeping up, the physical limit of 4GB of addressable RAM puts a very real damper on what KiCad would be able to do anyway.

Maintaining upstream packages that will absolutely drop 32-bit compatibility will only get more cumbersome as time goes on, and there are so many places better served by developer time than maintaining a legacy build type that is already only used by about 1 person in 20.

My opinion: With the current stable builds of KiCad being “feature complete”, I’d drop 32-bit as a target as soon as a needed upstream dependency required I do so. People stuck on 32-bit would be able to continue to use the last version available in the same way you’re still more than able to continue using Windows 2000 (the last exclusively 32-bit version of Windows!) – absolutely able, but quite inadvisable.

That’s just my two bits!

Hey, I don’t object to dropping 32-bit. But a next step of switching every “int” in the code to “int64” without a second thought about what the effect would be is another matter. I’m just injecting one reason for that second thought.

Nobody is changing int32 to int64 just because they can


Am I misunderstanding this? A schematic is a conceptual drawing and a PCB is a physical drawing, aren’t they? What is the reason they need to be in the same unit? Printing? Or is it because the common code is hard-coded to one unit? If so, shouldn’t it be made more generic?

They don’t have the same unit.
A 1 nm unit gives a maximum dimension of around 2 m using an int32, adequate for a PCB.

Schematics have no need for very high resolution, but might need to describe a sheet bigger than A0.
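
To make the arithmetic concrete, here is a minimal sketch (not KiCad code; the unit choices are just for illustration) of where the ~2 m figure comes from:

```cpp
#include <cstdint>
#include <cstdio>

int main() {
    // Largest coordinate representable as a signed 32-bit count of nanometres.
    constexpr int32_t max_nm = INT32_MAX;      // 2,147,483,647 nm
    constexpr double  max_m  = max_nm / 1e9;   // ~2.147 m, plenty for a PCB

    std::printf("int32 nanometre limit: %.3f m\n", max_m);

    // A coarser unit stretches the range: at 100 nm per count the same
    // int32 covers ~214.7 m, more than enough for a sheet bigger than A0.
    std::printf("at 100 nm per count:   %.1f m\n", max_m * 100.0);
}
```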


The only computers I can think of that cannot do 64-bit nowadays are the Raspberry Pi 1-3. The Raspberry Pi 4 is able to use 64-bit. The Pi 3 only has 1GB of memory, and has to share it with the GPU. This limits its ability to run KiCad, and to design a PCB on it. I wonder if anybody has actually used a Raspberry Pi (or some other 32-bit computer) to at least open a KiCad file, and why they chose to do so.

I’d agree with @cmd to drop support for 32-bit in KiCad when a needed upstream dependency also drops support for 32-bit. At that point it’s worth discussing whether there’s a volunteer willing to continue to make a 32-bit version of KiCad without that dependency and therefore with reduced functionality, but personally I wouldn’t want that option, as two different versions of KiCad increase the maintenance costs.

I also agree with @pointhi to use one single way to express geometric objects throughout the entire code base. It prevents misunderstandings. Choosing 64-bit at nanometre scale does not prevent KiCad from running on 32-bit hardware; it just runs slightly slower, as the 32-bit hardware has to use multiple 32-bit steps to calculate 64-bit values.
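
As a minimal sketch of that cost (generic C++, not KiCad code): the source below is identical on both targets, but a 32-bit compiler lowers the one 64-bit addition into two 32-bit operations, while a 64-bit compiler emits a single instruction.

```cpp
#include <cstdint>

// One 64-bit addition in source. On a 64-bit target this is a single
// machine instruction; on a 32-bit target the compiler splits it into
// two 32-bit operations (add + add-with-carry). Either way the C++
// code is the same, so a 64-bit internal unit doesn't block 32-bit
// builds, it just costs a few extra instructions per operation.
int64_t add_coord(int64_t a, int64_t b) {
    return a + b;
}
```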


I just did a few weeks ago, and I’m still alive to write about it :slightly_smiling_face:
I saw the writing on the wall (forum above) so bit the bullet and upgraded to 64-bit 5.1.7 after chucking Windows 7 32-bit in favour of Linux. Still need to keep a virtual Windows just for two very old programmes, unfortunately.

Hmmm, I suppose I’d better frame those old 3.1.1 and “Protel for DOS” floppy discs; they’re not heavy enough for boat anchors.

I would think 32-bit machines are old, so they’ve already taken a hit from a lot of things that make them run slower, and this is just another 2x on top. I guess, who cares, when one can just buy a new one?

BTW: if we store the same information in 64 bits, then using 8GB of memory is effectively like using 4GB in 32-bit. So this means a grandparent machine with 4GB (once strong and great) now acts like it only has 2GB of memory. No wonder I cannot even install Windows 10 x64 on my once strong and fast machine just to do the same things. That machine has turned into electronic waste by now.

You were informed multiple times that this assumption is wrong. Only the addresses (so pointers) are 64-bit. The default compiler setting is still that the data is 32-bit (unless one needs 64 bits for a given variable, in which case one can use that).

This works the same way as you can still use 8-bit integers on a 32-bit ARM processor. The remaining bits are either ignored (example: uint_fast8_t) or they are masked (example: uint8_t). Pointers are, however, always the size of the address register width.
For normal operating systems this really means that the virtual addresses use the bit width of the operating system. The physical addresses are not necessarily the same width. (This is also similar on embedded processors, but the mapping is a lot more direct.) In the end this means a single process can address more virtual addresses with 64 bits (this virtual address space includes not just RAM but, crucially, also memory-mapped I/O).
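
A quick way to see this on your own machine (a generic sketch, assuming a mainstream 64-bit desktop compiler):

```cpp
#include <cstddef>
#include <cstdio>

int main() {
    // On a typical 64-bit build, plain int stays 32 bits while pointers
    // and size_t grow to 64 bits (LP64 on Linux/macOS, LLP64 on Windows).
    std::printf("sizeof(int)    = %zu\n", sizeof(int));          // usually 4
    std::printf("sizeof(long)   = %zu\n", sizeof(long));         // 8 on LP64, 4 on LLP64
    std::printf("sizeof(void*)  = %zu\n", sizeof(void *));       // 8 on a 64-bit build
    std::printf("sizeof(size_t) = %zu\n", sizeof(std::size_t));  // matches pointer width
}
```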


Exactly. As I understand it, the compiler has been doing a great job of keeping code totally transparent to the pointer size a system needs (8, 32, 64, etc.).
My wondering was: what is the actual root cause that necessitates dropping the 32-bit build? This is usually the compiler’s job, unless some step in the build/code/architecture cancels that out and forces developers to handle it in the code, hence causing duplication in maintenance and testing?

If you build for 32-bit then even pointers are 32-bit. We use this trick to be able to run unit tests for embedded code with the native pointer size of the embedded system, but on a normal computer. How exactly this works is beyond me, but that is the difference I noticed between selecting 32-bit or 64-bit compilation. (Or more precisely, we tell the compiler to compile for 32-bit, but such that the result can run on a 64-bit system.)
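
For anyone curious, the trick boils down to a compiler flag. A hypothetical one-file demo (GCC/Clang on an x86-64 host; the 32-bit build needs the multilib packages installed):

```cpp
#include <cstdio>

// Build the same file twice and compare:
//   g++ -m32 width.cpp -o width32   // prints "pointer width: 32 bits"
//   g++      width.cpp -o width64   // prints "pointer width: 64 bits"
int main() {
    std::printf("pointer width: %zu bits\n", sizeof(void *) * 8);
}
```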

All in all it is not as simple as it may seem. In general one cannot say that doubling the bit width doubles the memory needed. Nor is it possible to state that the energy use increases, and if so by how much. There are simply way too many variables at play to generalize this.
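
One illustration of why it doesn’t simply double (hypothetical types, just to show the effect of alignment padding):

```cpp
#include <cstdint>
#include <cstdio>

// Widening every field does double this struct's size, but as soon as
// widths are mixed, alignment padding muddies the picture.
struct Point32 { int32_t x, y; };               // 8 bytes
struct Point64 { int64_t x, y; };               // 16 bytes
struct Mixed   { int64_t x; int32_t flags; };   // 16 bytes: 4 are padding

int main() {
    std::printf("%zu %zu %zu\n",
                sizeof(Point32), sizeof(Point64), sizeof(Mixed));
}
```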

So in the end the right question really is the one asked by @davidsrsb: “How many users are still using it, and how many of them have no other option?” From this one can ascertain whether it is worth the extra development effort to release future versions with 32-bit support (such users will always be able to use version 5 until they can switch). In the end it is all about prioritizing and tradeoffs.

And yes, it will suck for users who need the 32-bit option, no matter why they need it (or feel they need it). I however assume this will be taken into account when making this decision.


That none of the developers want to be responsible for testing on it and fixing any problems that occur only on 32-bit.


Completely understood. But be careful, because sometimes this is exactly the kind of exercise that surfaces bigger underlying issues.

For the record, I’ve been testing the v5.99 nightlies (almost) every day since the migration to GitLab, and I haven’t yet found a 32-bit-specific issue/bug. Those that I reported were found on 64-bit as well.

I suspect you are right about the 6% who download the 32-bit version… just because someone is electronically technical does not mean that they are IT technical… I bet many have 32-bit Windows “because that’s the button they (or someone else) clicked when they installed it”, and I’ll also bet that many just take the 32-bit version “because it’s safer” or something like that.

If the 32-bit version isn’t developed any more, it won’t stop them downloading a 32-bit version; it’s just that it will be out of date… (but heck, I only just updated from version 4 to version 5 this week - it didn’t stop me designing PCBs!).

The issue isn’t just about bugs in KiCad itself, but about upstream packages that KiCad depends on, which are starting to show signs of dropping 32-bit support. One of them is MSYS2 (the official KiCad Windows build system):

2020-05-17 - 32-bit MSYS2 no longer actively supported

32-bit mingw-w64 packages are still supported, this is about the POSIX emulation layer, i.e. the runtime, Bash, MinTTY…

After this date, we don’t plan on building updated msys-i686 packages nor releasing i686 installers anymore. This is due to increasingly frustrating difficulties with limited 32-bit address space, high penetration of 64-bit systems and Cygwin (our upstream) starting their way to drop 32-bit support as well.

At least you are testing it. One of the problems with development versions is that they are tested by the developers and by the categories of users most interested in advanced software, who are therefore almost certainly on 64-bit.

18 posts were split to a new topic: KiCad on Windows XP

That isn’t true; the current latest release, 20H2, still has 32-bit ISOs available… they talked about doing this and the backlash was bad enough that they just gave up for now, but what else is new.

I am certain of this as I’m downloading it right now… via the “set your IE browser to identify as a mobile phone and get the direct link” trick…

That’s not entirely true; sizeof() needs to be able to express 64-bit sizes, and hence size_t is 64-bit. This means there is often significant cleanup work needed in all things related to allocators and memory usage: a lot of 32-bit legacy code used to assume size_t was the same as int32_t. (Signedness being compiler-, API-, and system-dependent and all over the map.) It’s not that hard to fix, but it often causes changes to library and system call interfaces, while the old 32-bit-specific interfaces are retained for backwards compatibility.

Another change is the ability to map large files and devices, like modern texture buffers, which permits more streamlined, single-copy interfaces and code design, which in turn changes the 64-bit library interfaces. Finally, 64-bit kernel system calls changed as well in many cases, and in fact Windows has an entirely new set of 64-bit APIs. Not super different, just different enough that it’s a pain to have to support the old 32-bit ones.
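
A hypothetical before/after of the kind of cleanup described (generic C++, not from any particular code base):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Legacy 32-bit habit: stuffing size_t into int32_t. Harmless when both
// were 32 bits; on a 64-bit build a buffer over 2 GiB silently truncates
// (and can even come out negative).
int32_t legacy_length(const std::vector<char>& buf) {
    return static_cast<int32_t>(buf.size());   // lossy on 64-bit targets
}

// Cleaned up: size_t tracks the platform's pointer width.
std::size_t correct_length(const std::vector<char>& buf) {
    return buf.size();
}
```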