We use Jenkins (on a Linux box) to remote-control a Windows VM that does the actual build.
This works reasonably well, but it requires the connection between Jenkins and the agent to stay up for the entire duration of the build; otherwise Jenkins marks the build as failed and the agent kills everything. File transfers between Jenkins and the agent go through an RPC protocol over the same channel.
Jenkins isn't really built for this kind of setup: it assumes all machines are on a LAN, with low latency (copying one file requires a full round trip, and the next file isn't started until the previous one is acknowledged) and high throughput (otherwise large file transfers clog up the RPC pipe, and the periodic "ping" requests over the same connection time out).
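To see why per-file acknowledgments hurt so much on a high-latency link, here is a back-of-envelope sketch: each file copy pays one full round trip before the next file starts, so latency is charged once per file. The file counts, sizes, and link parameters below are illustrative assumptions, not measurements of Jenkins' actual protocol.

```python
# One acknowledged round trip per file, plus raw transfer time.
def transfer_time_s(n_files, total_mb, rtt_ms, bandwidth_mbps):
    latency = n_files * rtt_ms / 1000.0          # paid once per file
    throughput = total_mb * 8 / bandwidth_mbps   # bulk transfer time
    return latency + throughput

# Same hypothetical 500 MB workspace split into 10 000 small files:
lan = transfer_time_s(10_000, 500, rtt_ms=0.5, bandwidth_mbps=1000)
dsl = transfer_time_s(10_000, 500, rtt_ms=40,  bandwidth_mbps=10)
print(f"LAN: {lan:.0f} s, DSL: {dsl:.0f} s")  # → LAN: 9 s, DSL: 800 s
```

On the assumed DSL link, latency alone accounts for half the transfer time, which is why the setup degrades so badly outside a LAN.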
So, the box needs a really good connection. For a while I had the builds running on a VM on my home DSL, where about 50% of attempts succeeded and each one took six hours. CPU horsepower isn't that important; even an i3 can do the build in about 2.5 hours of wall clock time. But rushing through in ten minutes shrinks the window in which a dropped connection can kill the build, reducing failures even further.
Also, the build process could be a bit more efficient. Right now, we always download the latest state of the libraries, unpack them, copy them into the installation directory, repack them, copy the archive into the installer, and then sign the installer, which requires yet another copy. That is where most of the build time goes on the small boxes; they simply don't have the I/O bandwidth for it.
This is where the RAID card comes in handy. It has a few GB of RAM with its own battery, so it accepts the write of the footprint archive in one big transaction, reports the data as written, and flushes it to the disks in the background while the build process moves on to the next step. Conveniently, that step uses the same data, which is still in RAM, dropping the wall clock time for unpacking the archive from a few minutes to a few seconds.
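A quick back-of-envelope for the cache effect: the controller acknowledges the archive write as soon as it lands in cache RAM, and the unpack step reads the same data straight back out of that RAM instead of the disks. The sizes and bandwidths below are illustrative assumptions; the small-file unpack workload is modeled with a low effective disk throughput because it is dominated by seeks.

```python
ARCHIVE_GB = 2.0     # assumed archive size
DISK_MBPS = 20       # assumed effective disk throughput for many small files
CACHE_MBPS = 6000    # assumed controller-RAM throughput

def seconds(gb, mbps):
    return gb * 1024 / mbps

without_cache = seconds(ARCHIVE_GB, DISK_MBPS) * 2   # write archive, read it back
with_cache = seconds(ARCHIVE_GB, CACHE_MBPS) * 2     # both served from cache RAM
print(f"without cache: ~{without_cache:.0f} s, with cache: ~{with_cache:.0f} s")
# → without cache: ~205 s, with cache: ~1 s
```

Under these assumptions the write-then-read cycle drops from minutes to about a second, matching what I see in practice.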
I have an item on my TODO list to make the build more efficient and to separate binaries from libraries, but this will take some time to get right: there are a few frameworks for that already, but none of them fit exactly (like Jenkins itself, which makes 90% of the job easy, while the remaining 10% would require a rewrite of Jenkins from the ground up).