I can sort of see where Wayne is coming from: we could end up with a partial fix which seems expedient but is not extensible and skirts the fundamental issue, which is where the github plugin has ended up. Kind of like adding lifeboats when we really need to be steering around the iceberg.
Handling and deploying large amounts of data is not a simple problem. Most projects with any sort of online interaction end up with bespoke solutions; there is no real off-the-shelf answer. Any design needs a tailored server-side setup, since standard servers such as HTTP, git, or FTP provide generic solutions, not exactly what we want.
Downloading individual files on demand would be a worse solution: it is much less efficient, and it is unusable for people who work offline.
Really, the problem needs to be split into two parts: the representation of the data (including compression), and how the data is deployed (offline, batch update, on-demand update). Trying to solve both at once is always going to be much harder. Many projects have a similar problem and implement some form of compressed package which can be installed with the program, or downloaded/updated as needed. A manifest within the package tells the program what the package contains.
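To make the idea concrete, here is a minimal sketch of the package-plus-manifest pattern in Python. All of the names, fields, and the manifest format are hypothetical, just for illustration; nothing here is an agreed spec. The point is that the program only has to read one small manifest to learn what is in the package, without extracting everything.

```python
import io
import json
import zipfile

# Hypothetical manifest: package name, version, and a listing of the
# data files the package carries (field names are made up for this sketch).
manifest = {
    "name": "example-data",
    "version": 1,
    "files": {
        "tiles/n47e008.dat": {"size": 4},
        "tiles/n47e009.dat": {"size": 4},
    },
}

# Build a tiny example package in memory: the manifest plus the data files,
# all compressed into a single zip archive.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w", zipfile.ZIP_DEFLATED) as zf:
    zf.writestr("manifest.json", json.dumps(manifest))
    for path in manifest["files"]:
        zf.writestr(path, b"data")

# On the consuming side, the program reads only the manifest to discover
# the package contents; individual files can then be extracted on demand.
with zipfile.ZipFile(buf) as zf:
    m = json.loads(zf.read("manifest.json"))
    print(m["name"], len(m["files"]))
```

The same pattern works whether the package is shipped with the program, fetched in a batch update, or pulled on demand, which is exactly why separating representation from deployment helps.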