Agreed, this is the main underlying issue. I've faced it before with generated C++ code too, and after long and painful refactorings what ultimately helped the most was simply splitting the generated code into multiple compilation units to allow for parallel compilation. It comes with the drawback of potentially instantiating (and later throwing away) a lot more templates, though.
> On the other hand, we need a tiny build system that does all of the work locally and that can be used by the myriad of open-source projects that the industry relies on. This system has to be written in Rust (oops, I said it) with minimal dependencies and be kept lean and fast so that IDEs can communicate with it quickly. This is a niche that is not fulfilled by anyone right now and that my mind keeps coming to;
I believe that the biggest problem is that different "compilers" do different amounts of work. In the race to win the popularity contest many languages, especially newer ones, ship compilers packaged with a "compiler frontend", i.e. a program that discovers dependencies between files, links individual modules into the target programs or libraries, does code generation, etc. This prevents the creation of universal build systems.
E.g. javac can be fed individual Java source files, similar to the GCC suite of compilers, but the Go compiler needs a configuration for the program or library it compiles. Then there are also systems like Cargo (in Rust) that do part of the job that the build system has to do for other languages.
From the perspective of someone who'd like to write a more universal build system, encountering stuff like Cargo is extremely disappointing: you immediately realize that you will either have to replace Cargo (and nobody will use your system, because Cargo is already the most popular tool and covers the basic needs of many simple projects), or you will have to add a lot of workarounds and integrations specific to Cargo, depend on their release cycle, patch bugs in someone else's code...
And it's very unfortunate, because none of these "compiler frontends" come with support for other languages, CI, testing, etc. So eventually you will need an extra tool, but by that time the tool that helped you get by so far will have become your worst enemy.
I have seen this first hand with Bazel. You have lots of Bazel rules that are partial reimplementations of the language specific tooling. It usually works better - until you hit a feature that isn’t supported.
I think the idea here is more about preferring speed over features like remote execution and large-scale build caching, not about limiting the subset of toolchain functionality. In theory, if you scoped your build tool to only support builds of sufficiently small size, you could probably remove a lot of the complexity you'd otherwise have to deal with.
Intelligent caching is also table-stakes, though. It requires a detailed dependency graph and change tracking, and that's not something that can simply be relegated to a plugin; it's fundamental.
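To make that concrete, here's a minimal sketch (Python, with hypothetical file names and a hard-coded graph) of what "dependency graph plus change tracking" boils down to: hash every input of a step and rerun the step only when the combined hash changes.

    import hashlib, json, os

    # Hypothetical dependency graph: each target lists the input files it depends on.
    DEPS = {
        "app.o": ["app.c", "app.h"],
        "app":   ["app.o"],
    }

    CACHE_FILE = ".build_cache.json"

    def fingerprint(target):
        # Combined content hash of a target's direct inputs (the change-tracking part).
        h = hashlib.sha256()
        for dep in sorted(DEPS[target]):
            with open(dep, "rb") as f:
                h.update(hashlib.sha256(f.read()).digest())
        return h.hexdigest()

    def build(target, run_step):
        # Skip the step entirely when none of its inputs changed since the last run.
        cache = json.load(open(CACHE_FILE)) if os.path.exists(CACHE_FILE) else {}
        fp = fingerprint(target)
        if cache.get(target) == fp:
            return
        run_step(target)  # e.g. invoke the compiler or code generator
        cache[target] = fp
        with open(CACHE_FILE, "w") as f:
            json.dump(cache, f)

Real build systems also hash the command line and the toolchain, walk the transitive graph, and deal with non-hermetic steps, which is where most of the complexity (and the reason it can't be a plugin) comes from.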
Right, and I think that's a combination of a few factors. First of all, there's the basic momentum: CMake is widely known and has a huge ecosystem of find modules, so it's a very safe choice; no one got fired for choosing boring technology (https://boringtechnology.club).
But bigger than that is just that a lot of these build system and infrastructure choices are made when a project is small and builds fast anyway. Who cares about incremental builds and aggressive caching when the whole thing is over in two seconds, right? Once a project is big enough that this starts to be a pain point, the build system (especially if it's one like CMake that allows a lot of undisciplined usage) is deeply entrenched and the cost of switching is higher.
Choosing technologies like Nix or Bazel can be seen as excessive upfront complexity or premature optimization, particularly if some or all of the team members would actually have to learn them. From a manager's point of view, there's the very real risk that your star engineer spends weeks watching tech talks and yak-shaving the perfect build setup instead of actually building core parts of the product.
Ultimately, this kind of thing comes back to the importance of competent technical leadership. Infrastructure like build system choice is important enough to be a CTO call, and that person needs to be able to understand the benefits, weigh the activation costs against the 5-10 year plan for the product and team, and be able to say "yes, we plan for this thing to be big enough that investing in learning and using good tools right now is worth it" or "no, this is a throwaway prototype to get us to our seed money, avoid any unnecessary scaffolding."
The author clarifies that he wrote the section about Buck2 to demonstrate the need to be Bazel compatible (as opposed to Buck2), because the friction to try it out in a real code base is essentially insurmountable.
No wonder, seeing how conservative and right-leaning parties are doing everything in their power to delay renewables wherever they can. In Germany, new power distribution infrastructure keeps being delayed despite desperate need (at times most wind turbines and solar farms are shut down remotely because the power cannot be transported to consumers), and they have managed to slow down the rollout of heat pumps that would bring down the gas used for heating.
I'm reminded of "Insulate Britain", a protest group advocating for government-subsidized energy efficiency. People absolutely hated them because of their traffic-stopping tactics.
How much of this is intrinsic to nuclear versus due to regulatory requirements (various taxes including subsidies for certain renewables, early decommissioning)? My sense always was that this was about the latter rather than the former, but I’m happy to be educated.
The implied myth here is incredibly annoying to read. It takes 10 seconds to verify why nuclear power was not deployed more readily, and it's not what you're implying.
To be totally honest, greens are blocking nuclear and conservatives are blocking renewables, so here we are with the worst of both worlds, breathing smoke and fumes.
The more renewables on the grid, the more gas you need. Gas is required to balance out solar and wind until battery technology matures. Renewables eliminate the need for costly coal and nuclear, but they need gas to deal with times of low solar and wind.
So if you have a demand of, say, 100 TWh a year and generate 1 TWh from renewables, you need very little gas. On the other hand, if you generate 60 TWh from renewables, you need more gas?
They are saying that if 50% of your generating capacity drops to 1% of its output, you need a lot of gas-powered plants to make up that difference. If your renewables represent 1% of the generating capacity, you need very little gas to make up that difference.
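To put toy numbers on that (everything here is invented, only the shape of the arithmetic matters): the amount of dispatchable backup is driven by capacity in the worst hour, not by the annual energy share.

    # Illustrative only: all numbers are made up.
    peak_demand_gw = 80.0          # what has to be covered in the worst hour
    renewable_capacity_gw = 100.0  # installed wind + solar

    for availability in (1.00, 0.50, 0.01):  # calm, dark spells push this toward zero
        renewable_output_gw = renewable_capacity_gw * availability
        backup_gw = max(0.0, peak_demand_gw - renewable_output_gw)
        print(f"renewables at {availability:.0%} output -> {backup_gw:.0f} GW of dispatchable backup needed")

What shrinks as the renewable share grows is not so much the backup capacity you keep around, but how many hours per year it actually runs.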
In your example you've reduced generation from fossil fuels from 70% to 40%, so it's still a win.
I assume you're still going on about nuclear. The problem with nuclear is that it doesn't make financial sense even if it can run at 100% of capacity 100% of the time, and it certainly can't cope with variable load: you can't scale nuclear to provide your peak amount, so you need to be able to top up with gas or batteries, just like renewables need topping up.
This is a lie that has been disproven repeatedly and is part of the disinformation spread all over the internet. What _is_ required is flexible power distribution and storage infrastructure.
However you do it, you need to time shift, either through batteries or shifting usage, or ideally both. The interim solution is gas. Because solar is so cheap, it makes sense to take other, more expensive and slower power sources offline, but that does increase how much gas you use even if total fossil fuel consumption goes down.
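Roughly what that time shifting looks like, with invented numbers (more storage or shiftable demand shrinks the gas term; no storage at all makes gas cover every non-solar hour):

    # Toy hourly dispatch, invented numbers: solar surplus charges a battery,
    # deficits are covered from the battery first and from gas only as a last resort.
    demand = [10.0] * 24                                           # GW, flat for simplicity
    solar = [0]*6 + [2, 6, 12, 16, 18, 18, 16, 12, 6, 2] + [0]*8   # GW, daylight bump
    battery_gwh, battery_cap_gwh = 0.0, 30.0

    gas_gwh = 0.0
    for d, s in zip(demand, solar):
        net = s - d                            # positive = surplus, negative = deficit
        if net >= 0:
            battery_gwh = min(battery_cap_gwh, battery_gwh + net)
        else:
            from_battery = min(battery_gwh, -net)
            battery_gwh -= from_battery
            gas_gwh += (-net) - from_battery   # whatever storage can't cover
    print(f"gas burned over the day: {gas_gwh:.0f} GWh")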
WiFi Aware looks interesting, but there seems to be very little information out there beyond Android-related docs and associated links. It seems to be hidden away behind the doors of the Wi-Fi Alliance.
Can anyone familiar with the topic chime in on what it would take to use WiFi Aware on, say, a Raspberry Pi (maybe using a different wireless chip connected via USB)? Maybe even to connect to Android smartphones.
Nicely done! I just spent my last 16 hours of work time implementing the reverse: parsing Ethernet II (with VLANs!), IPv4/IPv6 and UDP up to an automotive IP protocol. The use case is to understand a proprietary bus-capture stream that forwards Ethernet frames (among other things) nested in Ethernet frames, which I receive on a raw socket. Fun stuff! Wireshark and ChatGPT are invaluable for this kind of task.
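For anyone curious, the layering is pretty mechanical once the byte offsets are written down. A simplified sketch in Python (raw-socket setup, IPv6 and the proprietary outer encapsulation left out; offsets follow the Ethernet II / 802.1Q / IPv4 / UDP header layouts):

    import struct

    ETH_P_8021Q = 0x8100
    ETH_P_IP = 0x0800
    IPPROTO_UDP = 17

    def parse_frame(buf: bytes):
        # Ethernet II: dst MAC (6) + src MAC (6) + EtherType (2)
        dst_mac, src_mac = buf[0:6], buf[6:12]
        (ethertype,) = struct.unpack("!H", buf[12:14])
        offset, vlan_id = 14, None

        # Strip 802.1Q VLAN tag(s): 2 bytes TCI + 2 bytes inner EtherType each
        while ethertype == ETH_P_8021Q:
            tci, ethertype = struct.unpack("!HH", buf[offset:offset + 4])
            vlan_id = tci & 0x0FFF
            offset += 4

        if ethertype != ETH_P_IP:
            return None  # IPv6 etc. not handled in this sketch

        # IPv4: header length is the low nibble of the first byte, in 32-bit words
        ihl = (buf[offset] & 0x0F) * 4
        proto = buf[offset + 9]
        ip_src = buf[offset + 12:offset + 16]
        ip_dst = buf[offset + 16:offset + 20]
        offset += ihl

        if proto != IPPROTO_UDP:
            return None

        # UDP: source port, dest port, length (including the 8-byte header), checksum
        sport, dport, length, _csum = struct.unpack("!HHHH", buf[offset:offset + 8])
        payload = buf[offset + 8:offset + length]
        return vlan_id, ip_src, ip_dst, sport, dport, payload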
Can't find any numbers in the linked thread with the patches. Surely some preliminary benchmarking must have been performed that could tell us something about the real world potential of the change?
> There is also, of course, the need for extensive performance testing; Mike Galbraith has made an early start on that work, showing that throughput with lazy preemption falls just short of that with PREEMPT_VOLUNTARY.
How would you benchmark something like this? Run multiple processes concurrently and then sort by total run time? Or measure individual process wait time?
I guess both make sense, along with a lot of other things (synthetic benchmarks, microbenchmarks, real-world benchmarks, best/average/worst-case latency comparisons, best/average/worst-case throughput comparisons...)
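One crude way to look at the latency side (a toy only, nowhere near the rigor of tools like cyclictest or hackbench): saturate every core with busy loops and measure how late a periodic sleeper actually gets woken up, then compare the percentiles across kernel configurations.

    import multiprocessing as mp
    import statistics, time

    RUN_SECONDS = 10.0
    TICK = 0.001  # ask to be woken every 1 ms

    def cpu_hog(stop_at):
        # Busy-loop to keep one core saturated (the throughput side of the trade-off).
        x = 0
        while time.monotonic() < stop_at:
            x += 1

    def latency_probe(stop_at):
        # Record how late each requested 1 ms wakeup actually arrives.
        lateness_us = []
        while time.monotonic() < stop_at:
            t0 = time.monotonic()
            time.sleep(TICK)
            lateness_us.append((time.monotonic() - t0 - TICK) * 1e6)
        lateness_us.sort()
        print(f"wakeup lateness: p50={statistics.median(lateness_us):.0f}us "
              f"p99={lateness_us[int(len(lateness_us) * 0.99)]:.0f}us "
              f"max={lateness_us[-1]:.0f}us")

    if __name__ == "__main__":
        stop_at = time.monotonic() + RUN_SECONDS
        procs = [mp.Process(target=cpu_hog, args=(stop_at,)) for _ in range(mp.cpu_count())]
        procs.append(mp.Process(target=latency_probe, args=(stop_at,)))
        for p in procs:
            p.start()
        for p in procs:
            p.join()

Throughput is the other half of the comparison: you'd also run a real workload (a kernel compile, hackbench, etc.) under each configuration to see what the better latency costs.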
Same here. Shared memory is one of those things where the kernel could really help some more with reliable cleanup (1). Until then, you're mostly doomed to either running a rock-solid cleanup daemon or being limited to eventual cleanup by restarting processes. And I have my doubts that you can avoid ever getting into a situation where segments are exhausted and you're forced to intervene.
(1) I'm referring to automatic refcounting of shm segments with POSIX shm (not SysV!), freeing them when the last process dies or unmaps.
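The closest workaround today is the unlink-early pattern: once every participant has attached, drop the name immediately so the kernel frees the segment when the last mapping goes away. A minimal sketch with Python's multiprocessing.shared_memory (segment name made up):

    from multiprocessing import shared_memory

    # Creator: make a segment under a (hypothetical) name; the name would normally be
    # handed to peer processes out of band (pipe, socket, argv, ...).
    seg = shared_memory.SharedMemory(create=True, size=4096, name="demo_segment")
    seg.buf[:5] = b"hello"

    # ... peers attach here with shared_memory.SharedMemory(name="demo_segment") ...

    # Once everyone has attached, drop the name right away. Existing mappings keep
    # working, and the kernel reclaims the memory when the last mapping disappears,
    # even if a process crashes. The trade-off: late-comers can no longer attach by name.
    seg.unlink()

    seg.close()  # unmap; with the name already gone, nothing is left behind

It's not true refcounted cleanup, though: a crash in the window before everyone has attached still leaves the segment behind, which is exactly the gap the footnote is about.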
It still is in most parts of the country: the type of roof and even the type and color of the tiles are mandated in most areas (through the infamous Bebauungsplan)
I'm into watching construction videos on YouTube, and since most of the content originates from the US, I see lots of people using spray foam insulation without any sort of airtight, moisture-regulating membrane on the inside (behind the drywall, for example). This is a disaster waiting to happen in almost all northern climates during winter (when AC/dehumidification isn't running), for the same reasons outlined in the article here.
It depends on what type of spray foam you use. Closed cell foam is rated to be a moisture barrier. It tends to be more expensive but is worth it to get a moisture barrier and insulation in one. In some scenarios where you need much thicker insulation to hit your desired R value they might start with closed cell for the moisture barrier and then switch to open cell since it is cheaper.
The problem is air escaping from the hot side through "cracks" or holes and causing condensation as it reaches the cold side.
Nordic standards require an air column between the insulation and the outer "cold" layer, to ventilate out any moisture that might get trapped there. There must also be a moisture barrier between the "hot" side and the layer of insulation, typically a PE sheet, to prevent air from leaking into the insulation.
Since the wood expands/contracts with changes in temperature and humidity, filling compartments in wood constructions with foam does not guarantee air tight barriers.
Closed-cell foam will trap moisture underneath it. When it is sprayed onto wood, which is naturally moist, that water will have nowhere to go. Any delamination of the foam from the substrate will form pockets where the moisture concentrates, and as the foam breaks down it becomes acidic.
If I were to sprayfoam something I would only consider using open cell foam. If I were to use other impervious zero-perm insulation materials like rockwool I'd only do so with dimple board to allow air underneath. The small loss of efficiency is a necessary tradeoff for giving the moisture which will always be there a path to escape.
Spray foam doesn't remove the need for designing a proper insulation and moisture barrier system for the building. If you spray foam an interior wall with closed-cell foam, you will most likely add something to allow the other side of the wood to breathe.
Choosing where your moisture barrier line lies is typically easy in new construction but does get tricky with retrofit situations. It sounds like the biggest issue from the article is that they are taking what were vented attics and converting them to non-vented attics with spray foam. The issue isn't really the spray foam, the issue is converting an attic without proper understanding of venting and moisture barriers.