Hacker News | willglynn's comments

This is true for many but not all weather models.

GFS and IFS are both medium-range global models in the class Google is targeting. These models are spectral models, meaning they transform the input spatial grid into the frequency domain, carry out weather computations in the frequency domain, and transform back to provide output grids.

The intuition here is that, at global scale over many days, the primary dynamics are waves doing what waves do. Representing state in terms of waves reduces the accumulation of numerical errors. On the other hand, this only works on spheroids and it comes at the expense of greatly complicating local interactions, so the use of spectral methods for NWP is far from universal.


You're right, and in fact S3 does this with the `ETag:` header… in the simple case.

S3 also supports more complicated cases where the entire object may not be visible to any single component while it is being written, and in those cases, `ETag:` works differently.

> * Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-S3 or plaintext, have ETags that are an MD5 digest of their object data.

> * Objects created by the PUT Object, POST Object, or Copy operation, or through the AWS Management Console, and are encrypted by SSE-C or SSE-KMS, have ETags that are not an MD5 digest of their object data.

> * If an object is created by either the Multipart Upload or Part Copy operation, the ETag is not an MD5 digest, regardless of the method of encryption. If an object is larger than 16 MB, the AWS Management Console will upload or copy that object as a Multipart Upload, and therefore the ETag will not be an MD5 digest.

https://docs.aws.amazon.com/AmazonS3/latest/API/API_Object.h...


It is common to use multipart uploads for large objects, since this both increases throughput and decreases latency. Individual part uploads can happen in parallel and complete in any sequence. There's no architectural requirement that an entire object pass through a single system on either S3's side or on the client's side.


> In Rust, the main thread is special. (I consider this unfortunate, but web people like it, because inside browsers, the main thread is very special.) If the main thread exits, all the other threads are silently killed.

Rust inherits this from `pthread_detach()`:

       The detached attribute merely determines the behavior of the
       system when the thread terminates; it does not prevent the thread
       from being terminated if the process terminates using exit(3) (or
       equivalently, if the main thread returns).


The main thread is special because that's how the runtime works on Unix. In particular, when "main" exits, the process exits. This is required by the C standard. It's also fundamentally built into how Unix processes work: certain process-wide data, like the argv and environ strings, is typically stored on the main thread's stack, so if the main thread is destroyed those references become invalid.

In principle Rust could have defined its environment to not make the main thread special, but then it would need some additional runtime magic on Unix systems, including having the main thread poll for all other threads to exit, which in turn would require it to add a layer of indirection to the system's threading runtime (e.g. wrapping pthreads) to be able to track all threads.


> In principle Rust could have defined its environment to not make the main thread special...

Not to mention they'd have to be very careful with what they do on the main thread after they start up the application's first thread (e.g. allocating memory via malloc() is out), since there are quite a few things that are not safe to do (like fork() that's not immediately followed by exec()) in a multi-threaded program. So even a "single-threaded" Rust program would become multi-threaded, and assume all those problems.


Global lat/long coordinates are defined in terms of coordinate systems like WGS84 or ITRF2020, which are themselves the result of relative measurements between reference stations.

The earth's crust floats on top of the slowly flowing mantle. This matters at relevant length and time scales; in most places, these effects alone are on the order of millimeters per year. One reason it's better to use NAD83 than WGS84 in North America is that NAD83 latitudes and longitudes move with the North American plate.

Positions _are_ relative, and the closer you can put your datum, the less drift you'll accumulate.


There is a strictly literal sense in which you are correct. But there is a practical, pragmatic distinction between measurements we call absolute versus those we call relative, and pedantic correctness misses the point.


The document to which this article refers was published in the Journal of the Royal Society Interface, and the article links there. It is also available as open access, which was not linked:

https://hal.science/hal-04287433v1

https://hal.science/hal-04287433/file/Version%20HALL.pdf


Seconding Beckhoff. EtherCAT is a fantastic protocol, TwinCAT/BSD works great, reliability is excellent. It's super nice to run realtime PLC code on specific processor cores with µs of jitter while other cores run a normal OS with normal applications (e.g. VictoriaMetrics) on the controller itself.

I have a construction project involving several buildings with overlapping infrastructure. Everything gets connected to EtherCAT as quickly as possible. Electric generation: solar panels, batteries, inverters. Energy management: branch circuit monitoring, weather forecasts, solar forecasts, load control for things like EV charging and water heating. HVAC: heat pumps, buffer tanks, circulation pumps, valves. Building automation: lighting, access control. I just add I/O wherever, connect over Ethernet, and glue all the signals together in software.

I wouldn't dare approach a project like this with Arduino.


How is procurement process with Beckhoff? I am tempted to make the jump from mostly AB.


It's… fine? Unlike certain other brands, I've encountered no network of frothing, territorial, gatekeeping dealers with Beckhoff. For my project, I reached out to sales.usa@beckhoff.com, got a rep, asked for a quote, and went from there.

Secondhand can be viable too. Some of my "jellybean" EtherCAT terminals came from eBay. I won't get help from Beckhoff if they break, but given that I already have replacements on hand, I'm really not worried about it.

Beckhoff also lets you download almost all the development tools, runtimes, and PLC libraries without paying. In their words:

> Trial licenses can be generated in the TwinCAT 3 development environment (XAE) for many TwinCAT 3 functions for a validity period of 7 days. This can be repeated any number of times. An internet connection is not required for this. In this way, these TwinCAT functions can be used simply and cost-effectively in laboratory operations, e.g. in the education sector.

This is obviously useful for development and experimentation. It can also be an escape hatch in production if you need to substitute controllers. Beckhoff wants you to pay for what you use, sure, but their licensing scheme goes out of its way to avoid kicking you when you're down.


> Unlike certain other brands, I've encountered no network of frothing, territorial, gatekeeping dealers with Beckhoff.

This. They sell you gear, then leave you alone. If you need help, you call or email. Done. If a vendor demands you create an account to access simple datasheets - run like hell. Once they see you even glancing at a product, they activate a frothing-at-the-mouth sales rep who will launch a harassment campaign, emailing and calling multiple times a week, seemingly forever or until you are EXTREMELY rude to them.


Thirded. I connected my stations' local UDP broadcasts to Prometheus push/VictoriaMetrics:

https://github.com/willglynn/tempest_exporter

Their central web API is nice too (and the tool above can extract metrics from it) but the local, offline data access is what got me in the door. Tempest could shut down their services tomorrow without breaking my setup.


The best treatment is at:

https://blog.rust-lang.org/2024/03/30/i128-layout-update.htm...

Note also that this involves LLVM, so clang < 18 had the same u128 behavior as Rust < 1.77/1.78.


No, clang never had this issue since they bodged it in the front-end. From the post you linked:

> Clang uses the correct alignment only because of a workaround, where the alignment is manually set to 16 bytes before handing the type to LLVM.

Maybe it took so long to notice because LLVM was assumed to be correct, since clang worked?


Large single family homes have 400A 1Ø 120/240V service, which is 96 kW peak or 76.8 kW for NEC's definition of continuous. Most have 200A service or smaller, which is half that.

What load do you imagine causes "most" homes to exceed 100 kW, and "some much higher"?

