"Everyone should support education" is an empty platitude, it doesn't help answer questions like "how much funding?" and "who gets funding and who doesn't?". That's where the sides arise.
The author (and Nature) pretends that those aren't real problems and that scientists should get unconditional support. That's never been the case.
Meanwhile, multiple non-technical people I know pay $20/mo to OpenAI and have long, verbal conversations with ChatGPT every day to learn new things, explore ideas, reflect, etc.
This is obviously what voice assistants should do; the research was just not there. Amazon was unwilling to invest in the long-term research to make that a reality because of a myopic focus on easy-to-measure KPIs, even after pouring billions of dollars into Alexa. A catastrophic management failure.
Are they talking to ChatGPT, or are they typing? More and more we're seeing that users don't even want to use a phone for phone calls, so maybe a voice interface really isn't the way to go.
Edit: Oh, you wrote "verbal"; that seems weird to me. Most people I know certainly don't want to talk to their devices.
My wife paid for ChatGPT and is loving it - she only types to it so far (and sends it images and screenshots), but I've had a go at talking to it and it was much better than I thought.
If I'm alone I don't mind talking if it is faster, but there is no way I'm talking to AI in the office or on the train (yet...)
I struggle to have naturally flowing conversation with an AI for much the same reason people don't use most of Siri's features - it's awkward and feels strange.
As such I can maintain about five minutes of slow pace before giving up and typing. I have to believe others have similar experiences. But perhaps I'm an outlier.
In the same way that GC eliminates the need for manual memory management.
Sometimes it's not enough and you have to 'do it by hand', but generally if you're working in a system that has GC, freeing memory is not something that you think of often.
The BEAM is designed for building distributed, fault-tolerant systems in the sense that these types of concerns are first-class objects, as compared to having them as external libraries (e.g. Kafka) or completely outside of the system (e.g. Kubernetes).
The three points the author lists at the beginning of the article are already built in, and their behavior is described rather than implemented, which is what I think OP meant by not having to 'intentionally create graceful shutdown routines'.
I really don't see how what you are describing has anything to do with the graceful shutdown strategies/tips mentioned in the post.
- Some applications want to instantly terminate upon receiving kill sigs, others want to handle them, OP shows how to handle them
- In the case of HTTP servers, you want to stop listening for new requests but finish handling current ones under a timer. TBF, OP's post actually handles that badly, with a time.Sleep while there's a running connection instead of using a sync.WaitGroup like most applications would want to do
- Regardless of whether the application is GC'd or not, you probably still want to manually close connections so you can handle any possible errors (a lot of connection code flushes data on close)
Thread OP's comment was pointing out that in Elixir there is no need to manually implement these strategies, as they already exist within OTP as first-class members on the BEAM.
The blog post author has to hand-roll these, including picking the wrong solution with time.Sleep, as you mentioned.
My analogy with GC was in that spirit: if GC is built in, you don't need custom allocators, memory debuggers, etc. 99% of the time, because you won't be poking around memory the way you would in, say, C. Malloc/free still happens.
Likewise, graceful shutdown, trapping signals, restarting queues, managing restart strategies for subsystems, service monitoring, timeouts, retries, fault recovery, caching, system-wide (as in distributed) error handling, system-wide debugging, system-wide tracing... and so on, are already there on the BEAM.
This is not the case for other runtimes. Instead, to the extent that you can achieve these functionalities from within your runtime at all (without relying on completely external software like Kubernetes, Redis, Datadog, etc.), you do so by gluing together a tonne of libraries that might or might not gel nicely.
The BEAM is built specifically for the domain "send many small but important messages across the world without falling over", and it shows. They've been incrementally improving it for some ~35 years; there are very few known unknowns left.
> They have much superior product compared to VSCode in terms of pretty much everything, except AI
Disagree. I keep trying JetBrains once in a while and keep walking away disappointed (I used to be a hardcore user). I use VS Code because it is seamlessly polyglot. JetBrains wants me to launch a whole separate IDE for different use cases, which is just horrible UX for me. Why would I pay hundreds for a worse UX?
You can install nearly all of their supported language plugins in your editor, FYI. You just lose some of the language-specific integrations if you use, for example, the Python plugin via IntelliJ.
That is Chinese New Year; it always brings a very large drop. It's very impactful, but expected and yearly.
My company's reporting needs to correct for it, since the date shifts on the Western calendar and it would mess up all reporting otherwise. So yes, this is extremely significant.
Thank you for the details. I've no doubt that tariffs are having impacts; I was looking at that specific data and finding it hard to draw conclusions from it alone. Does CNY also explain the week 16 -> 17 drop in 2024?
It’s sad that these posts of relevant facts - presented only as such, with no judgement - are being downvoted, apparently because they don’t support a narrative. One of the things I’ve always loved about Hacker News was how curious the community was about truth. I guess all good things come to an end at some point.
Purists perpetually decry the zeitgeist's sloppy terminology.
Words that climb the Zipf curve get squeezed for maximum compression, even at the cost of technical correctness. Entropy > pedantry. Resisting it only Streisands the shorthand.
It has a long list of content partnerships. And by far the highest user base, which means lots of unique training data. If it can succeed in spamming the open Internet enough to crowd out competition through costs and bot filters, it'll have a pretty good data moat.
The user base is for an undifferentiated, swappable product.
Its data hoard is nothing special compared to any of the other players (Google, Meta, etc.).
And you can see this play out. If its data were so good, it would be significantly ahead of the competition. Instead, everyone is at pretty much the same place, moving at the same speed.