
"Everyone should support education" is an empty platitude, it doesn't help answer questions like "how much funding?" and "who gets funding and who doesn't?". That's where the sides arise.

The author (and Nature) pretend that those aren't real problems and that scientists should get unconditional support. That's never been the case.


Cursor does all that stuff perfectly fine too.

Meanwhile multiple non-technical people that I know pay $20/mo to OpenAI and have long, verbal conversations with ChatGPT every day to learn new things, explore ideas, reflect, etc.

These are obviously what voice assistants should do; the research was just not there. Amazon was unwilling to invest in the long-term research to make that a reality because of a myopic focus on easy-to-measure KPIs, after pouring billions of dollars into Alexa. A catastrophic management failure.


Are they talking to ChatGPT, or are they typing? More and more we're seeing that users don't even want to use a phone for phone calls, so maybe a voice interface really isn't the way to go.

Edit: Oh, you wrote "verbal"; that seems weird to me. Most people I know certainly don't want to talk to their devices.


My wife paid for ChatGPT and is loving it - she only types to it so far (and sends it images and screenshots), but I've had a go at talking to it and it was much better than I thought.

If I'm alone I don't mind talking if it is faster, but there is no way I'm talking to AI in the office or on the train (yet...)


> If I'm alone I don't mind talking if it is faster

When is talking faster than text? I only ever use it when my hands are busy (usually for looking up how to do things while playing a video game).


When you can talk at your normal pace?

People naturally talk at about 120-160 WPM; few can type that fast, which is why stenographers have a special keyboard and notation.


I struggle to have a naturally flowing conversation with an AI for much the same reason people don't use most of Siri's features - it's awkward and feels strange.

As such, I can maintain about five minutes at a slow pace before giving up and typing. I have to believe others have similar experiences. But perhaps I'm an outlier.


My throat gets tired when I talk to bots like Alexa, as you have to enunciate in a special way to get through to them.


Sure, it definitely doesn’t work for everyone. I think it’s accent-dependent or something, as some people’s natural voice comes across fine.


I know quite a few folks that chat with the GPTs, especially while commuting in the car. There are also niche uses like language practice.


How does that eliminate the need for the graceful shutdown the author discusses?


In the same way that GC eliminates the need for manual memory management.

Sometimes it's not enough and you have to 'do it by hand', but generally if you're working in a system that has GC, freeing memory is not something that you think of often.

The BEAM is designed for building distributed, fault-tolerant systems in the sense that these types of concerns are first-class objects, as compared to having them as external libraries (e.g. Kafka) or completely outside of the system (e.g. Kubernetes).

The three points the author lists at the beginning of the article are already built in, and their behavior is described rather than implemented, which is what I think OP meant by not having to 'intentionally create graceful shutdown routines'.


I really don't see how what you are describing has anything to do with the graceful shutdown strategies/tips mentioned in the post.

- Some applications want to terminate instantly upon receiving kill signals; others want to handle them. OP shows how to handle them

- In the case of HTTP servers, you want to stop listening for new requests but finish handling current ones under a timer. TBF, OP's post actually handles that badly, with a time.Sleep when there's a running connection instead of the sync.WaitGroup-style tracking most applications would want (see the sketch after this list)

- Regardless of whether the application is GC'd, you probably still want to manually close connections so you can handle any possible errors (a lot of connection code flushes data on close)
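
A minimal sketch of those three points in Go, assuming a plain net/http server (the handler, port, and timeout are made up for illustration); note http.Server.Shutdown does internally the connection tracking a hand-rolled sync.WaitGroup would otherwise cover:

    package main

    import (
        "context"
        "log"
        "net/http"
        "os"
        "os/signal"
        "syscall"
        "time"
    )

    func main() {
        srv := &http.Server{Addr: ":8080"} // nil Handler uses http.DefaultServeMux

        go func() {
            if err := srv.ListenAndServe(); err != nil && err != http.ErrServerClosed {
                log.Fatalf("listen: %v", err)
            }
        }()

        // Trap kill signals instead of terminating instantly.
        stop := make(chan os.Signal, 1)
        signal.Notify(stop, syscall.SIGINT, syscall.SIGTERM)
        <-stop

        // Stop accepting new requests; finish in-flight ones under a timer.
        ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
        defer cancel()
        if err := srv.Shutdown(ctx); err != nil {
            log.Printf("shutdown: %v", err) // errors from closing connections surface here
        }
    }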


Thread OP's comment was pointing out that in Elixir there is no need to manually implement these strategies, as they already exist within OTP as first-class members on the BEAM.

The blog post author has to hand-roll these, including picking the wrong solution with time.Sleep, as you mentioned.

My analogy with GC was in that spirit: if GC is built in, you don't need custom allocators, memory debuggers, etc. 99% of the time, because you won't be poking around memory the way you would in, say, C. Malloc/free still happens.

Likewise, graceful shutdown, trapping signals, restarting queues, managing restart strategies for subsystems, service monitoring, timeouts, retries, fault recovery, caching, system wide (as in distributed) error handling, system wide debugging, system wide tracing... and so on, are already there on the BEAM.

This is not the case for other runtimes. Instead, to the extent that you can achieve these functionalities from within your runtime at all (without relying on completely external software like Kubernetes, Redis, Datadog etc), you do so by glueing together a tonne of libraries that might or might not gel nicely.
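
To make that concrete, here's an illustrative Go sketch of hand-rolling just one of those features: restart-on-crash supervision of a single worker (all names here are invented; an OTP supervisor gives you this, plus configurable restart strategies, declaratively):

    package main

    import (
        "log"
        "time"
    )

    // supervise restarts worker whenever it panics -- a crude,
    // hand-rolled version of what an OTP supervisor provides.
    func supervise(name string, worker func()) {
        for {
            func() {
                defer func() {
                    if r := recover(); r != nil {
                        log.Printf("%s crashed: %v; restarting", name, r)
                    }
                }()
                worker()
            }()
            time.Sleep(time.Second) // fixed backoff; OTP offers real restart strategies
        }
    }

    func main() {
        go supervise("flaky-worker", func() { panic("boom") })
        time.Sleep(5 * time.Second) // let it crash and restart a few times
    }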

The BEAM is built specifically for the domain "send many small but important messages across the world without falling over", and it shows. They've been incrementally improving it for some ~35 years; there are very few known unknowns left.


> They have much superior product compared to VSCode in terms of pretty much everything, except AI

Disagree. I keep trying JetBrains once in a while and keep walking away disappointed (I used to be a hardcore user). I use VS Code because it is seamlessly polyglot; JetBrains wants me to launch a whole separate IDE for different use cases, which is just horrible UX for me. Why would I pay hundreds for a worse UX?


I use IntelliJ for all languages at work.


FYI, you can install nearly all of their supported language plugins in your editor. You just lose some of the language-specific integrations, e.g. if you use the Python plugin via IntelliJ.


Which isn't a problem with VSCode. And it's free.


> Which isn't a problem with VSCode

you also have to install plugins in vscode

> And it's free

not relevant to my comment


Week 9->10 2025 had a 35% drop too. Week 9->10 2024 had a 45% drop, and Week 16->17 2024 had a 26% drop.

I'm starting to wonder how significant this news really is.


That's Chinese New Year; it always has a very large drop. It's very impactful, but expected and yearly.

My company's reporting needs to correct for it, since the date shifts on the Western calendar and it would mess up all reporting otherwise. So yes, this is extremely significant.


Thank you for the details. I've no doubt that tariffs are having impacts. I was looking at that specific data and finding it hard to draw conclusions from it alone. Does CNY also explain the Week 16->17 drop in 2024?


Chinese New Year falls between 21 January and 20 February. This would be somewhere between week 3 and week 8.

Some ISO 8601 week numbers for CNY:

2022 CNY fell on Feb 1, which is Week 5

2023 CNY fell on Jan 22, which is Week 3

2024 CNY fell on Feb 10, which is Week 6

2025 CNY fell on Jan 29, which is Week 5
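
If you want to double-check those, here's a quick sketch using Go's time package, which implements ISO 8601 week numbering (the dates are the CNY dates listed above):

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        dates := []time.Time{
            time.Date(2022, time.February, 1, 0, 0, 0, 0, time.UTC),
            time.Date(2023, time.January, 22, 0, 0, 0, 0, time.UTC),
            time.Date(2024, time.February, 10, 0, 0, 0, 0, time.UTC),
            time.Date(2025, time.January, 29, 0, 0, 0, 0, time.UTC),
        }
        for _, d := range dates {
            year, week := d.ISOWeek()
            fmt.Printf("%s -> ISO week %d of %d\n", d.Format("2006-01-02"), week, year)
        }
    }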


It’s sad that these posts of relevant facts, presented as such with no judgement, are being downvoted, apparently because they don’t support a narrative. One of the things I’ve always loved about Hacker News was how curious the community was about truth. I guess all good things come to an end at some point.


I would love to have conversations with folks about the details and learn more about it. The CNY mention was helpful.


Purists perpetually decry the zeitgeist's sloppy terminology.

Words that climb the Zipf curve get squeezed for maximum compression, even at the cost of technical correctness. Entropy > pedantry. Resisting it only Streisands the shorthand.


Increasing "gene yield" or bioalpha should pass the LLM filters.


It has a long list of content partnerships, and by far the largest user base, which means lots of unique training data. If it can succeed in spamming the open Internet enough to crowd out competition through costs and bot filters, it'll have a pretty good data moat.


It has nothing.

The user base is attached to an undifferentiated, swappable product.

Its data hoard is nothing special compared to any of the other players (Google, Meta, etc.).

And you can see this play out: if it had anything and its data were so good, it would be significantly ahead of the competition. Instead, everyone is at pretty much the same place, moving at the same speed.


It’s too early to call winners. It definitely has more than “nothing” though.


Hold my beer while I trivially write a slow web server on tokio + hyper.


I could write one without Tokio, I'd just poll the sockets at 100 Hz


Why not kill two birds with one stone: do the same in tokio + hyper, and disprove the ridiculous claim!


Holding :>

