silon42's comments

Even Apple's palm rejection is not good enough for me. I really hated the huge touchpad when I was using a Mac.

I really wish it was just low profile...

I won't move to Apple yet... but now there is no reason to buy anything but the cheapest Android phone available.

And maybe a separate one to root while they still can be.


I'd almost never want to do fsync in normal code (unless implementing something transactional)... but I'd want an explicit close almost always (or drop should panic/abort).

I use transactional file operations, for instance, when I write tools that change their own human-readable configuration files. If something is important enough to write to disk, then it's probably important enough that you can't tolerate torn writes.
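One common shape for such transactional writes, sketched here in Python (the function name and structure are illustrative, not a specific library's API): write the new contents to a temporary file in the same directory, fsync it, then atomically rename over the original, so readers only ever see the old file or the new one, never a torn mix.

```python
import os
import tempfile

def atomic_write(path, data):
    """Replace `path` with `data` without torn writes (POSIX semantics)."""
    directory = os.path.dirname(os.path.abspath(path))
    # The temp file must live on the same filesystem for rename to be atomic.
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())  # the one place fsync really earns its keep
        os.replace(tmp, path)  # atomic: readers see old or new, never half
    except BaseException:
        try:
            os.unlink(tmp)
        except OSError:
            pass
        raise
```

Note that the explicit close (here via the `with` block, before the rename) is doing real work too: it is the last point at which buffered write errors can still be reported to the caller rather than silently swallowed.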

Why? The explicit close does almost nothing.

Surely there are prompts on the "internet" that it will borrow from...


Definitionally no.

Each LLM responds to prompts differently. The best prompts to model X will not be in the training data for model X.

Yes, older prompts for older models can still be useful. But if you asked ChatGPT before GPT-5, you were getting a response from GPT-4 which had a knowledge cutoff around 2022, which is certainly not recent enough to find adequate prompts in the training data.

There are also plenty of terrible prompts on the internet, so I still question a recent model's ability to write meaningful prompts based on its training data. Prompts need to be tested for their use case, and plenty of Medium posts from self-proclaimed gurus and similar training-data junk surely were not tested against your use case. Of course, the model is also not testing the prompt for you.


Exactly.

I wasn't trying to make any of the broader claims (e.g., that LLMs are fundamentally unreliable, which is sort of true but not really that true in practice). I'm speaking about the specific case where a lot of people seem to want to ask a model about itself, how it was created or trained, what it can do, or how to make it do certain things. In these particular cases (and, admittedly, many others) the models are often eager to reply with an answer despite having no accurate information about the true answer, barring some external lookup that happens to be 100% correct. Without any tools, they are just going to give something plausible but non-real.

I am actually personally a big LLM-optimist and believe LLMs possess "true intelligence and reasoning", but I find it odd how some otherwise informed people seem to think any of these models possess introspective abilities. The model fundamentally does not know what it is or even that it is a model - despite any insistence to the contrary, and even with a lot of relevant system prompting and LLM-related training data.

It's like a Boltzmann brain. It's a strange, jagged entity.


Even if they don't, maybe go black for all weekends.


That wouldn’t address their liability.


It would raise awareness in the general population and increase justified resistance against this stupid law.


But in that case it'd be easy for supporters of the law to argue it was just performative and clearly not really needed since they're otherwise accessible.


They are free to do that, but what of it? Sanctions and boycotts are "performative" in the same sense, and yet they continue being a popular tool to compel voters and politicians of other countries to act or refrain from acting in particular ways.

Wikipedia is a popular website that many people depend upon; denying access to UK users would not only create a massive inconvenience along with the temptation that it could be avoided if the law were rolled back, but would also encourage more UK users to adopt VPNs, which would subvert the law's effectiveness along with that of a plethora of other authoritarian measures that the UK has in place.


I think the risk is that it becomes framed as extortion, and would cause some proportion of voters - who, at least right after the OSA was put in place, remained relatively in favour of elements of it (though the polls have been wildly flawed) - to double down.

Hence, I think a total block would be better than a partial block, because a total block can legitimately be framed as risk mitigation and would be a lot harder to attack.

That said, some pressure would be better than no pressure, so if the alternative is no block, I'd prefer a partial block.


That could drive users to LLM services to fill in the gaps. I know a lot of people who just use LLMs instead of good ol' internet searches because they are that lazy.


Why do I remember it was 50ms?


You're probably right. It was long long ago... I keep meaning to look at ArcaOS but I never seem to have the hardware to dedicate to it at the same time my interest returns.


IMO it's not ideal for Rust to have only Rust-specific toolkits (at the least, they need to have bindings for other languages).

I tried using Qt (non-QML, via rust-qt) last week... initially things were looking fine (despite unsafe everywhere)... but I quickly found that I need unsupported things (deriving from C++ classes), and the project isn't really maintained anymore, so I'll need to do some low-level work myself if I want to proceed.


Personally, I'd like to have the option of the outbound firewall doing the eSNI encryption; is that possible?


It seems wrong that the app would be responsible for cleanup... Shouldn't this be solved in the shell / terminal? What about kill -9?


When you use ctrl+c, you are not killing the program; you are sending it a SIGTERM signal, which essentially means "could you please stop yourself?", so the program has a chance to clean things up before exiting.

kill -9 is sending a SIGKILL signal which, well, kills the program immediately.
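A minimal sketch of the difference in Python (the cleanup log is just illustrative; and note, as the replies point out, that Ctrl+C actually delivers SIGINT, not SIGTERM): catchable signals like SIGINT and SIGTERM give the process a chance to clean up, while SIGKILL can neither be caught nor ignored, so no in-process cleanup ever runs for it.

```python
import os
import signal

cleanup_log = []

def handle_signal(signum, frame):
    # Our chance to remove temp files, restore the terminal, etc.
    cleanup_log.append(signal.Signals(signum).name)

# SIGINT (Ctrl+C) and SIGTERM (plain `kill`) can be caught.
signal.signal(signal.SIGINT, handle_signal)
signal.signal(signal.SIGTERM, handle_signal)

# SIGKILL (`kill -9`) cannot: installing a handler fails on POSIX systems.
try:
    signal.signal(signal.SIGKILL, handle_signal)
except OSError:
    cleanup_log.append("SIGKILL is uncatchable")

# Simulate Ctrl+C by signalling ourselves; the handler runs, we keep going.
os.kill(os.getpid(), signal.SIGINT)
```

This is also why the shell can't fully solve it: after a SIGKILL the kernel reclaims file descriptors and memory, but anything outside the process - temp files on disk, terminal modes - stays however the program left it. Shells offer `trap` for the same catchable-signal cleanup idea.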


SIGINT, not SIGTERM.


You are right! Thank you!


ctrl-c sends SIGINT.


You are right! Thank you!

