Hacker News | philplckthun's comments

> There really shouldn't be two standard package managers for JS.

I chuckled a little when I realised that npm, Yarn, Yarn 2, and pnpm can all be found alive and well in the ecosystem.


As someone who's worked on this, I'm oh so conflicted. Yes, it's mostly a terrible idea when editors like CodeMirror and the like work well. On the other hand, there's so much default behaviour that's just excellent and would need to be reimplemented if you started from scratch. It can't be denied that contentEditables have the advantage of requiring much less code overall.

To this extent, I've written a hook for React/Preact that attempts to be a decent basis for code editors built on contentEditables: https://github.com/kitten/use-editable

Funnily enough, if you just search the code for "// Quirk" it's easy to see that this experience wasn't pleasant in the slightest. But it did allow me to write a code editor in just about a day which can be found here: https://trygql.formidable.dev/

I'd say there are plenty of edge cases to take care of. But on the other hand, things like composition, selection behaviour, and other niceties are things you have to take care of much less yourself.
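To illustrate the kind of bookkeeping a from-scratch editor has to do itself, here's a minimal, hypothetical sketch (the names are made up, not taken from use-editable) of flattening a caret position inside a list of text leaves into a plain text offset, the sort of selection plumbing contentEditable's native behaviour largely spares you:

```javascript
// Model an editor's content as an ordered list of text leaves (a
// simplification of DOM text nodes) and compute the flat text offset
// of a caret described as (leafIndex, offsetInLeaf).
function flatOffset(leaves, leafIndex, offsetInLeaf) {
  let offset = 0;
  for (let i = 0; i < leafIndex; i++) offset += leaves[i].length;
  return offset + offsetInLeaf;
}

const leaves = ['const x', ' = ', '42;'];
console.log(flatOffset(leaves, 2, 1)); // caret after "4" → 11
```

Restoring a selection after a re-render is the inverse mapping, which is where most of the quirks tend to live.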


This was my experience, too. Contenteditable wasn’t as bad as I was led to expect going into the project. (It is terrible, but not the showstopper everyone says it is.)

The biggest problem I have with it is that layering on collaborative editing is nontrivial.


I'm honestly not sure whether there's really a good excuse to invent a format that's immediately incompatible with all the GraphQL tooling out there, which would then need conversion. After all, debugging and logging tools could still display these queries.

It also conveniently leaves out automatic persisted queries, which replace queries with hashes, and regular persisted queries, where the server only accepts a limited set of allowed queries.

Ultimately, when I think of sending GET requests for GraphQL, I'm immediately more worried about the URL length limit. The risk is that at some point in the future we may run over this limit and get into a lot of unexpected trouble.

Also, this article assumes that you can only stringify GraphQL a single way, with spaces. However, a lesser-known detail of the spec is that commas are treated the same as whitespace. Hence, this is a valid query:

  {item(id:4){id,name,author(id:4){id,name}}}
This isn't far off from reading a complex query string IMO, works via GET with any existing GraphQL API that already accepts GET requests, and isn't a special format that needs to be learned, while still supporting the full syntax.
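As a hypothetical sketch of that idea (`compressQuery` is a made-up helper, not a real minifier; it breaks on string literals, where whitespace is significant), a query can be compressed for a GET URL like this:

```javascript
// Collapse insignificant whitespace into commas, which the GraphQL spec
// treats as equivalent to whitespace, then drop separators next to
// punctuators where no separator is needed at all.
function compressQuery(query) {
  return query
    .trim()
    .replace(/\s+/g, ',')            // any whitespace run → single comma
    .replace(/,?([{}():]),?/g, '$1'); // punctuators need no separators
}

const query = `
  {
    item(id: 4) {
      id
      name
    }
  }
`;

const compressed = compressQuery(query);
console.log(compressed); // {item(id:4){id,name}}
const url =
  'https://example.com/graphql?query=' + encodeURIComponent(compressed);
```

The compressed form stays readable while keeping the URL as short as plain GraphQL syntax allows.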


There are solutions to simply turn GraphQL requests into (traditional) CDN-cacheable requests. Usually this is done using (Automatic) Persisted Queries, where a query, as long as it isn't a mutation, is not only sent as a GET request but is also identified by a hash rather than the entire query string.

This has a couple of limitations that you'd also expect from a CDN cache for REST requests. However, I believe the interesting part about GraphCDN is that it can do more to look at the exact queries and mutations that are run to invalidate queries more precisely.

So, it's likely worth saying that it's not that CDN caching GraphQL is hard, but that getting invalidation and a high cache hit rate right (just as with REST APIs) is hard.


To be fair, since this was a while ago, it's hard to tell what to do about it now. Personally I find it hard to draw any conclusions just due to the time that has passed. I don't use Facebook, so maybe it's just my distance from it. But it did happen, and it's worth stating that this was basically a psychological experiment, not a simple A/B test, run at a company that most likely didn't have an ethics board to review it at the time.

Other sources list a couple of principles behind the ethics of psychological research. The relevant ones being:

- Minimise the risk of harm

- Obtain informed consent

Some of them do state that the latter isn't always possible, since informed consent may influence the outcome.

But the fact of the matter is that Facebook ran an A/B test that could inflict serious harm on the quality of life of its participants, who weren't aware of any research being conducted. Informed consent sounds like it'd be at least the minimum here.

So, I'm not a psychologist, but this does sound like it shouldn't have happened in this way. There were definitely more ethical ways of running this experiment that wouldn't have involved 700K unknowing and potentially unwilling participants.


Let's imagine hypothetically that sad, negative posts get more engagement by whatever metric Facebook uses, and Facebook, paying no attention to sentiment at all, ended up putting more sad posts on feeds. Would that have been unethical? I can't really see what would be so different.


Is it unethical to create an automated system that maximizes global unhappiness for profit?


When a movie makes the audience sad, it wins Oscars, we don't censor it. Why should the rules be different for Facebook?


That is a banal comparison. When a film makes you sad, you are aware of what is going on. If you are unusually sensitive to these types of emotions, you can read about the film ahead of time to see if you might want to avoid it.


Do you typically go read a synopsis of the entire plot of a film, including any surprise developments, before watching it?


No?


Ok. Then what you’re saying doesn’t make sense.


Why?


Because films’ promotions may deliberately conceal information about tragic events in the story to achieve maximum impact and nobody thinks that is unethical.


This feels a lot like talking to Eliza. Your replies very vaguely connect to what’s being discussed in this thread, but there’s just no substance or coherency to the argument.


"When a film makes you sad you are aware of what's going on" is your claim, but I don't see how that applies to something like, say, Terminator 2, whose entire goal, according to Cameron, was "making the audience cry for the Terminator," yet was not promoted as a sad film. It's hard to come up with a principled difference here.


The Terminator 2 audience knows the film is an authored fictional story. It can make someone cry when they didn’t expect it, but they understand that the filmmakers are intentionally trying to provoke certain emotions. If you can’t see the myriad principled differences between that situation, and logging onto Facebook expecting to see an unfiltered selection of posts, I really can’t believe you are trying hard.


If you'll scroll up a bit you'll see this subthread begins when I propose a thought experiment where the "natural" order yields the same results and ask if it's unethical, and people tell me that yes, they think it is. But now you're specifically calling attention to an unmet expectation of "unfiltered" posts (I'd question whether anyone has such an expectation, although the specifics of the curation are not advertised). I think this gets away from what I was talking about in the first place.


It's hard to get a lot of positive reinforcement by interacting with like-minded others at scale through a movie. Facebook's original stated intent was to study contagion of emotion, which seems to me to suggest a multiplayer, interactive effect.


Well, if so the problem goes a bit deeper than Facebook.


Yes, that would be deeply unethical. And to make matters worse, I believe that’s a fairly accurate description of how Facebook works.


So how could someone ethically run social media of any stripe?


Exactly, we have no idea if HN is suppressing positive stories in an experiment or not. Twitter, Reddit, FB, and TikTok all sort content by magic and could be trying to make you sad.


I don’t understand how this question can follow. Are you suggesting that social media simply must optimize for engagement and not pay attention to negative consequences?


What does that mean? Like, we want the Facebook mods to delete anything that's too depressing? Sounds more dystopian rather than less... and I thought we were supposed to be worried about "duck syndrome" where everyone appears to be having great lives, making you feel bad, because you don't see the negatives (like a duck paddling underwater, see?).


For what it's worth, I believe there are plans to reactivate the Siemensbahn tracks for a new line by 2025... or so I've read? Can't quite remember.


Quite appropriately, there are plans to reactivate it to serve the new Siemens research campus that is currently under construction.


Not to toot our own horn, but while this mentions GraphQL with Relay / Apollo as fetching clients, we've started approaching more of these problems with urql and its normalised cache, Graphcache.

Solving the problems the article mentions around best practices for fragments is on our near-future roadmap, but there are some other points here that we've already worked on and that Apollo in particular has not (yet).

Request batching is, in my humble opinion, not quite needed with GraphQL, especially with HTTP/2 and edge caching via persisted queries; however, we do have stronger guarantees around the commutative application of responses from the server.

We also have optimistic updates and a lot of intuitive safeguards around how these and other updates are applied to all normalised data. Updates are applied in a pre-determined order, and optimistic updates are applied in such a way that the optimistic/temporary data can never be mixed with "permanent" data in the cache. It also prevents races by queueing up queries that would otherwise accidentally overwrite optimistic data, deferring them until all optimistic updates have completed, at which point they all settle in a single batch rather than one by one.
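As a hypothetical miniature of that layering idea (not urql's actual implementation), optimistic writes can live in their own layer that shadows the permanent store and is dropped atomically once the real response settles:

```javascript
// Optimistic writes go into per-mutation layers, so reads see them
// immediately, but they can never leak into the permanent store and
// are discarded and replaced in a single step when the server responds.
class LayeredCache {
  constructor() {
    this.permanent = new Map();
    this.optimistic = new Map(); // layerId -> Map of entries
  }

  read(key) {
    // Optimistic layers shadow permanent data, newest layer first.
    for (const layer of [...this.optimistic.values()].reverse()) {
      if (layer.has(key)) return layer.get(key);
    }
    return this.permanent.get(key);
  }

  writeOptimistic(layerId, key, value) {
    if (!this.optimistic.has(layerId)) this.optimistic.set(layerId, new Map());
    this.optimistic.get(layerId).set(key, value);
  }

  // On the server response, drop the optimistic layer and write the
  // confirmed result to the permanent store in one step.
  settle(layerId, entries) {
    this.optimistic.delete(layerId);
    for (const [key, value] of entries) this.permanent.set(key, value);
  }
}

const cache = new LayeredCache();
cache.permanent.set('Todo:1', { text: 'old' });
cache.writeOptimistic('mutation-1', 'Todo:1', { text: 'new (optimistic)' });
console.log(cache.read('Todo:1').text); // "new (optimistic)"
cache.settle('mutation-1', [['Todo:1', { text: 'new' }]]);
console.log(cache.read('Todo:1').text); // "new"
```

Queueing conflicting queries until `settle` runs is what prevents the race described above; the sketch only shows the layering itself.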

I find this article really interesting since it seems to summarise a lot of the efforts that we've also identified as "weaknesses" in normalised caching and GraphQL data fetching, and common problems that come up during development with data fetching clients that aren't aware of these issues.

Together with React and their (still experimental / upcoming) Suspense API it's actually rather easy to build consistent loading experiences as well. The same goes for Vue 3's Suspense boundaries too.

Edit: All this being said, in most cases Relay actually does a great job on most of the criticism the author lays out here, so if the only complaint a reader picks up on is the DX around fragments, and nothing else applies, that once again shows how solid Relay can be as well.


It's not a cryptographic key or a secret of any kind. In the case of React (or other JS libraries) export identifiers are chosen to expose private APIs for integration purposes with things like developer tooling and other functionality that isn't part of the publicly documented API.

Instead of choosing something like `__PRIVATE` the React library in particular chose something more eye catching: `__SECRET_INTERNALS_DO_NOT_USE_OR_YOU_WILL_BE_FIRED`, which is definitely sure to get people's attention when they look into it, but is essentially just a joke.


We actually don't know what the usual internal communiqués at GoDaddy look like. In a vacuum we could judge this to be an effective test; in practice there are many unknowns and factors we don't know about. In my opinion, phishing is also an issue of scale when we talk about companies, meaning there will always be some employees who are more likely to fall for it.

Given how the world has been this year and what some employees may have gone through, the employees who fall for this particular phishing email may actually need more support from their employer.

Either way, this isn't a vacuum and we are talking about a test that is unnecessarily cruel.

Edit: just to make this more constructive, there are always alternatives. Instead of relying on emails alone, employees could be instructed to check in via a second channel on all matters relating to money or a company's IP.


> Sign this thank you to let Apple know consumers are eagerly anticipating anti-tracking protection on iPhone.

I'll only sign a thank you like this when Apple allows other browser engines on iOS, which would be actual consumer-friendly behaviour towards the Web.

