Because it hasn’t resulted in a stable situation or a historical event that stops being relevant to the current moment. No harmony has formed since. It is still essentially an ongoing invasion, erasure, and oppression of Tibetan culture and people. See the moves related to giant dam building most recently, but it has been continuous, and there are big concerns among Tibetans about it getting worse still in the future, when the Dalai Lama dies. Only the initial maneuvers of invading are in the past; the rest applies to the current moment, day by day.
Clean drinking water is actually de facto a finite resource. It does recycle through nature, but large reservoirs and water tables are slow to recharge, often taking millennia to form, so there’s a lossiness in that sense: our usage and loss of potable water can’t exceed the overall recharge rate. So it’s something we could exhaust without big technical breakthroughs (desalinating water in large quantities faster than nature does, etc). We rely on maintaining a sustainable rate of consumption to avoid setting up future generations for potential catastrophe, basically. Not saying data centre water usage alone could be the difference, but it’s not implausible if it increases exponentially. Another factor is that existing reserves can be contaminated and made undrinkable, adding an unpredictable variable into the calculations. It’s an interesting topic to read about.
I like that term much better, confabulation. I’ve come to think of it as relying on an inherent trust that whatever process produced a coherent response (a process I don’t think the LLM can really analyze after the fact) is inherently a truth-making process, since it implicitly trusts its training data and considers that the basis of all its responses. Something along those lines. We might do something similar at times as humans: it feels similar to how some people get trapped in lies and almost equate what they have claimed as true with having the quality of truth, simply because they claimed it (pathological liars can demonstrate this kind of thinking).
> since it trusts inherently its training data and considers that the basis of all its responses.
Doesn't that make "hallucination" the better term? The LLM is "seeing" something in the data that isn't actually reflected in reality. Whereas "confabulation" would imply that LLMs are creating data out of "thin air", which would make the training data immaterial.
Both words, as they have been historically used, need to be stretched really far to fit an artificial creation that bears no resemblance to what those words were used to describe, so, I mean, any word is as good as any other at that point, but "hallucination" requires less stretching. So I am curious about why you like "confabulation" much better. Perhaps it simply has a better ring to your ear?
But, either way, these pained human analogies have grown tired. It is time to call it what it really is: Snorfleblat.
Yes! I agree completely. They’ve not even turned on the money faucets yet. These prices are likely just to hook users on the product, and will eventually be priced at something comparable to, though favourably below, a minimum hourly wage. Not implying a nefarious scheme, I just think that’s how the economics of it will pan out.
What’s amazing to me is this design looks a hella lot like tree branch formations to me. Makes me wonder if trees have some form of antenna-like functionality we are unaware of.
I see what you mean as in no tolerance for abusive people who can currently get away with overtly treating people like shit, but I do immediately think “that generally captures the more trivial abusers.” The most destructive ones tend to be covert, and more often will appear as friendly and polite as LLMs when setting up their targets. That makes me wonder whether it will create a harder or easier era for them. Not something I’ve thought about enough to have formed an opinion, but it makes me wonder.
The author is describing the socially (and physically) destructive percentage of the population who just want to grab power through manipulation and control, and the way they express this through social media, I believe (dark triad personality disorders, loosely). The only danger I see is an embrace of passivity in the form of “anyone who objects in passionate terms to things that are happening is the real problem.” Which would be even worse, when that percentage has real power, and real ability to pull levers. Not to say everyone should go around screaming or protesting with every tweet, etc. It's the balance of these things I think is off-kilter, not a simple solution of “just act aloof and block the right people, and all will be well in the world, just like it is in Starbucks when I leave my apartment each morning.” In any case, that's my two cents. There's a balance to be struck that this article doesn't really get at.
There is a lot of behavior online which I'd characterize as "hearing the dog whistles and barking" that, when you confront folks, they will characterize as "objecting to things in passionate terms".
Like the Otaku described by Azuma [1], there is a definite regression in terms of the use of language and ideology: essentially a reversion from a language-using animal, which can create unlimited meanings by putting together a finite vocabulary in a grammatical system, to one that treats individual words as having a meaning in and of themselves.
For instance, anti-resilience activists will run you out of some communities because you use the word "snowflake", because this is a dog whistle that makes them bark. With their lexicon of triggering words in hand, you can talk about the dangers of anti-resilience all day and you're talking right past them.
This style of communication is especially dangerous for marginalized communities because it creates a bubble of false consensus: it makes them think somebody agrees with them, but spares them the hard work of explaining themselves, and the even harder work of bringing about the change of heart across society that would be necessary to do something about widespread and durable problems such as the mutual lack of respect between black Americans and the police.
Yes! It needs, and seems to want, the human to be a deep collaborator. If you take that approach, it is actually a second senior developer you can work with. You need to push it, and explain the complexities in detail, to get the fuller rewards. And get it to document everything important it learns from each session's context. It wants to collaborate to make you a 10X coder, not to do your work for you while you laze. That is the biggest breakthrough I have found. They basically react like human brains, with the same kind of motives. Their output can vary dramatically based on the input you provide.
I suspect the US definition of sandwich is different from the European one, but I'm genuinely not sure. Curious — can someone give me a few examples describing the $10 sandwiches you get in the US? Are we talking warm, ordered off a menu, good quality meat, filling enough to serve as a meal?
This whole conversation has reminded me of the $5 milkshake conversation in Pulp Fiction.
I’ve signed up for the Kagi trial, and so far I’m liking it. A breath of fresh air compared to the free ones. Best result, first position.