Your link doesn't exactly mean what you think --- it's giving a combination of economic growth and population growth. Looking at just GDP growth, your link suggests they are the 6th fastest growing US state economy. As a sibling commenter noted, this growth is on top of already being the largest state economy. (Texas had the 2nd largest GDP percentage increase from 2020 to 2021. Texas grew 2% faster than California, for an economy only 2/3 the size. Raw data [which the US News article cited] here: https://www.bea.gov/itable/regional-gdp-and-personal-income )
Haha, the Dutch word for insulation is "isolatie". The same word also translates directly to the English "isolation", as in being away from everyone.
But crucially, the incentives are different than those people have in the middle of their lives.
Most people's usual incentives lead them to optimize for the short-to-medium-term future, i.e. the next couple of months or years.
People on their deathbed will optimize either for the extremely short term (finishing this conversation) or the extremely long term (the last chance to affect how they'll be remembered).
This suggests that if people lie on their deathbed, they'll lie in different ways than they ordinarily would. Even if you can't take deathbed statements as unvarnished truth, their differences from everyday statements can be revealing.
I too wish to add this to my repertoire. I don't think it will replace :s, but occasionally I find myself copying a line to use as a template, and when I go to edit each copy I'm changing the same part to something different each time. Assuming I remember it, this trick seems like a good approach for that.
I don't think the parent was talking about JIT power generation, but rather using JIT (in other industries) as a comparison to talk about the efficiency-reliability tradeoff. The analogy here is that the cost savings of avoiding winterization make the power companies more competitive (more efficient) but less reliable.
You are, of course, right in all the specific details you've brought up vis-a-vis the winterizing and the nature of the power grid. I just think you missed the parent's point slightly.
I have seen you and others make this point in the past, and it always seems equivalent to creationists shouting "Scientists who believe in evolution are the REAL dogmatists who don't want to look at the facts! We are really being persecuted by the religion of Science!"
Your talk on this subject (helpfully posted below) only furthers this impression when you present a variety of arguments of wildly varying strengths against AI, in the same vein as "37 challenges to evolution". It makes me feel like you have some valid criticism and a lot of bad-faith argumentation.
Sadly, I think this characterization is only mostly unfair rather than entirely so. I enjoy your writing and thoughts, and you speak with clarity, but on this subject I think you have constructed a mold of bad-AI-argumentation that you squeeze all AI-argumentation into and in so doing fail to rebut any of it.
The difference here (that I hardly believe needs explaining) is that the argument for evolution and natural selection is firmly rooted in observed reality. Even if you are a die-hard creationist, you don't dispute the existence of a wealth of evidence ready at hand that can be marshalled in the argument, for or against. We live on a planet teeming with life, and we are all agreed on the problem that needs explaining (we started with hydrogen and got armadillos, what happened?)
Compare this to the debates about hyperintelligence and the anticipated behavior of posited hyperintelligent beings. These consist of (1) an argument by extrapolation that machines or other organisms can surpass human intelligence bolted onto (2) endless from-first-principles blathering about how such a posited hyperintelligent entity would behave. It's like looking at the vial of hydrogen in an otherwise empty universe and trying to deduce things about armadillos from it. In fact, it's much worse, since we are trying to infer things about hypothesized beings who are, by definition, beyond the ability of our minds to encompass.
This is exactly unlike any scientific discourse, and exactly like deist arguments about the nature of the gods, where (1) you first prove a God must exist, from whatever 'unmoved mover' argument you find personally convincing, and then (2) infer a huge amount of information about that God's behavior (omniscient, benevolent, likes justice) through a series of intellectual non sequiturs. The only innovation here is people swearing up and down that we can build the god ourselves.
The one thing we know about hyperintelligence, in any form, is that it is fundamentally not something whose behavior and nature we can infer from first principles, for the same reasons your cat will never understand why you didn't get tenure.
I see nothing wrong in the intellectual project of trying to frame questions about the nature of intelligence, and how computers can behave in intelligent ways. Nor do I think it's foolish to wonder what it would take for machine intelligence to arise, or to approach even deeper questions, like the physical basis of consciousness.
That's not what we see here, though. Rationalists like gwern take the football and run it into the end zone of GAME THEORY, in the end revealing far more about themselves, their anxieties, and their hopes than about the world we inhabit together. They are accompanied by a bunch of otherwise smart people who have scared themselves silly with the prospect that we might build these gods by accident, abruptly, and that we are at imminent risk of doing so.
And to the extent that large numbers of smart people (including ones with access to great wealth) have bought into what is fundamentally an apocalyptic religious cult, I think we at least need to call it by the right name.
I find this type of argument confusing. Let's say we did live in a world where their hypotheses were true. To make it concrete, let's say that a SuperAI was 20 years away and would wipe out humanity. How would we be able to know that right now, other than through the type of speculative inference you're criticizing?
Put another way, just because something is impossible to demonstrate conclusively and/or via observed reality does not automatically mean it can't be true, right? Of course it does mean that these claims warrant far more skepticism, and probably the large majority of them are untrue. But it seems obviously incorrect to assume automatically that they cannot be true. Am I misunderstanding your reasoning in some way?
Comments probably have positive expected usefulness, in a probabilistic sense. Unfortunately, extracting that value, separating the insights from the dross, is work and requires effort. Effort that perhaps the person doing the original work being commented on shouldn't necessarily be expected to take up.
Additionally, human nature makes it hard to receive legitimate criticism dispassionately, let alone when it comes in a big pile of stuff that also contains pointless abuse.
There is value in comments, but depending on a project's circumstances it may or may not be economical to extract it.
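To put that expected-value claim in toy terms, here's a minimal sketch in Python; every number in it is invented purely for illustration, not measured from anywhere:

    # Toy model: is reading a pile of comments worth it on average?
    # All numbers below are assumptions made up for illustration.
    p_insight = 0.05       # assumed chance a given comment holds a real insight
    insight_value = 10.0   # assumed payoff of finding one insight
    reading_cost = 0.2     # assumed cost of reading any single comment

    ev_per_comment = p_insight * insight_value - reading_cost
    print(ev_per_comment)  # ~0.3 > 0: positive in expectation

The catch is exactly the one above: that positive average only materializes if you pay the reading cost on every comment, insightful or not.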
> Effort that perhaps the person doing the original work being commented on shouldn't necessarily be expected to take up.
It doesn't apply to all situations, but sometimes you end up with less net effort by listening thoughtfully. I expect this is especially the case when you want or need to optimize for some form of popularity. If you're just making things for the joy of creation, sure, ignore everyone; pleasing an audience is certainly more work than pleasing only yourself.
> Both sides are guilty to some degree, but, and this is the crucial point, not to the SAME degree.
Yep, those other guys are way more polite as they destroy the future of my country for their own short-term gain! If only they could rule some cities or states exclusively for decades, I'm sure we'd see the kind of America that we all deserve exemplified at the local level!
One side wants to bleed government out with a thousand cuts, drown it in a bathtub and then shoot it in the head, and is willing to commit ̶t̶r̶e̶a̶s̶o̶n̶ light sedition to get what it wants, whereas the other side seems terminally allergic to any exercise of its own power on the rare occasion it can seize any.
But hey, at least they find common ground in being in the pocket of corporate interests.
I think I'd rather have a comment "this site might be helpful" on the page than not. It doesn't take long to classify a link as either potentially helpful or useless, and I'm usually finding the SO page through a Google search anyway. That may indicate I don't know the right query or jargon for what I want. In any case, I'd rather that SO err on the side of redundancy.
The core of the problem there is that for Gregg, "this site might be helpful" is an understatement; he knows there is a good chance that it will be helpful, that there is direct relevance...
...and then there are the legions of folks who've been posting links to the first Google result with that same text for years. Those are over-selling it: sure, the link might be helpful, buuuut... Probably not. These have been the bane of many a forum, clogging up the first page or so of responses, sometimes discouraging folks with actual knowledge of the topic from even bothering to respond.
There needs to be some way of distinguishing between the two.
Maybe we're supposed to consider how many reputation points the person has in order to judge the likely quality of their link. Those kinds of decisions are already built into Stack Overflow with rules on who can write comments, etc. Maybe they just need tweaking so that posting a URL in a comment requires an even higher reputation. If you don't have that reputation, you can do the harder work of writing an actual answer that includes the link.
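To make that concrete, a minimal sketch of the kind of gate I mean; the thresholds and the naive link test here are made up for illustration, not Stack Overflow's actual rules:

    # Hypothetical moderation gate, not Stack Overflow's real logic.
    # Idea: link-bearing comments demand more reputation than plain ones.
    MIN_REP_COMMENT = 50         # assumed baseline reputation to comment
    MIN_REP_LINK_COMMENT = 500   # assumed higher bar for link-dropping

    def may_post_comment(reputation: int, body: str) -> bool:
        has_link = "http://" in body or "https://" in body
        required = MIN_REP_LINK_COMMENT if has_link else MIN_REP_COMMENT
        return reputation >= required

Anyone under the link threshold could still contribute the harder way: a real answer explaining why the link is relevant.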
Not too long ago there was news that CA was poised to pass Germany to become the 4th largest economy in the world. (CA press release here: https://www.gov.ca.gov/2022/10/24/icymi-california-poised-to... ; original article by Bloomberg)