Books may not be good propaganda for the latest, localized issues, but they are fantastic propaganda for ideology.
I read Atlas Shrugged as an impressionable young teen, and developed some pretty horrible notions about society and morality (and literary technique) as a result. Of course I saw the error of my ways, in no small part by reading other books!
Don't get me wrong, books-as-propaganda isn't necessarily bad. Animal Farm, 1984, To Kill a Mockingbird... These are brilliant but are also such effective forms of propaganda that even mentioning their titles is a form of propaganda in itself.
> Of course I saw the error of my ways, in no small part by reading other books!
I think that shows their weaknesses. Propaganda seems to work best when reinforced over long periods. People read a book and get really into something for a while: X is now the one true diet! However, I rarely see longer-term shifts without something else reinforcing the ideas.
By comparison, the US military has spent decades subsidizing media outlets that want access to military hardware, as long as they follow a few guidelines. It's a subtle drip of propaganda, but across America and much of the globe people's perception has very much been influenced in an enduring fashion. No single episode of talk radio or Fox News is particularly effective, but listen for years and you get a meaningful effect.
>I read Atlas Shrugged as an impressionable young teen, and developed some pretty horrible notions about society and morality (and literary technique) as a result. Of course I saw the error of my ways, in no small part by reading other books!
I would be more worried about you developing a terrible sense of narrative and character development. I would kill for a well-written ancap paradise book (there are plenty of ancom options), but it honestly just sucks as a piece of writing; I can't get into it.
To be fair, the general public have been conditioned for a while now by things like blockchain and VR to be completely underwhelmed, perhaps rightfully so, by whatever's coming out of San Fran and Seattle.
So in the public consciousness it's like (NFTs, meme coins, metaverse, AI)
When I think it's more like (internet, smartphones, AI)
We'll see who's right in a few years I guess. But I'll +1 your view that plenty of people put AI in the first group, I know a few myself.
In western politics, there are various definitions of the word "progressive". The definitions that include Kamala Harris are mostly used by right-wing Americans.
How much corporate funding did Bernie get?
Why do you think capital supported Kamala? Especially in hindsight?
And your joke about left vs right sponsorship of streamers has a very soft underbelly, which, if you don't know about it yet, kind of tells the whole story right there.
> In western politics, there are various definitions of the word "progressive". The definitions that include Kamala Harris are mostly used by right-wing Americans.
No, Kamala Harris had some pretty extreme "Progressive" positions such as open borders.
> And your joke about left vs right sponsorship of streamers has a very soft underbelly, which, if you don't know about it yet, kind of tells the whole story right there.
I don't see the point of insinuation. Make a point, or leave it, please. Doing this is just a waste of time.
Hacker News is not the place for political arguments. That's not just a suggestion, it's a rule. I noticed somebody used an ambiguous word in a way that, IMO, was not quite correct, and it is an interesting word, so I clarified the technicality, and mea culpa, I probably dipped into actual politics too far. Let's get back to building stuff, yes?
Kamala Harris's voting record is extremely progressive. Nevermind the subjective approximation of her policy positions as the Democrat nominee, which was watered down as she tried to garner broader support compared to her 2020 primary run.
>The Voteview project (now based at UCLA) has, since the 1980s, employed the roll-call votes cast in Congress to locate all senators and representatives on a liberal-conservative ideological map. These data and methods have been utilized by academics in thousands of peer-reviewed books, book chapters and journal articles. Although no method is perfect, there is a general consensus within the academic community that the NOMINATE methodology employed by the Voteview project and its close cousins represent the gold standard.
This places her on a spectrum where the farthest left you can go is the most left leaning US Democratic senator, which is not very "progressive" in the context of western politics as I mentioned in my last comment.
Yeah, exactly. When someone like Bernie Sanders talks about the economy and big corporations today, they sound a lot like Teddy Roosevelt. Teddy Roosevelt was a full-on capitalist and proud of it, and definitely would not have been understood as a "leftist" at the time. He believed the USA government's job was to act as a strong, firm check against the worst tendencies of capitalism (like monopolization), but only so that capitalism could function at its best. There were many at the time who did not believe capitalism was really the way to go, though! Just look at organizations like the IWW, and the various worldviews that were sympathetic to the rise of the USSR just a few years later. Hell, one of the people who ran against Teddy Roosevelt was Eugene Debs, a socialist who got more than an insignificant number of votes.
But 100+ years later, a Teddy Roosevelt-esque understanding of government and capitalism is the furthest left USA politics can imagine.
"Democrat" is a long-used general US political slang to refer to an individual member of the Democratic Party (or to refer to a collective of individual members if used as the plural "Democrats"). In the past few decades right-wing commentators have made frequent improper use of the slang to refer to the official party, partly due to its easy association with negative words such as "autocrat" and "plutocrat", resulting in the common misuse of the slang. However, there is no such thing as the "Democrat Party" or "Democrat nominee".
> The obsession with AI (and other vapourware) in our industry ... fuelling the hard-right — who coincidentally are very much using AI.
Is it useless or not? If it's vapourware, why would you care if the other side uses it? If the far right is using it successfully, then by definition it is not vapourware, right?
Because the output from LLMs drowns out everything else. So if people use it to drown actual discussions, then yes, it's useful for that. Everyone else, though, has to suffer.
I think that aligns with what GP is saying: if one is going to say people are using it, even if for things you don't like, then choosing to call it vapourware in the same paragraph is a confusing use of the term.
In a charitable reading, I think the author meant something along the lines of "fails to be as useful as made to sound on things I think are worth valuing but very useful for things I think are slop" but chose a term with a different meaning by mistake.
LLMs are purported by their creators to have a different use (advancing human knowledge, genuine artificial intelligence, etc.) than drowning discussions online. The fact that people can find some uses for bad technology does not make it less of a failure, or those goals less hot air.
It fits for software that has not reached its creators' stated goals. LLMs are not AI and have not improved lives for any human, outside of making some people rich.
I'd disagree that the definition fits that situation, as vapourware refers to software that is still unobtainable, not software that is available but whose reviews and feature coverage fall short of the advertising hype. Are we able to talk about that definition further before we dive into talking about your views on LLMs?
As someone who uses LLMs every day for general questions/brainstorming, more efficient coding, and building a product used by doctors to improve their documentation (saving them hours per week and freeing them to interact with their patients more personally), I would like more of this hot air please. Would you begrudge disabled people their new assistants? Language learners their translators? I could go on.
LLMs have some very important downsides, and I fully agree that they are dangerous, but we should dispel the notion that they don't have positive use cases. That leaves the benefits on the table, while the bad actors will continue with their destruction anyways.
Anyway, my original point was indeed just about the semantics of the word "vapourware", which if I'm interpreting the author correctly, would be better replaced with "malware" (not that I agree with such a stance).
Shady business, potentially, but you might be underestimating how much some guys really, really need to have the most expensive watch in their friend group.
IMO it's so easy to ChatGPT your homework that the whole education model needs to flip on its head. Some teachers already do something like this, it's called the "Flipped classroom" approach.
Basically, a student's marks depend mostly (only?) on what they can do in a setting where AI is verifiably unavailable. It means less class time for instruction, but students have a tutor in their pocket anyway.
I've also talked with a bunch of teachers and a couple of admins about this. They agree it's a huge problem. At the same time, they are using AI to create their lesson plans and assignments! Not fully, of course; they edit the output using their expertise. But it's funny to imagine AI completing an AI assignment with the humans just along for the ride.
The point is, if you actually want to know what a student is capable of, you need to watch them doing it. Assigning homework has lost all meaning.
The education model at high school and undergrad uni has not changed in decades; I hope AI leads to a fundamental change.
Homework being made easy by AI is a symptom of the real issues.
Being taught by uni students who learned the curriculum last year, and by lecturers who only lecture out of obligation and haven't changed a slide in years.
Lecturers who refuse to upload lecture recordings or slides.
Just a few glaring issues; the sad part is that these are rather superficial, easy-to-fix cases of poor teaching.
I feel AI has just revealed how poor the teaching is, though I don't expect any meaningful response to be made by teaching establishments.
If anything AI will lead to bigger differences in student learning.
Those who learn core concepts and how to think critically will become more valuable, and the people who just AI everything will become near worthless.
Unis will release some handbook policy changes to the press and will continue to pump out the bell curve of students and get paid.
And yet all the people who created all the advances in AI have extremely traditional, extremely good, fancy educations, and did an absolutely bonkers amount of homework. The thing you are talking about is very aspirational.
There's some sad irony to that: making homework easier for future generations, but those generations being worse off on average as a result. The lack of AI assistance was a forcing function for greater depth.
Outliers will still work hard and become even more valuable, AI won't affect them negatively.
I feel non outliers will be affected negatively on average in ability to learn/think.
With no confirming data, I feel those who got that fancy education would have done so at any other institution. Those fancy institutions draw in and filter for intelligent types rather than teach them to be intelligent; it's practically a prerequisite.
I don't see a future that doesn't involve some form of AR glasses and individual tuned learning. Forget teachers, you will just don your learning glasses and have an AI that walks you through assignments and learning everyday.
That is if learning-to-become-a-contributing-member-of-society doesn't become obsolete anyway.
> Flipped classroom is just having the students give lectures, instead of the teacher.
Not quite. Flipped classroom means more instruction outside of class time and less homework.
> This is called "proctored exams" and it's been pretty common in universities for a few centuries. None of this addresses the real issue
Proctored exams is part of it. In-class assignments is another. Asynchronous instruction is another.
And yes, it addresses the issue. Students can use AI however they see fit, to learn or to accomplish tasks or whatever, but for actual assessment of ability they cannot use AI. And it leaves the door open for "open-book" exams where the use of AI is allowed, just like a calculator and textbook/cheat-sheet is allowed for some exams.
Flipped classroom sounds horrible to me. I never liked being given time to work on essays or big projects in class. I prefer working at home, where the environment is much more comfortable and I can use equipment the school doesn't have, where I can wait until I'm in the right mood to focus, where nobody is pestering me about the intermediary stages of my work, etc.
It also seems like a waste of having an expert around to be doing something you could do at home without them.
Exams should increasingly be written with the idea in mind that students can and will use AI. Open book exams are great. They're just harder to write.
I should add that upon reflection, I did have some really good "flipped classroom" experiences in college, especially in highly technical math and philosophy courses. But in those cases (a) homework was really vital, (b) significant work was never done in class, and (c) we never watched lectures at home. Instead, the activity at home (which did replace lectures) was reading textbooks (or papers) and doing homework. Then class time was like collective office hours.
Failure to do the homework made class time useless, the material was difficult, and the instructors were willing to give out failing grades. So doing the homework was vital even when it wasn't graded. Perhaps that can also work well here in the context of AI, at least for some subjects.
That's a good point, and maybe I was preaching the gospel of flipping too hard. It is by no means a silver bullet.
Should we let the kids who cheat using AI drop by the wayside, never learning a thing for themselves? Or should we do the same for kids who, for whatever reason, just will not do school work outside a classroom? Maybe it works really well for some subjects and not others? Or only for some age ranges? What about the students like you, and there are probably a lot of them, where it would be unfair to judge their abilities at specific times in specific settings?
I guess the reason I bring it up now is that AI has tipped it over the edge, where cheating is now so easy and effective that it is starting to tempt kids who would not otherwise cheat.
Thank you, it's amazing how people don't even try to understand what words mean before dismissing them. Flipped makes way more sense anyway, since lectures aren't terribly interactive. Being able to pause/replay/skip around in lectures is underrated.
Except that students don't watch the videos. We have so much log data on this - most of them don't bother to actually watch the videos. They intend to, they think they will, but they don't.
As a university student currently taking a graduate course with a "flipped classroom" curriculum, I can confirm that many students in the class aren't watching the posted videos.
I myself am one of them, but I attribute that to the fact that this is a graduate version of an undergrad class I took two years ago (but have to take the grad version for degree requirements). Instead, I've been skimming the posted exercises and assessing myself which specific topics I need to brush up on.
If they can perform well without reviewing the material, that's a problem with either the performance measure or the material.
And not watching lectures is not the same as not reviewing the material. I generally prefer textbooks and working through proofs or practice problems by hand. If I listen to someone describe something technical I zone out too quickly. The only exception seems to be if I'm able to work ahead enough that the lecture feels like review. Then I'm able to engage.
I think the tricky bit is that AI companies make money off the collected works of artists, regardless of user behaviour. Suppose I pay for an image generator because I like making funny pictures in Ghibli style, then the AI company makes money because of Ghibli's work. Is that ethical? I can see how an artist would get upset about it.
On the other hand, suppose I also like playing guitar covers of songs. Does that mean artists should get upset at the guitar company? Does it matter if I do it at home or at a paid gig? If I record it, do I have to give credit to the original creator? What if I write a song with a similar style to an existing song? These are all questions that have (mostly) well defined laws and ethical norms, which usually lean towards what you said - the tool isn't responsible.
Maybe not a perfect analogy. It takes more skill to play guitar than to type "Funny meme Ghibli style pls". Me playing a cover doesn't reduce demand for actual bands. And guitar companies aren't trying to... take over the world?
At the end of the day, the cat is out of the bag, generative AI is here to stay, and I think I agree that we're better off regulating use rather than prohibiting it. But considering the broader societal impacts, I think AI is a more complicated "tool" than other kinds of tools for making art.
> I think the tricky bit is that AI companies make money off the collected works of artists,
There is also a chance that AI companies didn't obtain the training data legally; in that case it would be at least immoral to build a business on stolen content.
Hold on, I feel like everyone's missing that there's a real argument here. I think the key point was:
>They are just judging if anything reaches the point where shareholders were legally harmed, which still gives a lot of gray area to the acquiring company.
This distinguishes the lawsuit failing from the idea that a fair price was paid. The competing contentions are (a) fair price vs (b) unfair but beneath threshold of legally punishable harm.
Note that this is a civil suit, so the concept of a “threshold of legally punishable harm” doesn’t apply. There’s no “punishment,” and the plaintiff doesn’t need to meet the high standards (proof beyond a reasonable doubt, etc.) for imposing a punishment.
Under Delaware law, there are two standards for evaluating this kind of claim. When there is no conflict of interest, the court applies the “business judgment rule,” which is similar to what you seem to be thinking—it gives corporate officers wide latitude.
But when there is a conflict of interest, the court applies the “entire fairness” standard, which requires both fair dealing and a fair price. And a fair price means what it sounds like—it’s what an objective businessman would consider a fair price under the circumstances. It doesn’t need to be the best price, but it must be within the range of fair. And to establish a fair price, the court relies on evidence from financial valuation experts. It’s a rigorous standard that’s hard to meet.
> And to establish a fair price, the court relies on evidence from financial valuation experts.
I generally find expert testimony to be suspect. Anyone can be trotted forward as an expert, rattle off their credentials, and say whatever they feel like saying, depending on who is paying them to testify. And financial valuation is not a science; there is of course plenty of math involved that takes into account hard, objective numbers, but a good chunk of it is opinion, too, as no one can know the future.
Having said that, the Delaware Chancery Court of course has more experience in these matters than any other state's courts, so I am of the opinion that they're less likely to be duped by "experts", but still... it can and does happen.
I agree with you to a degree about expert testimony. But I'd argue that a Delaware Chancery judge reviewing expert opinions from major investment banks is more likely to come to an accurate assessment than many other people offering views on the transaction. The legal ruling isn't definitive for anything more than the case that was before the court, but I think it should be heavily weighted by anyone trying to inform their views about what happened.
I’d also point out that, in the legal industry, Delaware’s entire fairness standard is seen as a rigorous standard that typically results in victory for the plaintiff. The Chancery Court ruling resulted in various law firm updates using the case as an example of how the entire fairness standard isn’t always a death sentence for defendants.
NCEES has brought back the control systems professional engineering licensure. This is the same license that civil engineers use to stamp designs, for example.
Of course, the license doesn’t mean anything if everyone falls under an industrial exemption. I’d be in favor of safety-critical software requiring a PE stamp.
I have two feelings on this. One is alarm, because sentiments like this are the backbone of misinformation believers and spreaders, and I think we're in the worst era of misinformation that we've been in in a long time. Certainly the worst since the dawn of the digital age. Experts are right about vaccines. Right about building your savings with a 401k. They're right about using sunscreen. They're right about not ingesting too much sugar. They're right about reading to your kids from an early age and right about the impacts of tariffs on the economy. They're right about climate change. They're right about the Higgs boson, etc. etc. In almost every case, the people going against the experts on these things are cranks, frauds, or confused conspiracy theorists.
But my other feeling is one of agreement in a very qualified sense. I believe that within the U.S. legal system, people who are presented as experts in certain forms of science are able to invoke an unearned professional authority and legitimacy that has nothing in common with genuine expertise. When we talk about pseudoscience in the modern age, a lot of the time it's about new age crystals or evolution denial, but I think expert witnesses presented as authoritative in courtrooms have been responsible for generations upon generations of pseudoscience of various types. Everything from penmanship analysis to bite mark analysis to body language experts to, rather remarkably, supposed 911 phone call tonality analysis experts who claim that wrongly timed emotional tremors, or the presence or absence of emotion, prove the caller's involvement in a crime.
And while it might be a gray area, I suspect there's at least a fair amount of crankery or motivated reasoning with hired gun economic experts summoned to Delaware courts to testify in favor of major corporate acquisitions.
I feel like this fixation on "punishment" as a legal term of art is not strictly necessary and my point can be reinterpreted in a charitable way that restates the same thing using different but functionally equivalent magic words.
So swap out "punishable" and instead say "legally actionable" (or other preferred synonym) and you nevertheless have an assessment that falls under what you noted is the entire fairness standard and the upshot is the same.
Also, my understanding is that courts defer to the experts of the acquiring company. And if those experts are predisposed to a favorable interpretation that serves the acquiring company, the valuation may sit at the less-than-optimal end of even the range of values they produce, and they, by contrast to an actual market, might be much more lenient than a market would be in determining the price. So there's a convergence of variables that underscores the difference between fair as we conventionally understand the term (which is what we were all interested in here) and whatever it means to have survived legal scrutiny in Delaware.
Which again I would say means that this debate has real teeth and it's more than semantically equivalent differences in emphasis.
(c) no fair price can possibly be determined but the burden of proof lies with the claimant
(d) a court has no idea what a fair price would look like but made a finding of fact based on expert testimony despite being poorly situated to evaluate it
Not trying to be sassy but what definition of AGI are you using? I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks." Depending on which tasks you include and what percentage of humans you need to beat, we could be already there or maybe never will be. Several of these tests [1] have been passed, some appear reasonably tractable. Like if Boston Dynamics cared about the Coffee Test I bet they could do it this year.
> I've never seen a concrete goal, just vague stuff like "better than humans at a wide range of tasks."
I think you're pointing out a bit of a chicken vs. the egg situation here.
We have no idea how intelligence works and I expect this will be the case until we create it artificially. Because we have no idea how it works, we put out a variety of metrics that don't measure intelligence but approximate something that only an intelligent thing could do (we think). Then engineers optimize their ML systems for that task, we blow by the metric, and everyone is left feeling a bit disappointed by the fact that it still doesn't feel intelligent.
Neuroscience has plenty of theories for how the brain works but lacks the ability to validate them. It's incredibly difficult to look into a working brain (not to mention deeply unethical) with the necessary spatial and temporal resolution.
I suspect we'll solve the chicken vs. egg situation when someone builds an architecture around a neuroscience theory and it feels right or neuroscientists are able to find evidence for some specific ML architecture within the brain.
I get what you're saying, but I think "boiling frog" is more applicable than "chicken v egg."
You mention that people feel disappointed by ML systems because they don't feel intelligent. But I think that's just because they emerged one step at a time, and each marginal improvement doesn't blow your socks off. Personally, I'm amazed by a system that can answer PhD level questions across all disciplines, pass the Turing Test, walk me through DIY plumbing, etc etc, all at superhuman speed. Do we need neuroscience to progress before we call these things intelligent? People are polite to ChatGPT because it triggers social cues like a human. Some, for better or worse, get in full-blown relationships with an AI. Doesn't this mean that it "feels" right, at least for some?
We already know that among humans there are different kinds of intelligence. I'm reminded of the problem with standardized testing - kids can be monkeys or fish or iguanas and we evaluate tree climbing ability. We're making the same mistake by evaluating computer intelligence using human benchmarks. Put another way: it's extremely vain to say a system needs to be human-like in order to be called intelligent. Like if aliens visited us with incomprehensibly advanced technology we'd be forced to conclude they were intelligent, despite knowing absolutely nothing about how their intelligence works. To me that's proof by (hypothetical) example that we can call something intelligent based on capability, not at all conditional on internal mechanism.
Of course that's just my two cents. Without a strict definition of AGI there's no way to achieve it, and right now everyone is free to define it how they want. I can see the argument that to define AGI you have to first understand I (heh), but I think that's putting an unfair boundary around the conversation.