I've always thought of these VC-fueled expeditions to nowhere as the opposite: a wealth transfer from the owning class to the middle class, seeing as a lot of these ventures crash and burn with nothing to show for it.
Except for the founders/early employees who get a modest (sometimes excessive) paycheck.
> I've always thought of these VC-fueled expeditions to nowhere as the opposite: a wealth transfer from the owning class to the middle class, seeing as a lot of these ventures crash and burn with nothing to show for it.
That would be the case if VCs were investing their own money, but they're not. They're investing on behalf of their LPs. Who the LPs are is generally an extremely closely guarded secret, but they include institutional investors, which means middle-class pensions and 401(k)s are wrapped up in these investments as well, just as they were tied up in the 2008 financial crisis.
I think the chilling effect on mom-and-pop businesses undoes all of that. When they (we) disrupt an industry, the power consolidates, but in new hands. The idea is to get it away from the entrenched interests, but as in a good cultural revolution, the second tier ends up in charge when the first tier gets beheaded.
> Because if internet/tech is the gun then the clear solution is “not giving your children guns”.
Funnily enough, no. The clear solution is to ensure that you talk to your children about [gun|online] safety. Show them how to use the [gun|internet] safely. Make sure they know that they can ask to use your [gun|device] any time they'd like -- but only under your supervision.
Take the mystery away through education and experience, and like anything else [guns|the internet] becomes just another part of adult life. Just one more thing that can be dangerous if used incorrectly.
> Make sure they know that they can ask to use your [gun|device] any time they'd like -- but only under your supervision.
You could do that, but there's no particular need for it. "No guns until you're 16" works fine. You don't need to "take the mystery away".
You need to use the internet a lot before you become an adult; you never need to use a gun before you become an adult. You need a lot of practice to build internet safety skills; you need barely any practice to build gun safety skills.
Go ahead and have a basic gun safety talk, that's a good idea, but that's all you need.
> "No guns until you're 16" works fine. You don't need to "take the mystery away".
This is what leads to stories about kids who make their way into their parents’ locked storage and hurt themselves or others.
“The mystery” is what leads kids to investigate things on their own. Let them know they can just ask. If they do ask, explain what you’re doing as you clear it. Strictly enforce the four rules. Let them disassemble it, or do it for them if necessary.
It’s just a tool. No less useful than a drill or saw, and no more or less dangerous than the car or can of gasoline in the garage.
> This is what leads to stories about kids who make their way into their parents’ locked storage and hurt themselves or others.
> “The mystery” is what leads kids to investigate things on their own.
Do you have evidence that learning gun safety without use doesn't do enough here?
I'm not convinced there's all that much mystery. Even if talk doesn't do enough, I bet letting your kids use guns once would do more than enough to clear up mystery. If you let your kids use a gun every week (or whatever "any time" means) it's because you're a family that likes guns, not for safety reasons.
Okay, I think "use guns once or twice with them" is a very reasonable idea for gun safety.
And it ends up being extremely different from internet safety. It's much much harder to teach and it's not practical to supervise the full learning process.
On the topic of hammers: "if you don't hold the hammer at exactly 2.6cm from the end of the handle and strike the nail with 6N of force at an angle of 58 degrees then of course you won't get a good nail strike into the wood. Oh and you must only use acacia sourced from the subtropics".
Give me a break, it's a hammer. This is a perfectly normal "use" of ChatGPT and a good example of how a literature student may opt to try to use AI to make their work easier in some way. It also conveniently demonstrates some of the shortcomings of using an LLM for this sort of task. No need to call them a dumbass.
Just as AI seems to be a convenient scapegoat for the layoffs and trimming that followed the end of ZIRP, we now see it being blamed for the failures of our modern education system as well.
Chief among those failures: our system rewards exactly one thing in education, the grade. Not understanding, knowledge, or intelligence, but a single number that is more easily gamed than anything else. And that single number (your GPA) is the single most important thing at every level from middle school to college, where it will unironically determine your entire (academic/academic-adjacent) future.
... our system rewards exactly one thing in employment: the metric. Not understanding, knowledge, or intelligence, but a single number that is more easily gamed than anything else. And that single number (your metric value) is the single most important thing at every level from junior to principal, where it will unironically determine your entire future.
Our modern education system in the US is broken, but acting as if AI is a scapegoat is comical.
Capitalism is what has destroyed higher education in this country. The concept of going to school to get a job isn’t a failure of education but of economics.
AI is just another capitalist tool, built not only to extract wealth from you but to make you rely on it more and more, so they can charge you even more down the road.
Before college was a means to get a job, it was status signalling for the upper class: showing that they could afford to spend four years not working, learning things with no economic value. There was never a time when a large portion of society went to school past 18 for any reason other than economic or status gain, and why should they?
Because modern life is radically more complicated than humans can naturally deal with.
Your average peasant, for millennia, didn't need to understand information security to avoid getting phished, didn't need to understand compound interest for things like loans and retirement savings (they'd just have kids and pray enough of them survived), didn't need a mental model of the hordes of algorithms deployed against us to capture every scrap of our available attention (a resource people had in such excess until a couple of decades ago that boredom was a serious concern) and sell it to anyone who wants to extract whatever dollars you have access to, did not need to understand spreadsheets(!), etc., etc.
Like, being productive in modern society is complicated. That's what education is for.
"Economic or status gain" puts a lot of weight on the "or."
We've put into place a context for intellectual achievement at scale. Why shouldn't status be apportioned to someone who is recognized by a panel of peers and teachers to have useful insight into their field?
Because a college degree isn't an intellectual achievement. It's 4 years of school when you've already done 13. I went to one of those schools where people go "oh, you went to $SCHOOL" when they find out, and I always want to roll my eyes because I didn't do shit to get that degree.
I think learning stuff and making art just for the hell of it is going to become a lot more accepted as society continues on and more and more people's jobs get automated away. Obviously that's a huge simplification of a much more complex situation, but in general I think the best future is one where people are free to pursue interests without regard for those interests' ability to pay for their food and housing.
Decoupling working from living means only intrinsically valuable things get worked on. No more working a 9-to-5 at a scam call center or figuring out how to make people click on ads. There is ONLY BENEFIT (to everyone) in giving labor that kind of leverage.
Not every job needs to (or should) exist: everyone having a job isn't utopia. Utopia is being free to choose what you work on. This drives the market value of labor up, and work that actually needs to get done becomes aligned with financial incentives (farming, janitorial work, and the repair industries would soar to new heights).
UBI is a necessary and great idea: a bottom floor to capitalism means we can all stand up and lift this sinking ship.
There is nothing wrong with going to school to obtain knowledge and skills to secure a job.
The problem with the modern educational system is that it isn't very efficient at this task. Instead, most of the value comes from the screening that took place before the students even entered the institution, not the knowledge obtained while there.
Yep, this is a huge problem. I've long argued that we need value-add metrics for colleges. It probably won't be a single number, but rather a set of values depending on inputs, e.g., some schools may deliver a lot of value for kids with 1550 SATs, while other schools may do better for kids with 1200.
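To make that concrete, here's a minimal sketch of how a banded value-add metric could work (every school name, band, and number below is invented for illustration): for each incoming SAT band, compare a school's outcomes against the pooled baseline for that band.

```python
# Hypothetical sketch of per-band value-add metrics for colleges.
# All names and numbers are made up for illustration.

# Average outcome (say, mid-career earnings in $k) by (school, incoming SAT band).
outcomes = {
    ("School A", "1500+"): 145, ("School A", "1200-1499"): 95,
    ("School B", "1500+"): 120, ("School B", "1200-1499"): 110,
}

# Baseline outcome per band, pooled across all schools.
baseline = {"1500+": 125, "1200-1499": 90}

# Value added = a school's outcome minus the baseline for that band.
for (school, band), outcome in outcomes.items():
    print(f"{school} [{band}]: value add = {outcome - baseline[band]:+d}")
```

In this toy data, School A adds the most for the 1500+ band while School B adds more for the 1200-1499 band, which is exactly the kind of pattern a single ranking number hides.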
Today we simply use college as a proxy for intelligence, so people just like to go to the highest rated college they can to be viewed as intelligent. What happens in the four years at the college is secondary.
> Today we simply use college as a proxy for intelligence, so people just like to go to the highest rated college they can to be viewed as intelligent.
Hmmm… I would say college is a proxy for social currency, of which intelligence is one type. In most cases, intelligence is the least valuable (imho).
Values are your culture. The Nazis were elected and supported (at least initially), so you are somewhat right, but the answer is multiple countries.
But a country without a culture and without shared values is a sled being pulled by dogs in different directions and not a real team (as many people would argue has been the case for quite some time).
You need common values to work together to achieve goals. That's what a country is: people working together. When you don't, you just become tenants with passports.
Even the Soviet Union made people go to school, and getting a degree was a route to higher status.
(there have been a few Communist revolutions against the concept of "university", for various political reasons, but China rebuilt theirs after the purges and Cambodia is a sad historical footnote)
Love this quote and tell it to friends often. I strive to be the clever-and-lazy officer. It was also eye-opening to meet the first hardworking-and-stupid individual of my career and see just how much damage they really could do.
This is not a good measure, because attainment is measured by graduation rather than by academic standards. The standards HAVE dropped, and kids are pushed through with relatively meaningless degrees.
I think the total reduction was about 2/3, if memory serves? So 1 in 3 is the number you'd be looking at for "actually useful." Heavy air quotes on that.
> Right on up to professorships, this is how science really works.
This is why I am making my exit from academia and research entirely as soon as I finish my PhD. The system is filled with wonderful, intelligent people but is, sadly, simultaneously rotten to the core. It did not, in fact, get better as I moved from undergrad to grad school.
So I'm a biomedical scientist (in training I suppose...I'm in my 3rd year of a Genetics PhD) and I have seen this trend a couple of times now where AI developers tout that AI will accelerate biomedical discovery through a very specific argument that AI will be smarter and generate better hypotheses than humans.
For example, in this Google essay they claim that CRISPR was a transdisciplinary endeavor, "which combined expertise ranging from microbiology to genetics to molecular biology," and this is the basis of their argument that an AI co-scientist will be better able to integrate multiple fields at once to generate novel and better hypotheses. For one, what they fail to understand as computer scientists (I suspect due to not being intimately familiar with biomedical research) is that microbio/genetics/mol bio are more closely linked than you might expect as a layperson. There is no large leap between microbiology and genetics that would slow down someone like Doudna, or even myself; I use techniques from multiple domains in my daily work. These all fall under the broad domain of what I'll call "cellular/micro biology." As another example, Dario Amodei of Anthropic wrote something similar in his essay Machines of Loving Grace: that the limiting factor in biomedical research is a lack of "talented, creative researchers," for which AI could fill the gap[1].
The problem with both of these ideas is that they misunderstand the rate-limiting factor in biomedical research, which to them is a lack of good ideas. That is very much not the case: biologists have tons of good ideas. The rate-limiting step is testing all those ideas with sufficient rigor to decide whether to keep exploring a particular hypothesis or abandon the project for something else. From my own work: the hypothesis driving my thesis, I came up with over the course of a month or two. The amount of work prescribed by my thesis committee to fully explore whether it was correct? About three years' worth. Good ideas are cheap in this field.
Overall I think these views stem from field-specific nuances that don't necessarily translate. I'm not a computer scientist, but I imagine that in computer science the rate-limiting factor is not actually testing hypotheses but generating good ones. It's not as if the code you write takes multiple months to run before you get an answer to your question (maybe it does? I'm not educated enough about this to make a hard claim; in biology, it is very common for one experiment to take multiple months before you know the answer, or even whether the experiment failed and you have to do it again). But I'm happy to hear from a CS PhD or researcher about this.
All this being said, I am a big fan of AI. I use ChatGPT all the time: I ask it research questions, ask it to search the literature and summarize findings, etc. I even used it literally yesterday to make a deep dive into a somewhat unfamiliar branch of developmental biology easier (and I was very satisfied with the result). But for experimental design or hypothesis generation? At the moment, useless. LLMs, at this point, are a very powerful version of Google plus a code writer. And it's not even correct 30% of the time, to boot, so you have to be extremely careful when using it. I do think that wasting less time exploring incorrect or bad hypotheses would be a good thing. But we can already identify good and bad hypotheses pretty easily; we don't need AI for that. What takes time is the actual testing of those hypotheses. Oh, and politics, which I doubt AI can magic away for us.
> It's generally very easy to marginally move the needle in drug discovery. It's very hard to move the needle enough to justify the cost.
Maybe this kind of AI-based exploration would lower the costs. The more something is automated, the cheaper it should be to test many concepts in parallel.
A med chemist can sit down with a known drug, and generate 50 analogs in LiveDesign in an afternoon. One of those analogs may have less CYP inhibition, or better blood brain barrier penetration, or slightly higher potency or something. Or maybe they use an enumeration method and generate 50k analogs in one afternoon.
But no one is going to bring it to market, because it costs millions and millions to synthesize it and get it through PK, ADMET, mouse, rat, and dog tox, clinical trials, etc. And the FDA won't approve marginal drugs; they need to be significantly better than the standard of care (with some exceptions).
Point is, coming up with new ideas is cheap, easy, and doesn't need help. Synthesizing and testing is expensive and difficult.
The one model that would actually make a huge difference in pharma velocity is one that takes a target (protein that causes disease or whatever), a drug molecule (the putative treatment for the disease), and outputs the probability the drug will be approved by the FDA, how much it will cost to get approved, and the revenue for the next ten years.
If you could run that on a few thousand targets and a few million molecules in a month, you'd be able to make a compelling argument to the committee that approves molecules to go into development (probability of approval * revenue >> cost of approval)
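As a sketch of that screening step (everything below is hypothetical: the three stubbed functions stand in for the hard part, i.e. models that actually predict approval probability, cost, and revenue, and MARGIN is an illustrative threshold), the decision rule itself is simple:

```python
# Hypothetical portfolio screen over (target, molecule) pairs.
# The three predictors are random stubs; building models that estimate
# these quantities well is the actual unsolved problem.
import itertools
import random

random.seed(0)

def p_approval(target, molecule):
    return random.uniform(0.01, 0.30)   # placeholder: P(FDA approval)

def cost_to_approval(target, molecule):
    return random.uniform(200, 2000)    # placeholder: cost to approval, $M

def revenue_10yr(target, molecule):
    return random.uniform(0, 20000)     # placeholder: 10-year revenue, $M

targets = [f"target_{i}" for i in range(100)]
molecules = [f"mol_{i}" for i in range(1000)]

MARGIN = 5.0  # "probability of approval * revenue >> cost": require a 5x margin

candidates = []
for t, m in itertools.product(targets, molecules):
    p = p_approval(t, m)
    cost = cost_to_approval(t, m)
    expected_revenue = p * revenue_10yr(t, m)
    if expected_revenue > MARGIN * cost:
        candidates.append((expected_revenue - cost, t, m))

# Rank by expected net value and hand the top of the list to the committee.
candidates.sort(reverse=True)
for net, t, m in candidates[:5]:
    print(f"{t} + {m}: expected net value ${net:,.0f}M")
```

At real scale (thousands of targets times millions of molecules) you'd vectorize this, but the ranking logic is the easy part; the predictors are not.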