Hacker News | defgeneric's comments

The physicality of having to actually do things in the real world slows things down to the rate at which our brains actually learn. The "vibe coding" loop is too fast to learn anything, and ends up teaching your brain to avoid the friction of learning.

This is exactly the problem, but there's still a sweet spot where you can quickly get up to speed on technical areas adjacent to your specialty, so that small gaps in your own knowledge don't hold you back from the main task. I was quickly able to do some signal processing for underwater acoustics in C, for example, and don't really plan to become highly proficient in it. I got something workable and moved on to other tasks, while still getting an idea of what was involved if I ever want to come back to it. In the past I would have just read a bunch of existing code.

A lot of the boosterism seems to come from those who never had the ability in the first place, and never really will, but can now hack a demo together a little faster than before. But I'm mostly concerned about those going through school who don't even realize they're undermining themselves by reaching for AI so quickly.

Perhaps more importantly, those boosters may never have had the ability to really model a problem in the first place, and didn't miss it, because muddling through worked well enough for them. Many such cases.

These notes may appear overly critical or extremely pedantic, but they're pretty good; it's a bit like having the teacher sitting across from you while you read. Some notes are a little excessive, though, and the teacher comes off as overbearing. For example, the emphasis on functional style is itself pedagogical, hence the avoidance of `loop` and the preference for recursion over iteration. Some go further than that, like the note on Chapter 2, page 18 (the author shouldn't use ~S if it hasn't been properly introduced yet, so sticking with ~A is actually the right choice). Overall it's a great guide to reading, especially as it gives the student a sense of the higher-order stylistic considerations involved in writing a more malleable language like Lisp.
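For anyone unfamiliar with the two directives: Common Lisp's ~A prints an object "aesthetically" while ~S prints it "readably" (quoting strings, escaping characters). A rough analogy, not an exact equivalence, is Python's str versus repr:

```python
# Rough analogy only: Lisp's (format t "~A" x) behaves roughly like
# print(str(x)), while (format t "~S" x) behaves roughly like
# print(repr(x)) -- the latter quotes strings so they can be read back.
s = "hello"
print(str(s))   # hello    -- "aesthetic" output, like ~A
print(repr(s))  # 'hello'  -- "readable" output, like ~S
```

This is why introducing ~A first makes pedagogical sense: its output looks like what a beginner expects, while ~S requires explaining the reader/printer round-trip.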


This also omits how often the area needs to be resurveyed. Could be yearly, which isn't bad, but that could limit some applications.


This has been possible for some time, and there is an open implementation here: https://quantumvillage.org/.


The fact that no charges have ever been brought against him by the Salvadoran government, even as they claim to still be holding him there, lends credence to this. Why would they not simply release him? If he's dead, it would be a major problem for Bukele's government as well as the US government. So evidently we're in "disappeared" territory now.


This is a lot of words to say "I don't believe him". None of us really knows, and it's not worth speculating about, because the case is about something much bigger now. It's about the limits of executive authority, separation of powers, and rule of law.

> he did this by almost certainly committing perjury by claiming there were criminal gangs who would kill him if he returned to El Salvador

What evidence is there for this "near certainty"? Your argument here should be with asylum laws, not this individual.

For what it's worth, the situation in El Salvador at the time he left (when he was a minor) does make the claim somewhat credible. There's plenty of evidence that the choice for male youths at that time was leave or join whichever gang controlled your area. The idea that everyone is an "economic migrant" ignores the reality of the situation, which is far more complex.


I don't know whether he is lying in his personal case. However, you can estimate the probability that a random asylum applicant is truthful fairly easily: get a good estimate of the percentage of migrants who genuinely had to flee gangs or other risks of death (gangs that are still in control of their communities), compare that to the percentage who claim they had to flee for those reasons, and subtract.

I don't know what that percentage is, but we have courts deciding these cases every day. The actual court cases are harder to analyze, because lack of adequate counsel or other factors might influence outcomes. However, of the people who attend their interview, about 44% are determined not to have a credible fear.
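The subtraction argument above can be sketched in a couple of lines. Both input percentages here are made-up placeholders for illustration, not real statistics:

```python
# Hedged sketch of the base-rate subtraction described above.
# Both inputs are hypothetical placeholders, not real data.
pct_claiming_fear = 0.90   # assumed: share of applicants claiming they fled gang violence
pct_actually_fled = 0.50   # assumed: share who genuinely had to flee
pct_untruthful = pct_claiming_fear - pct_actually_fled
print(f"{pct_untruthful:.0%}")  # -> 40%
```

The hard part, of course, is that the first number is observed while the second has to be estimated independently of applicants' own claims.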


Having a reputation as the AI company that really cares about education and the responsible integration of AI into education is a pretty valuable goal. They are now ahead of OpenAI in this respect.

The problem is that there's a conflict of interest here. The extreme case proves it: leaving aside feasibility, what if the only solution is a total ban on AI usage in education? Anthropic could never sanction that.


After reading the whole article I still came away with the suspicion that this is a PR piece designed to head off strict controls on LLM usage in education. There is a fundamental problem here beyond cheating (which, to their credit, is mentioned, albeit little discussed). Some academic topics are only learned through sustained, even painful, sessions where attention has to be fully devoted, where the feeling of being "stuck" has to be endured, and where the brain is given space and time to do the real work of synthesizing, abstracting, and learning; in short, thinking. The prompt chains where students ask "show your work" and "explain" can be read as the kind of back-and-forth you'd hear between a student and a teacher, but they could also just be evidence of higher forms of "cheating". If students are not really working through the exercises at the end of each chapter, but instead offloading the task to an LLM, then we're going to have a serious competency issue: nobody ever actually learns anything.

Even in self-study, where the solutions are at the back of the text, we've probably all felt the temptation to give up and just flip to the answer. It would be more responsible of Anthropic to admit that the solution manual to every text ever written is now instantly and freely available. This has to fundamentally change pedagogy. No discipline is safe, not even those like music where you might think the end performance is the main thing (imagine a promising, even great, performer who cheats themselves in the education process by offloading any difficult work in their music theory class to an AI, coming away having learned essentially nothing).

P.S. There is also the issue of grading on a curve in the current "interim" period while this is all new. Assume a lazy professor, or one refusing to adopt any new teaching or grading method: the "honest" students have no incentive to do it the hard way when half the class is going to cheat.


