
> But what will happen when the new generations grow up in a world with technology that can generate in seconds or minutes something they enjoy and keep them entertained?

This is why culture is important. Technology cannot generate, in seconds, a song that you teach your child to play or sing with you in the tradition of your people.

We should get back to song readers and sing-alongs, and unearth the old songs of our histories and celebrate them together with our children.


Zoom actually has a chat service that might be worse than Teams, too.

Kagi Assistant can do this too, but I find it's mostly useful because the traditional search function can find the pages the LLM loaded into its context before it started to output bullshit.

That's nice for when the LLM outputs bullshit, which is frequent.


to me it sounds like an admission that AGI is bullshit! AGI would be so disruptive to the current economic regime that "winner takes all" barely covers it, I think. Admitting they will be in normal competition with other AI companies implies specializations and niches to compete, which means Artificial Specialized Intelligence, NOT general intelligence!

and that makes complete sense if you have more than a layperson's understanding of the tech. Language models were never going to bring about "AGI."

This is another nail in the coffin


That, or they don't care if they get to AGI first, and just want their payday now.

Which sounds pretty in line with the SV culture of putting profit above all else.


If they think AGI is imminent, the value of that payday is very limited. I think the grandparent is more correct: OpenAI is admitting that near-term AGI - and the only kind anyone really cares about is the kind with exponential self-improvement - isn't happening any time soon. But that much is obvious anyway, despite the hyperbolic nonsense now common in AI discussions.

Define "imminent".

If I were a person like several of the people working on AI right now (or really, just heading up tech companies), I could be the kind to look at a possible world-ending event happening in the next - eh, year, let's say - and just want to have a party at the end of the world.

Five years to ten years? Harder to predict.


Imminent means "in a timeframe meaningful to the individual equity holders this change is about."

The window there would at _least_ include the next 5 years, though obviously not ten.


I don't read it that way. It reads more like AGIs will be like very smart people and rather than having one smart person/AGI, everyone will have one. There's room for both Beethoven and Einstein although they were both generally intelligent.

AGI is a matter of when, not if.

It will likely require research breakthroughs, significant hardware advancement, and anything from a few years to a few decades. But it's coming.

ChatGPT was released 2.5 years ago, and look at all the crazy progress that has been made in that time. That doesn't mean that the progress has to continue; we'll probably see a stall.

But AIs that are on a level with humans for many common tasks are not that far off.


Either that, or this AI boom mirrors prior booms. Those booms saw a lot of progress made, a lot of money raised, then collapsed and led to enough financial loss that AI went into hibernation for 10+ years.

There's a lot of literature on this, and if you've been in the industry for any amount of time since the 1950s, you have seen at least one AI winter.


But the Moore's-law-like growth in compute/$ chugs along, boom or bust.

> AGI is a matter of when, not if

Probably true, but the statement would also be true if "when" is 2308, which would defeat its purpose. When the first cars started rolling around, some mates around the campfire were saying "not if but when" we'd have flying cars everywhere, and 100 years later (with amazing progress in car manufacturing) we are nowhere near. I think "when, not if" is one of those statements that is probably indisputable in theory but easily disputable in practice. Give me a "when" here and I'll put up $1,000 to a charity of your choice if you are right, if you agree to do the same if you're wrong.


If you look at Our World in Data's "Test scores of AI systems on various capabilities relative to human performance" https://ourworldindata.org/grapher/test-scores-ai-capabiliti...

you can see a pattern of fairly steady progress across different capabilities: they matched humans on image recognition around 2015, while 'complex reasoning' is still much worse than human level but rising.

Looking at the graph, I'd guess maybe five years before it can do all human skills which is roughly AGI?

My personal AGI test is being able to fix my plumbing, given a robot body. They are way off that just now.


It is already here, kinda. I mean, look at how it passes the bar exam, solves math-olympiad-level questions, generates video, art, music. What else are you looking for? It has already penetrated the job market, causing significant disruption in programming. We are not seeing flying cars, but we are witnessing things that weren't even talked about around the campfire. Seriously, even 4 years ago, would you have thought all of this would happen?

> What else are you looking for?

To begin with, systems that don't tell people to use Elmer's glue to keep the cheese from sliding off the pizza, displaying a fundamental lack of understanding of... everything. At a minimum, it needs to be able to reliably solve hard, unique, but well-defined problems like a group of the most cohesive, intelligent people could. It's certainly not AGI until it can do a better job than the most experienced, talented, and intelligent knowledge workers out there.

Every major advancement (which LLMs certainly are) has caused some disruption in the fields it affected, but that isn't a useful criterion for differentiating a "crude but useful tool" from "AGI".


The majority of people on Earth don't solve hard, unique, but well-defined problems, do we? I don't expect AGI to solve one of Hilbert's problems (yet). Your definition of AGI is a bit too demanding. That said, I believe you would get better answers from an LLM than most of the answers you would get from an average human. IMHO the trend is obvious, and we will see whether it stalls or keeps the pace.

I don't mean "hard" in the sense that it can easily solve novel problems that no living human knows how to solve, although any "general" intelligence should certainly be capable of learning and making progress on those just like a human would, but without the limitations of human memory, attention span, a relatively short lifetime, and other human needs.

I mean "hard" in the sense that it can reliably replace the best software developers, civil engineers, lawyers, and diagnosticians. Not just in an economic sense, but by reliably matching the quality of their work 100% of the time.

It should be capable of methodically and reliably arriving at correct answers without expert intervention. It shouldn't be the case that some people claim that they don't know how to code and the LLM generated an entire project for them, while I can confidently claim that LLMs fall flat on their face almost every time I try to use them for more delicate business logic.


AGI is here?????! Damn, me, and every other human, must have missed that news… /s

Such things happen.

I think this is right but also missing a useful perspective.

Most HN people are probably too young to remember that the nanotech post-scarcity singularity was right around the corner - just some research and engineering away - which was the widespread opinion in 1986 (yes, 1986). It was _just as dramatic_ as today's AGI.

That took 4-5 years to fall apart, and maybe a bit longer for the broader "nanotech is going to change everything" to fade. Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more, but it's not happening any time soon.

There are a ton of similarities between the nanotech singularity and the modern LLM-AGI situation. People point(ed) to "all the stuff happening": surely the singularity is on the horizon! Similarly, there was the apocalyptic scenario that got a ton of attention, with people latching onto "nanotech safety" - instead of runaway AI or paperclip engines, it was Grey Goo (also coined in 1986).

The dynamics of the situation, the prognostications, and aggressive (delusional) timelines, etc. are all almost identical in a 1:1 way with the nanotech era.

I think we will have both AGI and general purpose universal constructors, but they are both no less than 50 years away, and probably more.

So many of the themes are identical that I'm wondering if it's a recurring kind of mass hysteria. Before nanotech, we were on the verge of genetic engineering (not _quite_ the same level of hype, but close, and pretty much the same failure to deliver on the hype as nanotech) and before that the crazy atomic age of nuclear everything.

Yes, yes, I know that this time is different and that AI is different and it won't be another round of "oops, this turned out to be very hard to make progress on and we're going to be in a very slow, multi-decade slow-improvement regime" - but that has been the outcome of every example of this that I can think of.


I won't go too far out on this limb, because I kind of agree with you... but to be fair -- 1980s-1990s nanotech did not attract this level of investment, nor was it visible to ordinary people, nor was it useful to anyone except researchers and grant writers.

It seems like nanotech is all around us now, but the term "nanotech" has been redefined to mean something different (larger scale, less amazing) from Drexler's molecular assemblers.


Investment was completely different at the time, and interest rates played a huge part of that. VC also wasn't that old in '86.

> Did nanotech disappear? No, but the notion of general purpose universal constructors absolutely is dead. Will we have them someday? Maybe, if humanity survives a hundred more years or more,

I thought this was a "we know we can't" thing rather than a "not with current technology" thing?


Specific cases are probably impossible, though there's always hope. After all, to use the example the nanotech people loved: there are literal assemblers all around you. Whether we can have a singular device that can build anything (probably not - energy limits and many, many other issues) or factories that can work at atomic scale (maybe) is open, I think. The idea of little robots was kind of visibly silly even at the peak.

The idea of scaling up LLMs and hoping is... pretty silly.


Every consumer has very useful AI at their fingertips right now. It's eating the software engineering world rapidly. This is nothing like nanotech in the 80s.

Sure. But fancy autocomplete for a very limited industry (IT), plus graphics generation and a few more similar items, is indeed useful. Just like "nanotech" coatings on, say, optics, or in precision machinery, or all the other fancy nano films in many industries. Modern transistors are close to nanoscale now, etc.

The problem is that the distance between a nano thin film, or an interesting but ultimately rigid nanoscale transistor, and a programmable nano-sized robot is enormous, despite the similar sizes. Likewise, the distance between an autocomplete relying heavily on preexisting external validators (compilers, linters, static code analyzers, etc.) and a real AI capable of thinking is equally enormous.


Progress is not just a function of technical possibility (even if it exists); it is also economics.

It has taken tens to hundreds of billions of dollars, without equivalent economic justification (yet), to get here. I am not saying economic justification doesn't exist or won't come in the future, just that the upfront investment and risk is already on the order of magnitude of what the largest tech companies can expend.

If the next generation requires hundreds of billions or trillions [2] upfront, and a very long time to make returns, no single company (or even country) could allocate that kind of resources.

There are many cases of such economically limited innovations [1]; nuclear fusion is the classic "always 20 years away" example. Another close one is anything space-related: we could not replicate in the next 5 years what we already achieved 50 years ago, say landing on the moon, and so on.

From just an economic perspective it is definitely an "if", without even going into the technology challenges.

[1] Innovations in the cost of key components can reshape the economic equation; it does happen (as with SpaceX), but it is also not guaranteed, as with fusion.

[2] The next gen may not be close enough to AGI. AGI could require 2-3 more generations (and equivalent orders of magnitude more resources), which is something the world is unlikely to expend resources on even if it had them.


> AGI is a matter of when, not if.

LLMs destroying any sort of capacity (and incentive) for the population to think pushes this further and further out each day


I agree that LLMs are hurting the general population's capacity to think (assuming they use them often; I've certainly noticed a slight trend among students I've taught to put in less effort, and in myself to some extent).

I don’t agree that this will affect ML progress much, since the general population isn’t contributing to core ML research.


On the other hand, dumbing down the population also lowers the bar for AGI. /s

Could you elaborate on the progress that has been made? To me, it seems only small, incremental changes are made between models, with all of them still hallucinating. I can see no clear steps towards AGI.


> AGI is a matter of when, not if

We have zero evidence for this. (Folks said the same shit in the 80s.)


"X increased exponentially in the past, therefore it will increase exponentially in the same way in the future" is fallacious. There is nothing guaranteeing indefinite uncapped growth in capabilities of LLMs. An exponential curve and a sigmoidal curve look the same until a certain point.

Yeah, it is a pretty good bet that any real process that produces something that looks like an exponential curve over time is the early phase of a sigmoid curve, because all real processes have constraints.

And if we apply the 80/20 rule, it feels like we're at about 50-75% right now. So we're getting close to being done with the easy parts. Then come the hard parts.

> AGI is a matter of when, not if.

I want to believe, man.


I don't think that's a safe foregone conclusion. What we've seen so far is very, very powerful pattern matchers with emergent properties that, frankly, we don't fully understand. It very well may be the road to AGI, or it may stop at the kind of things we can do in our subconscious, but not what it takes to produce truly novel solutions to never-before-seen problems. I don't think we know.

doesn't bode well for you either

"A problem well stated is a problem half solved", is all I got for now.

it's more like, I'm not going to spend money on a smart lock that reports my movements to a third party just so a burglar can chuck a landscaping stone through my window and unlock the door

> it's amazing how quickly people go from “a couple times per year on special occasions” to having 20-30 exposures over a decade or two

A couple times a year is 20 exposures over a decade. Were you trying to demonstrate an escalation? Did you mean 20 over a year?


> and have a real time web socketed sync layer that goes direct to the database

you might be able to drop a web router but pretending this is "completely drop[ping] the backend" is silly. Something is going to have to manage connections to the DB and you're not -- I seriously hope -- literally going to expose your DB socket to the wider Internet. Presumably you will have load balancing, DB replicas, and that sort of thing, as your scale increases.

This is setting aside just how complex managing a DB is. "completely drop the backend" except the most complicated part of it, sure. Minor details.


I assumed they meant a client-side DB and then a wrapper that syncs it to some other storage, which wouldn't be terribly different from, say, a native application that relies on a cloud-backed storage system.

Which is fine and cool for an app, but if you do something like this for, say, a form for a doctor's office, I wish bad things upon you.


IDK, there are a lot of little chores I find need doing that previously would have been ten minutes of research and CLI fiddling, and that are a prompt and a copy-paste now.

It doesn't feel that different, but it is a little faster.


If you don't understand what the copy/paste is actually doing, you shouldn't be script-kiddying it in your project. If you do understand, then the AI is just quickly writing what you already intended.

If you have to understand something you didn't understand before, it probably takes a bunch of time to read and verify what the script does. This can be a good learning experience and reveal unknown unknowns, but it probably isn't a massive speedup.


This seems false to me. I vibe code shell scripts all the time. I understand what it's doing perfectly, but it would have taken me ages to look up all the bash syntax and get it all correct. Is it one square bracket pair around if statements, or two?

You are in the category I mentioned: you already know the answer and are using AI to write the code you already understand, faster. This is rather different from "vibe coding", in my opinion.

It's different by the canonical definition - you aren't supposed to review the code when "vibe coding", only the app behavior.

Is Tcl having a revival? Anybody know where Tclers hang out online?

The Wiki[1] is one of the primary "hang out" spots, although it's a bit different from the usual online communication. But there's a lot of mutual commenting, small articles, utilities, etc. on there.

[1]: https://wiki.tcl-lang.org or https://wiki.tcl.tk


"The European OpenACS and TCL/Tk conference will be in Bologna/Italy/Europe on July 10 & 11 2025." - this is crazy. Seems there are still people using OpenACS in 2025.

I worked at a startup whose main language was Tcl between 1999 and 2002; since then I have hardly touched Tcl again.

Yet it has a special place in my heart, and it was one of the easiest interpreters to extend with regard to the FFI API.


If you work with VHDL or Verilog tools, it is very much alive and kicking. Forums about HDLs are full of it.

They did have a recent language update after a while. That may have triggered some folks to look into it again. There is sometimes an HN effect where an initial post triggers some interest amongst enough users to get us new posts for a few weeks, and then things tend to die off again. I've seen this with a lot of the more obscure languages, like APL.

It would be cool to have a Tcl revival, though I don't see it happening (I'm not in the community, so hopefully someone more informed can post). The language itself seems more capable than most give it credit for. I'm more of a Python fan myself, but I can appreciate Tcl after reading through a book on it and writing a few scripts.


I highly recommend The Tcl Programming Language: A Comprehensive Guide:

https://www.magicsplat.com/ttpl/index.html

For those who are not aware, Tcl is actually part of the standard Python distribution through Tkinter.

There are many things Tcl has built in that are quite amazing, like robust virtual filesystem support, reflective channels, and, less known these days, Starpacks (a standalone runtime) that bundle sources with the binary.

I am currently working on bringing back kitcreator for an AI project that uses Tcl as a scripting environment over llama.cpp.

https://github.com/tclmonster/kitcreator

Roy Keene is the original author, and has done some really clever stuff here, like encrypting the VFS appended to the executable. I added compression to this. It provides some manner of obfuscating sources.

And actually, I am also working on using tohil to compile a static Python and load it as a Tcl extension, with the goal of having standalone Python applications bundled with their sources and completely loadable from within the VFS. This will provide a means to bundle Tkinter with a "frozen" Python app.

https://github.com/tclmonster/tohil


The previous edition of that book is the one I read lol. A great book. You can really feel the author's love of the language.

Thank you and GP for the recommendation, just bought the book, seems pretty good! Now I wonder whether it's a good idea to replace my shell with tclsh... seems a lot more sane than bash/zsh.

Definitely would be interesting to use it in that way! The nice thing about Tcl is that the syntax is clean (in brevity and understanding). Basic features like piping, file globbing, encoding conversions, compression, and so forth are intuitive.
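For a quick taste of what that looks like (a rough sketch; assume a few *.log files and a notes.txt sit in the current directory):

  # shell-style globbing, an external pipeline, an encoding conversion,
  # and compression, all with nothing but the core tclsh
  set logs   [glob -nocomplain *.log]
  set lines  [exec cat {*}$logs | wc -l]
  set f      [open notes.txt rb]
  set bytes  [read $f]
  close $f
  set text   [encoding convertfrom utf-8 $bytes]
  set packed [zlib gzip $bytes]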

If you're interested, I have various Tclkits available for download on GitHub. I have added dependencies to them, like TLS for HTTPS and so forth. It can be convenient to have them standalone; the TLS extension here is bundled with the CA certs from libcurl.

https://github.com/tclmonster/kitcreator/releases/latest

And here's an example of how I use the kits in the CI build. It uses the kit it builds to push the update, using the TLS extension along with the GitHub REST API:

https://github.com/tclmonster/kitcreator/blob/main/.github/s...


Bash is pretty good for really small scripts. Anything bigger and I have reached for Perl, Python, or Tcl in the past ... depending on what IT had installed on the server.

I last got help on the IRC channel (bridged to Slack, because I don't know IRC).

In the most recent big version update there was what I'd consider a breaking change regarding text encoding handling, but it was possible to go back to the old behaviour with an additional parameter.


r/TCL is worth a mention

It is unfortunately entrenched in the EDA industry. I have absolutely no idea why you would use it if you don't work in that space.

Because it works.

I introduced it into some of our release tooling in the mid-2000s. Easy to integrate, easy to understand, unsurprisingly good string/text handling, expect was very useful, and it’s not going to be used by anyone else, so no worries about version conflicts.

It ran successfully largely unchanged for around a decade.


Everything works. PHP works. Perl works. Bash works.

I like to use tools that more than merely work.

There's a reason nobody outside EDA uses it.


It’s included with Python in the form of Tkinter, the MacPorts package manager is written in it, and it’s also used by Cisco IOS for scripting.

Just FYI, when people say things like "nobody uses this" or "everybody does that", they don't literally mean 100.00%.

Strange, I've been attending the EuroTcl conferences for a few years now, and I don't remember any of the presentations I've seen being related to EDA - https://www.eurotcl.eu/pastevents.html :-/

It is, for many people, an absolute pleasure to work in.

It's a language that's trivial to implement because it's well designed and simple, it embeds very nicely, and it's fantastic for use as a debug shell and for implementing GUIs. It's a great technician's language: if you work with technically minded people who aren't necessarily programmers, it's a great way to hand them deep interactive power without the footguns of a Forth.
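A rough sketch of what that can look like (the ::plant:: commands here are invented for illustration):

  # hand a technician a safe interpreter that can only reach the commands you expose
  set tech [interp create -safe]
  interp alias $tech read_sensor  {} ::plant::read_sensor   ;# hypothetical application command
  interp alias $tech restart_pump {} ::plant::restart_pump  ;# hypothetical application command
  $tech eval {
      if {[read_sensor 3] > 80} { restart_pump 3 }
  }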

I would say it's cleverly designed. Well designed? Hmm, would a well-designed language have such a basic flaw as comments that can only be used in very specific places?

I understand where they came from here: the Scheme-like obsession with purity (the enshrined Endekalogue, now Dodekalogue) didn't mesh very well with traditional comments.

Yeah, Tcl has its design warts, but I don't think it has that many remaining that can't be fixed via metaprogramming. Even the popular Python manages to frustrate me with its idiotic statement/expression divide (they doubled down by making match() a statement...) and constant need to convert between generators/iterables and lists.

The thing is that R6RS Scheme (or R7RS-large, if it comes out one day) is basically a better Tcl if you only consider scripting and don't need the event loop. If Tcl had played its cards right, it'd have competed with fish/rc/nushell/powershell instead; it was really ready to be a better shell well before any of the others.

------

To be honest, Common Lisp is the only language I've ever seen get this right without compromising on said purity by specifying the reader (parser): https://www.lispworks.com/documentation/HyperSpec/Body/02_.h...

Comments are then just the result of a readtable entry like any other, allowing this kind of voodoo:

  ; A comment
  (set-macro-character #\% (get-macro-character #\;))
  % Also a comment

Absolutely, I don't even consider that a flaw. I don't like EOL comments stylistically.

I totally agree, but Tcl comments are even more restricted than that.
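For anyone who hasn't run into it, this is roughly what the restriction looks like:

  # "#" only starts a comment where Tcl expects the first word of a command:
  set x 1          ;# fine: the ";" ends the command, so "#" begins a new (comment) one
  # inside a braced switch body, a line starting with "#" is read as a
  # pattern/body pair rather than a comment, so the usual workaround is
  # to put comments inside the arms instead:
  switch $x {
      1 {
          # safe again: inside a script body "#" sits at a command position
          puts "one"
      }
  }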
