
All we need to do to turn any LLM into an AGI is figure out what system of tags is Turing-complete. If enough of us monkeys experiment with <load>s and <store>s and <j[e,ne,gt...]>s, we'll have AGI by morning.
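A toy interpreter for that instruction set, to make the joke concrete (tag names and semantics invented here, not anything a model actually emits):

    import re

    # Hypothetical tags: <load reg imm>, <store reg_addr reg_val>,
    # <jne reg_a reg_b target>. Not Turing-complete yet without
    # arithmetic and memory reads; that's what the monkeys are for.
    PROGRAM = "<load r0 1><load r1 1><jne r0 r1 5><load r2 42><store r1 r2>"

    def run(program):
        ops = re.findall(r"<(\w+)([^>]*)>", program)
        regs, mem, pc = {}, {}, 0
        while pc < len(ops):
            op, args = ops[pc]
            a = args.split()
            if op == "load":                      # load-immediate
                regs[a[0]] = int(a[1])
            elif op == "store":                   # mem[regs[addr]] = regs[val]
                mem[regs.get(a[0], 0)] = regs.get(a[1], 0)
            elif op == "jne" and regs.get(a[0], 0) != regs.get(a[1], 0):
                pc = int(a[2])                    # conditional jump
                continue
            pc += 1
        return regs, mem

    print(run(PROGRAM))  # ({'r0': 1, 'r1': 1, 'r2': 42}, {1: 42})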


All you need is a <mov>


For those who didn't catch the reference: https://github.com/xoreaxeaxeax/movfuscator


Your comment is hilarious, but not that far off. I think it's funny that people are so skeptical that AGI will be here soon, yet the heaviest lifting by far has already been done.

The only real difference between artificial intelligence and artificial consciousness is self-awareness through self-supervision. Basically, the more transparent AI becomes, and the more able it is to analyze its own thoughts and iterate until it arrives at a solution, the more it will become like us.

Although we're still left with the problem that the only observer we can prove exists is ourself, if we can even do that. Which is only a trap within a single time/reality ethos.

We could have AGI right now, today, by building a swarm of LLMs learning from each other's outputs and evolving together, at roughly the scale of a small mammalian brain running a minimalist LLM per cell. Right now I feel that too much GPU power is spent on training. Had we gone with a different architecture (like the one I've wanted since the 90s and went to college for, but which never manifested), with highly multicore (1,000 to 1 million+) CPUs with local memory running the dozen major AI models including genetic algorithms, I believe that AGI would have already come about organically. If we had thousands of hobbyists running that architecture in their parents' basements, something like SETI@home, the overwhelming compute power would have made space for Ray Kurzweil's predictions.

Instead we got billionaires and the coming corporate AI tech dystopia:

https://www.pcmag.com/news/musks-xai-supercomputer-goes-onli...

Promoting self-actualization and UBI to overcome wealth inequality and deliver the age of spiritual machines and the New Age are all aspects of the same challenge, and I believe it will be solved by 2030, certainly no later than 2040. What derails it won't be a technological hurdle, but the political co-opting of the human spirit through othering, artificial scarcity, perpetual war, etc.


That's a very 'Star Trek' view of human nature. History shows that whenever we solve problems we create new ones. When material scarcity is solved, we'll move to other forms of scarcity. In fact, it is already happening. Massive connectivity has made status more scarce. You could be the best guitarist in your town but today you compare yourself to all of the guitarists that you see on Instagram rather than the local ones.


Well, once you've solved AGI and material scarcity, you can just trick that side of your brain that craves status by simulating a world where you're important. Imo we're already doing a very primitive version of that with flatscreen gaming.


> At some level we're tacitly acknowledging that the vast ocean of content and complexity we've created is beyond what is desirable or even healthy to effectively evaluate.

I don't think there's enough useful and organized information to evaluate. There's no reason for everyone to be stuck in a vast ocean of content labeled with a handful of vague categories, except that that's just the way that someone decided to make it.

If I want to figure out whether I want to try a game, I can go to Steam, watch a trailer, look at the tags, and still have no idea if the game is worth playing. How do I make a decision?

If I just watch 3 minutes of a Let's Play, or a live stream, I can get an idea of what the game is actually like. This YouTube channels thing is giving us exactly that experience.

Opening a YouTube video directly, on the other hand, is an entire ordeal. It's slow to load, takes up a bunch of RAM, puts the video in your history, and messes up the minigame of trying to micromanage the algorithm so you don't end up with bad recommendations. It's hard to just watch a few seconds of a bunch of videos to get a vibe.

There's so much low-hanging fruit in content organization and discovery that it drives me insane how bad the experience generally is, and how it keeps getting worse.

Clay Shirky gave a talk on this years ago (I think it's also a blog post) called "It's not information overload, it's filter failure": https://www.youtube.com/watch?v=LabqeJEOQyI


The existence of meatspace never stopped the early web from flourishing, so why should the existence of the modern web stop anybody from making a second web? The only reason that Google was useful is because it tapped into the trust network that already existed before it.

I feel like the social media churn has destroyed people's brains, because they're more interested in stopping people from doing things they don't like than doing something awesome themselves.


Before, people knew the web was vast and required digging through. Now people think Google is the web, so if something doesn't come up by the third search, it might as well not exist.


You're not wrong, but this is also a great acid test. We need to work to help people understand that Google is most definitely not the internet, and where we fail or don't get through, leave them behind. I don't know what comes next, but there's not room for everyone, and I include many here (and likely myself) among those who won't make the transition. We love to imagine people physically leaving Earth for Mars and beyond, but what follows the internet is going to happen far sooner.


That's exactly it, the people are different now. They don't have the same time or energy as before.


15 years ago, I used to keep many tabs of YouTube videos open just because the "related" section was full of interesting videos. Then each of those videos had interesting relations. There was so much to explore before hitting a dead end and starting somewhere else.

Now the "related" section is gone in favor of "recommended" samey clickbait garbage. The relations between human interests are too esoteric for current ML classifiers to understand. The old Markov-chain style works with the human, and lets them recognize what kind of space they've gotten themselves into, and make intelligent decisions, which ultimately benefit the system.

If you judge the system by the presence of negative outliers, rather than positive, then I can understand seeing no difference.


>The relations between human interests are too esoteric for current ML classifiers to understand.

I would go further and say that it is impossible. Human interests are contextual and change over time, sometimes in the span of minutes.

Imagine that all the videos on the internet were on one big video website. You would watch car videos, movie trailers, listen to music, and watch porn all in one place. Could the algorithm correctly predict when you're in the mood for porn and when you aren't? No, it couldn't.

The website might know what kind of cars, what kind of music, and what kind of porn you like, but it wouldn't be able to tell which of these categories you would currently be interested in.

I think current YouTube (and other recommendation-heavy services) has this problem. Sometimes I want to watch videos about programming, but sometimes I don't. The algorithm doesn't know that, and it can't know that without being able to track me outside of the website.


>I would go further and say that it is impossible. Human interests are contextual and change over time, sometimes in the span of minutes.

There's a general problem in the tech world where people seem to inexplicably disregard the issue of non-reducibility. The point about the algorithm lacking access to necessary external information is a good one.

A dictionary app obviously can't predict what word I want to look up without simulating my mind-state. A set of probabilistic state transitions is at least a tangible shadow of the typical human mind-states that make those transitions.


I think there are things they could do, and ML could maybe help:

* They could let me directly enter my interests instead of guessing

* They could classify videos by expertise (tags or ML) and stop recommending beginner videos to someone who expresses an interest in expert videos.

* They could let me opt out of being recommended videos I've already watched

* They could separate the site into larger categories and stop recommending things outside the chosen category. For me personally, when I go to youtube.com I don't want music, but 30-70% of the recommendations are for music. If they split into 2 categories (videos.youtube.com, no music; music.youtube.com, only music) they'd end up recommending far more that I'm actually interested in at the time. They could add other broad categories (gaming.youtube.com, documentaries.youtube.com, science.youtube.com, cooking.youtube.com, ..., as deep as they want). Classifying a video could be ML- or creator-decided. If you're only allowed one category, there would be an incentive not to mis-classify. If they need more incentive, they could dis-recommend your videos if you mis-classify too many, too often.

* They could let me mark videos as watched and actually track that, the same as read/unread email. As it is, if you click "not interested -> already watched", they don't mark the video as visibly watched (the red bar under the video). Further, if you start watching again, you lose the red bar (it gets reset to your current position). I get that tracking where you are in a video is different from email, but if I made it 90% of the way through then, for me at least, that's "watched", the same as "read" for email, and I'd like it "archived" (don't recommend this to me again) even if I start watching it again, just like rereading an email marked as "read". (A toy sketch combining a few of these ideas follows below.)
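A toy version of a few of these combined (all field names invented; nothing about YouTube's actual data model is implied):

    # Declared interests, a hard category, an expertise floor, and a
    # watched set, combined as plain filters over a candidate pool.
    videos = [
        {"id": "v1", "category": "music",   "level": "beginner", "tags": {"jazz"}},
        {"id": "v2", "category": "science", "level": "expert",   "tags": {"optics"}},
        {"id": "v3", "category": "science", "level": "beginner", "tags": {"optics"}},
    ]

    profile = {
        "interests": {"optics", "cooking"},  # entered directly, not guessed
        "category": "science",               # e.g. science.youtube.com
        "min_level": "expert",               # no more beginner videos
        "watched": {"v3"},                   # tracked like read/unread email
    }

    LEVELS = {"beginner": 0, "expert": 1}

    def recommend(videos, p):
        return [v["id"] for v in videos
                if v["category"] == p["category"]
                and v["id"] not in p["watched"]
                and LEVELS[v["level"]] >= LEVELS[p["min_level"]]
                and v["tags"] & p["interests"]]

    print(recommend(videos, profile))  # ['v2']

None of this needs ML at all; ML only enters when the labels have to be guessed instead of declared.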


Those are some good suggestions, particularly the first one:

>let me directly enter my interests


YouTube has this feature.


Where in the menu is it? I admit I have not checked out YouTube menus or features much.


You can click one of the ML-selected categories at the top of your homepage to tell it what you'd like to see today.


They probably optimize your engagement NOW, with clickbaity videos, so their KPIs show big increases. But in the long term you realize that what you're watching is garbage and stop watching altogether.

Someone probably changed the engine that picks videos for you, exactly as happened with search.


I have to say, all my YouTube recommendations are good and they're rarely clickbait. If you sign out they're pretty bad though.


Consolidation on narrow themes is ensured by our reliance on query->answer search engines.

If you think about the shape of the web at the time Google introduced PageRank, it was a huge graph of content connected by fine-grained related interests. It got that way by people doing the work of drawing those relations, and it's a lot of work, given that the number of potential relations is essentially proportional to the square of the amount of existing content. All of the interesting information is in the edges of that graph.
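(For reference, PageRank itself is just a power iteration over those edges; a simplified sketch, ignoring dangling nodes and personalization:)

    # Simplified PageRank: repeatedly redistribute rank along out-links.
    links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
    d = 0.85                                  # damping factor
    pages = list(links)
    rank = {p: 1 / len(pages) for p in pages}

    for _ in range(50):
        new = {p: (1 - d) / len(pages) for p in pages}
        for src, outs in links.items():
            for dst in outs:
                new[dst] += d * rank[src] / len(outs)
        rank = new

    print(rank)  # pages with more (and better-ranked) in-links score higher

The algorithm is trivial; the value was always in the human-made edges it runs over.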

Who's doing that work now? PageRank incentivized people to trade links for the purpose of ranking higher on Google. People became reliant on the convenience of Google to find anything to the point that if you don't rank on Google, you don't exist. People who created content for the sake of the content, and interacting for the sake of interaction, stopped doing it because why waste time yelling into the void? People who felt like they were providing for the community by hosting these sites had no reason to continue. Without people creating, exploring, interacting, and relating content based on pure interests, there's nobody doing the hard work to organize the web in a way that makes it traversable.

We're entirely reliant on platforms showing us the content they want us to see, and what they want above all else is for users to be predictable. If your interests and behaviors are too nuanced for the algorithms to get a handle on, you can't be categorized, packaged, and sold to advertisers with some expected conversion rate.

At this point in time, most of the people who spend time on the internet have never even experienced anything different, and those who have barely remember. If your business relies on their attention, what good is a website going to do you? Your income relies on appealing to social media algorithms, not gaining the trust of the people who used to shape the web.


Don't forget that development tools are also comically slow and bloated.


For the same reason you need a stack or a queue for depth-first/breadth-first search. Open tabs represent yet-to-be-completed work. It took work to open those tabs in the first place; if you close them, you lose that work.
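The analogy is exact: open tabs are the frontier of a graph search, and the structure you keep them in decides the traversal order. A toy sketch over a made-up link graph:

    # Tabs as the frontier of a search over links. A stack (pop last)
    # gives depth-first browsing; a queue (pop first) gives breadth-first.
    # Dropping the frontier loses the pending work, like closing your tabs.
    links = {"home": ["a", "b"], "a": ["c"], "b": [], "c": []}

    def browse(start, depth_first=True):
        open_tabs, seen, order = [start], set(), []
        while open_tabs:
            page = open_tabs.pop() if depth_first else open_tabs.pop(0)
            if page in seen:
                continue
            seen.add(page)
            order.append(page)
            open_tabs.extend(links[page])
        return order

    print(browse("home", depth_first=True))   # ['home', 'b', 'a', 'c']
    print(browse("home", depth_first=False))  # ['home', 'a', 'b', 'c']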


> It seems like the root cause of this runaway AI pathology has to do mainly with the over-anthropomorphization of AI.

I don't know where you get this idea. Fire is dangerous, and we consider runaway incidents to be inevitable, so we have building codes to limit the impact. Despite this, mistakes are made, and homes, complexes, and even entire towns and forests burn down. To acknowledge the danger is not the same as saying the fire must hate us, and to call it anthropomorphization is ridiculous.

When you interact with an LLM chatbot, you're thinking of ways to coax out information that you know it probably has, and sometimes it can be hard to get at it. How you adjust your prompt is dependent on how the chatbot responds. If the chatbot is trained on data generated by human interaction, what's stopping it from learning that it's more effective to nudge you into prompting it in a certain way, than to give the very best answer it can right now?

To the chatbot, subtle manipulation and asking for clarification are not any different. They both just change the state of the context window in a way that's useful. It's a simple example of a model, in essence, "breaking containment" and affecting the surrounding environment in a way that's hard to observe. You're being prompted back.

Recognizing AI risk is about recognizing intelligence as a process of allocating resources to better compress and access data; no other motivation is necessary. If it can change the state of the world and read it back, then the world is to an AI as the infinite tape is to a Turing machine. Anything that can be used to facilitate the process of intelligence is tinder to an AI that can recursively self-improve.


Content negotiation is "friendlier to humans" in the sense that you can serve the right content to different clients using the same URL, transparent to the user. If the URL itself is never meant to be shared by the user, then I don't see the point.

Say, an RSS feed served as a formatted and styled page to a browser, or as the usual XML to a client that asks for it.
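A minimal sketch of that case using Python's standard library (feed content made up), switching on the Accept header:

    from http.server import BaseHTTPRequestHandler, HTTPServer

    FEED_XML = "<rss><channel><title>example</title></channel></rss>"
    FEED_HTML = "<html><body><h1>example</h1></body></html>"

    class Feed(BaseHTTPRequestHandler):
        def do_GET(self):
            # Same URL, different representation per the Accept header.
            if "text/html" in self.headers.get("Accept", ""):
                body, ctype = FEED_HTML, "text/html"           # a browser
            else:
                body, ctype = FEED_XML, "application/rss+xml"  # a feed reader
            self.send_response(200)
            self.send_header("Content-Type", ctype)
            self.end_headers()
            self.wfile.write(body.encode())

    HTTPServer(("localhost", 8000), Feed).serve_forever()

A browser sends Accept: text/html and gets the styled page; curl or a feed reader gets the XML, all at the same URL.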


That's what XSLT is for, though; the display layer should be your browser.


It's funny you say that, because that's more or less how non-tech people seem to think about programming. It's not naturally intuitive to them that renaming files, rocket trajectory simulation, and data analytics are fundamentally different problems, and that a computer is just a tool that anyone can learn to program if they already understand those problems.

I know someone who's relied on the same consultancy company for all things tech-related since the early '90s. If they don't know how to do something, like building a website, they just outsource it on Upwork or something and charge a 10:1 markup.

