Hacker News | jonas_kgomo's comments

I tried watching this over a couple of months and still remain lost every 5 minutes. There seems to be a certain jargon they use that transcends normal speech, and they are operating in the same linguistic sphere. Does anyone understand any randomly selected 5-20 minutes of this discussion?


It would benefit from an executive summary.


This is the most difficult conversation I have ever listened to, because each sentence had so many layers that I could not fully follow even five minutes of the discussion. My questions are: how does one get to understand these kinds of conversations, and secondly, where can I find more of them?

NB: I guess the right word here for "high-density" should probably be entropy or information content, as opposed to density.


This is basically a direct copy of the startup that got purchased by Linktree, right?


You're right that the current features are similar to Bento.me.

However, I have plans to develop unique functionalities that will differentiate griddd in the future.

My goal is to evolve beyond the basic concept and create a more comprehensive platform.


I have shared a similar post about another app of the same kind. It is called Invisibility (I'm already using it, but concerned)!

https://news.ycombinator.com/item?id=40530354


What examples are you considering here, bioweapons?


Well, OpenAI gets really upset when you ask it to design a warp drive, so maybe that was it.


Promising not to train on Microsoft's customer data, and then training on Microsoft customer data.


I don't think the person you are replying to is correct, because the only technological advancement of the "a new OpenAI artifact provides schematics" kind that I think could qualify is Drexler-wins-Smalley-sucks style nanotechnology that could be used to build computation. That would be the sort of thing where, if you're in favour of building the AI faster, you're like "Why wouldn't we do this?", and if you're worried the AI may be trying to release a bioweapon to escape, you're like "How could you even consider building to these schematics?".

I don't think it's correct, not because it sounds like a sci-fi novel, but because I think it's not even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that, considering all the many problems that need to be solved for Drexler to be right.

I think it's much more likely that this was an ideological disagreement about safety in general rather than about a specific breakthrough or technology, and that Ilya got the backing of US NatSec types (apparently their representative on the board sided with him) to get Sam ousted.


> I don't think it's correct, not because it sounds like a sci-fi novel, but because I think it's not even remotely plausible that a new version of their internal AI system would be good enough at this point in time to do something like that

Aren't these synonymous at this point? The conceit that you can point AGI at any arbitrary speculative sci-fi concept and it can just invent it is a sci-fi trope.


No, not really. Calling something "science fiction" at the present moment is generally an insult intended to say something along the lines of "You're an idiot for believing this made-up children's story could be real; it's like believing in fairies." That is of course a really dumb thing to say, because science fiction has a very long history of predicting technological advances (the internet, tanks, video calls, not just phones but flip phones, submarines, television, the lunar landing, credit cards, aircraft, robotics, drones, tablets, bionic limbs, antidepressants). So the idea that, because something appears in science fiction, it is therefore stupid to think it is a real possibility (even when people believe it for entirely separate reasons) is really, really dumb. It would also be dumb to think something is possible only because it exists in science fiction, like how many people think about faster-than-light travel, but science fiction is not why people believe AGI is possible.

Basically, there's a huge difference between "I don't think this is a feasible explanation for X event that just happened for specific technical reasons" (good) and "I don't think this is a possible explanation of X event that just happened because it has happened in science fiction stories, so it cannot be true" (dumb).

About nanotechnology specifically, if Drexler from Drexler-Smalley is right then an AGI would probably be able to invent it by definition. If Drexler is right that means it's in principle possible and just a matter of engineering, and an AGI (or a narrow superhuman AI at this task) by definition can do that engineering, with enough time and copies of itself.


How would a superhuman intelligence invent a new non-hypothetical actually-working device without actually conducting physical experiments, building prototypes, and so on? By conducting really rigorous meta-analysis of existing research papers? Every single example you listed involved work IRL.

> with enough time and copies of itself.

Alright, but that’s not what the previous post was hypothesizing, which is that OpenAI was possibly able to do that without physical experimentation.


Yes, the sort of challenges you're talking about are pretty much exactly why I don't consider it feasible that OpenAI has an internal system that is at that level yet. I would consider it to be at the reasonable limits of possibility that they could have an AI that could give a very convincing, detailed, & feasible "grant proposal" style plan for answering those questions, which wouldn't qualify for OP's comment.

With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work. That level of cognitive achievement is what I think is infeasible for OpenAI to have internally right now, for several reasons. Mainly, it's extremely far ahead of everything else, to the point that I think they'd need recursive self-improvement to have gotten there, and I know for a fact there are many people at OpenAI who would rebel before letting a recursively self-improving AI get to that point. And second, if they lucked into something that capable by some freak accident, they wouldn't be able to keep it quiet for a few days, let alone a few weeks.

Basically, I don't think "a single technological advancement that product wants to implement and safety thinks is insane" is a good candidate for what caused the split, because there aren't that many such single technological advancements I can think of and all of them would require greater intelligence than I think is possible for OpenAI to have in an AI right now, even in their highest quality internal prototype.


> With a more advanced AI system, one that could build better physics simulation environments, write software that's near maximally efficient, design better atomic modelling and tools than currently exist, and put all of that into thinking through a plan to achieve the technology (effectively experimenting inside its own head), I could maybe see it being possible for it to make it without needing the physical lab work.

It couldn't do any of that because it would cost money. The AGI wouldn't have money to do that because it doesn't have a job. It would need to get one to live, just like humans do, and then it wouldn't have time to take over the world, just like humans don't.

An artificial human-like superintelligence is incapable of being superhuman because it is constrained in all the same ways humans are, and those constraints aren't just "they can't think fast enough".


I think you're confused. We're talking about a hypothetical internal OpenAI prototype, and the specific example you listed is one I said wasn't feasible for the company to have right now. The money would come from the same budget that funds the rest of OpenAI's research.


Our key insights are:

1. You don’t have time to do everything you want, so you must make tough choices

2. Set a priority for each day, and plan your day strategically around it

3. By reflecting on the week gone by, you can reset yourself for the week ahead

4. Reflecting on your beliefs and knowledge (epistemic housekeeping) can be essential to removing overestimation and bias from your planning


Very interesting, the uLog page looks great. I will try this.


Thanks! Make sure you enable Radar from the start and set up some quality rules. Card testers will put a dent in your profits with the default settings.
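To give a concrete idea, rules along these lines are the sort of thing I mean (illustrative only, attribute names from memory, so double-check them against Stripe's Radar docs before relying on them):

  Block if :cvc_check: = 'fail'
  Block if :risk_level: = 'highest'
  Review if :card_country: != :ip_country:

Card-testing runs tend to show up as bursts of small charges with failed CVC checks, so the first rule alone catches a lot of them.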


Foundation Models Taskforce | Advise | Expression of Interest | UK

The UK government has committed an initial £100M towards its Foundation Models Taskforce, the largest amount directed by any nation towards AI safety. We want to find people with diverse skills and backgrounds to work in or with the Taskforce, to catalytically advance AI safety this year with a global impact. Apply: https://docs.google.com/forms/d/e/1FAIpQLSeOV3FPChEstFy7CcuH...


Andromeda Cluster | Grant | Remote | Compute (10 exaflops*)

Available for experiments, training runs, and inference. No minimum duration and superb pricing. Big enough to train LLaMA 65B in ~10 days. For use by startup investments of Nat Friedman and Daniel Gross. Apply: https://andromedacluster.com/
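A rough sanity check on the ~10 day figure, assuming LLaMA 65B's ~1.4T training tokens and the standard 6*N*D training-FLOPs estimate (my assumptions, not numbers from the cluster page):

  # Back-of-envelope: sustained throughput needed to train LLaMA 65B in 10 days
  params = 65e9                     # model parameters
  tokens = 1.4e12                   # tokens used to train LLaMA 65B
  flops = 6 * params * tokens       # ~5.5e23 total training FLOPs
  sustained = flops / (10 * 86400)  # ~6.3e17 FLOP/s sustained
  print(sustained / 10e18)          # ~0.06, i.e. ~6% of the 10 exaflop peak

Needing only ~6% of the headline number sounds low, but headline exaflop figures are typically sparse/low-precision peaks, so the ~10 day claim looks roughly consistent with realistic utilization.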

