
Yeah a Mac Mini seems to be the only one that can go the distance.

I am sorry for your loss. I cannot imagine the grief. It must be very hard.


No bottle can guarantee an absolute seal. Even a very tiny leak will allow ethanol to evaporate over time.


Does this mean that compilation fails without an internet connection? If so, that's horrifying.


Yes, of course it does, isn't it nice?

Even better: if you want to automate the whole notarization thing, you don't have a "nice" notarize-this-thing command that blocks until it's notarized and fails if there's an issue. You send a notarization request... and wait, and then you can write a nice for/sleep/check loop in a shell script to figure out whether the notarization finished and whether it did so successfully. Of course, from time to time the error/success message changes, so that script will break every so often. Have to keep things interesting.

Xcode does most of this as part of the project build - when it feels like it, that is. But if you want to run this in CI it's a ton of additional fun.


None of this comment is true.

Compilation works fine without notarization. It isn't invoked by default for the vast majority of compilations. It is only invoked if you submit to an App Store or manually trigger notarization.

The notarization command definitely does have the wait feature you claim it doesn't: `xcrun notarytool ... --wait`.
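A minimal sketch of what a blocking notarization step could look like in a CI script (Python here purely for illustration; the archive path and keychain-profile name are placeholders, not anything from this thread):

```python
import subprocess
import sys

# Placeholder inputs: adjust the archive path and the keychain profile
# (created beforehand with `xcrun notarytool store-credentials`) for your setup.
ARCHIVE = "build/MyApp.zip"
PROFILE = "my-notary-profile"

def notarize(archive: str, profile: str) -> None:
    # `--wait` blocks until Apple finishes processing the submission,
    # so no manual poll/sleep loop is needed.
    result = subprocess.run(
        ["xcrun", "notarytool", "submit", archive,
         "--keychain-profile", profile, "--wait"],
        capture_output=True, text=True,
    )
    print(result.stdout)
    # Treat anything other than an explicit "Accepted" status as a failure;
    # the exact output format may vary between Xcode versions, so this
    # string check is an assumption worth verifying locally.
    if result.returncode != 0 or "status: Accepted" not in result.stdout:
        print(result.stderr, file=sys.stderr)
        sys.exit(1)

if __name__ == "__main__":
    notarize(ARCHIVE, PROFILE)
```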


We don't know if AGI is even possible outside of a biological construct yet. This is key. Can we land on AGI without some clear indication of possibility (aka Chappie style)? Possibly, but the likelihood is low. Quite low. It's essentially groping in the dark.

A good contrast is quantum computing. We know that's possible, even feasible, and now are trying to overcome the engineering hurdles. And people still think that's vaporware.


> We don't know if AGI is even possible outside of a biological construct yet. This is key.

A discovery that AGI is impossible in principle to implement in an electronic computer would require a major fundamental discovery in physics that answers the question “what is the brain doing in order to implement general intelligence?”


It is vacuously true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment. Obviously this is cost-prohibitive and we don’t have even 0.1% of the data required to make the simulation. Maybe we could simulate every single neuron instead, but again it’ll take many decades to gather the data in living human brains, and it would still be extremely expensive computationally since we would need to simulate every protein and mRNA molecule across billions of neurons and glial cells.

So the question is whether human intelligence has higher-level primitives that can be implemented more efficiently - sort of akin to solving differential equations, is there a “symbolic solution” or are we forced to go “numerically” no matter how clever we are?


> It is vacuously true that a Turing machine can implement human intelligence

The case of simulating all known physics is stronger so I'll consider that.

But still it tells us nothing, as the Turing machine can't be built. It is a kind of tautology wherein computation is taken to "run" the universe via the formalism of quantum mechanics, which is taken to be a complete description of reality, permitting the assumption that brains do intelligence by way of unknown combinations of known factors.

For what it's worth, I think the last point might be right, but the argument is circular.

Here is a better one. We can/do design narrow boundary intelligence into machines. We can see that we are ourselves assemblies of a huge number of tiny machines which we only partially understand. Therefore it seems plausible that computation might be sufficient for biology. But until we better understand life we'll not know.

Whether we can engineer it or whether it must grow, and on what substrates, are also relevant questions.

If it appears we are forced to "go numerically", as you say, it may just indicate that we don't know how to put the pieces together yet. It might mean that a human zygote and its immediate environment is the only thing that can put the pieces together properly given energetic and material constraints. It might also mean we're missing physics, or maybe even philosophy: fundamental notions of what it means to have/be biological intelligence. Intelligence human or otherwise isn't well defined.


QM is a testable hypothesis, so I don't think it's necessarily an axiomatic assumption here. I'm not sure what you mean by "it tells us nothing, as ... can't be built". It tells us there's no theoretical constraint, only an engineering constraint, to simulating the human brain (and all the tasks).


Sure, you can simulate a brain. If and when the simulation starts to talk you can even claim you understand how to build human intelligence in a limited sense. You don't know if it's a complete model of the organism until you understand the organism. Maybe you made a p zombie. Maybe it's conscious but lacks one very particular faculty that human beings have by way of some subtle phenomena you don't know about.

There is no way to distinguish between a faithfully reimplemented human being and a partial hackjob that happens to line up with your blind spots without ontological omniscience. Failing that, you just get to choose what you think is important and hope it's everything relevant to behaviors you care about.


> It is vacuously true that a Turing machine can implement human intelligence: simply solve the Schrödinger equation for every atom in the human body and local environment.

Yes, that is the bluntest, lowest level version of what I mean. To discover that this wouldn’t work in principle would be to discover that quantum mechanics is false.

Which, hey, quantum mechanics probably is false! But discovering the theory which both replaces quantum mechanics and shows that AGI in an electronic computer is physically impossible is definitely a tall order.


There's that aphorism that goes: people who thought the epitome of technology was a steam engine pictured the brain as pipes and connecting rods, people who thought the epitome of technology was a telephone exchange pictured the brain as wires and relays... and now we have computers, and the fact that they can in principle simulate anything at all is a red herring, because we can't actually make them simulate things we don't understand, and we can't always make them simulate things we do understand, either, when it comes down to it. We still need to know what the thing is that the brain does, it's still a hard question, and maybe it would even be a kind of revolution in physics, just not in fundamental physics.


>We still need to know what the thing is that the brain does

Yes, but not necessarily at the level where the interesting bits happen. It’s entirely possible to simulate poorly understood emergent behavior by simulating the underlying effects that give rise to it.


Can I paraphrase that as make an imitation and hack it around until it thinks, or did I miss the point?


It's not even known if we can observe everything required to replicate consciousness.


The alternative is magic.

Brains are physical molecular machines. Everything that they do is the result of physical processes.


I'd argue LLMs and deep learning are much more on the intelligence-from-complexity side than the nice-symbolic-solution side of things. Probably the human neuron, though intrinsically very complex, has nice low-loss abstractions to small circuits. But at the higher levels, we don't build artificial neural networks by writing the programs ourselves.


That is only true if consciousness is physical and the result of some physics going on in the human brain. We have no idea if that's true.


Whatever it is that gives rise to consciousness is, by definition, physics. It might not be known physics, but even if it isn't known yet, it's within the purview of physics to find out. If you're going to claim that it could be something that fundamentally can't be found out, then you're admitting to thinking in terms of magic/superstition.


The vast majority of the evidence, as well as logic, supports it so yes we have an idea.

You got downvoted so I gave you an upvote to compensate.

We seem to all be working with conflicting ideas. If we are strict materialists, and everything is physical, then in reality we don't have free will and this whole discussion is just the universe running on automatic.

That may indeed be true, but we are all pretending that it isn't. Some big cognitive dissonance happening here.


This bogus argument has been refuted numerous times--read Dennett's book "Freedom Evolves" for one sort of response. And whether people are "pretending" something is irrelevant (and ad hominem, and not even true). The plain fact remains that all evidence and logic supports physicalism, and even if you entertain dualistic ideas like those of David Chalmers they don't give you free will, they don't counter determinism.

Not necessarily. For a given definition of AGI, you could have a mathematical proof that it is incomputable, similar to how Gödel's incompleteness theorems work.

It need not even be incomputable; it could be NP-hard and practically incomputable, or it could be undecidable, i.e. a version of the halting problem.

There are any number of ways our current models of mathematics or computation could in theory be shown as incapable of expressing AGI, without needing a fundamental change in physics.


This is not a logically valid argument. If AGI isn't possible for any of the reasons you suggest, then human cognition isn't possible either. And you're making numerous category mistakes ... an AGI can't be "incomputable" or "undecidable" or "NP hard". A problem that we put to an AGI might be NP hard, but neither AGIs nor humans need to solve the entire class of problems, only instances of them, and they don't have to solve them optimally. Thus salesmen are able to travel.

To quote ChatGPT on this:

"Could cognition be NP-hard? Strictly speaking, no—if human brains were literally solving NP-hard problems in their general form, we wouldn’t be able to think at all.

Does cognition involve NP-hard problems? Yes—in theory, many of the domains we reason about are NP-hard in the worst case.

What’s really happening? Human cognition relies on heuristics, approximations, and exploiting real-world regularities, so we almost never hit the formal “worst cases” that define NP-hardness."


We would also need a definition of AGI that is provable or disprovable.

We don’t even have a workable definition, never mind a machine.


Only if we need to classify things near the boundary. If we make something that’s better at every test that we can devise than any human we can find, I think we can say that no reasonable definition of AGI would exclude it without actually arriving at a definition.


We don’t need such a definition of general intelligence to conclude that biological humans have it, so I’m not sure why we’d need such a definition for AGI.


I disagree. We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris. I'm not saying we aren't generally intelligent, but a big part of believing we are is because not believing so would be psychologically and culturally disastrous.

I fully expect that, as our attempts at AGI become more and more sophisticated, there will be a long period where there are intensely polarizing arguments as to whether or not what we've built is AGI or not. This feels so obvious and self-evident to me that I can't imagine a world where we achieve anything approaching consensus on this quickly.

If we could come up with a widely-accepted definition of general intelligence, I think there'd be less argument, but it wouldn't preclude people from interpreting both the definition and its manifestation in different ways.


I can say it. Humans are not "generally intelligent". We are intelligent in a distribution of environments which are similar enough to ones we are used to. There's no way to be intelligent with no priors on the environment, basically by information theory (you can construct an environment that is adversarial to the learning efficiency that "intelligent" beings get from their priors).


> We claim that biological humans have general intelligence because we are biased and arrogant, and experience hubris.

No, we say it because - in this context - we are the definition of general intelligence.

Approximately nobody talking about AGI takes the "G" to stand for "most general possible intelligence that could ever exist." All it means is "as general as an average human." So it doesn't matter if humans are "really general intelligence" or not, we are the benchmark being discussed here.


If you don't believe me, go back to the introduction of the term[1]:

> By advanced artificial general intelligence, I mean AI systems that rival or surpass the human brain in complexity and speed, that can acquire, manipulate and reason with general knowledge, and that are usable in essentially any phase of industrial or military operations where a human intelligence would otherwise be needed. Such systems may be modeled on the human brain, but they do not necessarily have to be, and they do not have to be "conscious" or possess any other competence that is not strictly relevant to their application. What matters is that such systems can be used to replace human brains in tasks ranging from organizing and running a mine or a factory to piloting an airplane, analyzing intelligence data or planning a battle.

It's pretty clear here that the notion of "artificial general intelligence" is being defined as relative to human intelligence.

Or see what Ben Goertzel - probably the one person most responsible for bringing the term into mainstream usage - had to say on the issue[2]:

> “Artificial General Intelligence”, AGI for short, is a term adopted by some researchers to refer to their research field. Though not a precisely defined technical term, the term is used to stress the “general” nature of the desired capabilities of the systems being researched -- as compared to the bulk of mainstream Artificial Intelligence (AI) work, which focuses on systems with very specialized “intelligent” capabilities. While most existing AI projects aim at a certain aspect or application of intelligence, an AGI project aims at “intelligence” as a whole, which has many aspects, and can be used in various situations. There is a loose relationship between “general intelligence” as meant in the term AGI and the notion of “g-factor” in psychology [1]: the g-factor is an attempt to measure general intelligence, intelligence across various domains, in humans.

Note the reference to "general intelligence" as a contrast to specialized AI's (what people used to call "narrow AI" even though he doesn't use the term here). And the rest of that paragraph shows that the whole notion is clearly framed in terms of comparison to human intelligence.

That point is made even more clear when the paper goes on to say:

> Modern learning theory has made clear that the only way to achieve maximally general problem-solving ability is to utilize infinite computing power. Intelligence given limited computational resources is always going to have limits to its generality. The human mind/brain, while possessing extremely general capability, is best at solving the types of problems which it has specialized circuitry to handle (e.g. face recognition, social learning, language learning;

Note that they chose to specifically use the more precise term "maximally general problem-solving ability" when referring to something beyond the range of human intelligence, and then continued to clearly show that the overall idea is - again - framed in terms of human intelligence.

One could also consult Marvin Minsky's words[3] from back around the founding of the overall field of "Artificial Intelligence" altogether:

> “In from three to eight years, we will have a machine with the general intelligence of an average human being. I mean a machine that will be able to read Shakespeare, grease a car, play office politics, tell a joke, have a fight.

Simply put, with a few exceptions, the vast majority of people working in this space simply take AGI to mean something approximately like "human like intelligence". That's all. No arrogance or hubris needed.

[1]: https://web.archive.org/web/20110529215447/http://www.foresi...

[2]: https://goertzel.org/agiri06/%255B1%255D%2520Introduction_No...

[3]: https://www.science.org/doi/10.1126/science.ado7069


> It's pretty clear here that the notion of "artificial general intelligence" is being defined as relative to human intelligence.

Which is precisely what the comment you responded to said.


Well general intelligence in humans already exists, whereas general intelligence doesn't yet exist in machines. How do we know when we have it? You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be. Surely you wouldn't say that someone who can't remember names or navigate without GPS lacks general intelligence, so it's necessary to define what criteria are absolutely required.


> You can't even simply compare it to humans and ask "is it able to do the same things?" because your answer depends on what you define those things to be.

Right, but you can’t compare two different humans either. You don’t test each new human to see if they have it. Somehow we conclude that humans have it without doing either of those things.


> You don’t test each new human to see if they have it

We do, it's called school, and we label some humans with different learning disabilities. Some of those learning disabilities are grave enough that they can't learn to do tasks we expect humans to be able to learn; such humans can be argued to not possess the general intelligence we expect from humans.

Interacting with an LLM today is like interacting with an Alzheimer's patient: they can do things they already learned well, but poke at it and it all falls apart and they start repeating themselves; they can't learn.


Yes, there are diseases, injuries, etc. which can impair a human’s cognitive abilities. Sometimes those impairments are so severe that we don’t consider the human to be intelligent (or even alive!). But note that we still make this distinction without anything close to a rigorous formal definition of general intelligence.


How do we know when a newborn has achieved general intelligence? We don't need a definition amenable to proof.


It's a near clone of a model that already has it; we don't need to prove it has general intelligence, we just assume it does because most do have it.


P.S. The response is just an evasion.


A question which will be trivial to answer once you properly define what you mean by "brain".

Presumably "brains" do not do many of the things that you will measure AGI by, and your brain is having trouble understanding the idea that "brain" is not well understood by brains.

Does it make it any easier if we simplify the problem to: what is the human doing that makes (him) intelligent? If you know your historical context, no. This is not a solved problem.


> Does it make it any easier if we simplify the problem to: what is the human doing that makes (him) intelligent?

Sure, it doesn’t have to be literally just the brain, but my point is you’d need very new physics to answer the question “how does a biological human have general intelligence?”


Suppose dogs invent their own idea of intelligence but they say only dogs have it.

Do we think new physics would be required to validate dog intelligence ?


The claim that only dogs have intelligence is open for criticism, just like every other claim.

I’m not sure what your point is, because the source of the claim is irrelevant anyway. The reason I think that humans have general intelligence is not that humans say that they have it.


Would that really be a physics discovery? I mean I guess everything ultimately is. But it seems like maybe consciousness could be understood in terms of "higher level" sciences - somewhere on the chain of neurology->biology->chemistry->physics.


Consciousness (subjective experience) is possibly orthogonal to intelligence (ability to achieve complex goals). We definitely have a better handle on what intelligence is than consciousness.


That does make sense, reminds me of Blindsight, where one central idea is that conscious experience might not even be necessary for intelligence (and possibly even maladaptive).


> Would that really be a physics discovery?

No, it could be something that proves all of our fundamental mathematics wrong.

The GP just gave the more conservative option.


I’m not sure what you mean. This new discovery in mathematics would also necessarily tell us something new about what is computable, which is physics.


It would impact physics, yes. And literally every other natural science.


That sounds like you’re describing AGI as being impractical to implement in an electronic computer, not impossible in principle.


Yeah, I guess I'm not taking a stance on that above, just wondering where in that chain holds the most explanatory power for intelligence and/or consciousness.

I don't think there's any real reason to think intelligence depends on "meat" as its substrate, so AGI seems in principle possible to me.

Not that my opinion counts for much here, since I don't really have any relevant education on the topic. But my half-baked instinct is that LLMs in and of themselves will never constitute true AGI. The biggest thing that seems to be missing from what we currently call AI is memory - and it's very interesting to see how their behavior changes if you hook up LLMs to any of the various "memory MCP" implementations out there.

Even experimenting with those sorts of things has left me feeling there's still something (or many somethings) missing to take us from what is currently called "AI" to "AGI" or so-called super intelligence.


> I don't think there's any real reason to think intelligence depends on "meat" as its substrate

This made me think of... ok, so let's say that we discover that intelligence does indeed depend on "meat". Could we then engineer a sort of organic computer that has general intelligence? But could we also claim that this organic computer isn't a computer at all, but is actually a new genetically engineered life form?


> But my half-baked instinct is that LLMs in and of themselves will never constitute true AGI.

I agree. But... LLMs are not the only game in town. They are just one approach to AI that is currently being pursued. The current dominant approach by investment dollars, attention, and hype, to be sure. But still far from the only thing around.


It doesn't have to be impossible in principle, just impossible given how little we understand consciousness or will anytime in the next century. Impossible for all intents and purposes for anyone living today.


> Impossible for all intents and purposes for anyone living today.

Sure, but tons of things which are obviously physically possible are also out of reach for anyone living today.


That question is not a physics question.


It’s not really “what is the brain doing”; that path leads to “quantum mysticism”. What we lack is a good theoretical framework about complex emergence. More maths in this space please.

Intelligence is an emergent phenomenon; all the interesting stuff happens at the boundary of order and disorder but we don’t have good tools in this space.


Seems the opposite way round to me. We couldn't conclusively say that AGI is possible in principle until some physics (or rather biology) discovery explains how it would be possible. Until then, anything we engineer is an approximation at best.


Not necessarily. It could simply be a question of scale. Being analog and molecular means that brain could be doing enormously more than any foreseeable computer. For a simple example what if every neuron is doing trillions of calculations.

(I’m not saying it is, just that it’s possible)


I think you’re merely referring to what is feasible in practice to compute with our current or near-future computers. I was referring to what is computable in principle.


Right. That’s what I was responding to.

OP wrote: > We don't know if AGI is even possible outside of a biological construct yet

And you replied that means it’s impossible in principle. I’m correcting you in saying that it can be impossible in ways other than principle.


On the contrary, we have one working example of general intelligence (humans) and zero of quantum computing.


That's covered in the biological construct part.

And no, we definitely do have quantum computers. They're just not practical yet.


Do we have a specific enough definition of general intelligence that we can exclude all non-human animals?


Why does it need to exclude all non human animals? Could it not be a difference of degree rather than of kind?


The post I was responding to had

> On the contrary, we have one working example of general intelligence (humans)

I think some animals probably have what most people would informally call general intelligence, but maybe there’s some technical definition that makes me wrong.


Their point is not in any way weakened if you read "one working example" as "at least one working example".


Oh, good point, I hadn’t noticed the alternative reading. That makes sense, then.


I do not know how "general intelligence" is defined, but there are a set of features we humans have that other animals mostly don't, as per the philosopher Roger Scruton[1], that I am reproducing from memory (errors mine):

1. Animals have desires, but do not make choices

We can choose to do what we do not desire, and choose not to do what we desire. For animals, one does not need to make this distinction to explain their behavior (Occam's razor)--they simply do what they desire.

2. Animals "live in a world of perception" (Schopenhauer)

They only engage with things as they are. They do not reminisce about the past, plan for the future, or fantasize about the impossible. They do not ask "what if?" or "why?". They lack imagination.

3. Animals do not have the higher emotions that require a conceptual repertoire

such as regret, gratitude, shame, pride, guilt, etc.

4. Animals do not form complex relationships with others

Because it requires the higher emotions like gratitude and resentment, and concepts such as rights and responsibilities.

5. Animals do not get art or music

We can pay disinterested attention to a work of art (or nature) for its own sake, taking pleasure from the exercise of our rational faculties thereof.

6. Animals do not laugh

I do not know if the science/philosophy of laughter is settled, but it appears to me to be some kind of phenomenon that depends on civil society.

7. Animals lack language

in the full sense of being able to engage in reason-giving dialogue with others, justifying your actions and explaining your intentions.

Scruton believed that all of the above arise together.

I know this is perhaps a little OT, but I seldom if ever see these issues mentioned in discussions about AGI. Maybe less applicable to super-intelligence, but certainly applicable to the "artificial human" part of the equation.

[1] Philosophy: Principles and Problems. Roger Scruton


If some animals also have general intelligence then we have more than one example, so this simply isn't relevant.


We're fixated on human intelligence but a computer cannot even emulate the intelligence of a honeybee or an ant.


How do you mean? AFAICT computers can definitely do that.

Sure, it won't be the size of an ant, but we definitely have models running on computers that have much more complexity than the life of an ant.


> Sure, it won't be the size of an ant, but we definitely have models running on computers that have much more complexity than the life of an ant.

Do we? Where is the model that can run an ant and navigate a 3D environment, parse visuals and other senses to orient itself, and figure out where it can climb to get to where it needs to go? Then put that in an average forest: navigate trees and other insects, try to cooperate with other ants, and find its way back. Or build an anthill. An ant can build an anthill, full of tunnels everywhere, that doesn't collapse, without using a plan.

Do we have such a model? I don't think we have anything that can do that yet. Waymo is trying to solve a much simpler problem and they still struggle, so I am pretty sure we still can't run anything even remotely as complex as an ant. Maybe a simple worm, but not an ant.


Having aptitude in mathematics was once considered the highest form of human intelligence, yet a simple pocket calculator can beat the pants off most humans at arithmetic tasks.

Conversely, something we regard as simple, such as selecting a key from a keychain and using it to unlock a door not previously encountered, is beyond the current abilities of any machine.

I suspect you might be underestimating the real complexity of what bees and ants do. Self-driving cars as well seemed like a simpler problem before concerted efforts were made to build one.


> Having aptitude in mathematics was once considered the highest form of human intelligence, yet a simple pocket calculator can beat the pants off most humans at arithmetic tasks.

Mathematics has been a lot more than arithmetic for... a very long time.


But arithmetic was seen as requiring intelligence, as did chess.


No one said "exclusively humans", and that's not relevant.


There are many working quantum computers…


ah, I mean, working in the sense of OP: that a system which overcomes the "engineering hurdles" is actually feasible and will be successful.

To be blocked merely by "engineering hurdles" puts QC in approximately the same place as fusion.


There are working quantum computers that are not only feasible but exist, can be rented on the cloud, and that people pay money to use.

Whether these are a commercial success at this point in time is missing the forest for the trees. A LOT of money has been put into getting as far as we have, and the limited market for using these machines at the moment means that getting a return on investment right now is difficult. But this is/has been true of every new technology.

And quantum computers are getting better & more energy efficient year-by-year.


This makes no sense.

If you believe in eg a mind or soul then maybe it's possible we cannot make AGI.

But if we are purely biological then obviously it's possible to replicate that in principle.


That doesn’t contradict what they said. We may one day design a biological computing system that is capable of it. We don’t entirely understand how neurons work; it’s reasonable to posit that the differences that many AGI boosters assert don’t matter do matter— just not in ways we’ve discovered yet.


I mentioned this in another thread, but I do wonder if we engineer a sort of biological computer, will it really be a computer at all, and not a new kind of life itself?


> not a new kind of life itself?

In my opinion, this is more a philosophical question than an engineering one. Is something alive because it’s conscious? Is it alive because it’s intelligent? Is a virus alive, or a bacteria, or an LLM?

Beats me.


Maybe — though we’d still have engineered it, which is the point I was trying to make.


We understand how neurons work in quite a bit of detail.


The Allen Institute doesn’t seem to think so. We don’t even know how the brain of a roundworm ticks, and it’s only got 302 neurons, all of which are mapped, along with their connections.


It's not "key"; it's not even relevant ... the proof will be in the pudding. Proving a priori that some outcome is possible plays no role in achieving it. And you slid, motte-and-bailey-like, from "know" to "some clear indication of possibility" -- we have extremely clear indications that it's possible, since there's no reason other than a belief in magic to think that "biological" is a necessity.

Whether it is feasible or practical or desirable to achieve AGI is another matter, but the OP lays out multiple problem areas to tackle.


The practical feasibility of quantum computing is definitely still an open research question.


Sometimes I think we’re like cats that learned how to make mirrors without really understanding them, and are so close to making one good enough that the other cat becomes sentient.


> We don't know if AGI is even possible outside of a biological construct yet

Of course it is. A brain is just a machine like any other.


Except we don't understand how the brain actually works and have yet to build a machine that behaves like it.


I've personally come to the conclusion that the novelty of the thought process is a big factor in recovery. Simply put, if I reach a conclusion that takes a rather unusual road through my mind, it's much harder to get back to after an interruption.


Instead of discarding points, why not project them all onto the sphere?


Because then they won't be uniformly distributed on the sphere (they'll be denser in the directions of the cube's corners).
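Not from the thread itself, but a small NumPy sketch that makes the difference concrete: rejection sampling (or normalizing Gaussian samples) gives a uniform distribution on the sphere, while projecting raw cube samples piles up density toward the corner directions. The function names, sample counts, and cap threshold are just illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

def reject_to_sphere(n):
    """Rejection sampling: keep cube samples inside the unit ball, then normalize."""
    pts = np.empty((0, 3))
    while len(pts) < n:
        cand = rng.uniform(-1.0, 1.0, size=(n, 3))
        norms = np.linalg.norm(cand, axis=1)
        pts = np.vstack([pts, cand[(norms <= 1.0) & (norms > 0.0)]])
    pts = pts[:n]
    return pts / np.linalg.norm(pts, axis=1, keepdims=True)

def gaussian_to_sphere(n):
    """Normalize isotropic Gaussian samples: uniform on the sphere, no rejection needed."""
    g = rng.normal(size=(n, 3))
    return g / np.linalg.norm(g, axis=1, keepdims=True)

def project_cube_to_sphere(n):
    """The approach from the question: normalize raw cube samples onto the sphere.
    This over-represents the corner directions."""
    c = rng.uniform(-1.0, 1.0, size=(n, 3))
    return c / np.linalg.norm(c, axis=1, keepdims=True)

def frac_within(pts, direction, cos_thresh=0.95):
    """Fraction of points falling in a fixed angular cap around `direction`."""
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    return float(np.mean(pts @ d > cos_thresh))

if __name__ == "__main__":
    n = 200_000
    corner, axis = [1.0, 1.0, 1.0], [1.0, 0.0, 0.0]
    for name, sampler in [("rejection", reject_to_sphere),
                          ("gaussian", gaussian_to_sphere),
                          ("projected cube", project_cube_to_sphere)]:
        p = sampler(n)
        print(f"{name:15s} corner cap: {frac_within(p, corner):.4f}  "
              f"axis cap: {frac_within(p, axis):.4f}")
    # The first two samplers give roughly equal cap fractions in both directions;
    # the projected-cube sampler shows a noticeably larger fraction toward the corner.
```

The Gaussian trick also generalizes to any dimension, which is why it is usually preferred over rejection sampling once the dimension grows and the accepted fraction of cube samples shrinks.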


hmmm interesting...


Yeah but can I spend Robux on it? If not, pass.

The whole problem is Robux isn't it? It's not like the engine is anything special.


There's a number of developers who get stuck on ROBLOX because they learned their creation tools when they were younger (they're easy to use and easily accessible to any desktop ROBLOX player), spent their formative years mastering their skills, and those skills turn out to be niche and not easily transferable to most other game engines. The choice is between basically restarting as a beginner in Unity, or continuing to make advanced creations on ROBLOX with all their friends and prestige they've earned in various sub-communities. To be honest, I'm surprised it took this long for someone to try making an API-compatible alternative.


Maybe the idea is that developers can release standalone versions of their Roblox games and escape the platform lock-in? Of course - whether their audience will come with them is a different question.


Maybe this can be used as a way to archive a Roblox game?

I'm not really a Roblox player so I'm not sure.


I haven't used them (and I despise Roblox) but my understanding is that the Roblox creation tools are actually pretty good.


I have some kids into editing Roblox, and I'm a full time Unity / Unreal dev, and I would say that the Roblox editor and engine _are_ really good.

Kids don't care about fancy graphics, they care about connecting and running around with their friends, in a wide variety of games, that are downloaded and up and running within seconds.


Yeah the term "engineer" has been diluted into oblivion, and we only have ourselves to blame for not protecting it.


Agree 100%, even blue collar workers guard their profession. Hell, I was talking to a friend and they rejected her for a retail job because she had never worked in retail before. Engineering on the other hand has zero gatekeeping - it's a sign spinner job right now. Just do a few humiliation rituals like daily standup and you're the perfect candidate!


> Just do a few cleansing rituals like daily standup and you're the perfect candidate!

There, FTFY


In Dubai, the poor underpaid folks cleaning the roads and gutters late at night are called "Cleaning Engineers" and "Garden Engineers". It's honestly sad, almost a mockery.


In the US, we've had "sanitation engineer" as the euphemistic neologism for "worker paid to pick up your garbage bins" for 50(?) years.


In German you're not even an engineer if you don't sometimes wear a hard hat or hold a screwdriver.


protecting it? ha! we’re just the first group of greater fools who thought it applied to us in the first place (hell, i became an “engineer” with an Associates degree!). just because we benefited from the prestige doesn’t always mean we’re actually held to the classical standards of engineers.


Indeed, not just math. Biology requires immense amounts of memorization. Nature is littered with exceptions.

