Hacker News | pbw's comments

I'm the OP, and I think of it as more "completely and utterly inevitable" than "needed." Given our personalities and history, the promise of curing all diseases, along with many other promises, will compel us forward. But whether or not we'll look back and say it was the right move, I don't think anyone knows for sure.


Their experiments started in 2021 and the change was made in 2025? This makes me think AI writing code for us will only speed things up so much.


Do you genuinely believe those 4 years were spent writing code?

This is why wet-behind-the-ears tech boys can't be trusted any more. They really think that the hardest part of software, the thing that slows us down, is writing code. Really!?

Kid, I'll offer you some free advice. Writing the code is the least difficult part. Deciding what and how to write (and what not to bother with) is a critical step that has nothing to do with writing code. Designing the architecture, ensuring it's correct, leaving something well-written and maintainable for the next grunt, documenting code so it's easy to understand and review, ensuring your code supports all the desired use cases and interactions with users and other code/apps/etc, iterating on it until it's polished, and then actually maintaining it and fixing the bugs that are inevitably going to be there if the code is sufficiently expansive. Those are just a few of the parts of making software that aren't "grinding code".

Read a programming book for Pete's sake, and stop assuming you can just fake it till you make it because you are part of what's destroying software for the world and it's got to stop.


The kid will learn. It always takes time.


I wrote this in 2021: Billionaires: Our Single Point of Failure https://metastable.org/billionaires/


Sincere question: what do you believe should happen when a company becomes successful?

You make excellent criticisms and I've had similar thoughts. But I never really understood what people want to do other than confiscate companies once they become successful. Not saying that is what you want to do - you specifically say not that.

You mention "let’s engineer a network of trust and monitoring and a culture of transparency". I'm not sure what that means.


In the post, I say private individuals can own assets in two ways, as individuals (up to a cap) or through a personal corporation (no cap).

So, if you start a company that becomes huge and your slice is worth $100 billion, or even $10 trillion, you can still own all of that via the personal corporation. And you can invest or spend all of it [almost] however you want.

The difference is that the personal corporation has oversight; I only specify that there will be a "board", and I don't have anything concrete beyond that.

But the idea is that in extreme circumstances, the board can overrule your money-related decisions. The intent is that they will only step in if you are going nuts, but the devil is in the details of how exactly to do that. It might be impossible, but I'd rather see us at least try than have brain-damaged trillionaires causing unchecked mayhem.

When I make this argument, people assume I want to tax or seize the billionaire's wealth, but no, I'm saying they can keep every penny. Although to be fair, if you did cleave apart people's finances like this, taxing the "personal corporation" higher than the individual portion would be tempting.


> the devil is in the details

Well yeah :)

Any ideas on how the board is chosen? Does the majority (or sole) owner / CEO of the p-corp do it?


The board can and should be friendly with the p-corp owner. Almost all the time they are going to green-light everything. They are really just a "sanity check" (literally).

Now you could ask: how do we "make sure" the board acts when the time comes, and stands up to the owner? Maybe we don't. We put the mechanism in place, and if the board fails to stop the owner, then it didn't work in that specific case. And the world will know that. But as long as it works "most" of the time, maybe that's enough.

Also, I forgot: apart from a board, a big thing might be reporting. Your p-corp activities would have much more stringent reporting requirements compared to a private individual. You can do anything you want with your $300M in private funds, including getting it as small bills and rolling around in it, but the p-corp funds need to be much more closely monitored. That alone, even without a board, would be big.


It sounds like you are also libertarian, but you argue that even libertarian societies should have some form of public interest limitation, license or oversight. This is in line with traditional Thomas Jefferson thinking: "A wise and frugal government, which shall restrain men from injuring one another, shall leave them otherwise free to regulate their own pursuits of industry and improvement, and shall not take from the mouth of labor the bread it has earned. This is the sum of good government."

That's a nice thought but the problem, as we've now shown, is that any organization of humans will by definition become corrupted, the state will grow unless constrained, and any system is imperfect and can be gamed.

What is the place of the state, traditionally empowered through a monopoly on violence, in a multinational world? Right now, it's just a tool to be abused by the wealthy, and by extension to confound and entrap the masses. Can you believe they still teach nationalism? Populism should have been excised from education after the 20th century's tragedies, but instead it seems to have redoubled.

In the future, perhaps we'll have a dictatorship of AI to keep the humans in check, but the clear danger is that such a system would present too much value not to be usurped and abused.

Democracy in the utopian theoretical sense was supposed to be based upon popular education and representation. We could work towards improving the former with AI and more effective (not child-minding oriented) personalized education programs, and the latter with more frequent referendums. Right now we have the opposite: laziness, lack of education, active misinformation, and near zero viable means for meaningful representation even in self-labelled democratic societies. It is no wonder so many people self-medicate.


I don't think using AI to write code precludes learning deeply about the problem domain and even the solution. However, it could lead to those problems depending on how it's done. But done well you can still have a very knowledgeable team that understands the domain and large portions of the code, I believe anyway.

I think software engineers will drift towards only understanding the domain and creating tasks and then reviewing code written by AI. But the reviews will be necessary and will matter, at least for a while.


I always heard it as "software development is an exercise in knowledge acquisition."


The best programmers eventually become experts in a problem domain they’ve worked on, because to teach a computer to automate a process well requires thinking like an expert and resolving incoherences. Weak programmers complain stakeholders don’t know what they want or that there’s no spec; I have a hunch these are going to be replaced by AI.


I wrote this to probe the question of why "slow" languages are popular and valid when used in conjunction with fast ones: https://tobeva.com/articles/beyond/
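The "slow plus fast" pattern can be sketched in a few lines of Python (my own illustration, not from the linked article): the same computation run as an interpreter-level loop versus delegated to Python's built-in sum(), whose loop runs in CPython's C implementation.

```python
import time

data = list(range(1_000_000))

# Explicit Python-level loop: every iteration executes in the interpreter.
start = time.perf_counter()
total_loop = 0
for x in data:
    total_loop += x
t_loop = time.perf_counter() - start

# Built-in sum(): the same loop runs inside CPython's C implementation.
start = time.perf_counter()
total_builtin = sum(data)
t_builtin = time.perf_counter() - start

assert total_loop == total_builtin == 499_999_500_000
print(f"interpreter loop: {t_loop:.4f}s, C-level sum(): {t_builtin:.4f}s")
```

On a typical machine the built-in is several times faster, which is the article's point in miniature: the "slow" language orchestrates while compiled code does the hot loop.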


I don't love the analogy, but I 100% agree that we are putting AI tools into people's hands that they are not yet able to wield safely. I think this is mostly because of how quickly AI has been developed. New technologies can take decades to be absorbed by society. Early on with cars we had no traffic lights, speed limits, or seat belts. And modern freeways were many decades off.

"Early electrical systems were poorly insulated, and fires or electrocutions were common. The creation of standards for wiring, outlets, grounding systems, and circuit breakers took decades to develop." It could be that AI will develop so fast that society never catches up, but I'm sure we'll at least get better over time.


Everything is physics at the bottom.


I really enjoy these. I've listened to them while driving: blog posts by Astral Codex Ten or Paul Graham that I had never bothered to read.

There are millions of real podcasts, but now there are an infinite number of AI generated ones. They are definitely not as good as a well-made human one, but they are pretty darn decent, quite listenable and informative.

Time is not fungible. I can listen to podcasts while walking or driving when I couldn’t be reading anything.

Here’s one I made about the Aschenbrenner 165-page PDF about AGI: https://youtu.be/6UmPoMBEDpA


Early on, the cloud's entire point was to use "commodity hardware," but now we have hyper-specialized hardware for individual services. AWS has Graviton, Inferentia, and Trainium chips; Google has TPUs and Titan security cards; Azure uses FPGAs and Sphere for security. This trend will continue.


The cloud's point was to be a computing utility, not to use commodity hardware. They may have been using commodity hardware but it was just a means to an end. That was also the era that "commodity hardware" was a literal game changer for all forms of large-scale computing for businesses, as before then you'd be beholden to an overpriced, over-valued proprietary computing vendor with crap support and no right to fix or modify it yourself. But anyway, the businesses you're referencing are literally the largest tech companies in the world; custom hardware is pretty normal at that size.


You must be talking about very early on, because it would only have taken a short time spent on practical cloud building to begin realizing that much or even most of what is in "commodity hardware" is there to serve use cases that cloud providers don't have. Why do servers have redundant power supplies? What is the BMC good for? Who cares about these LEDs? Why would anyone use SAS? Is it very important that rack posts are 19 inches between centers, or was that a totally arbitrary decision made by AT&T 90 years ago? What's the point of 12V or 5V intermediate power rails? Is there a benefit from AC power to the host?


You're not wrong, but I would make a distinction between removing features (which requires little or no R&D and saves money immediately) and designing custom ASICs (which requires large upfront R&D and pays off only over time and at large scale).


> realizing that much or even most of what is in "commodity hardware" is there to serve use cases that cloud providers don't have.

Why wouldn't they have?

E.g.

> Why do servers have redundant power supplies?

So that if you lose power you don't lose the whole server? It's even more important for cloud providers that have huge servers with high density. You connect each of the power supplies to an independent power feed so one can go down. Would you rather have 2x the capacity instead?


It was more of an opportunity initially. Bezos saw a chance to purchase extremely discounted systems. He had the foresight to move redundancy from a single host to clusters, and eventually to data-center-level redundancy. This decision reshaped how services were implemented and scaled. Specialized devices returned because they offer better value or, in some cases, enable products that were never possible before. At the end of the day a lot of decisions are based on cost. Can you do it cheaper with a toaster or an oven? Can we cut all of our food into pieces and cook it in a Beowulf cluster of toasters? I think you get the idea.


> hyper-specialized hardware for individual services

> AWS has Graviton

This is commodity hardware.

It pretty much works off the ARM spec. Besides AWS owning it and offering it cheaper, it's not in any way faster or better than Intel / AMD / something else.

We've had custom motherboards, server cases, etc long ago even before clouds.

If Apple silicon happens in the cloud then maybe...


The trend was to move toward distributed software running on computing clusters, which allowed lower cost by using commodity machines with lower individual reliability.

Now the focus is on cost-efficient machines in clusters, which is why we have the specialization.

