Hacker News new | past | comments | ask | show | jobs | submit login

I am still basically an entry-level developer even though I've been doing it for 30 years (20 for money).

The way I see this playing out is something like behavior-driven development (BDD), where the business folks describe the functionality they desire and programmers write up the backend logic. Then, as AI progresses toward AGI, a higher and higher percentage of that backend code will be generated by machine learning.

So over the next 10 years, I expect to see more specialization, probably whole careers revolving around managing containers like Docker. There will be cookie-cutter solutions for most algorithms, so the money will be in refactoring the inevitable deluge of bad code that keeps profitable businesses running.

But in 5 years we'll start to see automated solutions that record all of the inputs, outputs and logs of these containers and reverse engineer the internal logic into something that looks more like a spreadsheet or lisp. At that point people will be hand-tuning the various edge cases and failure modes.

In about 10 years, AI will be powerful enough to pattern match the millions of examples in open source and StackOverflow and extrapolate solutions for these edge cases. At that point, most programmers today will be out of a job unless they can rise to a higher level of abstraction and design the workflows better than the business folks.

Or, we can throw all of this at any of the myriad problems facing society and finally solve them for once. Which calls into question the need for money, or hierarchy, or even authority, which could very well trigger a dystopian backlash to suppress us all. But I digress.




How is this maintainable? What do you use to describe the inputs and the outputs (if it resembles a programming language, then we're basically pushing people around, aren't we)? Is the AI supposed to design the interfaces as well as the plumbing?

Let's say a bug appears. If the internals are produced by machine learning, chances are it's basically un-freakin'-fixable from the high mountains of the spreadsheet/Lisp interface. So someone has to dive in and do it by hand. I doubt the business folks will do it; they won't know where to look!

The result, it seems to me, is a metric ton of machine-generated code that someone now has to rewrite. Better hire a team to do it...


Your argument is based on AGI, something we have no idea will ever happen, and which is most likely not nearly as close as you think.


Oh, if we're getting to AGI, then it's either apocalypse time or post-scarcity time, ain't it?


I really love your storytelling even if I'm not sure I believe one whit! You should try your hand at writing books/blog posts/short stories if you haven't already.


Hah thanks, ya I don't even know what's real and what's not anymore. Someday we'll live in a society where grandma married a guy she met on the internet and the grandkids have to fill in their pre-learning questionnaire with all the stuff they've already learned on the internet so that the teacher can move on to the really important stuff that prepares them for getting their degree from Silicon Valley online university, where they'll major in pre-K robot childhood education. The year is 2029.


> But in 5 years we'll start to see automated solutions that record all of the inputs, outputs and logs of these containers and reverse engineer the internal logic into something that looks more like a spreadsheet or lisp.

And that Lisp code will look something like: https://groups.google.com/forum/#!msg/comp.lang.lisp/4nwskBo...

(Unfortunately, Lisp neither makes you smarter nor a better programmer, which seems to be a very profound, ego-wounding disappointment for a lot of people who dabble in Lisp programming.)

Now programming-by-spreadsheets, on the other hand, is a real thing, that is almost as old as Lisp, and is called "decision tables." It was a fad that peaked in the mid-1970s. There were several software packages that would translate decision tables to COBOL code, and other packages that would interpret the tables directly. I think decision tables are still interesting for several reasons: they are a good way to do requirements analysis for complex rules; the problem of compiling a decision table to an optimum sequence of conditional statements is interesting to think about and has some interesting algorithmic solutions; and lookup table dispatching can be a good way to simplify and/or speed up certain kinds of code.
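That lookup-table dispatch style is easy to sketch in a modern language, too. Here's a minimal, hypothetical example in Python (the discount rule, table, and names are made up for illustration): the tuple of condition outcomes indexes directly into a table of actions, which is exactly what the old decision-table compilers would have turned into a sequence of conditionals.

```python
# Hypothetical decision table: (is_member, large_order) -> discount rate.
# Each row of the table replaces one branch of a nested if/else.
DISCOUNT_TABLE = {
    (True,  True):  0.15,  # member, order over 100
    (True,  False): 0.05,  # member, small order
    (False, True):  0.10,  # non-member, order over 100
    (False, False): 0.00,  # no discount
}

def discount(is_member: bool, total: float) -> float:
    """Evaluate the conditions once, then dispatch on the tuple."""
    return DISCOUNT_TABLE[(is_member, total > 100)]

print(discount(True, 250))   # 0.15
print(discount(False, 50))   # 0.0
```

One nice property (and part of why decision tables work for requirements analysis): the table makes it obvious whether every combination of conditions has been covered, which nested conditionals tend to hide.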

What is not interesting at all is the use case of decision tables for "business rules." A few of the 1970s software packages survive in one form or another, and I have not heard anything good about them. The problem is very simple: the "business folks" generally do not know what they actually want. They have some vague ideas that turn out to be inconsistent, or underspecified in terms of the "inputs" or the "outputs," or to have "outputs" that on second thought they did not really want; and they (the "business folks") never think about the interactions that several "business processes" governed by the same "business rule" might have if they take place at the same time, much less the interactions of different "business rules," etc.

AI cannot solve the problem of people not knowing what they want or are talking about. Machine learning on wrong outcomes and faulty assumptions is only going to dumb systems and people down (IMO this is already obvious from the widespread use of recommendation systems).



