AI learns to write its own code by stealing from other programs (newscientist.com)
199 points by fragomatic on Feb 23, 2017 | 155 comments



“It could allow non-coders to simply describe an idea for a program and let the system build it”

This is not a new thought. The problem with it, however, is that such a description would have to be very precise or else leave room for different interpretations, which in turn could lead to very different programs being generated. In particular, as programmers know, it's often the edge cases that pose the main difficulty, not the typical case.

Describing what your program should do in sufficient detail will probably end up being not very far from the actual program itself.


As a technical manager, this actually isn't that far from what happens with teams of engineers. The less precise I am in describing the requirements of a system, the less correct the solution that gets produced. This isn't to say that my team is incompetent or mindless -- far from it, I find them to be some of the most talented engineers I've had the pleasure of working with.

Perhaps the solution to this particular problem is a clear specification of the requirements of the problem -- very similar to how companies define requirements documents for what their product does and how it behaves?


I'm not saying it applies to your case, but one danger of specifying a problem in too much detail is that it teaches the engineers to think less about the problem. In my experience, developers often think of failure cases that the managers are not aware of, or have a simpler solution to a problem.


And yet if the problem isn't specified enough, we engineers will run off and deal with failure cases that aren't relevant to the client's needs.


This discussion reminds me of this question/answer on Quora: https://www.quora.com/Since-programming-can-be-self-taught-w...


I, too, find this to be the case. There seems to be a happy medium between "you figure it out" and "it must do exactly this".

As a relatively new manager, I find figuring out where the balance is tricky.


Not only is it not a new thought, but it's been the holy grail of large software companies for at least 30 years. You'd be surprised at how much money has been spent researching this problem. And, for those just coming into software engineering, this isn't a good sign. I see a lot of software engineers who don't fear the risk of automation. But, because software development is such a huge cost center and so much is controlled by software developers, complete automation is one of the most sought-after technologies.

I don't mean to spew hyperbole, because complete automation is likely very far off. But, I just wanted to point out the market forces behind this type of technology are gigantic.


In my experience, people have trouble describing something in sufficient detail that another human can build what they want. I think we're a ways off yet from AI being able to do so.


Asking the right questions about the system usually helps a lot though. If an AI can help with that aspect, that may increase productivity at least.


Doesn't TDD basically provide a spec? Write a test, then let the AI generate a program that passes the test. If the program is still buggy, you didn't write sufficient tests.


To completely specify a program via tests, you've effectively written the program. You've also probably written somewhere around 2x the code the actual program would require as well.

I'd be more curious to see an AI write tests...


> You've also probably written somewhere around 2x the code the actual program would require as well.

If you take the easiest path, you just need to write the program and compare the results with the AI's ;)

But if you go on a case by case basis, your work will grow exponentially with the program size, not a mere 2x.


Tests are much more verbose than specs. Tests validate existential properties, i.e. there exist inputs where property P is true, but specs can also specify universal properties, i.e. for all inputs property P is true.

Static types also specify universal properties, which is why tests can't replace types, even in theory. But tests+types could be a type of spec.
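
To make the distinction concrete, here's a tiny illustrative sketch (my own, in Python, not the parent's): the example-based test asserts an existential fact, while the type annotation makes a universal claim about every input.

  from typing import List

  def dedupe(xs: List[int]) -> List[int]:
      # The signature is a universal property: for ALL lists of ints,
      # the result is a list of ints.
      return list(dict.fromkeys(xs))

  def test_dedupe_example():
      # The test is an existential property: THIS input behaves correctly.
      assert dedupe([1, 1, 2]) == [1, 2]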


Tests can constrain a problem enough so a human programmer can understand what was intended. In theory, many programs will pass the tests, but in practice most matching programs will be convoluted and in poor taste - they only pass the tests and not the general rules implied by the test cases. In other words, there are additional constraints implied by common-sense principles of software engineering.

Developing an AI that can infer a simple, tasteful set of general rules (and suitable types) from a small set of test cases is certainly hard but not theoretically impossible.


I don't think it's that simple. Imagine a simple add(int32, int32) int32 function. Unless you write a test for all 2^32 * 2^32 possible inputs and their expected outputs, how could you guarantee that the AI will come up with the "correct" implementation?

Automated tests are generally not practical for proving program correctness. Why would you expect them to be sufficient as a specification format?


With property-based testing one can describe the behavior over the entire domain(s) of valid inputs. Sampling is used to draw a set of cases to actually test, up to a certain number of tests. Knowledge about boundary conditions in the domain can help catch failures earlier, though plain random sampling seems to be pretty darn effective (at least for code written by humans). Candidates with dead code should maybe be punished. Candidates with higher complexity should maybe be punished.
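
As a minimal sketch of what that looks like (my example, using Python's Hypothesis library, not something from the parent comment): properties are stated over the whole int32 domain, and the framework samples cases to run, biasing toward boundary values like 0 and the extremes.

  from hypothesis import given, strategies as st

  int32 = st.integers(min_value=-2**31, max_value=2**31 - 1)

  def add(a, b):
      # Candidate implementation under test (e.g. a synthesized program).
      return a + b

  @given(int32, int32)
  def test_add_commutes(a, b):
      assert add(a, b) == add(b, a)

  @given(int32)
  def test_zero_is_identity(a):
      assert add(a, 0) == a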


Software produced by a typical business app developer is not provably correct either. TDD of AI generated code could not hope to produce something more correct. As your tests get better, so would the software produced both by the human and the AI.

For your add(int32, int32) function: While an AI-generated implementation is not provably correct unless the test contains 2^64 inputs, I think it's pretty likely that a correct implementation would in fact be generated pretty quickly.


I don't think it's as likely as you assume. Let's say you ask the computer to implement your adder, and you feed it the results of a few dozen, or a few hundred uint32 a + uint32 b = uint32 c operations. By coincidence, however, in all of those operations, every time bit a[14] is zero, bit c[26] also happens to be zero. The computer may produce an implementation which assumes this is always the case. The computer may also reason it does not need to look at some of the input bits, if say, for all of the tests, bit b[2] == bit b[16].

It may seem that a solution with assumptions like those is more complex than a simple addition operation, and thus the computer will find the addition solution first, but "simple" is a matter of perspective. Addition requires a logic cascade through the entire number, since the carries of the lower bits must be passed to the upper bits, which requires a critical path equal to the entire length of the inputs/outputs. In searching for an implementation which satisfies the few dozen/hundred test cases you feed it, making assumptions like those in the first paragraph can drastically shorten the critical path, and require far less logic since it's ignoring much of the input. Thus, algorithmically, the addition operation may be seen as more complex.


(BTW, I don't think it's that simple either. Just pointing out that TDD is a possible approach to avoiding the infinite recursion of specs-are-just-programs-in-a-higher-level-language.)


A few years ago I read a post by Bob Martin about how you can solve code kata problems by applying a sequence of 'transformations' to pass one test at a time. His method is very mechanical... so mechanical, in fact, that I think it may be possible to automate it.

So I started tinkering with the concept, and I've got a bit of a start on it here:

https://github.com/michaelmelanson/autocoder

What's there is very rudimentary. The main class is a good place to start.

Basically I'm building up a library of transformations that can describe solutions to kata problems (the 'word wrap' one currently). The next step is to create a search algorithm to derive an ordered sequence of transformations from that library to pass an ordered sequence of tests.
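
As a rough illustration of that next step (my own sketch in Python, not code from the repo), the search could be a plain breadth-first search over sequences of transformations, returning the shortest sequence whose resulting program passes every test:

  from collections import deque
  from typing import Callable, List, Optional

  Transformation = Callable[[str], str]  # rewrites program source
  Test = Callable[[str], bool]           # True if the program passes

  def synthesize(transforms: List[Transformation], tests: List[Test],
                 seed: str = "", max_depth: int = 5) -> Optional[str]:
      queue = deque([(seed, 0)])
      seen = {seed}
      while queue:
          program, depth = queue.popleft()
          if all(test(program) for test in tests):
              return program  # shortest passing sequence wins
          if depth < max_depth:
              for transform in transforms:
                  candidate = transform(program)
                  if candidate not in seen:
                      seen.add(candidate)
                      queue.append((candidate, depth + 1))
      return None  # nothing passed within the depth budget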


You might find Perelman et al.'s "Test-Driven Synthesis" paper interesting; it also cites that post as inspiration.

http://homes.cs.washington.edu/~djg/papers/tds-pldi2014.pdf


DHH did a great job explaining this exact same thing on Tim Ferriss's podcast a while back. http://tim.blog/2016/11/25/david-heinemeier-hansson-on-digit... at around 13:49.

Basically as time progresses we keep creating languages that are a "higher level".


If you can describe something then you can just code it. I can see AI being used in an advanced IDE as an assistant. Just write your specs and let the AI build the app.


But wouldn't you agree that by the same line of reasoning, if you can spec it, you can code it?

I mean, the difficulty lies in specifying what you want in such an unambiguous way that a program that fulfills the specification actually does what you had in mind originally.


I think you're lacking imagination. You're thinking of a software-developing AI as a simple compiler. You also seem to think there must be a 1-to-1 relationship between a user's specification and the resulting program. But a software-developing AI could work very quickly, taking a vague natural language specification and giving many programs to choose from. That set would be narrowed based on a learned correspondence between natural language and completed programs, and narrowed and refined further with more vague natural language specification from the user. If the AI is smart enough, the system could be only a bit more complicated to use than a thumbs up/thumbs down on Pandora.

Or we just ditch humans and allow an AI to assess what software we need.


Sure. Exactly what I meant. It would just speed up your job.


Yes, and that difficulty has been lowered many times over the years as we create better programming languages, libraries, and tools. It is easier and quicker to unambiguously describe your program in python than it is in assembly. It is easier to write a web server using libraries than by writing the whole thing from scratch.

These advanced AIs would just be the next level of this, giving even easier ways to specify to a computer exactly what you want it to do.


But a lot of things use defaults or common practices, which could be collected and used. And if this AI had a chat interface, it could prompt for some of the requirements. For a lot of users, the hard part is not knowing the questions to ask, or the pros and cons of the answers.


This sounds like all the promise of JavaBeans many years ago: just drag and drop and let the components write themselves.

Writing code isn't the difficult part about building most software.


Making people in an industry more efficient at their jobs == job losses for that industry.

If one programmer can now do the work of 5 with the help of an AI, companies will quickly be hiring fewer developers.

Yet every time I mention this, I'm told that "software developers are the one job that will never be at risk of automation".


All jobs are at risk of automation. Jobs by themselves are a pointless construct. It is the living that they provide us that is of value. I'm looking forward to the day this is accepted in mainstream political thought.


You're making the assumption that companies need to accomplish a constant amount of work, and that they only hire enough developers to perform this work. I would argue that many companies have a limited budget, and will continue hiring developers to expend this budget. Once hired, there is effectively indefinite work that can be done, and which will better the position of the company. If each developer can do 5x more, then the company will accomplish 5x more, not spend 5x less.


> I'm told that "software developers are the one job that will never be at risk of automation"

The AI is itself a program. If the AI can write programs, it can write a better version of itself. A few iterations of this, and it will have far surpassed human intelligence.

So it would be more accurate to say that writing programs is the last job that will be automated.


> The AI is itself a program. If the AI can write programs, it can write a better version of itself.

Not necessarily (or even likely). There's a huge difference between being able to write "programs" (for example the ones described here are ~5 lines) and being able to write a large and very complex piece of software, which such an AI would undoubtedly be.

There's also the question of what constitutes "better." Improvements in e.g. speed or memory efficiency (hard enough as it is, but conceivable for an AI) are a quite different thing than improvements in capabilities or understanding.


> There's a huge difference between being able to write "programs" (for example the ones described here are ~5 lines) and being able to write a large and very complex piece of software, which such an AI would undoubtedly be.

That's true. Real-life programs that programmers get paid to write are typically more complex than 5 lines. Any AI good enough to replace real programmers is going to be very close to good enough to rewrite itself.


This makes the assumption that every program it writes is somehow guaranteed to be better than the original. Why would it not run into the Multiplicity problem, where each subsequent copy is a bit dumber than the original?

I would think it would be much more likely to follow a normal evolutionary model: improve itself to a local maximum, then just stall while the random mutations fail to make the huge leap required to hit the next one.

We can't even properly model the human mind in computers yet - not enough processing power - so it follows that an AI which is "better" than humans would require even more processing power than that required to ape humans.


The difference here is that the "intelligence" of an AI can be described as a loss function, so if an AI can build and then train another AI which achieves a better overall score on its loss function across many examples, then that AI, perhaps with a deep neural network, will start to "understand" the components that make a successful neural network. Thus it could use intelligence rather than random mutations to evolve.


I think things like psychology/psychiatry, social work, child protective services, and other jobs that rely on a lot of interpersonal communication will be around longer than development; at least at any scale worth talking about.


I would argue that every year programmers ARE more efficient at their jobs with the adoption of new tools, frameworks and methodologies. Yet there are still not enough good developers.


Yes, and each new tool or helpful framework reduces the total number of developers needed. Just because we have yet to meet the demand for developers doesn't mean that that demand is not decreasing over time.

Software development is quite a young industry, so its growth can easily outpace the efficiency gains, resulting in more developers, until it doesn't. Development as an industry will need to maintain stronger growth than efficiency gains through innovation, and I just don't see that happening long term.

Edit: in my first paragraph I should have said "Just because we have yet to meet the demand for developers, doesn't mean that growth of that demand is not decreasing over time."


You start out with n programmer-hours demanded by projects a, b, and c, given that you can do x with y programmer-hours. Now imagine that increased efficiency means you can do 2x as much. Fantastic, now you only need n/2, right?

Not so fast: now that you can do more with less, projects d, e, and f are worth doing, and more than n hours are demanded.

Basically, simple economics says that decreasing the price for a given unit of work increases the demand for it. This is not realistically infinite, but most of the world has a long way to go to catch up with the developed world, and the developed world itself seems to have no end in sight as far as demand for automation and tech goes.

It strongly looks like "until it doesn't" is so far out on the time horizon that it is hard to predict anything at all on that scale.


>economics says that decreasing the price for a given unit of work increases the demand for it

An AI costs nothing in the long term, so you are agreeing here that demand for AI-produced software will rise more sharply than demand for human developers, because humans are more expensive.

So if you accept that basic law of economics, then you must arrive at the conclusion that human labor will inevitably be outpaced by cheaper labor. The fact that that labor is coding seems to be where people get really stuck here. I don't really understand why this is where people get stuck.

The population of the world, and thus its demand for food, has very much grown over time, and is still growing. Yet we have fewer people producing food than ever before. The reason for this is that the augmentation of technology outpaced the increase in demand. So we had fewer farmers making more food than ever.

Similarly with software development: the demand for software will continue to increase over time, but the demand for human developers will inevitably fall, due to augmentations of technology enabling fewer workers to produce vastly more output. So we will have fewer developers working than we have, say, today[1], but those fewer developers will be producing orders of magnitude more[2] software.

[1] I'm not actually trying to predict the inflection point of this growth curve as today; I meant this only as an example to make my point easy to understand

[2] where more could mean more complex, more efficient, etc.


There is zero reason to believe that software less capable than artificial general intelligence will produce any non-trivial software.


Neural nets already provide superhuman performance on many computer vision tasks, both in terms of the output of the algorithm compared to human judgment and compared to most human hand-written algorithms. An automated image captioning algorithm is a non-trivial piece of software.


There is zero reason to believe it will not.

Look, I too can make baseless assertions.


I don't need to prove a negative.

It seems inevitable that on some time scale AGI will be achieved, if we don't commit species suicide or bomb ourselves back to the stone age. It doesn't follow that AI less capable than AGI can produce non-trivial software guided by non-programmers. This requires proof because it has never been done in human history; without any examples, it requires, if not proof, at least a coherent argument in its favor.

The most obvious flaw is that software that is capable of composing/improving software ought to be able to improve itself, and with improvements, in a finite amount of time, ought to BE AGI.

This looks like the start of a great tool for people to find example code relevant to the current code. In other words new tools for programmers not a replacement thereof.


Can you quote me saying non-programmers would be making software?

Have you read my comments here?

I'm talking about tools that enable a single developer to do the work of a great many developers, thus lowering the demand for developers in the marketplace over time.

You are talking about non-programmers writing software (no clue where you got that from at all?), and you even admit that this looks like a great tool to enable developers to get more work done.

Go back and read my comments before you respond, please; otherwise I'm not interested in having a discussion with someone who isn't even reading my responses.


Sorry I conflated 2 related but distinct discussions. On your front I disagree that programmer productivity will have a substantial negative impact on employment for the earlier stated reasons.

Demand for software will continue to increase quickly as the rest of the planet catches up to the developed world and increased productivity just increases demand.

I also disagree with the notion that this will substantially increase productivity in and of itself.


Basically this seems like it could help you find similar example code on GitHub or in your company's codebase. I'm dubious about it being used to generate non-trivial code of any variety.


And every 6 months you need to learn the latest and greatest new framework if you live in JS land.


Open source has already started doing this to some degree. At my last few companies, we used open source projects as a base for much of our software.

Instead of having to hire multiple senior-level developers, all they needed to do was hire one project manager (me) and a bunch of junior developers (who are much cheaper). This is because all of the engineering-level software was given out for free, and the company only needed to make changes, which takes much less experienced employees.


Were you complying with the licenses?


Here's my 2 cents on why software developers will never be at risk of automation: https://medium.com/@kapv89/code-will-always-be-written-207d3...


Immediately I see you have this line:

>And until we develop an AI powerful enough to write clear, step by step instructions for any random task, code will be written.

so its "will never be at risk of automation UNTIL" which is vastly different from "will never be at risk of automation"

The article we are commenting on is about an AI writing code, so im not sure im seeing a convincing argument here.

secondly, automation isnt about some industry disappearing overnight, its about an ever decreasing amount of jobs in that industry. You cant imagine a system/tool/framework that allows such greater efficiency that a single developer can now reach the output of 5 or 10 developers?


Automation is always a good thing. Should we stop progress to allow humans to do some inefficient work? Society will have to change so that work is something done by machines, not humans.


I'm not making an argument that it is bad by any means; I'm only countering a common thread in development circles that "development will never be at risk of automation".

For an example, see kapv89's response to my comment. It's a seemingly very prevalent view that somehow development will not suffer job losses from the efficiency gains innovation is able to create.


I actually think that, together with psychologists, we are the only two categories of people killing their own jobs :) We really strive to automate and to reuse existing components etc. to make our lives easier and write less code. However, as someone else said above, software engineering is not coding; coding is just a subset. Our job will eventually evolve too, it's inevitable :) it already has, and it certainly won't stop now.


You're right. This could happen tomorrow, if even a small percentage of Excel users suddenly knew how to use it properly. We'd be at 50% unemployment if everyone who uses Excel knew how to take advantage of it.


This has been tried before with outsourced developers, who are much more intelligent than the best AI will come close to being.

The end result is still that it takes more time to fix the errors than it would have to write it in the first place.


One of the things about human software engineers is that they can make assumptions based on what the system is for. For example you may say that "this system has to accept payments from counterparties" and they will intuitively know that they are dealing with people's money, and to be careful of situations like double billing and the like.


In automatic programming discussions, one commenter said long ago that we already have people that deal with English this precise: lawyers. Such a model would basically turn programmers into people writing legal documents that the AI then turned into programs.


We already have language lawyers for implementing compilers. I think the right solution is to take away most of the "convenience" features. Most people don't touch all of them anyway because they don't know how they would interact. But everyone joining a team would have to learn them.

Why do so many computer languages experience scope creep? Including C++ and JS.


Sort of. I mean that you have to write specifications for programs in English so precise and literal that it's like writing legal contracts. Compiler writers don't do that except in language specs. They use code. The closest thing, which would be used in addition to English, is using a mathematical notation like Z or TLA+ for anything ambiguous. That's standard in high-assurance development.


"Describing what your program should do in sufficient detail will probably end up being not very far from the actual program itself."

Isn't that the idea behind https://en.wikipedia.org/wiki/Prolog ?


Something from a non-coder's perspective:

If I can create using English, there are a lot of possibilities for me. I can work on expanding my language and build on skills I already have. I figure something like this will spill over into other areas. For example, I could theoretically make animation by describing the scene with words, with the output varying depending on how detailed I get.

It would still be a vast amount of work, but suddenly it is doable just by honing skills I already have. This is where the hope lies with folks like me.


Well, I think they are referring to the idea that the computer would solve its specific problems by itself, just as we do as programmers.

I guess that you have worked for someone else as a programmer? Didn't they tell you what they wanted? How did you achieve it? They probably didn't have to spell it out for you in very specific terms unless a problem occurred.


They achieved it by making a lot of guesses, based on a huge amount of explicit and implicit knowledge about humans, computers and the world in general.

The brute-force approach described in the article, where you just randomly try lines of code without any other knowledge, won't make for a very competent programmer.


The AI would have to be able to ask for clarifications on ambiguity as it goes.


Sounds like we need ai to figure out what sort of software we need.


A nuance that is underappreciated by a lot of people and the media is the degree to which any of the "AI generates X" models (faces, drawings, filling in photos, increasing sharpness, etc.) simply copy, paste, and interpolate from the training set.

Unlike general supervised learning problems, for many of the deep learning "generative" models that get posted to HN regularly, there is no objective "test set" to measure generalization, so it's extremely easy to claim the model has learned something. When we see a cool demo / audio sample / pictures, how do we know the model hasn't just simply interpolated a bunch of training examples? In many cases this is quite clear, like when you start seeing cats everywhere in the generated imagery. It's very hard to ferret out the BS with these models.


Actually, there is a formal method to evaluate generative models. All you need is a validation set. Then you need to find the odds that the model would have generated your validation set. That probability is the one you want to maximize [0]. In order to be able to calculate or estimate this probability, you need to make some amendments to your model architecture. For instance, PixelCNNs know the probability exactly, variational auto-encoders have a lower bound on the probability, etc. It is mainly the generative adversarial networks (GANs) which cannot. And this is the reason research in the latter is limited, despite the remarkable visualizations they produce.

But the judging of models by their visualizations is something which is still done too often, and it annoys quite a few researchers in the area that those papers still pass review at machine learning conferences. They should be sent to computer graphics conferences, because that is what they actually do /vent.

[0] https://arxiv.org/abs/1511.01844
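
In code, the evaluation described above might look something like this sketch (hypothetical model API; log_prob is exact for autoregressive models like PixelCNN, and a lower bound -- the ELBO -- for VAEs):

  import numpy as np

  def avg_log_likelihood(model, validation_set):
      # Higher is better: how probable the model considers held-out data.
      # Compare generative models on this, rather than by eyeballing samples.
      return np.mean([model.log_prob(x) for x in validation_set])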


Yes, I'm mainly talking about generative adversarial models, which have proliferated greatly.


One way to check for overfitting is to compare samples to their nearest neighbors in the training set. Of course that doesn't cover interpolations between two points in the training set, but I would argue plausible interpolations are already something "learned".

A different problem I see with faces in particular though is that our visual system is actually wired to do some really heavy denoising/pattern matching on faces (for example people seeing the face of Jesus on slices of toast), so the generation of faces doesn't actually need to be that good to produce results that seem appealing to humans.


Eh, I'm sure many people copy and paste from Stack Overflow. (Not me ofc, I would never:) )

So I guess we are all out of jobs in a couple years...


We'll still have jobs, but we'll be explaining to an AI what we're looking for like it's 5.


that sounds pretty easy to debug though

but the verbosity.... it will be a lot of writing, even for a simple function


> that sounds pretty easy to debug though

Sounds like a nightmare to me. How do you describe a bug to an AI? "Dear AI, that SQL query you did break on my server. Could you improve it and regenerate all your code? ... Nope, still doesn't work. Could you try again?"


And the guy you eventually pay to fix the bug comes back from the Bahamas, spends 2 minutes fixing it, then goes back home with a fat check.


Interesting how you consider interpolating from training data not to be learning. Isn't that exactly what learning is? If you get unintended repeating patterns, then there isn't enough diverse training data.


I'm not an expert in deep learning (yet) so someone please correct me if I'm wrong.

Technically I see no reason you couldn't say that meets the definition of learning.

But perhaps that's asking the wrong question. Dictionaries aside, one might decide that a useful standard of "learning" in certain contexts is to have a level of understanding of the subject. And then we ask, does the system have understanding? What is understanding?

A system that only interpolates training data can't logically become consistently better at the task than the data it learns from, whereas a sufficiently deep learning system (including, but not limited to, a human) can defeat its master.


If there is not enough diverse training data, then perhaps you should not call it learning in the first place. The whole notion of learning is about an algorithm plus the right training data meant to make sense of a certain kind of test data. All three components matter a lot, and that is something even the experts sometimes seem to miss.

Running a crude algorithm on some small set of training data specifically chosen to show off that algorithm carries very little information content.


It will allow programmers to make errors much faster :D.

  Programmer: why is my program not working?
  AI: Strange, it's working for me.


Ah, but it will help us debug errors faster if we train it on a troubleshooting dataset! :)

  Programmer: why is my program not working?
  AI: Have you tried turning it off and on again?


I'm not sure if you're aware, but this has already been done, pretty much literally: https://arxiv.org/abs/1506.05869


Very interesting, thanks! Partially trained based on subtitles from movies. Of course, I think I'd want to be pretty careful about which movies I used to train the system!


Haha. Please, someone record the first time that happens for posterity.


"It works in my computer."



Link to the PDF: https://openreview.net/pdf?id=ByldLrqlx

The approach taken seems quite novel, especially when compared to genetic programming (GP) and genetic algorithm (GA) approaches that evolve custom assembly. Those approaches work too, but mostly fail (or take too long) at solving even slightly complex problems.


But is it asking questions and copying and pasting snippets from Stack Overflow yet? Then it'll truly be sentient.


Haha - 'Can you solve this homework problem my creator set me? :('


A nit I'd like to pick.

No, AI didn't write code. A program explicitly built and trained to write code, wrote code.

A bit too pedantic, perhaps, but there isn't some singular program out there which first learned to play chess, then see and catalog pictures, create creepy art, play Go, drive cars, and now write code. Which is what "AI learns to X" seems to imply.

Of course, that's not as interesting of a headline.


Seeing an article refer to "AI" as if it's a monolith is a really good indicator of the quality of that article.


It's even less interesting because programs to generate code have already been around for a while. In fact, at PLDI, program synthesis is considered a specific track of research papers.

Hell, you're probably using computer-written code right now. FFTW and ATLAS are autogenerated and autotuned kernels for solving FFT instances of known size and linear algebra routines, respectively, and they're among the most common implementations of these APIs.


There are many different kinds of AI, other than general purpose AI.


"AI learns to write its own code by stealing from other programs"

Much like non-artificial intelligence, then.


But with much, much less humor.


Well, that depends on the code (or perhaps whose code) it steals. :)


  This article appeared in print under the headline “Computers are learning to code for themselves”
And someone decided that the article title was not clickbaity enough; let's make it sound malevolent.


When these journalists discover how compilers work, they will stop thinking that computers writing code is something uncommon.


In the meanwhile, Mark Cuban says: "I personally think there's going to be a greater demand in 10 years for liberal arts majors than for programming majors and maybe even engineering"

(From this article: http://www.inc.com/betsy-mikel/mark-cuban-says-this-will-soo...)


I know he knows the business world and he obviously knows how to manage a business but what does he know about tech? This prediction is beyond nonsensical.


I thought this would take longer. "Developer" will stick around as a job for people who need new and unique things, or for apps that need speed, but if your job is making forms to take in data, putting it in a database, performing some sort of analysis on it, and then printing a report to the screen, then you should really start learning something else, because those jobs will not exist in 10 years.


Your timeline is hopelessly optimistic.


I don't think so.

The problem is human-computer interaction. Specifying what is needed is harder than the actual coding.

I would guess this will be implemented in an IDE and will reduce coding times tremendously.


Specifying exactly what is needed is coding. The hope for some kind of high-level language you would generate the actual code from is decades old, see compilers. (And it does reduce coding times tremendously!)


That's kind of what I meant.

Someone has to tell the thing what to do, in an understandable way. And this will not be the client/manager/whoever-needs-something.

And that's not coding; that's software engineering. Maybe code monkeys will disappear, but a "developer" is, IMO, a guy who does more than code.


Someone has to tell the thing what to do, in an understandable way. And this will not be the client/manager/whoever-needs-something.

To start with, sure. That's where we are now - even the best "AI" tool needs a lot of help from the user giving it instructions in specific language. That will improve very quickly though. For a limited subset of apps, you will be able to describe what you want in plain English and get something usable out of it. That subset will start off being the sort of apps people have made in VB for years, and now make in web languages.

If that's what you do then your job will be automated away over the next decade. Learn to make something that can't easily be automated.


Remember when COBOL was going to make us obsolete because managers could just write their own reports?

The absolute worst case is that we'll be employed to goad the AI.


At least when code monkeys copy-paste from Stack Overflow, there's a trace of common sense in the loop. Yes, it's possible to copy-paste your way to working software without having a clue. No, it's not a very good idea. How about if we shut these clowns down and use the money for research that has a chance of improving anything?


Calm down, it's not coming for your job yet. Soon, though.


I'm not worried about that; computers will only ever copy, they are incapable of creating anything new; much like management. What bothers me is the amount of energy being sucked into this black hole, trying to mimic human intelligence in computers. The sooner we get over this stupid hump and start treating AI as a tool, the better.


Care to defend that? It seems kind of synonymous with actual AI, which doesn't look like it's coming soon.


Well, it's obvious that there will be improvements in how we specify the syntax for a computer program, as there have been for the last decades.

e.g. Assembler -> C -> C++

There recently was a post on HN about the missing programming paradigm (http://wiki.c2.com/?ThereAreExactlyThreeParadigms). With the emergence of smarter tools, programming will get easier in one way or the other, releasing the coder from a lot of pain (as C or C++ released us from tedious, painful assembler). However, I am quite sure that it won't replace programmers, since our job is actually not to code but rather to solve a given problem with a range of tools. Smarter tools will probably boost the productivity of a single person to handle bigger and more complex architectures, or other kinds of new problem areas will come up. Research will go faster. Products will get developed faster. Everything will kind of speed up. Nevertheless, the problems to solve/implement will remain until there's some kind of GAI. If there's a GAI smart enough to solve our problems, probably most jobs will have been replaced.


>It could allow non-coders to simply describe an idea for a program and let the system build it

Well that's the thing: "describing" an idea to the point where you are explicit enough to get the actual behavior you want is basically writing code. Granted, you might have to add superfluous constructs and syntax to make it fit the programming languages we currently have, but that is a different kind of problem.



Just like real people!


People pay me... to use StackOverflow haha

On a serious note, this could potentially suck for developers, just like with labor jobs and automation. You have a skill set? Well, this computer can do your job. I guess be the guy building the code-writing code.

I do wonder what we will do when computers do everything for us. I have this motor that moves my neck to look at a girl, who also has a motor that moved her neck to look at me haha.


>I do wonder what we will do when computers do everything for us.

Yeah, this is a pretty existential question that has been posed a lot around here. What do you do if you do not have to do anything? It is like vacation, forever. You are free to be lazy, or creative, or adventurous, or somewhere in between, but will you be happier? I have been between significant work for a few months, and I am itching to do something consequential, but I also really enjoy the space to do anything -- not that I take full advantage of it much more often than when I was working full time.

I hope the ongoing universal basic income study at YC (blog.ycombinator.com/hiring-for-basic-income/) produces some more clarification.


> It is like vacation, forever.

I guess that's one way to look at death from malnutrition, exposure or untreated illness.

We can wax optimistic about what post-automation life could be like, but let's be honest: right now, unless you are an investor in companies that have automated or provide automation solutions, you are in absolutely no position to see 'vacation, forever' on the horizon for yourself or your children in a post-automation world.

We should be frantically trying to do everything possible to fix that, however, because automation will be competing against the wage-labor relationship that the majority of people are dependent on.


Either we fix that, true, or there will just be far fewer people after a while. In the meantime, starvation may become a common "disease" in the 1st world as well, just as it already is in the 3rd world.

It's just that automation needs many people. It exists to solve the problem of mass production. If there are no consumer masses, there is no need for mass production and thus no need for automation.

But maybe the masses are just turned from consumers into products. That is what Google and Facebook already do today. I don't know if I would like to live in such a world. Even now I don't like being a Google product. Is that the choice, between starvation in freedom, or being nothing but a product, a second-order slave?


If people are starving in the street, they will just tear down the nation around them on their way out. Sounds like a non-survival strategy.


If society can change from taxes on tea being too high, it can change when unemployment is 91%.


The revolution started by tea taxes being too high also had the backing of the richest people at the time in what would become the US.

The automation revolution likewise has the backing of the richest people in the world.

I don't think a populist movement was behind that revolution, nor is one behind this.


I should check that out regarding hiring-for-basic-income; I wonder if that's just a concept or... yeah, I should read it. I don't understand how that proposed system would work as far as how things would be valued, unless goods are just given to us/produced. Which, you know, with regard to war, I don't get what is to be gained -- are there concentration camps, are there borders/lands trying to be gained? Also, with the sun, what is this argument of diminishing resources if energy comes in from the sun? I mean, are "nutrients" destroyed after being taken from the ground and into our bodies? Sorry... ranting/rambling.

More so it was just that, for me, when I was not very busy I kind of felt unhappy. Am I depressed? What is my purpose? I think the drive might disappear that made us evolve/try to do things in the first place. Then perhaps we become extinct haha. Got too darn efficient.

I also realize when we talk it's usually people reverting to themselves talking about themselves (I'm doing this right now). A friend of mine said "that's what people do, that's how you converse, offer your perspective tangible to the current discussion..." I don't know I just feel that I I I I I I (use the letter I) too much.

Yes, you might have just had the misfortune of crossing paths with an awkward person. Ha.

Thanks though for entertaining my thought. Wall-e is the answer.

edit: subbed awkward for neurotic


Maybe, as a species, we've been in this situation before. Perhaps we realized we weren't ready, or never will be, to deal with a world in which all labor is replaced by technology, so we made one last high tech invention to destroy all the technology. The cycle continues!


> People pay me... to use StackOverflow haha

Sorry but as another Google-coder I just had to say that from another perspective people pay you to do a job, to perform a task. You perform that task. How is up to you. ;)


Google-coder haha that sounds better.

Yeah, you can't be too copy-paste; you've gotta see what you're copying/pasting. Still, check the upvotes/comments and the date. Yeah, really helpful, along with MDN and forums.


We'll finally reach the point where we can gather by the fire in the evenings and tell stories, eat fresh apples from trees, swim in the ocean etc.


I remember Buck Rogers in the 1980s, the robots just built other robots and programmed them. No human being built robots because robots were better at it than human beings.

They thought the same was true of the robots flying their spaceships to fight the Marauders until Buck Rogers showed them he could fight the Marauders better than the robots. Something to do with red dogging the quarterback and going with gut instincts that robots could not do.


> Something to do with red dogging the quarterback and going with gut instincts that robots could not do.

There's a lot of hand-wavey wishful thinking about some innate capability humans have that robots/AI somehow will be unable to obtain in sci fi, that will semi-magically ensure we remain superior.

It's the same kind of blind-spot you run into if you try to get people to explain how free will could be possible and ask them to define it.


That's a tough one, free will.

I usually default to "what if you killed yourself" but then you follow it up with "Well but by killing yourself..." ahhh

It does drive you insane to think you're just in this infinite-sized thing, and you're this thing that creates a perception of your external container, and soon you'll cease to exist. Does the world cease to exist as well? Hard to imagine just not being. Guess it's time to find out haha.

I also laugh at my own problems and then you just imagine zooming out of the Earth and being in space. Nothing there, no laws, money, just you and your mortality.

I have a cat, and I was just holding him one day and I was like "holy cow, here is this living thing that evolved alongside us." I just stared at my cat for a bit. Odd, this living thing independent of me. Why does a seed grow? hahaha why does the gerbil run in circles.


"I do wonder what will we do when computers do everything for us."

Depends how our social structures will morph to support a society where having a job is no longer the norm and most people are in fact jobless.

In our current societies the way things stand all these masses of people will be living in slums fighting for scraps while those elite who control the system live in their walled off mini-societies and palaces.


I imagine, maybe wishfully and hopefully, that we will still need and value human coders, if only to maintain the AI developers.


This just goes to show that nobody's job is safe. The crux is that AI won't get tired and won't balk at changing requirements. Many people will think it's still very far, until it suddenly arrives. The horse buggy drivers never saw the automobiles coming.

On the other hand, until that day arrives, we could take the technology into a direction that actually helps users/customers and software engineers communicate better. The software engineer could have the user feed requirements to a bot, and systematically identify and explain issues in the requirements, based on the bot's output.


Programmers in the future will just have to give an AI tests, and the AI will try to come up with the fastest code that passes those tests.


This actually sounds like more work than coding.


Programming is translation of requirements written in natural language, graphical notation, and mathematical notation to a deterministic syntax. There has been much improvement in making these syntaxes more closely match these older forms of communication, and I'm sure this is one tool that will be used in the future. However until AI can gather its own requirements, there will always be a translation step that people must do. When this technology comes to fruition, requirements gathering will become programming.


What we need are libraries of (automatically verifiable) requirements, as open source components which can be composed together, like we do with code today. Then we, possibly assisted by "AI", can assemble a solution matching the requirements.
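
A toy sketch of what that could look like (purely hypothetical, in Python): each requirement is an executable check, and a candidate solution is accepted only if it satisfies the composed set.

  from typing import Callable, List

  Requirement = Callable[[Callable], bool]

  def returns_sorted(f: Callable) -> bool:
      return all(f(case) == sorted(case)
                 for case in ([3, 1, 2], [], [5, 5]))

  def preserves_length(f: Callable) -> bool:
      return all(len(f(case)) == len(case)
                 for case in ([3, 1, 2], [7]))

  def satisfies(f: Callable, requirements: List[Requirement]) -> bool:
      return all(req(f) for req in requirements)

  assert satisfies(sorted, [returns_sorted, preserves_length])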


>AI learns to write its own code by stealing from other programs

I initially took this to mean that the AI learnt how to generate the source code which makes up its own program, a bit like a quine I guess.


Exactly. And this is being done by a science publication. Irresponsible. Let's make people freak out and fear AI even more, and unnecessarily, with a clickbait title.


What if this AI (DeepCoder) manages to manipulate its own code?


Original paper here: https://openreview.net/pdf?id=ByldLrqlx

This is more likely to vanish in the flood of AI-based applications, before reappearing as a tool for coders akin to code snippets plus, or boilerplate generation 2.0.

> and third, the neural network’s predictions are used to guide existing program synthesis systems.


Any code or real results to show for it? I'm a bit sceptical about claims in papers because I've found a lot of overstatements in them, even in those with easily reproducible algorithms: when you go and execute them, they simply don't pan out in any remotely realistic scenarios.


Hm, I wonder if the past ACM problems or TopCoder problems and solutions could be fed to a deep learning network...


"DeepCoder uses a technique called program synthesis: creating new programs by piecing together lines of code taken from existing software" - So essentially just copying others code as opposed to generating the logic on its own?.


I'm not sure the journalist even read the paper, or if they did they didn't understand it.

DeepCoder works with a very simple functional-ish toy DSL to solve very simple toy problems. It doesn't copy code, because the DSL is unique - and it's so simple there's no need to copy code, and nowhere to copy code from.

What it actually does is use an RNN to speed up a dumb search through the space of all possible programs written in the DSL by "learning" which code constructs are most common.

This works surprisingly well, but it's not obvious the process is generalisable to code written in production languages to solve complex problems with explicit logic and possible thread timing issues.

(It may well be. Naively I would expect a level of complexity beyond which the improved search stops working and/or is too slow to be useful. But I may well be wrong about that.)

Anyway - it's very, very interesting research. The article doesn't come close to explaining it or doing it justice, so it's worth reading the source paper.
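
For a feel of what "guided search" means here, a toy sketch (my own illustration, not DeepCoder's code): a predictor scores which DSL primitives are likely to appear given the input/output examples, and the enumerator tries high-scoring constructs first.

  from itertools import product

  DSL = {
      "map_inc": lambda xs: [x + 1 for x in xs],
      "map_dbl": lambda xs: [x * 2 for x in xs],
      "reverse": lambda xs: list(reversed(xs)),
      "sort":    lambda xs: sorted(xs),
  }

  def predict_scores(examples):
      # Stand-in for the neural net: P(primitive appears | examples).
      # DeepCoder learns this mapping; here it's just a fixed prior.
      return {"map_inc": 0.4, "sort": 0.3, "reverse": 0.2, "map_dbl": 0.1}

  def search(examples, max_len=3):
      order = sorted(DSL, key=predict_scores(examples).get, reverse=True)
      for length in range(1, max_len + 1):
          for prog in product(order, repeat=length):  # likely programs first
              def run(xs):
                  for op in prog:
                      xs = DSL[op](xs)
                  return xs
              if all(run(inp) == out for inp, out in examples):
                  return prog
      return None

  # search([([3, 1, 2], [2, 3, 4])]) returns ('map_inc', 'sort')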


The title is misleading; I thought this was going to be about "true AI". The AI doesn't write its own code; it writes code instead of you, but always uses the same code. Or did I misunderstand?


_This article appeared in print under the headline “Computers are learning to code for themselves”_

Oh dear New Scientist. _Steal_? What was wrong with the original article title?


Now computers can blindly copy and paste code they don't understand from stack exchange just like people.


What happens when an AI breaks a patent or copyright based on a high level description of the problem you fed it?


So, this is basically a script-kiddie AI?


Gives new meaning to PEBKAC


like me.


This sounds like StackOverflow's mascot AI.



