I don't believe that non-trivial apps can be built with conversational programming, and we've had plain-English programming since COBOL, so no to that either.
The "mind blowing" example of auto generating a "layout" can draw circles with a color fill or an HTML button with text. What kind of programmer needs or would use this? Should I really type, "I need a button that says Login and goes to the login page when you click it" every time that's the case? And for my second button, do I type that again with the new text? This sounds awful. If someone is amazed by an <a> tag, I would question their programming chops.
I have yet to hear even rumors of any AI that could generate anything like a real program from a conversation. Not gonna happen.
I must add here that debugging is 70% of the programmer's job. Conversational AIs don't help there at all. Can you ask it why your function returns weird output when sent something from another function? No, you cannot. Yet these are the real problems programmers face all day long.
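A minimal sketch (function names invented for illustration) of the kind of cross-function bug being described here: each function reads fine in isolation, and the failure only appears when one feeds the other, which is exactly what a prompt about either function alone won't surface.

```typescript
// Hypothetical example: normalize() looks correct on its own...
function normalize(values: number[]): number[] {
  const total = values.reduce((a, b) => a + b, 0);
  return values.map((v) => v / total); // silently yields NaN when total is 0
}

// ...and so does weightsFor(), but its clamping can hand
// normalize() an all-zero array.
function weightsFor(scores: number[]): number[] {
  return normalize(scores.map((s) => Math.max(s, 0)));
}

console.log(weightsFor([-1, -2])); // [ NaN, NaN ] — the "weird output"
```

Chasing this down means stepping through the interaction between the two, which is a different activity from generating either one.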
To add on to this: code generators are only good if you don't have to maintain the generated code and if they can be relied on to produce good code. Copilot fails both of these tests. When I use it to generate some code, I can't commit only the comment and run Copilot in CI, because the outputs are non-deterministic. Additionally, I have to read the code it generates and verify that it handles the edge cases correctly, and I have to do this review without having recently gone through the mental work of thinking through the implementation of the spec myself.
Speaking as someone who only dabbles in HTML rarely, and JavaScript even more rarely, it's not so irrational to think I could ask my computer, maybe even using voice, to generate a simple HTML interface for something I'm working on.
Certainly a lot less tedious than typing a bunch of angle brackets and trying to recall which tag is used for text input fields.
After about the second one, you will want to just copy and paste your first line and mod it -- not describe it all again from the beginning.
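That copy-and-modify instinct is really just parameterization, which is the thing code already does well. A sketch, with a made-up helper name:

```typescript
// Hypothetical helper: describe the button once, in code, then reuse it,
// instead of re-describing each new button in English from scratch.
function navButton(label: string, href: string): string {
  return `<button onclick="location.href='${href}'">${label}</button>`;
}

const login = navButton("Login", "/login");
const signup = navButton("Sign up", "/signup");
console.log(login); // <button onclick="location.href='/login'">Login</button>
```

The English description only ever has to be given once; after that, the variation lives in the arguments.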
The density of text simply cannot be equalled by speech. Try showing someone one of Edward Tufte's marvellous tables and see what they can glean from it, versus how long it would take you to explain the same information verbally.
There's no jumping the shark on text. It has always won on information density, succinctness, and reusability.
> I have yet to hear even rumors of any AI that could generate anything like a real program from a conversation. Not gonna happen.
Yet. The current state is admittedly primitive, but Copilot certainly feels like a step towards AI being able to reprogram itself. There's no doubt in my mind we'll eventually reach that state, which would propel us even quicker towards the singularity.
I think you've nailed why Copilot is so exciting, so irresponsible, yet so not novel. The other commenter in this thread who compares it to tab complete gets it too — tab complete is a dialogue as well, as is every "utterance" in the feedback loop between you and your computer.
The first half of this essay I wrote last year also develops the dialogic nature of computing, using some of the ideas from Soviet philosopher Mikhail Bakhtin: https://blog.jse.li/posts/software/
Even if this ends up good enough that it can reliably generate programs from natural-language comments, being able to specify what you want precisely enough in English will be a skill that only a few will be good at. If anything, writing specifications of behavior in English is _more difficult_ than implementing the behavior in code, because natural languages are ambiguous and communication in them relies heavily on the "common sense" of your interlocutor.
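As a small illustration of that ambiguity: even a spec as plain as "sort the names" leaves choices that code has to make explicitly.

```typescript
const names = ["Élise", "bob", "Alice"];

// Reading 1: default code-point order. Uppercase and accented letters
// sort by their Unicode values, so "Élise" (É = U+00C9) lands last.
const byCodePoint = [...names].sort();
console.log(byCodePoint); // [ 'Alice', 'bob', 'Élise' ]

// Reading 2: locale-aware order. Typically ["Alice", "Élise", "bob"]
// in an English locale, since É collates next to E.
const byLocale = [...names].sort((a, b) => a.localeCompare(b));
console.log(byLocale);
```

Neither reading is "wrong"; the English sentence simply doesn't say which one was meant, and the "common sense" filling that gap varies by reader.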
There are comments saying they agree with the article, that it's like autocomplete - the opposite of what the article is arguing ("autocomplete is the wrong mental model")!
When I use autocomplete I know what I want it to complete. To me, too, Copilot seems like a different category.
Sorry about that -- I think the article presents a correct analysis of conversational computing but kind of oversells the novelty of copilot, by missing the ways older tech like autocomplete are also conversational. So it's less a change in quality than a change in quantity (that said, a change in quantity can produce a change in quality -- see [1]).
You might use autocomplete when you know what you want it to complete, but I also often find myself using it to look at the properties of an object, see what variables are in scope, "probe" the type of different variables, etc. -- all exploratory, all dialogues.
I didn't mean to agree. I disagree with the article.
As for the way you use autocomplete -- I would say it's not that uncommon for me to use autocomplete to "fish" for stuff when I'm not sure what I want. Especially if I'm working with something unfamiliar. I'll frequently just type the dot and then scroll through suggestions.
Someone else on here said it was a glorified tab complete that was really nice and that seems to be my impression too. It looks productive but it doesn't look like it totally changes the nature of programming to me.
I haven't seen anything out of copilot's autocomplete that would take a senior dev more than 4 minutes to whip up. When I was a lead, things that took up the most time were logic errors that needed to be chased down and fundamental design decisions that needed to be made. I only see copilot creating more of the former and railroading the latter.
The Copilot positioning seems to have led people to pretty large expectations. "A better tab completion" is the exact positioning used by competitor TabNine, and that leads to expectation-beating performance from a very useful tool that isn't promising to fundamentally change software development.
Programming is already conversational. I'm telling a computer what I want, it does it, I see what it does, and elaborate or correct myself where necessary. Repeat endless times, until product exists.
That's kinda the case with Copilot, or I'd just type:
// Unify relativity with quantum mechanics.
I'm trying to say Copilot is not a fundamental shift to programming. It's what programming already is, and we already have IDEs assisting us with refactoring and second-guessing our intent with autocomplete (which in some IDEs is powered by AI now, as well).
Programming is like working in a team. You try to communicate with your teammates, and then everyone does what they can according to their skills, and how they understood the task.
The shift to higher-level communication in programming is inevitable; whether it will look like Copilot, I don't know.
There are two sides to a programmer:
a. Intellectual - The one who solves a problem by design discussions and creating mental model of the solution.
b. Robotic - The one who actually types the code, tests it, irons out the issues, and deploys it.
Just like the deployment part is taken care of by the CI/CD pipelines now, the CoPilot now attempts to automate the "typing code" part in a different way than the IDEs have approached.