
“50 years from now, I can’t imagine people programming as we do today. It just can’t be.”

Dear writer, let me introduce you to FORTRAN, COBOL, LISP, or BASIC. These are living languages, all 50+ years old.

Coding didn't change much. The languages, the methodologies, and the ideas change, but the approach is the same, and whoever thinks this will change soon (50 years is not _that_ far) has never had to debug something nasty. Doing that with voice commands in my opinion is significantly harder compared to what we have now.

We will have tools, accessible, easy tools; Arduinos and Pis of the future; sure. But they will not replace code, nor eliminate or reduce the amount of code written.




There's a serious gap in the writer's mind about computation and programming. It's like the author is suggesting that "eventually we won't need writing: it will be replaced by writing-thinking or picture-writing". It's completely absurd. Specific, complex ideas can only be described and communicated in text. Not pictures. Blueprints, for example, have a pictorial element to them, but their fundamental value is our ability to use the formal language to analyze what's on the plan and whether it is correct or not. To the degree that a picture or a motion graphic can formally accomplish this is to the degree that it is supported by a specific language under the covers. Not the other way around.


Blueprints/schematics are far, far superior at conveying the information they do compared to a written narrative. Given the ease of preparing written text compared to drawing schematics, nobody would go to the trouble of doing so if that weren't the case.


Of course, all of the pieces of information that blueprints and schematics are conveying are 2D layouts. Once you're out of the realm of things whose forms can be reproduced at reduced scale, or simplified to functional equivalents that lie on a plane, the types of useful visual representations of things are sharply reduced; they become extremely stylized, and symbolic. At that point, you've basically arrived at language again.


An interesting angle on the topic comes from my father, who at one time was a project manager and designer in the construction industry. In the days before computers he would painstakingly hand-draw the design that was reproduced as blueprints; that was the role of the "draftsman".

But the drawing wasn't the source of stress; rather, it was the project "specification" that he sweated. The issue was that the spec was a legal, text-format document detailing the size of beams, type of wire, plumbing, fixtures, etc. He had to ensure that beams were sufficient to support the structure, that electrical wiring was safe and up to code, etc. A mistake could expose the contractor and himself to legal liability if a component failed, so an accurate spec was a task he took seriously.

Of course the subject of program specifications is commonly discussed, though it often doesn't have the same significance that my father experienced. I guess in most cases program crashes don't have the same impact that a roof caving in would entail. In situations where crashing can't be tolerated, the spec will mean a whole lot more.


I work in the same construction design industry. The drawings themselves are also contractually binding. Many smaller jobs forgo the written specifications altogether.


My father had mostly worked on larger projects, like tract houses and the like. Of course, times change; my recollection was of how things were a long time ago. My comment was just illustrating an instance where relying on a text description was still important even though there was a graphic format as well.

Your info was relevant to the idea that at some level of complexity it becomes necessary to use text vs. only graphic presentation. Maybe in construction that occurs when there are more than a few elevations to juggle, but you probably know much more about it than me.


If you had a blueprint of the whole of New York City, you surely would need some tool to abstract away the maze of individual lines and be able to refer to and work with concepts like "Central Park", "Harlem", or "Brooklyn Bridge".

It is not about how much more information we can convey, but how much less data must be expended to present a tractable model of reality to the human operator. Conveying more details is worse than useless; it results in information overload and cognitive stagnation.

Historically, the way it happened in computer programming is that those tools are text-based. This owes a lot to the early use of computers as clerical aids for processing business data, and to the early synergies between computation and linguistics. Maybe it can be done differently, but it will require millions of man-hours to accomplish. And almost nobody wants to invest in doing so because of the opportunity cost.


Of course, there's the ability to zoom and pan to get the appropriate level of detail. There's a reason Google Maps isn't a text adventure.


In Google Maps, the ability to zoom relies heavily on an (unacknowledged) property of the problem domain: planar geometry. If every relevant detail is nicely clustered together and, more importantly, every irrelevant detail is nicely clustered far away from wherever you are zooming in, then sure!

If, on the other hand, you cannot ever be 100% sure that fixing one stop light in Brooklyn will cause a bunch of sewage lines to flush out into the street in Long Island, then zooming does more harm than good. At the end of the day, you need the map to conform to the realities of the territory. If reality gets in the way of that pretty abstraction of yours, then the abstraction - not reality - is wrong. And when that is the case, you need to start over and make a better map.

Text-based toolchains are, for all their limitations, a (sufficiently) reality-conformant map. That does not mean there cannot be others, but as of today I do not know of any suitable candidate.


When I write "2.5 mm" this is not narrative. If you want to explain "2.5 mm" without using text, how would you do that? The only way to do it is to use something literal from the real world. That's what we're talking about when we're comparing blueprints to programming. I think the word is literal. Can't avoid the need for text when it's precision we're after.


> Blueprints/schematics are far, far superior at conveying the information they do compared to a written narrative.

Blueprints don't change as much as software does. It's not generally interesting to diff, fork, reformat, or patch a blueprint.


Hmm, I don't think you've ever worked on designing a building. Being able to diff two sets of plans would be hugely beneficial.


Graphics can be useful in some domains but nothing beats text in the general case.


Just imagine a compiler that scans your diagram, drawn by hand on a piece of paper, translates it into an AST, then interprets it or even produces an executable.


And then realize that your diagram was misinterpreted and you have a big bug in said executable.


It used to be that CPUs were designed with schematics (drawings). Today, they seem to be designed with text (VHDL or Verilog). I wonder why?


Basically all other electronics is developed with schematics though.


Why do you think textbooks (and ancient works) are written in text, not comics?


Why do you think Euclid drew diagrams and didn't write everything out in text?


I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs. Even when "hieroglyphics" is used as a term of abuse for programming language syntax -- it ends up pretty popular.

I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work -- if only it refrained from doing the f*cking stupid things that LabView did (such as the strongly typed editor that would automatically propagate any type error it found, but not your fixes).

In my current C++ work, I would dearly love a graphical tool that showed me where any given value came from, much like LabView does by its very nature.


"I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egytian hieroglyphs."

My understanding is linguistics research has pretty thoroughly debunked this idea.

Don't remember the experimental design (was a long time ago, sorry), but I believe a study showed Chinese readers basically translate the characters back into the sounds of spoken language in their heads, before any processing of meaning takes place. In other words, pictographic mnemonics may be helpful when first learning the characters, but play no role for a fluent reader.

I suspect a similar thing will be true with programming for a long time to come. Even if you try to replace keyboard characters with other icons, it will be just substituting one arbitrary association between symbols and meaning with another. (Which is basically what language boils down to, anyway.)


> I thoroughly hated LabView when I had to program in it, but it did convince me that a graphical programming language could work

That's funny. I came away with the opposite opinion. Text is much better at describing details, and it's much more easily consumed by various things: people, editors, analysis tools, web apps, test engines, code generators, code transformation tools, ... I could go on.

Languages like LabView never have a complete toolchain (Prove me wrong by posting a small piece of editable LabView in a reply to this HN comment). They work well as domain specific languages, but that's about it.


> I don't think we can distinguish text and pictures so easily. Look at Chinese, look at Egyptian hieroglyphs.

Based on these two sentences, I'm confident that you don't know the first thing about Chinese characters or Egyptian hieroglyphics.


I think we can distinguish. The ideograms and hieroglyphs have very, very specific rules about how they can recombine, and that has nothing to do with their pictorial aspects. It has to do with semantic / grammatical aspects.


As someone who is awful at Pictionary, I hope so as well. Just today, I defined a class with 4 functions. I had another function that created an instance of the class and called one of the functions. It changed a variable that would show up in the web browser formatted by CSS. And I can't even draw a dog in Pictionary...
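
For concreteness, here is a hypothetical Python sketch of roughly the structure being described (the class name, methods, and attribute are all invented for illustration); try imagining drawing this instead of typing it:

    # Hypothetical reconstruction of the kind of code described above.
    class StatusPanel:
        def __init__(self):
            self.message = ""          # the variable that shows up in the browser

        def set_message(self, text):   # one of the four functions
            self.message = text

        def clear(self):
            self.message = ""

        def is_empty(self):
            return self.message == ""

        def as_html(self):
            # the CSS class is applied when the page renders this
            return '<div class="status">{}</div>'.format(self.message)

    def update_status(text):
        # the other function: create an instance and call one of its methods
        panel = StatusPanel()
        panel.set_message(text)
        return panel.as_html()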


Emojis = "picture-writing"


And with emojis you can describe how to build a bridge precisely?



You can't necessarily judge the future of a technology by its past. Consider transportation. Imagine it's 1936: automobiles have been around for 50 years, but there are still plenty of people getting around by horse. Some people are claiming that in another 50 years, by 1986, horses will hardly be used for transportation compared to cars; other people say that horses have been used for thousands of years and there's no way they'll ever go out of style.

Programming languages exist today because computers can't handle ambiguity and don't understand software design. In another 50 years, machines will be a lot smarter, more able to handle ambiguity, and better than people at designing workflows and catching potential errors. Like the horse, no doubt some people will still prefer to do things the old way, but there's a good chance this will be limited mostly to academic exercises.

All they're saying here is that the tools we have will progress a lot in the next 50 years. There are some obvious problems with the way we design software right now which are due to human limitations. The only way to fix those is to remove a lot of direct control from humans and give it to AI programmers. Manually writing JavaScript in 2066 will be like manually carving arrowheads today: still effective but not something you would do for a serious purpose.


Your example actually cuts the other way. Imagine it's 1966 and someone tells you that the cars, trains, and planes will "have to be dramatically different in 50 years," yet lo and behold a trip from NYC to LA takes about the same amount of time now as it did back then and the Northeast Regional is a hair slower than the Metroliner used to be.


I was focusing on 50 years into the development of the technology as a rough analogy. By 1966 it was much more mature, but look at how much things have changed. A mechanic from 1966 would find today's cars completely unrecognizable. They might appear somewhat similar from the outside, but on the inside they're basically just giant computers. We now have cars with varying levels of self-driving capabilities, drones replacing pilots, traditional pilots being essentially babysitters for autopilot systems, hyperloop designs. I'd say those are much bigger changes than 1936-1986.


Cars are not "basically just giant computers" on the inside. Computers are used to control various engine parameters, and aspects of the transmission and suspension, but all the parts that make the car go are just refined versions of what existed in the 1960s. Okay, so now we use computers to control valve timing instead of mechanical means. But the principles of what valves are and how they make the engine work are very similar to 1966.

And that computing horsepower mostly goes towards fuel efficiency and safety. Which is nice, but almost certainly not the kind of progress people in the 1960's thought we'd make in automobiles over 50+ years.

> traditional pilots being essentially babysitters for autopilot systems

The first autopilot takeoff/cruise/landing happened in 1947.

> hyperloop designs

But we don't have hyperloops.

> I'd say those are much bigger changes than 1936-1986.

By 1986 we had fully-digital fly-by-wire aircraft. Our big achievement since then has been about a 20% improvement in fuel efficiency.


The question of which changes are more or less significant can be pretty subjective. I'm not talking about things like fuel efficiency, although those are some really interesting facts. Autopilot in 1947! I didn't know that one. Yes, cars and jets still use the same basic architecture for what makes them move, but the control mechanisms for that architecture have completely changed.

To bring your comparison closer to the subject at hand, this article has nothing to do with the design of computers themselves. We could use the same basic Von Neumann architecture in 50 years and still get rid of traditional programming languages as a primary method of designing software, just like we use the same basic engine designs from 50 years ago but use entirely different methods of designing and controlling them now.

Take an engineer designing a jet in 1966 and put them with a 2016 team. They will have to learn an entirely different workflow. Now computers are heavily involved in the design process and most of what was done manually by engineers is now written into software. The same situation will happen 50 years from now for people who design software.

Take an extreme example like game creation. In 1966, you could make a computer game, but you were doing manual calculations and using punch cards. Now you download Unity and almost everything but the art design and game logic is done for you. Game design moved quickly toward these kinds of automated systems because they tend to have highly reusable parts and rely mostly on art and story for what separates them from the competition. But there's no reason why this same concept wouldn't apply to tools used for any kind of program.

The horse to car comparison was only meant to show that the development of a technology in the first 50 years (or any arbitrary number) will not necessarily look like the next 50 years. Well-established tools quickly fall out of use when a disruptive technology has reached maturity, even if that tool has been used for thousands of years. Right now, software design is difficult, buggy, and causes constant failures and frustrations. Once we have established and recorded best practices that can be automated instead of manually remaking them every time, there will be no need for manual coding in traditional programming languages. Machines are getting much better at understanding intent, and this will be built into all software design.


"Take an engineer designing a jet in 1966 and put them with a 2016 team. They will have to learn an entirely different workflow."

Send them to the "PCs for seniors" course at the local library to learn the basics of clicking around on a computer. Then a one or two week training course on whatever software is used to design planes these days.

Getting up to date on modern "workflow" is not going to be a major hurdle for someone smart enough to design a jet. Heck, it's very likely there could be someone who started designing jets in 1966 and still designs them today. (Post-retirement consultancy.)


My point was not that they wouldn't be able to learn it, only that the tools and methods of design have changed and become much more automated. That process has not stopped, only accelerated. The people in this article are saying that the process of making software in 50 years will be very different from the modern method. It will rely heavily on automation, and what was done manually by writing in programming languages will be integrated into systems in which the intent of the designer is interpreted by a machine.

You can see it in IDEs today. They already analyze and interpret code. This is extremely primitive compared to what we will have in 50 years. The progress of machine intelligence is clear and doesn't require any major breakthroughs to continue for the foreseeable future. It will be as irresponsible for most people to write everything manually in 50 years as it is not to use a debugger today. No doubt there will be people doing things the same way, just like we have traditional blacksmiths today, but we will not have billions of people typing into terminals in 50 years.

The criticism is against the idea that in the future, everyone will need to learn how to code in the same way as everyone needs basic arithmetic. That is not a plausible version of the future. It's trending the other way: more automation, more code reuse, less manual entry.


"Now you download Unity and almost everything but the art design and game logic is done for you."

Yes, Unity helps to visually organize your game's data, and there are built-in and downloadable components (which are all created by coders) that can be used to plug into your game, but it's just another set of abstractions. Most of the time you will be writing your own components in a traditional coding language or delving into others' component code to adapt it to actually make your game function. There ARE game creation systems intended to require no coding, but they come with the expected limitations of visual coding that people are bringing up in this thread. No, Unity doesn't really fall into this category, barring a few limited game domains.

Perhaps in 50 years every domain will be "mapped" in this way, with predefined components that work with each other and can be tweaked as needed, but I don't see how that could eliminate coding, or even displace it that much. Two reasons I think coding is here to stay:

1) Any sufficiently complex system needs its organization to be managed. At a certain complexity, whatever system is replacing coding will become something that looks a lot like coding. At that level of complexity, text is easier to manage than a visual metaphor.

2) Most pieces of software need custom components, even if only to stand out. Those game creation systems with no coding? No one is impressed by the games that are created in those systems. Not because the system cannot produce something worthwhile - but because with everything looking the same, the value of that output drops substantially.

I think coding will only go away when programming does. When the computer is as intelligent and creative as we are. And that's a point which I do not want to think about too much.


I think we'll reach that point in 50 years because we already have computers with certain types of intelligence that exceed ours. Translating human intent into machine language does work with coding, but we have to admit that it's not ideal. There are too many mistakes and vulnerabilities. Even the smartest people create bugs.

This is like the shift in transportation. A lot of people love driving and mistrust autonomous vehicles. But the tech is almost to the point where it's safer than human drivers. In most situations, it already is.

Another comparison would be SaaS. For a lot of companies, it's about risk mitigation. Moving responsibilities away from internal staff makes business sense in many cases.

This is a criticism of the idea that we need to make coding a basic life skill that everyone should focus on. It looks a lot like denial to some people.

Let's go back to transportation. Imagine if people were pushing the idea that commercial driving needs to be in every high school because driving was such a big employment area. Some people might say that the autonomous vehicles look like a big threat to job prospects, so maybe it's not such a good idea to focus on those particular skills.

Coding is great and provides a lot of opportunities to the people that it attracts, but it's a pretty specialized skill that's going to be increasingly displaced by more natural and automatic interfaces this century, in all likelihood.


Well, it's dramatic in the little things, but not so much in the big things.

Cars now go 100,000 miles between tune-ups. They used to go, what? 10,000 miles?

Cars are much safer in collisions than they used to be.

Most cars now have air conditioners. I've driven in a car without AC in Arizona in July; believe me, AC can be a really big deal.

Most cars now have automatic transmissions, power steering, and power brakes.

And cars get much better fuel economy.

Driving from NYC to LA takes less time due to interstates and higher road speeds (and cars that can comfortably handle those speeds). Not half the time, but still a significant improvement.

And yet, most cars are not dramatically different as far as the experience of driving them is concerned. Nothing in the last 50 years looks revolutionary. It's been an accumulation of improvements, but there has been no game changer.

I suspect that the next 50 years in computing will be similar.


>Cars now go 100,000 miles between tune-ups. They used to go, what? 10,000 miles?

I'm curious what your definition of tune-up is, because I don't believe there exists a car that can go that far unmaintained without doing lasting damage to various systems.

After a quick Google, my impression is that most 2016 cars have a first maintenance schedule around 5k-6k miles. Some as low as 3,750.


I don't think an oil change is a tune up. Maybe it is. My Honda has 80k miles on it, and has had oil changes + tires replaced. That is it. Compare that to a 1970s car and what it would need in the first 80k miles.

For even lower maintenance, look at electric cars. I think Teslas have very, very low maintenance requirements for the first few years.


> I don't think an oil change is a tune up. Maybe it is.

It's not.

I have a couple of 60s Mustangs and several newer cars. My original '65 needs ignition service (what most people call a "tune up") every couple of years (of very modest usage). My '66, converted to electronic ignition, gets about twice as long (and 10x as many miles) before needing ignition service. They both end up fouling plugs because of the terrible mixture control and distribution inherent in their carbureted designs.

My wife's 2005 Honda CR-V gets about 100K to a set of plugs. (Fuel injection, closed loop mixture control, and electronic ignition are the key enhancements that enable this long a time between tune-ups.)

My diesel Mercedes and Nissan LEAF obviously never get tune ups.


> My diesel Mercedes and Nissan LEAF obviously never get tune ups.

You don't do valve adjustments on the Mercedes?


No. I have the OM606 engine. Hydraulic lifters eliminate the need for mechanical valve adjustments as on the older diesels.

About the only thing I've done abnormal on the car in 7 years is replace two glow plugs. (And when the second one went, I actually replaced the 5 that hadn't been changed yet, since they are cheap and I didn't want to take the manifold off again to change #3...)


Actually, the Nissan Leaf can, although I'd be really concerned about the brakes at that point.


There are currently no signs that what you think will happen will happen. Soft AI is the only place where anything is moving on that front and the movement is infinitesimally small. Here's an analogy for you: It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time.


Years don't make progress on their own; people working during those years push progress forward. The estimated population of ancient Babylon at its height was 200,000. Let's imagine that 1% of them were working on developing writing for at least 2 hours every week, and that those who came after them were able to maintain that level of work for 1000 years until ancient Greece: that comes to over 200 million hours of work. That's less time than the official Gangnam Style video has been watched on YouTube.

In 50 years, 99%+ of all the work ever done by civilization will be done after 2016.


As long as we're criticizing the analogies in the discussion (rather than the actual arguments) I'd say the hours spent do not have a consistent quality vis-a-vis solving hard problems. Because there are more absolute hours available does not mean that there are more hours available for solving hard AI problems. There are very likely less. And there has been virtually NO progress on the hard AI front.


Hard, human-level AI would help this a lot, but it isn't necessary. All that's required for traditional programming to become obsolete is for computers to be much better at understanding ambiguity and to have a robust model for the flow of programs. With today's neural networks and technology, I have no doubt it would be possible to design something that would create good code based on all the samples on GitHub. Not easy by any means, or someone would have done it already, but it doesn't require any breakthroughs in computer science, just lots of data and good design. The tools referenced in the articles are working primitive versions of this.


There's an important distinction though between being able to write a compiling (or even functional) program and being able to write a program that serves a particular purpose.


I'm talking about human-guided programming without using traditional programming language, creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set.


> I'm talking about human-guided programming without using traditional programming language, creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set.

Creating clear and accurate design documents is so much harder and more specialized a skill than programming that many places that do programming either avoid it entirely or make a pro-forma gesture (often after-the-fact) in its direction.

(I am only about half-kidding on the reasoning, and not at all about the effect.)


"creating a design document to lay out what it does and how data flows and allowing the computer to sort out the details based on a stored data set."

This is exactly what programmers do today. We just call the "design document" a "program".

Over time, our design documents become higher and higher level, with the programmer having to specify fewer details and leaving more of the work of sorting out the actual details to the computer.


Yes, exactly! That's what the article is claiming.


Why do you assume that this design document would be simpler to create than the traditional computer program? Because otherwise, this is exactly what happens now.


There are some fairly aspirational claims about how it might be different in this paper, which is a great read:

http://shaffner.us/cs/papers/tarpit.pdf

There has already been some significant progress on this front. E.g., SQL and logic programming let you describe what you want to happen, and let the computer figure out some of the details. Any compiler worth using does this, too. Smarter machines and smarter programs will mean smarter programming languages.
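
To make the SQL point concrete, here is a minimal, hypothetical Python sketch (using the standard library's sqlite3 module; the table and data are invented for illustration) of the difference between declaring the result you want and spelling out how to compute it:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE orders (customer TEXT, total REAL)")
    conn.executemany("INSERT INTO orders VALUES (?, ?)",
                     [("alice", 30.0), ("bob", 12.5), ("alice", 7.5)])

    # Declarative: state what you want; the query planner decides how to
    # scan, group, and aggregate.
    rows = conn.execute(
        "SELECT customer, SUM(total) FROM orders GROUP BY customer"
    ).fetchall()

    # Imperative equivalent: spell out every detail of the "how" yourself.
    totals = {}
    for customer, total in conn.execute("SELECT customer, total FROM orders"):
        totals[customer] = totals.get(customer, 0.0) + total

    print(rows)    # e.g. [('alice', 37.5), ('bob', 12.5)]
    print(totals)  # {'alice': 37.5, 'bob': 12.5}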


Design is always going to be a part of creating something. What this article is arguing is that manual typing of text by humans using traditional programming languages will not be the primary means of implementing those designs in the future. We don't yet know how to make computers into good designers, but we know that we can create tools that translate designs into executable code that can be less error-prone and more reliable than people typing letters into a text editor.


My question is: how does drawing rather than writing simplify anything, i.e. what is the gain from moving from traditional programming to some sort of theoretical picture programming? Is it that you can draw lines between things rather than just assuming that the line from one symbol points to the next symbol on the line? Does that simplify things, or make them more complicated?

> we know that we can create tools that translate designs into executable code that can be less error-prone and more reliable than people typing letters into a text editor.

I disagree. Maybe you know, but I haven't seen any indication of the sort.


Drawing rather than writing is just one method. A lot of it will likely be conversational. I could imagine a designer with an AR overlay speaking to a computer which offers several prototypes based on an expressed intent. The designer chooses one and offers criticism, just as a boss would review an alpha version and suggest changes. The machine responds to the suggestion and rewrites the program in a few fractions of a second. The designer continues the conversation, maybe draws out some designs with a pencil, describes a desire, references another program which the machine analyzes for inspiration, and the machine adjusts the code in response. This is just one of many possible examples.

The point is that software design is trending toward more automation. Coding is not a new essential skill that everyone will need in the future. Human-machine interactions are trending toward natural and automated methods, not manual code entry. Most people need to learn to be creative, think critically, and analyze problems, not learn the conventions of programming languages.


Analogies are always a rabbit hole. Haha.


> for us to go from writing with only consonants to using vowels for the first time

Speaking as someone who has studied cuneiform and Akkadian, I would say that this claim isn't true. Here's a vowel that predates the period that you mentioned[0].

[0] https://en.wikipedia.org/wiki/A_(cuneiform)


> Here's an analogy for you: It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time.

Where did you get this idea? Babylonian writing fully indicated the vowels. It always has. You're thinking of Egyptian / Hebrew / Arabic writing.

Even where a semitic language was written in cuneiform, vowels were always indicated, because the cuneiform system didn't offer you the option to leave them out. https://en.wikipedia.org/wiki/Akkadian_language#Vowels

(Old Persian was written in repurposed cuneiform, and therefore could have omitted the vowels, but didn't.)


>It took more than 1000 years (from Babylon to Archaic Greece) for us to go from writing with only consonants to using vowels for the first time

Yeah, and it took "us" 60 years to go from discovering flight to landing a rocket on the moon. It took "us" 60 years to go from the first computer to live, global video streaming in your pocket. Time is a pointless metric when it comes to technology. You don't know what someone is cooking up down in some basement somewhere that will be released tomorrow and shatter your concept of reality.


I wonder if 'leisure-person years' is a better metric of progress (where 'leisure' is defined as the number of hours you can spend neither raising/searching for food nor sleeping).

Be really hard to identify, though.



What do you mean by computers handling ambiguity? At the end of the day, for an idea to become crystallized it needs to be free from ambiguity. That is the case even in human interactions. When using ambiguous language, we iterate over ideas together to make sure everybody is on the same page. If by handling ambiguity you mean that computers can go back and forth with us to help us remove ambiguity from our thoughts, then they are basically helping us think, or in some sense doing the programming for us. That is a great future indeed! A future where AIs are actually doing the programming in the long run! But with this line of thought we might as well not teach anything to our kids, because one day computers will do it better. Especially if we've already established that they can think better than us :)


Let's teach our kids the higher-level stuff that doesn't ever get old: thinking clearly, engaging in creativity, solving problems, whether through code or whatever means appeals to them. Let's give them options and opportunities, not just mandate memorizing specific facts. Let's teach kids computer science instead of just programming, creative writing instead of just grammar, mathematics instead of just algebra; let's engage their imagination, not just their instincts to conform to expectations!


The best "programming" curricula aimed at general education teach (elements of both) generalized problem solving and computer science with programming in a particular concrete language or set of languages as a central component and vehicle for that (and often incidentally teach elements of a bunch of other domains through the particular exercises.)

This is particularly true, e.g., of How to Design Programs [0].

[0] http://www.ccs.neu.edu/home/matthias/HtDP2e/


Let's teach them computer science with programming as a fantastic way to concretely demonstrate its abstract ideas. (The same goes for math vs. arithmetic!)


Yes, definitely. Too often the application of the idea is taught without understanding the idea itself. Then we get standardized testing and focus not even on the application but on how the application of the idea will be stated on a test. We still need the conceptual framework to learn anything lasting!


I've said this before: the reason code and CLIs and texting and messaging, all these text-based modes of communicating with and controlling computers, are still popular is that they mirror one of the most intuitive and fundamental inventions humans have ever created: language, specifically written language. Even speech doesn't rival the written word in some contexts; for example, laws and organization rules and policies are still written.

You can't beat the written word. Corps are not lining up to rewrite their bylaws in a bunch of connected drag-and-droppable blocks. I really don't think it's just inertia; the preciseness, versatility, ease of examination and editing, and permanence of written language are hard to beat. Same with source code.


I appreciate the context of your argument when you discuss the use of text in by-laws, but it's worth noting that there are lots of examples of by-laws being enforced via non-text mediums:

1) road signs, which are predominantly graphic based

2) public information signs, eg no smoking. Also usually picture based, albeit these do often contain text instruction as well

3) beach flags indicating where to swim etc.

All of these are enforcing by-laws, yet none is specifically text driven. In fact, when conveying simple rules to people, it often makes more sense to explain them in meaningful images, as that enables anyone to understand the message, even if one doesn't understand the written language (eg tourists).


Right, and note that in communication referring to written laws, people tend to use visual aids. This is also similar to how people try to visualize source code, with dependency graphs, inheritance graphs and such. In one case, we are reminding people of ideas using visuals, and in the other, we are helping comprehension of them.

However, the original specification, which in the case of the signs is the law and for software is code, is in text, not in visuals. Therein lies the difference. I think visual aids like dependency graphs will help us visualize code and communicate ideas, like the signs you mention, but for the reasons I mentioned previously, text will still be the preferred method for specification, or in software engineering, programming. For example, visuals only go so far. The best visual specifications I can think of are blueprints, which I'd argue still require a little reading to understand. But in certain domains, as I said, text is a better medium.


I agree, though I think there are improvements that could make things better even for existing languages. It just doesn't seem to be a main focus of our industry.

I found a lot of ideas in this article to be pretty interesting: http://worrydream.com/#!/LearnableProgramming


>> We will have tools, accessible, easy tools; Arduinos and Pis of the future; sure. But they will not replace code, nor eliminate or reduce the amount of code written.

I think something eventually will, though. My reasoning for this conclusion is simply that I don't believe a significantly larger percentage of people will learn to write production software than are able to do so now. At the same time the need for software in every sector continues to grow, leading to some varying levels of scarcity in programmers. That's a massive economic opportunity, and so people will continue to pound at that nut until it cracks.


We will keep making developers more and more productive. And if there aren't enough of us to solve all the problems, well, tough luck - leave them unsolved; every profession is like that.

But if we create an AI that can understand people well enough to know what those people want without clear instructions, then yes, we will have put ourselves out of the job market, together with everybody else.


Not all nuts are crackable. I agree, though, that people will continue to pound. Even if it doesn't crack, we may find a way for many people to get things done without learning to write "production" code.


> Coding didn't change much.

That's not true and even Sussman acknowledges this:

"The fundamental difference is that programming today is all about doing science on the parts you have to work with. That means looking at reams and reams of man pages and determining that POSIX does this thing, but Windows does this other thing, and patching together the disparate parts to make a usable whole.

Beyond that, the world is messier in general. There’s massive amounts of data floating around, and the kinds of problems that we’re trying to solve are much sloppier, and the solutions a lot less discrete than they used to be."


I don't agree with that sentiment.

50 years ago, it might have been conceivable to build an auto-scaling website that does something like Pinterest within a decade; now one can be built in hours.

I'm not just talking about scaffolding and API usage either; so much has changed in coding in the last 15 or so years as well. Think object-oriented programming, interfaces, Git, and other new and useful practices.

The way we store our data is different as well. I believe it was the '70s when people needed convincing that storing data in relational databases was a good thing.

Today even that is changing.


The actual practice of writing programs is, a few outliers aside, incredibly different in 2016 as compared to 1966.

(And even just looking at languages, Fortran 2008 is hardly recognizable as compared to FORTRAN IV)


50 years from now, I can't imagine people driving cars as we do today. I do know that human-operated cars are old, but that does not mean we can't do better nowadays.


drivers:users::mechanics:programmers


And designing cars hasn't become easier, it's become exponentially harder as we demand more from them.


you missed assembler; it's still a thing


I did miss that, true; however, Assembly is heavily architecture-tied. Therefore x86 Assembly significantly differs from, for example, ARM assembly, but nonetheless I should have mentioned it.


It's annoyingly incompatible, and RISC is more verbose, but really the concepts are pretty much the same. Load something into a register, do some very basic operations on said registers, and save it out. An x86 programmer should be able to pick up other CPUs fairly easily. Although delay slots will probably piss them off every time.


>Doing that with voice commands in my opinion is significantly harder compared to what we have now.

You could have automation around that, though. A lot of manual work could be replaced with little AI bots that do the work. And since this work is not really "creative", it could be done through AI.


Aside from specialty industries, the way the average programmer codes is very different from what it would have been like 50 years ago.


Yes and no.

Yes, in that the tools are massively better. So is the hardware that it all runs on.

No, in that you still have to tell the computer precisely and unambiguously exactly what you want it to do, and how, mostly in text. The level of detail required today is somewhat less, due to better tools, but at a high level the work hasn't changed.


That has changed significantly too though. Sorry about this being a long post, but having programmed through most of the last 50 years, I've seen a massive shift in the way people code even from a language perspective:

1) There's a massive reliance on reusable libraries these days. Don't get me wrong, this is a good thing, but it means people spend less time rewriting the "boring" stuff (to use a lazy description) and more time writing their program logic.

2) Most people are coding in languages and/or with language features that are several abstractions higher than they were 50 years ago. Even putting aside web development - which is probably one of the most widely used areas of development these days - modern languages, and even modern standards of old languages, have templates, complex object systems, and all sorts of other advanced features that a compiler needs to convert into a runtime stack. Comparatively very few people write code that maps as directly to hardware as it did 50 years ago.

3) And expanding on my former point, a great many languages these days compile to their own runtime environment (as per the de facto standard language compiler): Java, JavaScript, Python, Scala, Perl, PHP, Ruby, etc. You just couldn't do that on old hardware.

4) Multi-threaded / concurrent programming is also a big area people write code in that didn't exist 50 years ago (see the short sketch below). Whether that's writing POSIX threads in C, using runtime concurrency in languages like Go (goroutines), which don't map directly to OS threads, or even clustering across multiple servers using whatever libraries you prefer for distributed processing, none of this was available in the 60s when servers were a monolithic commodity and CPUs were single core. Hence why time sharing on servers was expensive and why many programmers used to write their code out by hand before giving it to operators to punch once computing time was allocated.
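
As a rough illustration of how high-level today's concurrency primitives have become, here is a minimal sketch (in Python rather than the C or Go mentioned above; the function and data are invented): a worker pool in a few lines, with the thread management hidden.

    from concurrent.futures import ThreadPoolExecutor

    def measure(name):
        # stand-in for real work: I/O, a network call, a computation, ...
        return name, len(name)

    names = ["posix", "goroutine", "cluster", "mainframe"]

    # The executor creates, schedules, and joins the threads for us.
    with ThreadPoolExecutor(max_workers=4) as pool:
        for name, length in pool.map(measure, names):
            print(name, length)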

So while you're right that we still write statements instructing the computer, that's essentially the minimum you'd expect to do. Even in Star Trek, with its voice-operated computers, the users are commanding the computer with a series of statements. One could argue that is a highly intuitive REPL environment which mostly fits your "you still have to tell the computer precisely and unambiguously exactly what you want it to do..." statement, yet is worlds apart from the way we program today.

Expanding on your above quote, "mostly in text": even that is such a broad generalisation that it overlooks quite a few interesting edge cases that didn't exist 50 years ago:

1) Web development with GUI-based tools. (I know some people will argue that web development isn't "proper" programming, but it is one of the biggest areas in which people write computer code these days, so it can't really be ignored.) There are a lot of GUI tools that write a lot of that code for the developer / designer. Granted, hand-crafted code is almost always better, but the fact remains they still exist.

2) GUI mock-ups with application-orientated IDEs. I'm talking about Visual Basic, QtCreator, Android Studio, etc., where you can mock up the design of the UI in the IDE using drawing tools rather than creating the UI objects manually in code.

3) GUI-based programming languages (eg Scratch). Granted, these are usually intended as teaching languages, but they're still an interesting alternative to "in text" style programming languages. There's also an esoteric language which you program with coloured pixels.

So your generalisation is accurate, but perhaps not fair given the number of exceptions.

Lastly: "The level of detail required today is somewhat less, due to better tools, but at a high level the work hasn't changed.":

The problem with taking things to that high a level is that it then becomes comparable with any instruction-based field. For example, cook books have a list of required includes at the start of the "program", and then a procedural stack of instructions afterwards. Putting aside joke esoteric languages like "Chef", you wouldn't class cooking instructions as a programming language, yet they precisely fit the high-level description you gave.

I think, as programming is a science, it pays to look at things a little more in-depth when comparing how things have changed, rather than saying "to a lay-person the raw text dump of a non-compiled program looks broadly the same as it did 50 years ago". While it's true that things haven't changed significantly from a high-level overview, things have moved on massively in every specific way.

Lastly, many will point out that languages like C and even Assembly (if you excuse the looser definition) are still used today, which is true. But equally, punch cards et al. were still in widespread use 50 years ago. So if we're going to compare the most "traditional" edge case of modern development now, then at least compare it to the oldest traditional edge case of development 50 years ago to keep the comparison fair, rather than comparing the newest of the old with the oldest of the new. And once you start comparing ANSI C to punch-inputted machine code, the differences between then and now become even more pronounced :P


It should be noted that that quote is not from the author, but from someone that the author is quoting.


For some perspective... 50 years is just twice as long as I've been coding (in some capacity)... and I'm only in my early 30's.


Agree completely!



