Programming Languages are the Least Usable, but Most Powerful User Interfaces (uw.edu)
131 points by newgame on April 7, 2014 | 100 comments



Yeah. I've come to realize that the hard way.

It is also what it comes down to w.r.t. code-over-config.

To make a system sufficiently configurable, you end up having to re-invent a Turing-complete language inside its configuration files, so it starts looking more like Lisp.

This is due to the Inner-Platform Effect [1].

The only interfaces I know of that are expressive enough to drive a Turing-complete language are flowcharts or actual code.

This is kind of why I feel that systems like gulp are just superior to grunt: ultimately the config format can never be expressive enough to solve all the problems, and it's just so much less work to write what you mean properly the first time.
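
To make that concrete, here is a rough, hypothetical sketch (not from gulp or grunt; the task names are invented) of what happens when a "declarative" config starts sprouting control flow, versus just writing the same thing as code:

    # Hypothetical sketch of the inner-platform effect: a "declarative"
    # config that has sprouted conditionals and loops, which the tool now
    # needs its own little interpreter for.
    declarative_config = {
        "tasks": [
            {"if": {"env": "production"}, "then": ["minify", "concat"]},
            {"foreach": "browsers", "do": "test"},
        ],
    }
    # The code-over-config alternative: the host language already has if/for.
    def build(env, browsers, actions):
        if env == "production":
            actions["minify"]()
            actions["concat"]()
        for browser in browsers:
            actions["test"](browser)
    build("production", ["firefox", "chrome"],
          {"minify": lambda: print("minify"),
           "concat": lambda: print("concat"),
           "test": lambda b: print("test", b)})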

This also relates to my newly adopted philosophy w.r.t. software complexity: Simple vs Easy [2].

[1] - http://en.wikipedia.org/wiki/Inner-platform_effect

[2] - http://daemon.co.za/2014/03/simple-and-easy-vocabulary-to-de...


>To make a system sufficiently configurable, you end up having to re-invent a Turing-complete language inside its configuration files, so it starts looking more like Lisp.

You need dependency injection too, unless you're going to let people just smash the definitions of existing code.

A good demonstration of safe, powerful, and type-checked configurability is Xmonad.

http://xmonad.org/


> To make a system sufficiently configurable, you end up having to re-invent a Turing-complete language inside its configuration files...

Isn't source code basically a configuration file for the compiler/interpreter?

> because ultimately the config format can never be expressive enough to solve all the problems

And if so, I don't think this statement is quite true. Source code is expressive enough.

Configuration files give us flexibility but not expressiveness. Source code, as it exists today, gives us expressiveness but not flexibility[1].

This disparity should be a big red flag that we are doing something really wrong.

This disparity, in my opinion, is caused by the abstraction we use to communicate information between systems within source code: parameterized sub-routines.

A programming language that doesn't use parameterized sub-routines is basically a configuration file. This gives us both expressiveness and flexibility.

[1] Flexibility is within the context of config-vs-source code and ops post. Programming languages are very flexible when you know them.


Isn't source code basically a configuration file for the compiler/interpreter?

Source code is input, not configuration. Every time you run the compiler, you give it different source code. The whole point of the compiler is to transform source code.

Configuration is "input" that is not expected to change very often, but that the application developer can not guarantee will never change. Configuration falls into a few categories:

1. Context-specific inputs that can not be predicted or discovered automatically by the application. (For example, the DNS zones the process is expected to host - named.conf)

2. Inputs that are not expected to change very often, but should be easy to change if necessary. (ssh client or server settings)

3. For applications with interactive user interfaces, customization and changes to the default interface.

For the vast majority of applications, a text file capable of expressing simple data structures is more than sufficient. Others advocate using only a database, but only to make configuration dynamic so explicit reloads aren't necessary, not to make it more like a programming language.
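
For instance, a minimal sketch of that kind of file, parsed with Python's standard library (the section and key names are made up):

    # Minimal sketch: a flat, declarative file parsed with the standard
    # library; nothing Turing-complete required.
    import configparser
    conf = configparser.ConfigParser()
    conf.read_string("""
    # category 2: rarely changes, but should be easy to change
    [server]
    listen_port = 8080
    # category 1: context-specific input the app can't discover on its own
    [zones]
    example.org = /var/zones/example.org.db
    """)
    print(conf.getint("server", "listen_port"))
    print(dict(conf["zones"]))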

The source-vs-configuration question is usually only an issue when dealing with large, complex applications with very general specifications.


> Source code is input, not configuration. Every time you run the compiler, you give it different source code.

Interpreted languages (JIT) re-interpret the same source code every time they are run (though they don't have to).

What about monkey patching (http://en.wikipedia.org/wiki/Monkey_patch)? An example usage is altering behavior (the source code) based on running the code in a testing environment -vs- production (as one example).

How about dependency injection? How about lambda expressions?

What about best practices like favoring composition over inheritance? If we hard-code a solution using inheritance, is that input, but if we build out a solution using composition, is that configuration?

These are all tools available to programmers that provide ways of changing behavior at the source code level even though that behavior may not change very often.

This is starting to look like a real gray area to me: input -vs- configuration.

I guess we can try and distinguish between the types of input a system consumes (based on the static nature of the input) but I don't know how useful that is as an abstraction.


Monkey patching is 3rd-party modification to the application itself; or specially modified inputs that simulate such a modification. It's not configuration.

Dependency injection is a software design pattern. There's no point calling it configuration.

A lambda expression, from a programming perspective, is just an anonymous function, a function so trivial it does not need an associated identifier.

    > If we hard code a solution using inheritance that is
    > input but if we build out a solution using composition 
    > that is a configuration?
No, a solution using composition is not configuration. It might help to think of the intended audience:

Configuration is intended for end-users or system administrators, NOT programmers. Configuration specifically refers to inputs that you remove from code and place somewhere that is (a) easily modified by anyone and (b) very hard to break.

All of the tools or techniques you list are targeted at programmers (and programmers doing programming, not configuring their text editor or IDE).

    > This is starting to look like a real gray area to me:
    > input -vs- configuration.
Of course it's a grey area. Configuration is ultimately a type of input. But there's still a semantic distinction to make, just like there's a semantic distinction between a desert and a grassland even though the border between the two isn't distinct.

Also, I should make it clear there's an assumption we're talking about application configuration, not configuration in the ITIL sense, where it has an extremely generic meaning.

    > I guess we can try and distinguish between the types
    > of input a system consumes (based on the static nature 
    > of the input) but I don't know how useful that is as
    > an abstraction.
Primarily, the distinction informs decisions about how access to various options and features is provided. Do you require the code itself to be modified? Do you make it a compile-time flag? Do you have the application load it from a default file in a standard location (like /etc/myapp.conf) or do you read the input from stdin? It's possible to have a solid understanding of where to put things without actually using the word "configuration", but why not just use the term since it is already there?


Code vs config is not supposed to be a grey area, but it quickly becomes one when you try to make the application so configurable that you're actually moving business logic to the config files. That's why that is an anti-pattern.


I upvoted your comment, but the reality is that the grey area is easy to find.

Consider apache's httpd.conf. It includes support for fairly advanced features like conditionals, scopes, and sub-configuration of 3rd party modules. In some environments, for example, it may be quite sensible to include some bits of what might technically be considered "business logic" in something like URL rewrites.

Consider DNS records. Not named.conf, but the actual zone data itself. Is that configuration or is it data? Do you check zone files into a configuration repository with your other files, or do you treat them more like a database to be modified on the fly and just back it up periodically? It probably depends on how dynamic the records are expected to be.

Consider emacs: it uses lisp as its configuration language.

On the other end of the scale, you have very small single-purpose scripts that are easily hand-editable. If you have a script that's no more than 1K, maybe you simply put an "options" section at the top with some defaults that can be changed by modifying the code directly. Or maybe your script is so small that even that amount of overhead is pointless.

It's important to remember that just because there is a grey area doesn't mean everyone ends up stuck there just because you might. But that doesn't mean it can't be confusing sometimes.


Yeah, at a certain point you cross over from configuration into scripting. Emacs Lisp is definitely in the scripting zone, on purpose.

Yes, Apache has a crazy amount of configuration in httpd.conf, and supports conditionals and scopes, to the point where they had to write a syntax checker for it. I would actually consider httpd.conf a good example of the "softcoding" anti-pattern.

A certain amount of configuration is good, and deliberately providing a scripting DSL for your program is also good if appropriate.

But when you inadvertently cross over from normal configuration into absurdly complex configuration that resembles a crappy scripting language, giving you the feeling that you're in this big grey area between code and config, that's an anti-pattern.


What would be an alternative to parameterized subroutines, though? Is there another way to avoid re-implementing an algorithm every time you want to use it, and to avoid keeping all source in one gigantic file?


Think something like messages with behavior[1]. Think objects where behavior is implemented in properties: a single "makeItSo" property for example.

Every object has the exact same behavioral interface. The exact same behavioral interface means there is no specialization: every message looks the same. We can compose programs (behavior) by hooking up objects/messages as opposed to coding them. This is because we have 100% encapsulation[2]. We need to know nothing about the internal working of an object since it has no parameterized subroutines.

The abstraction for passing information between sub-systems is these messages (every object is a message), as opposed to parameterized subroutines.

[1] We could call it message-oriented programming (not to be confused with message-oriented software/frameworks).

[2] Even a single parameterized method leaks some of the internal workings of an object and leads to specialization of the object's interface. This also leads to tightly coupled software systems.


Aren't you basically just talking about Smalltalk/Obj-C style message-passing rather than method-calling? But perhaps with more complete encapsulation? That's a step in the right direction perhaps, but I don't really see how it obviates the need for interfaces--you still need to know what messages a particular object can respond to, don't you?


> you still need to know what messages a particular object can respond to, don't you?

Messages don't need to know what messages they can respond to, as all messages have the same behavioral interface (the "makeItSo" property). Where a message expects certain informational properties (its information interface), then, ya, it would need to check whether the message passed to it contains the information it requires (such as a UX layout engine that is waiting for UxControl messages).

If you have some time, here are some examples:

An example using addition:

    message Add (
      left 5
      right 6
    )
or it could be

    message Add (
      left Subtract ( left 5 right 6 )
      right 6
    )
or it could be:

    message Add (
      left FromFieldGet ( name "someForm" field "left" )
      right FromFieldGet ( name "someForm" field "right" )
    )
In all these examples, the Add message does not need to know anything about the messages provided to it in the left and right properties (5 and 6 are actually messages themselves). The last Add example uses a FromFieldGet message that is able to locate information from a form (in this case named "someForm", with a "left" field and a "right" field). The form itself has not been defined in the Add example, but it would also be defined as a message passed to some system that creates the UI/UX, which would expect a message of type UxControl.

We could code the usage of our message like this:

   float result = message.asFloat; // the "makeItSo" as a float primitive
   int result = message.asInt; // the "makeItSo" as an integer primitive
   string result = message.asString; // the "makeItSo" as a string primitive
Or let's use Add in our configuration:

    message FormFieldSet (
      name "someForm"
      field "result"
      source RunAsFloat (
        part Add (
          left FromFieldGet ( name "someForm" field "left" )
          right FromFieldGet ( name "someForm" field "right" )
        )
      )
    )
Let's hook that message up to a ux "button"

    message Button (
      action FormFieldSet (
        name "someForm"
        field "result"
        source RunAsFloat (
          part Add (
            left FromFieldGet ( name "someForm" field "left" )
            right FromFieldGet ( name "someForm" field "right" )
          )
        )
      )
      // properties specific to the layout of a ux element on a form
    )
and so on.

Each message knows nothing about what behavior the messages composed in its properties support (100% decoupled).

A message-oriented language doesn't (can't) have constructs like for loops, switches, if/then/else, etc. in it, as those aren't messages if they are integrated into the language. Instead, these are also viewed as messages (first-class citizens).

A for each statement is also a message:

    message ForEach (
      start FromFieldGet ( name "someForm" field "start" )
      stop FromFieldGet ( name "someForm" field "stop" )
      action ...
    )
and so on.

If you look at Obj-C, it is really message passing implemented using traditional parameterized subroutines. Smalltalk is a lot more true to its own nature, but it still uses methods as an abstraction for passing information between systems (for example, it will refer to #do as a method as opposed to a message).

I think if Alan Kay had been forced to implement Smalltalk by focusing 100% on messages (because he was told he couldn't use parameterized subroutines) we would have a much different world today. In programming, mental models created through abstractions are everything.
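
Not the implementation I have in mind, but here's a rough sketch of how messages like the ones above might be evaluated, written in ordinary Python (a method plus a context argument stands in for the single "makeItSo" behavioral interface; all names are illustrative):

    # Rough sketch only: every "message" answers the same single behavior,
    # make_it_so(context), and knows nothing about the messages it is given.
    class Literal:
        def __init__(self, value): self.value = value
        def make_it_so(self, context): return self.value
    class FromFieldGet:
        def __init__(self, name, field): self.name, self.field = name, field
        def make_it_so(self, context):
            # locate information on a form; knows nothing about its caller
            form = context[self.name.make_it_so(context)]
            return form[self.field.make_it_so(context)]
    class Add:
        def __init__(self, left, right): self.left, self.right = left, right
        def make_it_so(self, context):
            # Add knows nothing about what kind of messages left/right are
            return self.left.make_it_so(context) + self.right.make_it_so(context)
    forms = {"someForm": {"left": 5, "right": 6}}
    expr = Add(left=FromFieldGet(Literal("someForm"), Literal("left")),
               right=FromFieldGet(Literal("someForm"), Literal("right")))
    print(expr.make_it_so(forms))  # 11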


I see, you're talking about a completely different paradigm from imperative languages. Thank you for the explanation and code examples.


> Thank you for the explanation and code examples.

Thank you for taking the time to look through them.


"To make a system sufficiently configurable, you end up having to re-invent a turing complete language inside it's configuration files"

This sums it up perfectly. I've had clients say 'oh, and it has to be 100% customisable' and I've been like 'it is; it's called HTML, CSS and JS'.


I think it's a massively fascinating question to ask "Why didn't something like hypercard become the web?"

And please don't knee jerk some story about some Apple decision making. That doesn't matter. If the hypercard model had the value I think it does, someone else should have surfaced and gained momentum.

That didn't happen, and while I don't have the answer for why, I think it would shed some light on why "code as a user interface" for non-coders has had only limited success (spreadsheets, SQL).


It's arguable that Excel is the most widely used "programming language" depending on your definition. If you've spent much time around certain business professionals, you'll see that they really push spreadsheets to do unexpected things to solve their problems.

I've actually wondered why there hasn't been more exploration into visual / declarative programming since it has the appeal of being very easy to get started.


I'm not sure what you mean by more. Visual programming was explored a lot and became a popular approach for building non-standard interactive multimedia applications. I think the most famous and inspirational piece of this kind of software is Max/MSP, which started as an interactive computer music environment (created by an academic) and evolved into a multimedia processing/generation powerhouse.

It's commercial and proprietary software but it has an open-source "cousin" called Pure Data.

Both applications were created by Miller Puckette and his creations inspired a lot of more or less similar systems.

One worth noting is vvvv. It doesn't have musical programming roots and uses some unique and really powerful concepts to make graphics programming easier. http://vvvv.org/documentation/of-spreads-and-slices

Also it's a very pragmatic system which is built and maintained by people who do really cool and sometimes complex projects. http://www.meso.net/vvvv


Thanks for pointing that out, I actually have used Max/MSP and think it's awesome, it didn't even occur to me.


You're welcome!

Oh, I just remembered there was a successful Kickstarter for a flow-based IDE. The interesting thing here is that it demonstrated some interest from the general public. http://noflojs.org/


A couple of recent, interesting explorations:

Subtext (http://subtext-lang.org/) - in particular check out the video on "schematic tables," which is very immediate and direct like Excel but better for general purpose.

Some of Bret Victor's work: Inventing on Principle (http://vimeo.com/36579366) and Media for Thinking the Unthinkable (http://worrydream.com/MediaForThinkingTheUnthinkable/). He has a lot of examples of similar interfaces for more domains than just what Excel is good at.

Another example, although much less so, is Unity's editor: property editing and level editing are very immediate and visual, even for things like behavior.


There was, called Visual Basic 6 (1998), the best RAD tool for Windows back then. It's the full version of the limited VBA that comes with Office. Instead of releasing a version 7, Microsoft sacrificed it in favor of the .NET Framework idea.

Hundreds of thousands of developers are still pissed at Microsoft in 2014 (according to Wikipedia): http://en.wikipedia.org/wiki/Visual_Basic

(I moved on to various other languages, but I still have VB in fond memories.)


And yet, despite millions (I'm being nice, probably billions) of lines of VB6 code, not one single competitor has managed to build a business out of creating a 100% Visual Basic 6 compatible system.

Looks like Microsoft was correct in their assessment of shooting VB6. Nobody using it was willing to pay money.


VB was all about COM technology. It is perfectly suited to working on and extending Windows and Office applications. Migrating to another RAD tool like Delphi, various *Basics, or even VB.NET is not possible without starting more or less from scratch, as no other RAD tool offers comparable compatibility, and most do not even support COM or are not COM-based.


Hogwash. It's perfectly reasonable to make something that would be compatible. It's not even that hard.

There's just no money in doing so.

People used VB6 because it was easy and cheap. They're not going to suddenly turn around and start spending 7 figure sums of money. They'll just hobble along until they don't make a computer that can run VB6 and Windows XP any longer.


I moved from VB6 to Python.


> visual / declarative programming ... has the appeal of being very easy to get started

Does it, really? I spent the last 2 years building systems that ran off of workflows (as in flowcharts), and I think now that they don't make the big hurdle (capability for abstract reasoning) any easier at all.

They are fine to document and communicate things, but as input they are just flawed (except for some very very specific niches).


> building systems that ran off of workflows (as in flowcharts)

> but as input they are just flawed

The flaw is to assume there is only one mental model that Visual Programming Languages should operate under: say, only workflows, for example (a mistake all VPLs I've seen have made). This is like assuming that there should only be object-oriented programming when there are other methodologies we use as programmers to form mental models, such as declarative programming, functional programming, structured programming, etc.

It doesn't make much sense to code out mathematical equations using flow charts.

A VPL which supports multiple mental models is able to best represent behavior based on specific business domains. The right tool for the right job.


Functional VPLs have the same abstraction/scaling up problems as OO VPLs. The problem is that we (as human beings) don't know how to communicate very efficiently without our words, and visual embellishment is not directly useful.


> The problem is that we (as human beings) don't know how to communicate very efficiently without our words, and visual embellishment is not directly useful.

To me, this sounds like someone in the literary field of the arts telling an artist or musician that those fields of the arts are not directly useful.

Music and art? Those visual (and aural) embellishments are not directly useful.

There was a great HN post (https://news.ycombinator.com/item?id=7543691) on visually stunning math concepts. What you are implying is that coding out a mathematical equation as opposed to representing it using actual equations (http://i.livescience.com/images/i/000/036/119/original/minim...) is a visual embellishment that is not directly useful?

> Functional VPLs have the same abstraction/scaling up problems as OO VPLs.

This could be a problem with VPLs or it could be a sign of some root cause problem(s) with how we code today. Perhaps, there are better programming abstractions/methodologies that work equally well as words (source code) and as VPLs.


I'm all for art and music, but can you converse with it? They obviously communicate something that is quite different than "buy eggs on your way home from work." Could you communicate this to your SO without using words, using a picture or music? There is a good reason why pictionary is a challenging game. And just as well, artists and musicians are probably not very interested in communicating such utilitarian trivialities through the artifacts they create. Art is not meant to be efficiently communicative, but to influence us in other, perhaps deeper, ways.

> This could be a problem with VPLs or it could be a sign of some root cause problem(s) with how we code today. Perhaps, there are better programming abstractions/methodologies that work equally well as words (source code) and as VPLs.

The language center of our brains evolved 50-100 thousand years ago, which eventually led us to technology and civilization (things really picked up after we discovered writing at about 10kya). The reason we use words for programming is that we are biologically evolved for that. Are you seriously suggesting that there might be a better way for us to communicate and express ourselves concisely?



I see plenty of words there. Visual notation is great for conveying spatial relationships, no doubt about that, and spatial layouts can work well as secondary notational aspects, there is nothing to argue about there. But would you want to read war and peace in dataflow diagram format?


I'd read a story written as a dataflow diagram if it made good use of the format.

War and Peace has been made into a movie at least half a dozen times.


How about War and Peace performed by mimes?


> Are you seriously suggesting that there might be a better way for us to communicate and express ourselves concisely?

I've never suggested any such thing. In fact, I hinted at just the opposite when I wrote

"The flaw is to assume there is only one mental model that Visual Programming Languages should operate under: say only workflows for example (a mistake all VPLs I've seen have done)."

I wrote a blog post on VPL - Snapshots (https://news.ycombinator.com/item?id=7274674). The pattern I noticed on these VPLs (of which there are close to a hundred in that list now) is that they all attempted to use a single mental model (some are flow-based, some are spatial, some mathematical, etc.). None of them support different mental models (say, flow-based when defining business processes and mathematical when defining equations).

The flaw is assuming a single mental model can be used to most efficiently describe all real world systems. Given a real world system, there may be one or more approaches used to efficiently describe that real world system in a computing device. That mental model could be textual, it could be visual or it could be both.

Taking the position that no VPL(s) exist that could describe a particular real-world system better than a textual language is standing on loose ground. Perhaps a general-purpose, domain-agnostic VPL doesn't exist (yet), but VPLs shine for some domain-specific solutions (gaming engines, for example).

> Are you seriously suggesting that there might be a better way for us to communicate and express ourselves concisely?

I'm suggesting that to assume otherwise is limiting our chances of growing Information Technology as a community. I'm suggesting that if we "get it right" then our programming abstractions would be equally useful and descriptive in both a textual and visual language formats.


> I wrote a blog post on VPL - Snapshots

I remember this post and participated :) I've also designed and built my share of visual languages and have been studying this field for about a decade now.

> The flaw is assuming a single mental model can be used to most efficiently describe all real world systems.

There are two things going on here: the paradigm of the language that guides but restricts its users, and the notational syntax of that language that limits its expressiveness and abstractive power. You seem to be conflating them together, but paradigm is separable from notation (textual flow-based languages are common), while notation is solely related to the textual vs. visual debate.

> That mental model could be textual, it could be visual or it could be both.

When you think about something, do you not talk to yourself? I have only my own experience to go by, but it takes some effort to call forth images and it definitely interrupts my ability to think through something.

> I'm suggesting that to assume otherwise is limiting our chances of growing Information Technology as a community.

I am suggesting that our love of words and text is biological. We also have capabilities for sensing space, color, and so on, but these are adapted more to experiencing and reacting rather than communicating. If it is indeed a biological limitation of human beings, then it would be impossible to get it right enough (though I could be wrong, and please try if you think otherwise).


> I have only my own experience to go by, but it takes some effort to call forth images and it definitely interrupts my ability to think through something.

"It (Visual Thinking) is common in approximately 60%–65% of the general population." - http://en.wikipedia.org/wiki/Visual_thinking

http://en.wikipedia.org/wiki/Autism_spectrum

An amazing lady: Temple Grandin. (https://www.youtube.com/watch?v=fn_9f5x0f1Q) (http://www.grandin.com/inc/visual.thinking.html)

I happen to lean towards visual thinking. I can see and run systems in my head. When I see a mathematical equation I understand, such as f=ma, I feel momentum and see acceleration of shapes in my head. If I can't form that mental picture, I can't do the math. It sucks because even though I can do all of that in my head, I can't remember your name. This is just me. People don't think the same way.

Now, let's consider the situation where source code is the only way to program a computer. This leads to a self-fulfilling prophecy where a majority of programmers are a certain type of thinker. Sure, people can think using different approaches (as you pointed out: "but it takes some effort to call forth images and it definitely interrupts my ability to think through something"), but it isn't easy for them. It takes extra effort.

> paradigm is separable from notation

I guess I may be conflating them because I see paradigm as driving notation and/or notation can drive paradigms. But perhaps this is a result of how I understand things.


I'm definitely left handed, but I don't think visually very well. I do prefer brainstorming with diagrams, but I need to draw it out.

The brain is a fascinating piece of hardware, especially how it supports language. For example, I would guess OOP is more suited to human thinking because language (and hence metaphor) is supported directly in the brain while mathematics is a more recent invention that we have to "learn."

I guess there is a reason research in visual programming started the whole field of HCI. Good luck with your work!


The widespread use of syntax highlighting in code editors is some evidence that visual embellishment is at least somehow directly useful - it does aid communicating the code to the reader.


Yeah, I probably used the wrong terms there. I was referring not to flowcharts, but the way Excel is used as a "see-it-as-you-go" functional language.


I think Excel is interesting because of its limitations and its short feedback loops. All its data is ultimately tabular, and it changes visually as you work with it.

That last property is one of the reasons I think that flowcharts can work in some niches (like video and audio processing).


Excel is very often abused and made to do things it should definitely not be doing (massive files, enormously complex formulas). In many business situations, this may only lead to frustration and lost productivity, but in finance in particular the implications can be huge(1), and a faster, more stable, testable and transparent/auditable solution should definitely be found for critical applications.

[1] http://www.forbes.com/sites/timworstall/2013/02/13/microsoft...


Exactly, I'd be curious to see something that gives business analysts the interactivity of something like Excel but has ways to avoid the rigid, interlinked-sheets with ridiculous formulas that seem to be inevitable in a large model.


I have dreamed of a similar thing for a long time...

When I worked in a (science research) lab, we had a few Excel files we passed around for performing various calculations. These were great in that my non-computer-fluent boss could use them (and even contribute). There were a few input boxes to fill out which were run through some calculations, and the answer spit out.

Pros:

* Single file

* Everyone has Excel installed on desktop

* Accessible. Even if you're not an Excel wizard, you can see and edit the basic formulas.

Cons:

* Rigid grid (Any documentation, notes, etc. must fit into the grid)

* Formulas are hidden, when you might want to highlight the most critical ones

* If you want to calculate intermediate results (which you do, to prevent very long formulas that are hard to read), you've got to plop them in some cells

* No meaningful variable names, A10:A55 what?

Excel is so entrenched, it'd probably be difficult for a slight improvement to gain any traction, but it would have made my life a heck of a lot easier.

And I'm by no means an Excel wizard, so there's a good chance that some of the Cons above can already be avoided... But I still want some kind of hybrid of LabVIEW and Excel.


Have you seen or tried slate[1] btw, and does it mitigate any of the cons you mention? I've only seen the demos for it and not tried it, so I don't know if it would be useful in practice.

[1]:https://www.useslate.com/


I've not heard of slate before. Glancing at the front page, it looks very interesting. I'll definitely take a closer look, thank you.


Lotus Improv and Apple's descendant of it have a better model IMHO. At the very least, making formulas a top-level construct avoids countless "oops, I accidentally updated that cell" errors. But the lock-in effect of Excel is enormous.


Functional Reactive Programming has some interesting parallels, though pointing to an FRP package is clearly not a complete solution.


"things it should definitely not be doing (massive files, enormously complex formulas)"

Massive files especially is something Excel "should not be doing" for superficial technical reasons that are mostly unrelated to whether the UI is a good fit. "Enormously complex formulas" do call for better editors, but again don't damn the model.


Visual programming languages have gone a long way in some niches, too; LabView (lab automation and experimental logging) and Max/MSP (audio and video synthesis) come to mind.


For Hypercard, I think the answer is that, at the end of the day, people will write off any technology that gets "stuck". Spreadsheets are amazing (I have a secret deep love for them) but you can't code with them without hitting a wall of what they can do. SQL and declarative languages are better but still suffer from that granularity issue.

[Disclaimer: I work on a hypercard-like declarative language.]


Hypercard did take over the web. It's called Flash.

Even the JavaScript system we have now seems highly inspired by it.


This is interesting, can you elaborate? My reaction is to disagree, I don't see any meaningful similarities.


IMO it didn't happen because HyperCard does not provide a paradigm shift in application, only in interpretation.

It's the same discussion you see around why 3D interfaces aren't more popular.

The more we need to manipulate real-world objects (like in robotics, for instance) in real time, the more visual kinds of programming languages will gain popularity, as they will provide faster access to controlling the already-optimized algorithms.


There was a big underground movement using Visual Basic and variations on dos32.bas to do interesting things on top of AOL's client.


That sounds fascinating. Is there a good write up for someone totally unfamiliar with that world and that doesn't want to get into the weeds, but is interested in what the community was like?


Here you go: http://articles.marco.org/44

There are some references in this thread if you want to go deeper: http://www.ign.com/boards/threads/was-anybody-else-part-of-t...

This was one of the more popular programmers: http://patorjk.com/blog/programming/


Obviously there's lots of research to be done on the best ways to design programming languages and the best ways to teach them.

However, the headline presumes there can be some universal standard of measuring what we call "usability" that's somehow separate from the practical "usefulness" of the tool itself. Tools that do fewer things can be made easier to use than tools that do many things. Turing-complete programming languages can be made to do anything the computer can do, which is a lot. And so there's going to be a hard limit on how "usable" they can be. Not to say we're even close to reaching that limit, but you'll never make a programming language that's as "usable" as a screwdriver, and that's okay.


Well, I'll admit that's a rather debatable claim. But the article provides reasonably good backup for it using the tools of professional user interface design, a field that has become a lot more systematic in recent years. Towards the end of the article, he describes a number of metrics which can be used to measure programming languages as task-accomplishment tools.


Reminds me of the different ways the environment is set up in Windows and Linux. In Windows, when you want to edit your environment variables, there's a simple configuration utility with a list of them; you click and edit and it all takes effect immediately. In Linux, there's a chain of events where different shell scripts are run in some way at different levels of the bootup process depending on strange and arcane factors, so when you edit ~/.profile that only takes effect in your terminals and not in applications started by the DE, but also not in tmux, because it's not sourced by ~/.bashrc somehow. And once you puzzle out the chain of command and edit everything, you have to log out and back in. And when you want to make changes to the global environment, there's /etc/environment, /etc/profile and /etc/profile.d/yourscripthere.sh. Only one of these is the right choice, but only for a specific distro, DE and shell combination. Oh, and if somehow you manage to source a script twice, your PATH gets duplicate entries in it. Not technically a problem, but it kind of irks me.


The "glue-language" philosophy underlying Tcl can strike an optimal balance between usability and power:

http://www.yosefk.com/blog/i-cant-believe-im-praising-tcl.ht...

Shells partition away sophistication.


Tcl is actually really powerful and fun.

"Everything is a string" is a bit quirky, but it's just Lisp-like enough to allow you to do some really nifty things.

I don't ever want to write anything seriously in it again though, and I can't see myself starting a new project with it.


I wrote many lines of Tcl back in the early .com days, as we had an in-house server stack based on Apache, similar in certain ways to AOL Server, but doing ORM stuff people only discovered a decade later with Ruby on Rails.

Only used a few times later to script Websphere via Jacl.

However, nowadays I don't have any reason to touch it as well.


"Shells partition away sophistication."

Could you elaborate on this? It sounds interesting, but I'm not quite grokking.


I'll try.

Programming doesn't have to be hard. As in speech, the more one says, the less one means.

I know shell isn't going to win this popularity contest, but a return to it is what's badly needed in CS today. Instead of attempting to recreate the shell in C#'s or Java's supplied libraries and subsequently becoming frustrated when interaction with the "outside world" is clumsily accomplished through a FFI pinhole, just use the shell as it was intended: as a lingua franca between utilities. Write what requires Prolog in Prolog, what requires C in C, what requires awk in awk, etc. Use flat-file databases such as starbase or /rdb and avoid data prisons such as MSSQL, Oracle, MySQL, etc. Make all of these utilities return sane return values and spit out JSON-formatted output. Finally, tie it all together with shell. If you need a UI, code it as a thin layer in Tk, python/pytk, ansi c/gtk, or consider pdcurses, etc. Profile your program and find any weak links in the chain. Recode in a lower-level language only when needed.

Weighing the tradeoffs of adding language features is a sign of a false dilemma; rather than a single bigger or smaller language, what is actually needed are more specific languages which speak to each other through a lingua-franca. Tcl accomplishes this communication through a string representation, Powershell through an object representation, etc. Again, rather than choosing one solution over another, use them all where they work best. This is where Unix got it right all those years ago; Unix isn't just a slightly more stable platform for running today's bloated and monolithic software, rather, it's an elegant system for connecting maintainably-small utilities. The shell glues said utilities together into programs. Such an approach combines the best of high and low level programming, reuse and specificity, tradition and novelty, etc.
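
A tiny sketch of one such utility (the file name and fields are made up): it does exactly one job and emits JSON, so the shell can glue it to whatever comes next.

    #!/usr/bin/env python3
    # wordcount.py (name invented) -- read text on stdin, emit a JSON summary
    # on stdout; one small link meant to be composed in a shell pipeline.
    import json
    import sys
    from collections import Counter
    counts = Counter(word for line in sys.stdin for word in line.split())
    json.dump({"total": sum(counts.values()),
               "top": counts.most_common(5)}, sys.stdout)
    print()

Then something like ./wordcount.py < report.txt | jq '.top' ties it to the next tool, and the shell never needs to know what language is inside.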


If you add nc between the pipes you've implemented an ESB :)

But shh, don't tell anyone how easy it is, if everyone figures out just how easy and scalable shell scripts are it will ruin the magic we bring to over-engineered enterprise projects.


If any here think fleitz is joking, please pick up a used copy of Manis, Schaffer, and Jorgensen's "Unix Relational Database Management". The authors create a relational database with little more than awk, cat, grep, od, sed, spell, tail and a bit of bourne-shell glue. After covering the basics of tables and relational theory, the authors create a small-business accounting system. The database became a commercial product (/rdb) and is still sold and used today (allegedly in hospitals).


And for 'object databases'/key-value stores implemented on flat files, clone a git repo and cd into .git/objects.

http://git-scm.com/book/en/Git-Internals-Git-Objects


As a very superficial example of this, pianobar is a fantastic terminal-based Pandora client that will happily read input from a FIFO. I had bound keys to issue various commands through the FIFO, on both my laptop and my desktop. Then I found my hands on my laptop keyboard when my music was playing from my desktop (or vice-versa). Simple application of netcat meant my commands could be typed at either keyboard.


That's a rant I mostly agree with, but I see a couple ways it could be mapped onto the original phrase. Did you mostly mean "sophisticated language features don't span executables anyway"? or "you don't need a sophisticated understanding of what's inside each box that a shell operates on"? or something I'm missing?


Both. "Sohpisticated" as in sophistry, or, showy complexity.


Wow, there are many ways to go with this...

I would claim that natural language is the most usable and the most powerful user interface. We humans have been relating with it for quite a while and there's no sign of a let-up.

And most programming languages either contain fragments of natural language or can be translated into such fragments. Yet the author is right: programming languages are "least usable". Indeed, consider that SQL was created specifically to be usable like a natural language, but it is now considered more unusable than even an average programming language. What gives? (I have my suspicions, but I wonder what people think.)


Utterances in any natural language are always ambiguous, often hopelessly so. It works out because the speaker and listener are cooperating towards a shared goal; the removal of this assumption of cooperation is why contracts and legal codes get longer and longer over time as they try to deal with the fact that they can't actually specify anything the way they'd like to.

The computer isn't like another person, and we'd really prefer to have it do what we say rather than guess what we want and act accordingly. Indeed, we are not yet able to have a computer guess what we want to any respectable accuracy.

(Think of it this way. In some languages you can provide compiler hints that say, for example, "this variable will usually be an integer, so please optimize for that". A natural language consists basically 100% of compiler hints and 0% of instructions.)


The computer isn't like another person, and we'd really prefer to have it do what we say rather than guess what we want and act accordingly.

From the point of a view of a programmer, yes.

But isn't this kind of the division point between a "good user interface" and a programming language?

Lots of user-level systems are intended so that the computer guesses what you want and does that.


In all areas where we need to transmit exact information or describe something in great detail - whether it's 'legalese'/contract language, mathematical notation, musical notation, or computer code - we've found out that natural language doesn't fit well, and chosen to use something less natural and more exact.

This is the sign of a flaw - we've stumbled upon the limit in many distinct areas, and ceased to use natural language in them.

If you want an architect to describe to the builder how large a construction pit should be, you don't want a long description in natural language, you want a very specific diagram; i.e., natural language is not the most usable thing for this scenario. If you want to 'discuss' with your computer which file to open, then clicking in a GUI is much more efficient than having the folder contents read to you and speaking the file name - such technology exists, but only people with severe vision problems choose to use it. In niches where accuracy is important - say, "chat" between pilots and air traffic controllers - you migrate away from natural language by discouraging most phrases (e.g., the words "to" and "for") and trying to replace them with domain-specific codenames and "formulas" for saying phrases with very exact meaning.

Natural language is a useful general-purpose UI; but for any specific niche we can have a better specialized UI than that - whether it's a controlled almost-natural language, a visual UI, or some specific notation like music scores.


I agree with you on the builder/construction pit and the air traffic control examples. But don't underestimate what natural language can do for humans interfacing with computer systems.

The file system example:

"Computer, show me the document jake sent me the other day."

"Computer, I want to continue writing my thesis."

"Play the cute cat video again!!!"

Of course, it'll need agents with deeper semantic understanding, more advanced AI, NLP, computer vision, etc. than what is on the market today, but this is not a problem of natural language. See the work wit.ai is doing; we're pretty much there.


As I said, language is effective for general-purpose communication, and the cases that'd work well for communicating with humans would also work well for communicating with sufficiently smart computers.

But that's a different argument - the original discussion was about natural language replacing the current specialized systems (such as programming languages), and my point is that it's no more likely than natural language replacing musical notation or Photoshop interface.

You could have an interface where you say "computer, draw a funny moustache on that cat" in the same way that you can tell that to an artist; but if you want to define exactly what kind of funny moustache you have in mind, then both Photoshop and paper&pencil would be far superior interfaces than natural language.


Turing-complete config files do not sound like proper config files :)


It's an antipattern the inexperienced frequently fall into. Just one little extra bit of functionality, and ... whoops! Suddenly, you can program there, so now you will have to program there.

Same applies to markup languages. TimBL held HTML back from Turing-completeness for ages, then JavaScript came along.


> Just one little extra bit of functionality, and ... whoops! Suddenly, you can program there, so now you will have to program there.

That's an argument for programming languages in configuration, not against. It's just that we need good languages there, so that if you "have to program there" it's not a painstaking experience.

Designing a programming language is hard. Really, really hard. Or rather it's extremely easy to make completely unusable language, especially if it wasn't designed as a language from the start. You don't want to do this to your users and to yourself.

You either just don't include a programmable configuration, which is a fine tradeoff, or you include a "real" language from the start. For scripting languages it's easy, just use themselves. For compiled languages it's even easier: just use Lua or Scheme and be done with it.
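
For example, a minimal sketch of the "just use themselves" option in Python (the config contents are inlined as a string here for illustration; a real application would read a file such as ~/.config/myapp/config.py, and the setting names are invented):

    # The configuration file *is* the language, so the full language is
    # available on the rare occasion it's actually needed.
    user_config = """
    PORT = 8080
    WORKERS = 4 if PORT != 80 else 16
    """
    settings = {}
    exec(user_config, settings)   # or runpy.run_path() on a real file
    print(settings["PORT"], settings["WORKERS"])
    # Note: config-as-code deserves the same trust and review as code.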

But please, whatever you do, don't invent your own configuration format (what's wrong with the standard "key = value" format anyway?), and if you do, DON'T try to gradually extend it into a language. It will be hideous, unusable, insecure, unreadable, impossible to debug, buggy as hell, and will haunt you forever in nightmares.

That being said, don't be afraid to experiment and try to build a language if you want. Just don't force it on others in a production environment until you're sure it's not as described above :)


  > ... whoops! Suddenly, you can program [in the
  > configuration files], so now you will have 
  > to program there.
What's forcing a user to configure a system if they don't need to make changes? Why is it preferable to use something functionally restricted rather than a programming language to configure a system?


You need to develop the sense to know that just because something could be made configurable doesn't mean it should be.

What software doesn't do is often as important as what it does do.


  > you need to develop the sense to know that just because 
  > something could be made configurable doesnt mean it 
  > should be.
That doesn't explain why one shouldn't use a programming language when things are configurable.


I don't know where that line is, honestly.

I'm still trying to figure it all out myself, but it feels like the line is drawn around the time that you find that the order of configuration options starts to matter, and probably also when you start conditionally loading code based on it.

So it's something like the divide between "let's keep this in a variable because we might need to reference later", and "let's store this in a variable because we need to alter control flow based on it".

Hopefully I will know how to tell the difference someday.


> the order of configuration options starts to matter

Commutativity (and idempotency) are two important traits of declarative languages, and they are at odds with Turing-completeness: I have the feeling a language can't be declarative, Turing-complete, and efficient (finish in a reasonable time) at the same time.

So you pick two: efficient and declarative, or efficient and Turing-complete. Once you go Turing-complete, anyway, you can't guarantee termination, and reasoning about programs becomes hard.


I'll take a stab at it, please feel free to critique.

All my reasons come down to complexity. Competing against our righteous need to make software do cool stuff is the fact that everyone writes software that breaks all the time. Configuration that can perform arbitrary computation can put our programs into literally any state, making it that much harder to make them robust and correct.

It also opens up the risk that people won't configure their software correctly because they don't understand the configuration. And it even opens up attack vectors - what if there's a buffer overflow in your configuration interpreter, or a resource link that lets configuration files do arbitrary things to a system?

But I think one of the biggest risks is fragility. Configuration files that can do arbitrary computation will be made to do so. Software that gets used by lots of people will end up with towers of complexity built into the configuration, to the point that removing or refactoring them risks bugs, edge cases, or breaking a particular feature or misfeature that someone relies on. Better to control the complexity in the first place.


  > Configuration that can perform arbitrary computation can 
  > put our programs into literally any state...
Considering that it's generally poor design to expose the full state of an application to all parts of the same application (think: "global state"), then yes, it holds that exposing the entire application state to the configuration system is also a very poor design. I would think that if a user could set three values via configuration, then those three values would be the only state exposed.

  > It also opens up the risk that people won't configure 
  > their software correctly...
We can always find someone who can't configure the system, regardless of how the configuration is done. If we're worried about making configuration "safer", then the system should be more careful about what configuration values it accepts i.e. implement robust validation.
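
A minimal sketch of what that kind of validation might look like (the key names are invented):

    # Accept only a small, known set of configuration values, type-checked
    # before any of it reaches application state.
    def validate(conf):
        allowed = {"port": int, "host": str, "debug": bool}
        unknown = set(conf) - set(allowed)
        if unknown:
            raise ValueError("unknown config keys: %s" % sorted(unknown))
        for key, expected in allowed.items():
            if key in conf and not isinstance(conf[key], expected):
                raise TypeError("%s must be %s" % (key, expected.__name__))
        return conf
    validate({"port": 8080, "debug": False})     # passes
    # validate({"port": "8080"})                 # would raise TypeError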

  > But I think one of the biggest risks is fragility. 
Systems that can't adapt to unanticipated needs of their users get replaced by systems that can adapt. It's why many pieces of mature software develop this robust configuration ability (often hiding under the label "api", "scripting interface", "plugin framework", etc.)

  > Software that gets used by lots of people will end up with 
  > towers of complexity built into the configuration, to the 
  > point that removing or refactoring them risks bugs, edge 
  > cases, or breaking a particular feature or misfeature that 
  > someone relies on.
This is not inevitable. Many mature systems have successfully incorporated fully programmatic configuration [1] without turning into "towers of complexity". (It probably says something about the application's architecture if it's so tightly coupled to its configuration.)

[1] Unix is the poster child for this.


Essentially that. That simple/easy terminology I mentioned before actually has this as one of its conclusions:

http://daemon.co.za/2014/03/how-complexity-affects-software/


You don't really have to worry about any of this if you use a proper interpreter for your configuration files (for example, if you do configuration by embedding Lua in your application, which is very common and useful). If you try to build up a programming language from scratch, you're right that you're going to have problems, but embedding a battle-tested, safe language means you don't have to worry about your program being put in "literally any state" or your program being susceptible to buffer overflows.


It's a recipe for shooting yourself in the foot.

https://en.wikipedia.org/wiki/Rule_of_least_power


Yep, this anti-pattern is well-known enough to get its own Wikipedia entry: http://en.wikipedia.org/wiki/Softcoding



Django embraces programming in the config files. They are pretty much all Python code. Means you can do some fairly cool stuff. It ends up making a lot more sense to configure a Python framework in Python, rather than XML or something else.
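
A rough sketch of the kind of thing that means in practice (the setting names follow common Django conventions, but treat the details as illustrative rather than canonical):

    # settings.py is ordinary Python, so configuration can compute values
    # and react to the environment instead of being a static list of keys.
    import os
    DEBUG = os.environ.get("DJANGO_DEBUG", "0") == "1"
    ALLOWED_HOSTS = ["localhost"] if DEBUG else ["example.com", "www.example.com"]
    DATABASES = {
        "default": {
            "ENGINE": "django.db.backends.sqlite3",
            "NAME": os.environ.get("DB_PATH", "app.db"),
        }
    }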


Same in many others. I still consider it a pattern to apply with great caution.


Using DSLs for configuration can be incredibly useful: allowing a single "config file" to respond to its environment and work differently in different contexts.

Of course at some point, you're just calling your program your config file. But don't tell Emacs users that config files can't be Turing-complete.


You know, I'll say Emacs is a great example of how it's a great idea to make your "config" files a programming language.

I'll also say it's a horrible idea to make your "config" files a programming language.

Overall though, it's hard to argue it doesn't do the job.


They can be Turing-complete. That just looks like Lisp.

There are other Turing-complete DSLs, too. Puppet's isn't, though...


Yes, yes they are. I'm not a programmer, but a user with a refined taste for Java -- hah, just kidding! Clojure isn't so bad though. Java is an interesting interface or programming language because you need an autocomplete editor, i.e. another interface, to really enjoy it. Don't forget Maven and Ant for libs and builds.

But some people are probably comforted by the Java UI, maybe because it is so structured.

Edit: remember ant and maven.



