Hacker News
Rich Hickey: Simple Made Easy (infoq.com)
176 points by kjbekkelund on June 28, 2012 | 42 comments



Normally I am opposed to chronic reposting but I have watched this video start to finish 5+ times and it has never been time wasted. It is an eloquent expression of a philosophy that has shaped how I approach problem-solving more than any other. If you've never watched it, you're doing yourself (and those who depend on your ability to efficiently and effectively solve problems) a disservice.


I'm also opposed to reposting, but I mention this talk to people all the time and nearly none have seen it. Those who have, however, all agree that it is an amazing (and eye-opening) talk. I just saw it again today for the first time in half a year and realized how much it has actually changed how I develop software.


I watched this a few months ago but don't remember much other than simple != easy. I thought I had taken notes, but turned out I had it confused with a Dan Ingalls talk on OO from 1989 I watched around the same time (which I found interesting enough to take notes).

Reading the comments on the InfoQ page jogged my memory a bit. I remember thinking that his concept of "complect" was the same as "connascence" - a term I learned from a Jim Weirich talk [1]. Minimizing complectity/connascence (variables shared between modules) is good.

Is there something more striking (and summarizable) I should have remembered?

1. http://www.bestechvideos.com/2009/03/29/mountainwest-rubycon...


Basically, it gave me a new vocabulary for thinking about the decisions I make every day when coding. It opened my eyes about things to look for, to focus on, to change, and so on. With regards to decisions, I also love Dan North's Decisions, Decisions talk from NDC recently: https://vimeo.com/43536417

It's mainly the basic philosophy that Hickey focuses on that changed a lot for me, not any of the specific examples. After watching Hickey I've read great books such as Pragmatic Programmer, Passionate Programmer, Coders at Work, and other books that have helped me, as a recent university graduate, build my "coding philosophy". Hickey was just a very inspiring "first step" in changing how I look at code.


The simple vs easy concept is broader than data sharing. One example is perl, which is quite easy to pick up but complects many things, like strings representing numbers being silently coerced into numeric values.

Another example, where I immediately thought of simple/easy as it came up: I realized the other day that a component of an app I've been designing serves two almost independent purposes, and I can drastically simplify the design by making separate components.

The video you linked doesn't seem to be available anymore. The slides are available on scribd, but they don't seem to make much sense without the context of the talk.


Thanks. This one is working: http://confreaks.com/videos/77-mwrc2009-the-building-blocks-...

He mentions that back in the 70s he was writing Fortran for NASA and his mentor recommended he read a book called Composite/Structured Design. "Structured Design" was the big thing back then, and the controversy was using if-else and while loops instead of gotos. Nobody was worried about strongly vs. weakly typed languages (Perl!). The key chapter in that book is on Coupling and Cohesion.

Jump to the late 90s for his second book recommendation: "What Every Programmer Should Know About Object-Oriented Design", really just the third part of the book, which introduces "connascence". Two pieces of software share connascence when a change in one requires a corresponding change in the other.

I love the historical angles on this stuff.


Sure I guess those terms are about the same. Not going to spend any time summarizing it for you though :) There are plenty of blog posts that do so already.


This was worth watching again so I'm glad it was reposted and brought back to my attention. I was as struck by it this time as when I was sitting in the room listening to him last year.

What I would like to see, or create if I have to, is a condensed version of this argument that is meant for the non-programmers, the managers, and the C-level employees of a business. The underlying premise of believing in and executing with simplicity is one that nearly requires air support and buy-in.

I think in his summary at the end there are a few key statements he makes:

"The bottom line is that simplicity is a choice. It's your fault if you don't have a simple system.... it requires constant vigilance... You have to start developing sensibilities about entanglement... You have to have entanglement radar... You have to start seeing the interconnections between things that could be independent."


If, like me, you're overwhelmed with complexity in software projects, you need 'Out Of The Tar Pit'[1]. This essay is so good, I've read it four times, gaining new insights every time.

[1] http://web.mac.com/ben_moseley/frp/paper-v1_01.pdf


Cool to read this. I've actually built a library in CoffeeScript that enables a lot of the "relational programming" ideas expressed in this paper.

http://github.com/nathansobo/monarch


Thanks. I really enjoy what I've read so far.


Ugh, somehow in the 3 days since you've posted this the URL just returns a default "MobileMe is closed" page.



I like most of the points he makes but that "complect" business is fingers-on-a-chalkboard pretentious to my ears. "Coupling" and "complexity" are perfectly good words and have been used for decades to talk about this stuff.

But the stuff about how simplicity and easiness are not the same (at least in the short run) is very good.


I like the appropriation of an archaic word for this use. The point is to make you think about something familiar in a manner that is unfamiliar to most.

The word is now strongly connected to the concepts of easy and simple, which Rich tries to untangle. From now on, when you hear someone tell you that you have "complected" something, it will most likely cause you to remember the talk and sort of force you to think.

Just hearing talk about "coupling" might not trigger such a reaction.


"Coupling" and "complexity" are nouns, "complect" is a verb. Complect is to complex as complicate is to complicated - It means "complexify" for those who prefer archaisms to neologisms.


"Couple" is of course a verb. There are other words people have long used for this too. There's no need for obscure new jargon, and it's ironic that a talk about simplicity would introduce any. It gives the wrong impression, because these concepts are neither new nor difficult. What's difficult is building systems that respect them.


"Coupling" has always been a particularly weak word for the software problems to which it's been applied, IMO. After all, when you connect 2 Legos together you couple them.

"Complicate" was a candidate, but is decidedly unsatisfying. It just means "make complex", saying nothing more about how; nor about what it means to be complex. For many people, simply adding more stuff is to "complicate", and that was another presumption I wanted to get away from. There is also some intention in "complicate", as in, "to mess with something", vs the insidious complexity that arises from our software knitting.

I wanted to get at the notion of folding/braiding directly, but saying "you braided the software, dammit!" doesn't quite work :)


As far as how we use these words in software goes, I think "coupling" is just fine. To me it means exactly what we're talking about: making things depend on each other. "When you connect 2 Legos together you couple them" sounds off to me. I'd say that's just what you don't do. Rather, you compose them. Composition to me means putting together things that have no intrinsic dependency and are just as easy to separate again.

Reasonable people can obviously have different associations, but I thought "coupling" and "decoupling" were pretty standard terms in software. You know, "low coupling high cohesion" and all that.

What about when we simplify a design by removing dependencies between things? Surely we're not going to say we've "decomplected" them?

It goes without saying that we agree on the more important point, which is that whatever we call that thing we do to software where we make everything depend on everything, we fuck it up :)


> Surely we're not going to say we've "decomplected" them?

Simplified.


But that has the same problem you mentioned about "complicate". It just means "make simple", saying nothing more about how, nor about what it means to be simple. Not all simplification is disentangling.


http://www.thefreedictionary.com/complicate

tr. & intr.v. com·pli·cat·ed, com·pli·cat·ing, com·pli·cates

1. To make or become complex or perplexing.

2. To twist or become twisted together.

* ---> To make or become complex <--- *

Why did we need this complect business again?


I thought it was pretty clear - he used "complect" because it shared an etymological root with "complex". The whole talk is about drawing distinctions between superficially related concepts, and using specific definitions based on words' etymological histories to do it.

The word "complicated" is generally synonymous with the word "complex", but that doesn't matter - the word "simple" is generally synonymous with the word "easy", after all. If Rich Hickey had said "complicate" viewers may well have asked whether he meant "to make complex" or "to make complicated", and perhaps wonder whether he was trying to draw a distinction between those concepts as well.


I also recommend Rich's talk called Hammock-Driven Development:

http://blip.tv/clojure/hammock-driven-development-4475586

http://www.popscreen.com/v/5WwVV/Hammockdriven-Development

or his recent talks about reducers or Datomic.

For me the talk about reducers was an especially jaw-dropping experience because it was about something simple we all do every day - crunching data in collections (how many times have you implemented a list library? :). Yet after decades of collection traversing, there is still a place for a fresh approach, if you are willing to think hard.

This is the difference between blindly following known programming patterns (cargo-cult programming I would say) and really thinking about a design.


Been really impressed by the man, the language and the philosophy ever since I saw the video. Clojure has been a challenging and yet eye-opening experience, and I plan to continue learning it and using it in as many projects as I can from now on.


Tip: If the video and the slides don't fit on your widescreen display, shrink your browser window horizontally.


Or click on 'horizontal' and then on 'fullscreen'.


And then click 'X' to close the meaningless countdown timer early.


If you haven't seen it, Stuart Halloway's "Simplicity Ain't Easy" is a more Clojure-specific talk that's a nice complement to this one. It has some more concrete examples pulled from Clojure.

http://blip.tv/clojure/stuart-halloway-simplicity-ain-t-easy...


I'm glad Rich and his presentation get the popularity they deserve. I attended that one at QCon London in March and it was the presentation that struck me the most.

Rich also gave another presentation, about the modeling process, that I find great (slides from GOTO Con): gotocon.com/dl/jaoo-aarhus-2010/slides/RichHickey_ModelingProcess.pdf


If someone wants to do a talk about how to get as close to this as possible in a language like C++, I would watch it.


The issue with languages like C++ is that you can follow better programming practices, but the compiler doesn't support you in verifying them, so you can't trust your code as easily, which complicates reasoning about a system a lot.

Having properties like immutability and pureness in your language makes it a lot easier to trust your code and to reason about it.


Clojure doesn't give you immutability guarantees; it just makes it harder to choose otherwise, but on the other hand calling a Java method on some object is just one special form away. I'm not saying Clojure does the wrong thing here btw, but this thing you're talking about is a fallacy, unless you're working in Haskell and even there you could find ways to screw things up by interacting with the outside world, which isn't immutable.


"I'm not saying Clojure does the wrong thing here btw, but this thing you're talking about is a fallacy ..."

Please, read exactly.

"... unless you're working in Haskell and even there you could find ways to screw things up by interacting with the outside world, which isn't immutable."

The whole point is that you're able to express immutability and pureness in a language like Haskell _AND_ have a compiler which can verify it.

You will never be able to prohibit every way of screwing up, but you can make it a lot harder to screw something up.


Erlang:

    X = 5.
    X2 = X+1.
C++:

    const int x = 5;
    const int x2 = x + 1;
My C++ style uses const modifiers extensively. Likewise you can use final in Java.


const_cast and mutable, and gone is any kind of verification.
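
For anyone who hasn't hit these, here is a minimal sketch of what that means (the type and function names are made up for illustration) - both escape hatches compile without complaint, so a grep-level audit is about the only verification you get:

    #include <iostream>

    struct Counter {
        mutable int calls = 0;   // "mutable" opts this field out of const checking
        int get() const {
            ++calls;             // legal: a const member function that still mutates state
            return calls;
        }
    };

    // Takes a const reference yet modifies the caller's variable.
    // Well-defined here only because the referent isn't actually const,
    // but the caller's "const means untouched" assumption is gone.
    void sneaky(const int& r) {
        const_cast<int&>(r) = 42;
    }

    int main() {
        Counter c;
        const Counter& view = c;
        view.get();              // called through a const reference; state changes anyway

        int x = 5;
        sneaky(x);
        std::cout << c.calls << " " << x << "\n";   // prints "1 42"
    }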


That your code uses neither is trivially verified with grep. Are you saying your issues would be solved if someone added a ten-line patch to gcc for -Wconst-cast (that provided a warning, obviously upgradable to an error, if you used const_cast; as in, similar to -Wold-style-cast)?


You really can't express immutability and pureness in C++, because you can still modify global variables and do any kind of IO everywhere, regardless of const.

const_cast isn't the big issue, because there's also unsafePerformIO in Haskell. For both you could say that they shouldn't be used, that it's bad programming practice to use them.

The point is, even if you follow good programming practices in C++, you can't express them, and your compiler can't help you verify that you're really following them.

That might not seem like a big thing; it's also not related to your smartness, because it mostly depends on the size and complexity of your system.
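
To make that concrete, a small sketch (the function and variable names are invented for illustration): the parameter below is const and nothing is cast away, yet the function mutates a global and performs IO, and there is no annotation in the language the compiler could check to forbid it:

    #include <cstdio>

    int g_counter = 0;   // some piece of global state

    // As const-qualified as the signature can be, yet the compiler happily
    // accepts side effects: the type system has no notion of purity to check.
    int supposedly_pure(const int& x) {
        ++g_counter;                                   // mutates a global
        std::printf("called %d times\n", g_counter);   // performs IO
        return x * 2;
    }

    int main() {
        supposedly_pure(21);
        supposedly_pure(21);   // same input, observably different behaviour each call
    }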


If you are arguing that you don't have immutability by default across all values, that is a very different point that I think you need to provide more clarity for... I mean, of course you can modify state that has nothing to do with the variables that are marked const "regardless of const": that is sufficiently obvious as to be a useless comment. However, you really can mark values as const in C++ and allow the compiler to verify that you aren't doing anything non-epic to defeat it. Yes: you can still accidentally or purposefully access the memory via a random hand-calculated pointer, but we can actually harden the compiler (not the language: no changes there required) against that as well by just keeping you from using pointer arithmetic (really, that's a feature that tends to only be used in restricted contexts anyway).


In a way dynamic typing is easy and static typing a la Haskell is pretty hard.

A good type system allows you to reason more easily about your system and checks if you're violating the rules of the system.

Looking at static typing and only seeing inheritance and the increased complexity is only looking at static typing a la C++/Java.


Has anyone seen this recording and the newer one [1]?

Is one of them better in any form?

1. http://www.infoq.com/presentations/Simple-Made-Easy-QCon-Lon...


I get something new out of this every time I watch it.



