Hacker News | hotBacteria's comments

I don't have much hope regarding this issue.

Researchers do much of this work on their own time, so the millions mentioned in the article don't directly impact the institutions employing them. They are alone in this fight.

The petition (https://www.change.org/p/simplify-manuscript-submissions-in-...) linked in the article barely received 100 signatures in a month.


What is demonstrated here is that if you understand well the different parts of some code, you can recombine them in more efficient ways.

This is something very good to keep in mind, but it must be applied strategically. Avoiding "clean code" everywhere won't always yield huge performance wins, and it will surely hurt maintainability.


This Twitter thread from the author provides insight into how it was made: https://twitter.com/LodsDorian/status/1349291334759288832


I like the shapes problem because I actually encountered it and it made me think.

I'm not sure about the switch approach described in the post:

  function area(shape)
    switch shape.type
      case "square": return shape.side ** 2
      case "circle": return PI * shape.radius ** 2
      case "triangle": return ...
      case "segment": return 0
      case "polygon": return ...
      ...
      case "oval": return ...
You can have a lot of cases, some of them requiring non-trivial code. Eventually you write a function for each case, and it's more work than adding a method to each shape, because you still need to write the switch.

Classes seem to work better than structures here.
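The per-class approach, sketched in Python (class and field names are purely illustrative):

```python
import math

class Square:
    def __init__(self, side):
        self.side = side

    def area(self):
        return self.side ** 2

class Circle:
    def __init__(self, radius):
        self.radius = radius

    def area(self):
        return math.pi * self.radius ** 2

# Each shape owns its own area logic; adding a new shape means adding
# one class, with no central switch to keep in sync.
shapes = [Square(2), Circle(1)]
total = sum(s.area() for s in shapes)
```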

But then you want to handle intersections, and the switch approach doesn't seem realistic:

  function intersection(shapeA, shapeB)
    if(shapeA.type == "circle" AND shapeB.type == "circle")...
    if(shapeA.type == "circle" AND shapeB.type == "square")...
    if(shapeA.type == "square" AND shapeB.type == "circle")...
    ...//uh oh you have nShapes**2 cases to handle
But Java classes are not better: where do you define Circle-Square intersection? In Circle? In Square?

Even with multiple dispatch the solution is not ideal. You now have things related to Circle (area, perimeter...) in the Circle.blub file, while intersection(Circle, Circle), which only works with Circles, is in intersections.blub...

I don't see a good solution, and sometimes I feel the problem lies more with our tools (code in text files) than with programming paradigms.
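One way to keep the pairwise logic out of both classes is an explicit dispatch table keyed by the type pair, living in its own intersections module. A minimal sketch in Python (all names hypothetical, and the handlers are stubs):

```python
def circle_circle(a, b):
    return "circle/circle intersection"  # real math would go here

def circle_square(a, b):
    return "circle/square intersection"

# The table lives in one place; a symmetric case reuses one handler
# with the arguments swapped, so not every ordered pair needs new code.
HANDLERS = {
    ("circle", "circle"): circle_circle,
    ("circle", "square"): circle_square,
    ("square", "circle"): lambda a, b: circle_square(b, a),
}

def intersection(shape_a, shape_b):
    handler = HANDLERS[(shape_a["type"], shape_b["type"])]
    return handler(shape_a, shape_b)
```

This doesn't remove the combinatorial explosion; it just makes it visible and keeps it out of the shape types themselves.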


I have to admit I don't quite understand your issue. To me it seems like you've used a lot of OOP and can't get used to the idea that the data structure (e.g. "Circle" in file GeometryTypes.blub) and the operations performed with it (e.g. collisions in "CollisionDetection.blub") are completely separate. There should be no debate about whether the circle type and collision detection belong in the same file while combinations go in some other file. Think of it like this: if you're going to add 3D rendering of circles, will you put that in Circle.blub together with collision detection? Wouldn't you rather add it to 3DRenderer.blub?

That said, ultimately it doesn't really matter. If you're going to implement collision detection like this, then yes, you will have a combinatorial explosion. This is not a language issue. Switching from Java to some other language with a different form of dispatch will not save you from implementing a lot of algorithms when adding bezier curves into the mix.

The practical approach is to reduce the problem to a common case, e.g. to turn the collision shapes into sets of triangles first, and then perform triangle-triangle collision detection.
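The reduction described above can be sketched like this: each shape only needs to know how to triangulate itself, and a single pairwise routine handles every combination. In this toy Python version the triangle test is a deliberate stub (a shared-vertex check), not real geometry:

```python
def triangles_collide(t1, t2):
    # stub standing in for a real triangle-triangle test (e.g. SAT);
    # here two triangles "collide" if they share a vertex
    return bool(set(t1) & set(t2))

class Quad:
    def __init__(self, corners):
        self.corners = corners  # four (x, y) tuples, in order

    def triangulate(self):
        a, b, c, d = self.corners
        return [(a, b, c), (a, c, d)]

def collide(shape_a, shape_b):
    # n shape types need n triangulate() methods plus ONE pairwise
    # routine, instead of n**2 shape-pair algorithms
    return any(
        triangles_collide(ta, tb)
        for ta in shape_a.triangulate()
        for tb in shape_b.triangulate()
    )
```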


Business logic tends to be more organic than the rules of geometry and physics. Thus, many of the "toy" OOP examples that use shapes and physics don't extrapolate well to real world abstractions, which often turn out to be semi-abstractions. Geometry and physics generally don't change so we know certain patterns will always stay around. Business logic (the domain) is often NOT like this. Be careful.


Two points not mentioned:

1- The question of access: a theoretical exhaustive map of the brain would be easier to access than an actual brain.

2- Tools: a map is not a piece of paper anymore. A 1:1 map of Earth doesn't seem absurd, as long as we can navigate it with levels of detail.


You can write it without resorting to one-line conditionals:

  if(cond1){
   myVar = 1;
  }
  else if(cond2 && cond3){
   myVar = 10;
  }
  else if(cond2 && !cond3){
   myVar = 100;
  }
  else {
   myVar = 4;
  }


But when I encounter this kind of situation I have other problems than code formatting anyway:

with 3 conditions you have 2^3 possibilities to check: are you really, really sure (!cond1 && !cond2 && cond3) should give you defaultValue? Do you really want 1 even if !cond2? etc.

When possible these conditions should be avoided in the first place (depending on context of course)


If you're running through that many conditionals, it may be clearer to work them into a state enum that you can use with a switch.
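The enum suggestion, sketched with Python's enum module (the state names are made up; the conditions mirror the example upthread):

```python
from enum import Enum

class State(Enum):
    FIRST = "first"
    BOTH = "both"
    PARTIAL = "partial"
    DEFAULT = "default"

def classify(cond1, cond2, cond3):
    # collapse the raw booleans into one named state up front...
    if cond1:
        return State.FIRST
    if cond2 and cond3:
        return State.BOTH
    if cond2:
        return State.PARTIAL
    return State.DEFAULT

# ...then the value lookup becomes a flat switch/table with no
# boolean logic left in it
VALUES = {State.FIRST: 1, State.BOTH: 10, State.PARTIAL: 100, State.DEFAULT: 4}

my_var = VALUES[classify(True, False, False)]
```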


Golang doesn't have enums


Neither does JS, but there are ways to accomplish the same thing[0].

[0] https://stackoverflow.com/questions/14426366/what-is-an-idio...


TypeScript does and it’s one of my favorite features.


That's not really the same thing; it's badly designed and error-prone.


That makes code formatting inconsistent because many prefer the other way. The VB.Net style is more compact and readable in my opinion:

        If cond1 Then
            myVar = 1
        ElseIf cond2 And cond3 Then
            myVar = 10
        ElseIf cond2 And Not cond3 Then
            myVar = 100
        Else
            myVar = 4
        End If
The VB.Net style also makes it easier to identify mismatched blocks because the "enders" are named differently. (One can also put the assignment on the same line as the conditional, but I generally don't recommend it.)

Some may complain it's slightly verbose, but few if any will claim it's not clear.


IMHO, better but it’s still confusing when you have multiple places where you name the variable. Functional programming style forces you to be more explicit and pushes you to separate out the assignment expression. In Elixir I’d do:

    myVar =
      cond do
        cond1 -> 1
        cond2 && cond3 -> 10
        cond2 && !cond3 -> 100
        true -> 4
      end
ML languages have similar constructs.


Maybe, but that misaligns the values. I'd rather see them lined up. And often more complex logic may be added to the sub-blocks such that the simple pattern goes away. Code should be able to "degenerate well", meaning it shouldn't require lots of rework when the initial pattern fades or changes in the future. It's one of the reasons I often use If/else instead of switch/case statements.


You assigned "myVar" in every branch, right? Let me reread that to make sure you really are assigning myVar in every case.

That’s a problem. One I’ve seen all too often, but a problem, nonetheless.


As I mention above, it "degenerates" better in that if more code is needed in each sub-block, it's easy to add. I'm okay with syntax that could factor such out if it doesn't require overhauling the block when the starting simple pattern goes away over time. As they say, the wrong abstraction is often worse than no abstraction. Change can kick the simple pattern away.


Alternative solution: put the if/else block in an “IIFE” (which Go supports, just like JavaScript) and change the assignments to returns.
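The IIFE idea, sketched in Python for concreteness (Python lacks multi-statement lambdas, so a small named helper stands in for the anonymous function; in Go it would literally be `myVar := func() int { ... }()`):

```python
def pick(cond1, cond2, cond3):
    # the if/else chain from upthread, with assignments turned into returns
    if cond1:
        return 1
    if cond2 and cond3:
        return 10
    if cond2 and not cond3:
        return 100
    return 4

# every path returns, so my_var is always initialized exactly once
my_var = pick(False, True, True)
```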


If you are worried about all the cases, you can very easily display your code as a truth table, with nearly no overhead, in this way:

  myVar =
    (!cond1) && (!cond2) && (!cond3) ? myValue_0 :   // case 0 0 0 blabla 
    (!cond1) && (!cond2) &&  (cond3) ? myValue_1 :   // case 0 0 1 bla
    (!cond1) &&  (cond2) && (!cond3) ? myValue_2 :   // case 0 1 0 blablabla
    (!cond1) &&  (cond2) &&  (cond3) ? myValue_3 :   // case 0 1 1 
     (cond1) && (!cond2) && (!cond3) ? myValue_4 :   // case 1 0 0 
     (cond1) && (!cond2) &&  (cond3) ? myValue_5 :   // case 1 0 1 
     (cond1) &&  (cond2) && (!cond3) ? myValue_6 :   // case 1 1 0 
     (cond1) &&  (cond2) &&  (cond3) ? myValue_7 :   // case 1 1 1 
                                       defaultValue; // uninteresting default (null, -1...)


I think it's a poor example. How often are you going to write something like that in a real-life scenario?

In the case of more complex data generation, you could use the supplier pattern and combine it with the strategy pattern.

No need for any if/else statements at all.

Sure, it requires a lot more work, but it is future-proof and clear.


I've not heard of the supplier pattern, any chance you would direct me to a resource about it?


A Supplier is used when a function needs a value: instead of providing the value as an argument, you pass in a callable:

Foo(int x){return x+1} becomes Foo(Supplier x){return x() + 1}
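The same transformation sketched in Python, where a supplier is just a zero-argument callable that produces the value on demand (all names here are illustrative):

```python
def foo_eager(x):
    # the value must already exist when the function is called
    return x + 1

def foo_lazy(supply_x):
    # the value is produced only when (and if) the function asks for it
    return supply_x() + 1

def expensive_value():
    # stand-in for costly data generation that we may want to defer
    return 41

result = foo_lazy(expensive_value)
also = foo_lazy(lambda: 9)
```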


A function that supplies a value can also encapsulate a lot of data generation code. After that, you can write a strategy to select which data generation algorithm (supplier) should be used.

That's my point.


Thanks.


I'm pretty sure you know what I meant, no need for nitpicking.

(It does not make me any less right.)


Just maybe he genuinely doesn't know what you mean. I mainly use Python and have no idea what you are talking about.

(You might consider reflecting on your tone and message. It presupposes bad faith on the other party, is dismissive and condescending, and you are probably less right than you think you are.)


I, too, have no idea what a supplier pattern is. Don't think that's a very mainstream notion.


I'm not OP, but I don't know what you mean... Googling it leads me to Supplier in Java 8, which is probably not what you meant?


I can't nitpick what I don't know about. I know the strategy pattern, not the supplier pattern.

It sounds interesting, that was all.


Just to add a bit of context, this article was written more than six years ago: http://thecleancoder.blogspot.fr/2010/08/why-clojure.html


Thanks! I knew I had read a blog post with this title somewhere, but I couldn't put my finger on it. Here's the related HN post from 6 years ago: https://news.ycombinator.com/item?id=1615182


Same here: I was reading, and when I got to the anecdote about some basic concept being introduced only around page 216, I thought "hang on, I've read that before..."

And I had the same thought as back then, too: "Concepts, Techniques and Models of Computer Programming" also used this approach, except it's only after a few chapters that they teach you "oh, by the way, Oz also supports for and while loops", whereas everything until then used recursion. All my eng school buddies found that off-putting, but I always thought that if you put the book in the hands of a complete newcomer to programming, they would never even think of it.

(I know SICP predates CTM by decades and may not be considered in the same league, and that Mozart/Oz is rather more obscure than any Lisp/Scheme can ever be, but I consider it a fairly important book as well, and one of the best textbooks I've ever read: very well structured, well written, very complete; it starts shallow but goes very deep and very wide in terms of knowledge.)


Thanks, we changed the url to that from http://telegra.ph/Why-Clojure-is-better-than-C-PythonRuby-an.... It's annoying that the latter article put a false date on the piece.


I found this document very helpful for understanding how promises work behind the scenes: https://github.com/kriskowal/q/tree/v1/design

I also liked being able to access the author's reasoning and the motivations behind his design decisions.


If you're going to mention Kris and concurrency, you better mention: https://github.com/kriskowal/gtor


Don't be stupid, be slow [1]

We take shortcuts when we feel the proper solution takes too long to implement. If you purposely take a shortcut, write down why you take it, how you take it, and how to turn back in case of trouble.

Slowing down also allows you to share your problem with your team, your friends, or your rubber duck. Again, don't forget to write down what your rubber duck told you; don't waste its time asking the same questions over and over again.

[1] https://ventrellathing.wordpress.com/2013/06/18/the-case-for...


The story ends well because the project was actually simpler than it looked at first. Unfortunately, more often than not, things turn out to be a lot harder than expected.

What happens when, after 2 months of scribbling and playing Space Invaders, Charles realizes the project actually requires 3500 lines of code? He wants the project to succeed, but now he doesn't have enough time, and he fears asking for help because he knows he is labeled as a lazy and arrogant guy.

So he works long hours to fix the situation, then he burns out.

Source? This somehow happened to me. Several times.

This story can be true; people like Charles and simple projects exist, but they are exceptions, not the rule. It's easy for a beginner to believe he is that guy and then experience a death march [1]. Things can go wrong for Alan too, but he has a team to support him and his managers know he is working on something complicated.

I'd like to be Charles one day, but for now I'm Alan.

[1] https://en.wikipedia.org/wiki/Death_march_(project_managemen...


The way I see it: Charles made the problem look simple by spending a few months thinking about the whole program, while Alan made the same problem look complicated by writing a bunch of code and always looking busy.

While I agree that Charles is the exception, I don't believe meritocracy is a valid solution; I'll always bet on Charles:

If the project was actually a "3500 lines of code" problem, then Charles might have taken longer to think about it, but it's my experience that Alan never would have finished.


If this story had happened in real life and the problem had been more complicated than anticipated, even if Charlie had realized this, the project would likely have failed, or at least been severely delayed, because he did not have enough leverage with upper management to get the resources required to succeed.

However, Alan was making the problem more complicated by introducing a lot of accidental complexity, and I have often seen that this is done even when the problem is more complicated than anticipated. Such a project could easily create enough work for 10-15 people in the hypothetical scenario in the article if the project had enough necessary complexity to be a four person project.

It is very hard to distinguish between necessary and accidental complexity, and that is precisely the point of the article. Prestige is very often tied to how many subordinates you have rather than to how well you solve a particular task, so making your project artificially complex can be a strategy for climbing toward upper management. This may of course be conscious or unconscious on the part of the employee/manager in question.


Indeed, it's very important to match the structure to the problem.

I've seen real-world situations where the Charlie approach was clearly wrong: a maverick programmer wrote a module on his own, with only some light testing with end users, no peer review, not even source control. The program was finished in record time (a month or so), looked good, and apparently did what was required; users and management were delighted, and he got a huge raise. "Charlie" went on holidays... and then disaster struck. He hadn't considered the impact on other systems, and a bug delayed the monthly accounting closing process, costing thousands of man-hours to correct the errors. Other programmers had to go to his terminal to see the code... and found a huge hardcoded, unstructured mess.

OTOH, in the same company, they hired a Java Senior Architect, "Alan", to lead a module just a little more complex than what "Charlie" had done. "Alan" spent the first few months meeting with all possible stakeholders, writing process diagrams, and selecting a 4-person team, then spent a few more months building a "perfect" software architecture, including an entire ORM layer over the systems he had to connect to. Then they chose a complicated JavaScript framework for the frontend which none of them had experience with. After a year and a half (over a year over budget), they finally launched a first version... which wasn't what users needed. A year and a half later (3 years total), they finally have a working system.

While he didn't get the credit "Charlie" got, everyone thinks "Alan" is some kind of guru and that he understands "hard" problems, and he's going to be given the lead (again) on an even larger project, which the company is betting several millions on.


> ... and Charlie had realized this, it would be likely to fail, because he did not have enough leverage toward the upper management to get the resources required to succeed

Not to mention, the company will be paying Alan and his team much more than they’re paying Charles.

It’s my suspicion that people will give more weight to people they’re paying more, simply because they’re perceived as more valuable - independent of any other hard data. So despite having already spent more money on Alan, they’d be likely to continue doing so, due to the perception of hard work and the perceived value to the company.


I read a study testing an idea similar to yours, but with wine. It was organized in the guise of a wine tasting event with 3 bottles of wine, where 2 bottles were marked up (one from $5 -> $50 and another from $10 -> $90). Even though people didn't know they were tasting the same wine, they said the more expensive one tasted better. The other half of the study removed the prices, and people couldn't guess which was the more expensive wine. The conclusion from the study was that "Individuals who are unaware of the price do not derive more enjoyment from more expensive wine."

Source: http://www.wired.com/2011/04/should-we-buy-expensive-wine/

