Hacker News
Practicing Programming (sites.google.com)
70 points by krat0sprakhar on Aug 26, 2012 | 25 comments



Although I appreciate what the article has to say (indeed, it's true), I think we have the reverse issue with regard to the SMARTS/SKILLS/JOBS sections.

Most new computer scientists/engineers these days DON'T KNOW THE FUNDAMENTALS WELL. You may say people who code in Assembly (or are learning it) are foolish; that couldn't be further from the truth.

Keeping up to date with your low-level computing knowledge forms a base to work on any other abstraction layer up the chain. The core concepts important in older skills will still be important 1000 years down the road.

With the older generation fading out, I fear we will lose a lot of the fundamental, gritty knowledge that all other programming is based on. You don't see many new-age programmers learning about low-level details like memory and I/O. Yet, in years to come, if we need to make changes and possibly rebuild some things close to the hardware, we may end up in the dark. Without the people who handled a lot of the hardcore stuff, we may lose knowledge when today's younger generation steps up to the plate.

So basically my argument is that any serious hacker should be able to conceptualize a fully working machine from the logic gates to the software. Understanding the primitive components of a computer system helps you everywhere else and will most likely never be knowledge that becomes obsolete.


I'm going to need actual statistics before I'll believe that assembly programmers, hardware hackers, et cetera are any rarer than they once were.

My informal observation is that there's more hardware around than ever. Who designed and built these dozens and dozens of Android phones and tablets? Where are all these tiny helicopters and balancing scooters and taco-delivering drones coming from?

Yes, the average programmer today knows a lot less about assembly language than the average programmer in 1980. But there were a lot fewer programmers and a lot fewer computers in 1980, and to do anything beyond BASIC on a microcomputer required assembly. Since then the number of programmers has exploded, and it has not done so evenly. I'm sure that today there are a lot more Visual Basic-only programmers in the world than assembly programmers, but that doesn't mean that the population of assembly programmers has shrunk; it just hasn't grown as quickly.

I also suspect that, should it become sufficiently important to double the number of trained assembly hackers, we have the means to do so: Raise the bid high enough and, sure, I'll absolutely retrain in an assembly language newer than 6502 and Motorola 68000. ;) (Though, alas, I'm told that however clever I become at optimizing my assembly, the compiler is probably cleverer still, so I guess I should dream of embedded C instead of assembly.)


The time you are talking about was when computer science was in its infancy, and was small enough for people to be fluent in all parts of it. This is not unlike early science, when you had the same people make discoveries in pure math, physics, chemistry ...

What we have now that the field of computer science is maturing is more specialization. There are still "systems" students, who learn the low-level details (one of my friends' classes was writing an operating system). In addition, you have application developers, who can spend their resources making a more user-friendly product, likely taking advantage of dozens of black-box systems other programmers created.


> You don't see many new-age programmers learning about low-level details like memory and I/O.

Is that true? We learn that in the first semester, including everything from flip-flops upward to how and why the parts of a CPU function, no matter what you specialize in (medical CS, software engineering, ...). One doesn't pass the first semester without a basic understanding of assembly and microcode. People doing technical CS do something more in-depth, I think (well, one should at least guess so), and I am not at a renowned university: 345th in the Top 400 ranking done here earlier this year[1].

[1] http://www.timeshighereducation.co.uk/


Fundamentals are algorithm design and complexity analysis, not the accidental complexities of some specific piece of hardware nobody will be using in twenty years.

On the flip side, there will always be teenagers. There will always be people attracted to assembly language for the same reasons I was: A love of complexity and half-formed (and mostly mistaken) ideas about what programming really is.


I completely agree.

As I get older, I find the abstractions to be much more interesting: how does one express a lambda in assembly? Functional programming in general? The simplicity of Python and the beauty of Lisp / Scheme / F# keep me going as a programmer.

I wonder, as assembly languages become more complex and malformed with the introduction of ARM and all of the headaches of custom architectures, if there won't be more people moving to the higher level, just for sanity's sake.


Actually, functional programming is very straightforward at a low level. When you define a function, foo, the machine code for that function is at a specific memory location. When the source code says foo(*args), the machine pushes local variables, its current location in the code, and the args onto the stack. It then jumps to the memory location stored in foo. Once there, the code pops the arguments, does stuff, pushes the return value, and jumps back to where it was called from. Back at the call site, the return values and local variables get popped, and execution continues. Different compilers might use different conventions, but for our purposes this is a good enough model.

The way a normal function call would look in assembly might be:

  mov %eax, <returnPoint> ; at assembly time, <returnPoint> is replaced with the
                          ; address of the instruction labeled returnPoint:
  push %eax               ; save the return point to the stack
  push %ebx               ; push local variables and the arguments
  ...
  jmp <foo>               ; jump to the memory location labeled foo
returnPoint:
  pop %ebx                ; pop local variables and return values
  pop %ecx
  ...

The thing to note in this example is that when you run the assembler, <foo> gets replaced with the memory location of the function code. If we want to do a functional-style thing, we need the address we jump to to be able to change at runtime, so instead of having the assembler hard-code the memory location to jump to, we need to get that information from a variable. This would look like:

  ...
  jmp [foo]
  ...

This time, foo is not pointing to a function, but rather to the memory location of a function pointer. This means that if we want to make it so when we call foo(), we run the bar() function, we would simply need to do:

  mov foo, <bar>

foo is the memory location of the function pointer, and <bar> is the memory location of the function itself, so now when we dereference foo in the jmp, we will execute bar().
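For comparison, here is an illustrative sketch (my own, not from the comment above) of the same idea in a language with first-class functions: rebinding a variable plays exactly the role of overwriting the function pointer.

```python
def foo():
    return "foo"

def bar():
    return "bar"

# 'fp' plays the role of the function pointer: it holds a reference
# to a code object, and the call site dereferences it at runtime.
fp = foo
print(fp())  # calls foo

# The analogue of `mov foo, <bar>`: overwrite the pointer, and the
# very same call site now executes bar.
fp = bar
print(fp())  # calls bar
```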

With regard to the ARM architecture, I've done a little bit of work in it, and it isn't more complex than x86.


Sorry for being short, but:

https://en.wikipedia.org/wiki/Lisp_machine


> A love of complexity and half-formed (and mostly mistaken) ideas about what programming really is.

Couldn't you basically say the same about most system languages (for example C and actually even stuff like Java)?


> Couldn't you basically say the same about most system languages (for example C and actually even stuff like Java)?

Those languages have more pragmatic use, though.


But it's not like Assembly has no pragmatic use.

I agree that people often tend to use languages that are too low-level, but there are also good reasons to use Assembly, just like for almost any language. In most cases people had a good reason to implement a language, and if there are still compilers and interpreters, some of which even continue to evolve, that's a good indication the language is worth using.

And Assembly... well, when I think about boot loaders, operating systems, that 64k first-person shooter, implementations of (for example cryptographic) algorithms, and how some things are actually easier to do in Assembly precisely because it is low level, then that's reason enough.

And if you have fun doing so, then that is more than enough reason. If someone doesn't do stuff for fun (and curiosity) in some way, I wouldn't call them a good scientist. A passionate programmer, no matter the language, usually gets stuff done.

I mean, here we often talk about things on a somewhat philosophical level, but especially when you do some assembly, it's just getting stuff done. And even a higher-level, more modern language usually doesn't guarantee any of the things that make you better; I often see code that looks like C or Java, written by just ignoring how the language was designed.

I mean, we have come to a point where we embrace functional programming languages (which actually can be and are pretty simple, when you think about what you need for lambda) but on the other hand lose touch with everything else. Of course, that's what abstraction is for. You can forget about the lower levels, but that doesn't mean it's good not to know about them.

> complexities of some specific piece of hardware nobody will be using in twenty years.

Is that so? One usually still has some kind of interface, because of backward compatibility. And even if it is like that, is it really a problem? Everything changes; what counts is experience, learning from mistakes, and so on. That's usually very generic, and that's the stuff you spend time on and that will allow you to build things quicker.


This is some great stuff. Even though it's old, that doesn't really matter: the same concepts still apply, and the ways to practice that he brings up are invaluable.

As a programmer I often wonder how to improve, or if the things I'm doing actually help. Typically my only measure is encountering a problem in the wild, and being able to recall that one thing I read and typed up a few months back.

Seeing how other successful programmers approach the problem of personal growth, so as not to become stagnant in their fields, is useful, especially to someone like me who is just starting their career. Despite having gone through college pursuing a degree in CS, the learning I did on my own was some of the most valuable, and now I have even more ways to keep learning independently!


My solution to personal growth in programming is essentially, when I see a problem and know (or suspect) the existence of a tool better suited to answer it than what I would use, I learn the new tool.

Granted, I cannot always do that because of time. And in big projects the tools I have been using are the best solution because they are used everywhere else in the project, and I still want/need my work to be understandable by others without their needing to learn a thousand different tools.


I have a related question. When I try to do 'interview questions', I usually get stumped (I'm not really into math). In all the Project Euler-type problems that I've attempted, I find myself continuously using the brute-force approach, only to find someone's very clever (and, in hindsight, obvious) method of doing the same. Or usually I'll google the problem to find how someone else approached it, and only after I've studied it, or just 'took a peek', will I attempt my own solution.

Now, I've studied algorithms and data structures, and it's not that I'm bad at algorithms. I can understand well-defined (aka classic) algorithms just fine, but I find it really hard to find (or create!) patterns in numbers and to manipulate them in order to solve a complex problem.

Any suggestions on how can I improve myself?


When attempting a problem that has an obvious brute-force solution, ask yourself why the brute-force solution is wasteful. "It is wasteful because if I've already compared element A and element B, and I've already compared element B and element C, then in some cases I shouldn't need to compare element C and element A" is the type of thinking you should be doing. This will lead you towards the right data structures and algorithms. Also, understanding sets and set theory really helps.
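That kind of reasoning often cashes out as replacing pairwise comparisons with a set lookup. A minimal Python sketch (the problem and function names are my own, just for illustration): checking whether any two elements sum to a target.

```python
def has_pair_sum_brute(nums, target):
    # O(n^2): compares every pair, including pairs the earlier
    # comparisons could already have ruled out.
    return any(nums[i] + nums[j] == target
               for i in range(len(nums))
               for j in range(i + 1, len(nums)))

def has_pair_sum(nums, target):
    # O(n): once a value has been seen, set membership answers
    # "have I already seen target - x?" without re-comparing pairs.
    seen = set()
    for x in nums:
        if target - x in seen:
            return True
        seen.add(x)
    return False
```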


1) Start with a brute-force solution, and then look for optimizations. A good way to do this is to have it print out (or display in some form) each of its guesses. You will almost always see it do something stupid, at which point you have found an optimization.

2) Write down as many observations about the problem as you can; if possible, write them down in mathematical notation. See what you can find by combining observations.

3) If you have some sense of what a solution might look like, write it down and see where it goes; don't wait until you think you have the entire answer.

4) When you look at other people's algorithms, as soon as you see them do something you didn't, put their paper to the side, do what they did, and see where you can take it on your own.

5) If what you are doing looks related to a subject you are not familiar with, research that subject.

6) Similarly, if you can reduce a problem (or sub-problem) to another one, but you have no idea how to solve the new one, research to see if people have already solved your new one.

7) If you ever see a sequence of numbers, go here: http://oeis.org/

7b) If the only description in OEIS is a link to the Project Euler problem you are solving, then this technique is probably too cheaty.

8) Invent new variables/functions.

9) Math notation is there to help you, make up your own if it expresses the problem more cleanly.

10) Never write a false statement on your work paper unless it is clearly indicated by words like "assume" or "not".

10b) If you see a statement on one of your papers, assume that it is true even if you forgot why.
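To make tip 1 concrete, here is a minimal Python sketch (my own illustration, not from the list above): a brute-force divisor count whose behavior, once you watch its guesses, suggests an optimization, because every divisor d it finds arrives paired with n // d.

```python
import math

def divisor_count_brute(n):
    # Checks every candidate up to n; printing each hit would show
    # that divisors always come in (d, n // d) pairs.
    return sum(1 for i in range(1, n + 1) if n % i == 0)

def divisor_count(n):
    # The observation turned into an optimization: stop at sqrt(n)
    # and count both members of each pair (one member when d*d == n).
    count = 0
    for d in range(1, math.isqrt(n) + 1):
        if n % d == 0:
            count += 2 if d != n // d else 1
    return count
```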


Thank you for writing that. I've been struggling with this for quite a while and your pointers can help me be someone more clever than a brute-force code-monkey. :)


Practice.

More specifically, divide your practice problems into two categories before you start working on them. Category A - don't peek. Category B - do peek, and try to learn from others. Category A will ultimately be where the real learning occurs.


Great article. The programming practice "drills" come up a little short, though. You'd be better off reading the article and then deciding for yourself how you should practice. My favorite way of practicing and keeping my programming skills up to date is to do micro-projects taking about two or three days. It can be a challenging algorithmic problem, solving a problem in a new language I want to get familiar with, using a new library to do some interesting task, or trying a new build tool on one of my old projects. It's fun, it ends before you get bored or tired, and you learn something new.


Practicing Programming (2005)


What's that?


The essay was posted in 2005 by Steve Yegge.


Was there something in the article that you feel is no longer relevant today? If not, who cares when it was posted?


Sorry, off topic... but can someone please explain this: "It's (American football) more like playing chess than playing soccer."


American football can be viewed as a chess match between the opposing coaches, with the players having the job of executing the coaches' strategies as perfectly as possible. Each play, on both offense and defense, is designed and planned in advance. Here's an example play chart[0].

Soccer, on the other hand, is relatively continuous. The closest thing to an American football play would be how the teams prepare for set plays such as corner kicks and free kicks, or their overall formation. It's not viewed as a series of "moves" like American football.

[0]: http://www.wheelbarrowsoftware.com/images/fpd/fs5.jpg





