
Do "height on demand". Constant abstract and compress the knowledge so you can ramp up in any area as needed.

Really, this is why programming languages are my "home base" of expertise. It's the study of formal ideas and their communication.



Interesting to see someone else with the same base and the same thoughts on it, and I'd highly recommend it! At a lower level, I've found a PL/compilers background lets me switch between and pick up languages quickly, and it also helps in finding quirks, since you can often guesstimate where the bodies are buried from the language's design choices. It's also very helpful for architect-type roles where you're picking which technologies to build on and consulting experts when needed.

I think the idea of "you have to spend years and years to know the internals of X" is a bit flawed: you can get quite a good feel in ~6 months that's not equivalent, but not terribly far off. If you do "height on demand" for long enough, you end up with a good number of these. As someone called it, a "comb" engineer, if you will.


I feel like the 80/20 rule applies here. For most things you can probably learn 80% of the knowledge in 20% of the time it would take to learn 100%.


This is also the strategy I tend to take. Strong, broad fundamentals so I can quickly pick up specifics on the job.


IMHO, I just don't buy this. Every piece of technology has its own set of quirks, pitfalls, and shortcuts. You can jump around from Rails to Spring to iOS to whatever and be a decent individual contributor. But to be a real Senior/Lead engineer who can be responsible for a project, you've gotta have the in-depth knowledge.


- If you become senior manager of an existing project, learn from the rest of the team!

- If you end up as senior manager of a new project, you probably shouldn't also be new at the company, period; but at the very least, you start with something that somebody on the team has expertise in.

- New tech, new team, new org all at once: that's a terrible idea. The project shouldn't happen, or you shouldn't be the sole senior manager on it.

All that said, if we have some team continuity across the industry, that should allow enough onboarding time for everyone to be a non-pigeonholed generalist. Just because there are bounds on the rate of change doesn't mean we should abandon change and pigeonhole people in myopic specialist roles.


Sure, but I think people greatly underrate how quickly you hit diminishing returns with any given technology. I've seen many engineers stick with a single technology so long that they were just memorizing standard libraries instead of actually learning anything.


This still restricts you from getting many of the jobs that require narrow specialization. Specializing on the fly only works to a certain extent and only if you get the opportunity to do so in the first place.


In my experience, "narrow" professionals tend to heavily overestimate what they bring to the table, which makes them not very hard to outcompete.


Switching technology is easy; switching domain is hard. You can't specialize in a technology, that's just nonsense: you specialize in a domain.

Or several domains, if they are simple enough. Like, the typical needs of a web product are simple enough that you can master how to do the database, back end, and front end. You won't be able to do the back end or database at Google scale, but you can master doing them at typical scale. I'd actually argue that Google scale is a different domain entirely, with a different set of skills: you need a different specialist to handle that problem, and they won't be able to efficiently solve your small-scale needs either.


I think one person can understand the back end at Google scale. You shouldn't LARP solving problems you don't actually have (it's stupid to pay those costs, especially when the Google state of the art isn't really that good), but you can still understand it.

- Modern hardware realities (nearby network faster than disk, they say; memory of all sorts slow relative to the CPU). My mental model is the good ol' hydraulic analogy, but with molasses: wires are slow, components are hardly slower, flash is slower still. Maybe also think of the machines as being close together, relative to the length of the wires in the CPU and the properties of molasses. (Rough ballpark numbers are sketched after this list.)

- DB architecture and similar. Well, if everything is molasses, synchronization is clearly hard. Rather than thinking about nifty hacks in isolation, think about what the business logic's actual expectations of synchronization are. Remember, financial settlement is the original example solution for this sort of thing, long predating computers. (A small sketch of that shape follows below.)
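
To make the molasses model a bit more concrete, here is a minimal sketch in Python using the classic ballpark latency figures (treat the exact numbers as my assumption; they vary by hardware generation). The point is the huge gap between a CPU cycle and everything outside the chip, and that a nearby network round trip really is cheaper than a spinning-disk seek:

    # Rough, order-of-magnitude latencies, printed in CPU cycles to make
    # the "everything is molasses" model concrete. Numbers are ballpark only.

    NS = 1
    US = 1_000 * NS
    MS = 1_000_000 * NS

    approx_latency_ns = {
        "L1 cache reference":               1 * NS,
        "main memory reference":          100 * NS,
        "SSD random read":                100 * US,
        "round trip inside a datacenter": 500 * US,
        "spinning disk seek":              10 * MS,
    }

    cpu_cycle_ns = 0.3  # assume a ~3 GHz clock

    for name, ns in sorted(approx_latency_ns.items(), key=lambda kv: kv[1]):
        cycles = ns / cpu_cycle_ns
        print(f"{name:32s} ~{ns:>12,} ns  (~{cycles:>14,.0f} cycles)")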
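
And a minimal sketch of the settlement shape for the second point, with hypothetical names and a made-up two-ledger setup: each side appends to its own log at full speed, and a periodic reconciliation pass surfaces the disagreements, instead of coordinating synchronously on every write.

    # Hypothetical two-ledger example: append locally now, reconcile ("settle") later.
    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Entry:
        txn_id: str
        amount_cents: int

    def reconcile(ledger_a: list[Entry], ledger_b: list[Entry]) -> list[str]:
        """Return the txn_ids the two ledgers disagree on; everything else is settled."""
        a = {e.txn_id: e.amount_cents for e in ledger_a}
        b = {e.txn_id: e.amount_cents for e in ledger_b}
        return sorted(t for t in a.keys() | b.keys() if a.get(t) != b.get(t))

    # Each side records transactions independently during the day...
    ledger_a = [Entry("t1", 100), Entry("t2", -50)]
    ledger_b = [Entry("t1", 100), Entry("t3", 25)]

    # ...and a periodic settlement pass flags what still needs resolution.
    print(reconcile(ledger_a, ledger_b))  # ['t2', 't3']

That is roughly the expectation most business logic actually has: consistent at a well-defined settlement point, not in lock-step on every operation.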


> Constantly abstract and compress

Can you expand on this? I feel like I do this, but I'm not quite sure what you mean.


Well, taken to the limit, something like https://ncatlab.org/nlab . (I do endorse it, but I am no category theory whiz by any stretch of the imagination.)

Being extremely skeptical of all things computing-related by default, and generally being dour about mainstream trends, also helps. Skepticism is what allows separating the accidental from the essential complexity, as some term it. Only learn the essential parts.


I read some of the introductory pages such as their perspective, but unfortunately, I think this is mostly beyond my level of comprehension.

I don't really get category theory in layman's terms.

However, it does seem like this is really powerful for whoever understands and adopts it.



