
I've led teams and then worked as an engineer, so I feel I understand both sides, and that dual experience informs my take on how things go when OKRs show up.

As a leader, I get that some form of goal orientation must exist, and that niche engineering-speak must be translated into outcomes.

But for every 1 leader who can do this well, there are 99 who can't. And of the few who can do it faithfully, most don't care much beyond building fiefdoms, or just plain don't know how to do it in practice: the ex-consultant PM, the manager who isn't a leader, the PM who holds zero authority over a team of skilled ICs, and so on.

Also, as an engineer, I have OKRs, I have ops, and I have the random stuff that shows up and blows a hole in my week-to-week plans. A good PM can maybe reduce this, but again, see above. And what am I measured on for the paycheck that keeps hearth and home together? Answer: OKRs.

So, in practice, when OKRs show up, I believe earnest big-picture effort (which is what people love going to startups for, I think) goes out the window, for all the reasons above.

You'll only get people hitting OKRs, and "hitting OKRs" has so much absurd flex in it (again, see above) that you'd best hope you have the 1-in-100 PM who knows how to do that well, or else people start doing the silly dances that engineers leave companies over.


I've had the same experience, both as an IC and as a manager. It's a nice framework, just not for human organizations. (KPI for the next framework: fewer "no true Scotsman" replies whenever a post details why the framework sucks in someone's organization.)


Good comment. When we parse, data-science, and algo-ify a barely structured brawl, sports are donezo and we've finally gotten too far away from being human.


The last fairly technical career to get surprisingly and fully automated, in the way this post displays concern about: trading.

I spent a lot of time with traders in the early '00s and then the '10s, when the automation was going full tilt.

Common feedback I heard from these highly paid, highly technical, highly professional traders, in a niche industry running the world in its way, was:

- How complex the job was
- How high a quality bar there was to do it
- How current algos never could do it, and neither could future ones
- How there'd always be edge for humans

Today, the exchange floors are closed, SWEs run trading firms, and traders, if they're still around, steer algos or work in specific markets such as bonds; now bonds are getting automated too. LLMs can pass CFA III, the great non-MBA job moat. The trader job isn't gone, but it has capital-C Changed, and it happened quickly.

And lastly - LLMs don't have to be "great," they just have to be "good enough."

See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do.

Edit - Advice: the job will change, and it might change such that you steer LLMs, so become the best at LLM steering. Trading still goes on, and the huge, crushing firms in the space all automated early, at various points in the settlement chain.


> LLMs can pass CFA III.

Everyone cites these kinds of examples, an LLM beating some test or other, as some kind of validation. It isn't.

To me that just says the tests are poor, not that the LLMs are good. Designing and curating a good test is hard and expensive.

Certifying and examination bodies often use knowledge as a proxy for understanding, reasoning, or any critical-thinking skill. They just need to filter enough people out; there is no competitive pressure to improve quality at all. Knowledge tests do that just as well, and they're cheaper.

Standardization is also hard to do correctly; Common Core is a classic example of how it changes incentives for both teachers and students. Goodhart's law also applies.

To me it is more often than not a function of poor test measurement practices rather than any great skill shown by the LLM.

Passing the CFA or the bar exam, while daunting for humans by design, does not teach you anything about practicing law or accounting. Managing the books of a real company is nothing like what the textbooks and exams teach you.

---

The best accountants, lawyers, etc. are not making partner because of their knowledge of law and tax. They make money the same way as everyone else: networking and building customer relationships. As long as the certification bodies don't flood the market, they will do well, and keeping the market from flooding is exactly what the test does.


> To me that just says the tests are poor, not that the LLMs are good.

I mean, the same is true of leetcode, but I know plenty of mediocre engineers still making ~$500k because they learned how to grind leetcode.

You can argue that the world is unjust till you're blue in the face, but it won't make it a just world.


There are influencers making many multiples of that for doing far less. Monetary returns have rarely, if ever, reflected skill, social value, or talent in capitalist economies; this has always been the case. I'm not sure how that's relevant.

I was merely commenting, as an observer, on why these tests exist and on the dynamics of the measurement industry; we shouldn't conflate a test's exclusivity or difficulty with its quality or objective.


Sure, but if companies find that LLM performance on tests is less correlated with actual job performance than human test performance is, then the test might not be a useful metric to inform automation decisions.


Having also worked on desks in the '00s and early '10s, I think a big difference here is that what trading meant really changed; much of what traders did went away with innovations in speed. Speed and algos became the way to trade, and humans can do neither. While SWE became significantly more important on trading desks, you still have researchers, quants, portfolio analysts, etc. who spend their working days developing new algos, finding new market opportunities, minimizing TCOs, and so on.

That being said, there's also massive low-hanging fruit in dev work that we'll automate away, and I feel that's coming sooner rather than later, yes, even though we've been saying that for decades. However, I bet the incumbents (senior SWEs) have a bit longer a runway, and potentially their economic rent increases as they become more efficient and companies need not hire as many humans as before. It will be an interesting go these next few decades.


> That being said, there's also massive low-hanging fruit in dev work that we'll automate away

And this has been solved for years already with existing tooling: debuggers, IntelliSense, linters, snippets and other code-generation tools, build systems, framework-specific tooling... There are a lot of tools for writing and maintaining code. The only thing that was ever left is understanding the system that solves the problem, and knowing the tools to build it. And I don't believe we can automate that away. Using LLMs is like riding a drugged donkey instead of a motorbike: it only works for very short distances, or for the thrill.

In any long-lived project, most modifications are only a few lines of code. The most valuable thing is knowing where and how to edit, not the ability to write 400 lines of code in 5 seconds.


"And now, at the end of 2024, I’m finally seeing incredible results in the field, things that looked like sci-fi a few years ago are now possible: Claude AI is my reasoning / editor / coding partner lately. I’m able to accomplish a lot more than I was able to do in the past. I often do more work because of AI, but I do better work."

https://antirez.com/news/144

If the author of Redis finds novel utility here, then it's likely useful beyond boilerplatey React stuff.

I've shared a similar sentiment since 3.5 Sonnet came out. This goes far beyond dev-tooling ergonomics. It's not simply a fancy autocomplete anymore.


"AI didn’t replace me, AI accelerated me or improved me with feedback about my work"

This really sums up how I feel about AI at the moment. It's like having a partner with broad knowledge of almost anything, whom you can ask any stupid question. If you don't want to do a small boring task, you can hand it off to them. It lets you focus on the important stuff, not "what's the option in this library called that does this thing I can describe but don't know the exact name for?"

If you aren't taking advantage of that, then yes, you are probably going to be replaced. It's like when version control became popular in the '00s, when some people and companies still held out in their old ways of doing things, copying and pasting folders or whatever other nasty workflows they had, because $reasons... where the only real reason was that they didn't want to adapt to the new paradigm.


This actually surfaces a much more likely scenario: it's not our jobs that get automated, but a designed-from-scratch automated sw/eng job that replaces ours because it's faster and better. It's quite possible all our thinking is required only because we can write just one prototype at a time. If you could generate 10 attempts a day until a stakeholder says "good enough", you wouldn't need much in the way of requirements, testing, thinking, design, etc.


This is interesting, and yes, of course true.

But, like so much of this thread: "we could do this already without AI, if we wanted."

Want to try 5/10 different approaches a day? Fine - get your best stakeholders and your best devs and lock them in a room on the top floor and throw in pizza every so often.

Projects take a long time because we allow them to. (NB: this is not the same as setting tight deadlines; this is having a preponderance of force on our side.)


We need that reduction in demand for workers, though. Backfilling is not going to be a thing for a population in decline.


I don't see it.

Trading is about doing very specific math in a very specific scenario with known expectations.

Software engineering is anything but like that.


Yes, software engineering is different in many areas, but today a lot of it is CRUD and plumbing. While SW engineering will not die, it will certainly transform a lot: quite possibly there will be fewer generalists than today and more specialized branches will pop up, or maybe being a generalist will require familiarity with many new areas. Likely the code we write today will go the same way writing assembly code went: sure, it will not completely disappear, but...


> software engineering is different in many areas, but today a lot of it is CRUD and plumbing

Which you can knock out in a few days with frameworks and code reuse. The rest of the time is mostly spent understanding the domain, writing custom components, and fixing bugs.


Actually, the profession is already taking a big hit. Expecting at least partial replacement by LLMs is _already_ one of the reasons for the reduction in new jobs. Actually _replacing_ developers is one of the killer apps investors see in AI.

I'd be impressed if the profession survived unscathed. It will mutate into some form or another, and it will probably shrink. Wages will go down toward the level at which OpenAI sells its monthly Pro plan, and maybe third-world devs will stay competitive enough. But that depends a lot on whether AI companies can get abundant, cheap energy; IIUC that's their choke point at the moment.

If it happens, it will be quite ironic and karmic, TBH. Ironic because it is our profession's very own R&D that will kill it. Karmic because it's exactly what our profession did to numerous other fields and industries (remember Polaroid, killed by Instagram, etc.).

OTOH, there's nothing riskier than predicting the future. Who knows, maybe AI will fall on its own face due to insane energy economics, internet rot (of its own making), and whatnot.


Good comment! I see all these arguments in this post, and then I think of the most talented journeyman engineer I know, who just walked away from Google because he knew both that AI was coming for his and his coworkers' jobs, and that he wasn't good enough to be past the line where that wouldn't matter. Everyone might be the next 10x engineer and be OK... but a lot aren't.


The most insightful thing here would have been to learn how those traders survived, adapted, or moved on.

It's possible everyone just stops hiring new folks and lets the incumbents automate it. Or it's possible they all washed cars for the rest of their careers.


I knew a bunch of them. Most of them moved on: retired or started over in new careers. It hit them hard. Maybe not so hard, because trading was a lucrative career; most of us don't have that kind of dough to fall back on.


+1 this. Saw it firsthand; it was ugly. Post-9/11 stress and '08 cleared out a lot of others. Interestingly, I've seen some of them surface in crypto.


To answer what I saw, some blend of this:

- Post-9/11 stress and '08 were a big jolt and pushed a lot of folks out.

- Managed their money well (or not) for when the job slowed down and for when '08 hit

- “traders” became “salespeople” or otherwise managing relationships

- Saw the trend and leaned into it hard; you now have Citadel, Virtu, JS, and so on.

- Saw the trend and specialized, or were already in assets that are hard to automate.

- Senior enough to either steer the algo farms + jr traders, or become an algo steerer themselves

- Not senior enough, not rich enough, not flexible enough, or not interested anymore; they now drive for Uber, run mobile dog-washing outfits, or joined law enforcement (three examples I know of).


I like this comment; it is exceptionally insightful.

An interesting question is: "How is programming like trading securities?"

I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

What trading software has a hard time doing is coming up with new securities. What LLMs absolutely cannot do (yet?) is come up with novel mechanisms. To illustrate, consider an LLM that has been trained on every kind of car there is. If you ask it to design a plane, it will fail. Train it on all cars and planes and ask it to design a boat: same problem. Train it on cars, planes, and boats and ask it to design a rocket: same problem.

The sad truth is that a lot of programming is 'done', which is to say we have created lots of compilers, lots of editors, lots of tools, lots of word processors, lots of operating systems. Training an LLM on those things puts all of the mechanisms used in all of them into the model, and spitting out a variant is entirely within the capabilities of the LLM.

Thus the role of humans will continue to be to do the things that have not been done yet. No LLM can design a quantum computer, nor can it design a compiler that runs on a quantum computer. Those things haven't been "done" and they are not in the model. The other role of humans will continue to be 'taste.'

Taste, defined as an aesthetic: something you know when you see it. It is why, for many, AI "art" stands out as having been created by AI; it has a synthetic aesthetic. And as one gets older it often becomes apparent that the tools are not what determines the quality of the output; it is the operator.

I watched Dan Silva do some amazing doodles with Deluxe Paint on the Amiga and I thought, "That's what I want to do!", and ran out and bought a copy and started doodling. My doodles looked like crap :-). The understanding that I would have to use the tool, find its strengths and weaknesses, and then express myself through it was clearly a lot more time-consuming than "get the tool and go."

LLMs let people generate marginal code quickly. For so many jobs that is good enough. Generating really good code under constraints that the LLM can't model is something that will remain the domain of humans until GAI is achieved[2]. So careers in things like real-time and embedded systems will probably still have a lot of humans involved, and systems where extracting every single compute cycle out of the engine is a priority will likely be dominated by humans too.

[1] Very early on there were papers on 'genetic' programming. It's a good thing to read them because they arrive at a singularly important point: "How do you define 'which is better'?" Given a solid, quantitative, testable metric for 'goodness', genetic algorithms outperform nearly everything. When the ability to specify 'goodness' is not there, genetic algorithms cannot outperform humans. What's more, they cannot escape 'quality moats', where the solutions on the far side of the moat are better than the solutions being explored, but the algorithm cannot get far enough into the 'bad' solutions to start climbing the hill on the other side toward the 'better' solutions.

[2] GAI being "Generalized Artificial Intelligence", which will have to have some way of modelling and integrating conceptual systems. Lots of things get better then (like self-driving finally works), maybe even novel things. Until we get that, though, LLMs won't play here.
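To make footnote [1] concrete, here's a minimal toy sketch of a quality moat. Everything in it (the bitstring landscape, names, and parameters) is invented for illustration and not taken from any of those papers: the 'goodness' metric is perfectly crisp, yet selection still can't reach the better peak because every path to it runs through worse solutions.

    # Toy GA: fitness rewards all-zeros locally and all-ones globally,
    # with a zero-fitness "moat" of strings in between.
    import random

    N = 16  # bits per individual

    def fitness(bits):
        ones = sum(bits)
        if ones == N:
            return 100          # global optimum, far side of the moat
        if ones > N // 2:
            return 0            # the moat: strictly worse, never selected
        return N - ones         # local peak: all-zeros looks best

    def evolve(pop_size=50, generations=200):
        pop = [[random.randint(0, 1) for _ in range(N)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]        # keep the fitter half
            children = []
            for p in parents:
                child = p[:]
                child[random.randrange(N)] ^= 1  # one-bit mutation
                children.append(child)
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve()
    print(best, fitness(best))  # almost always all-zeros, never all-ones

Single-bit mutations can't jump the moat, so the population converges on the nearby peak even though a strictly better solution exists; that's the "can't algorithmically get far enough into the 'bad' solutions" problem in miniature.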


> LLMs let people generate marginal code quickly.

What's weird to me is why people think this is some kind of great benefit. Not once have I ever worked on a project where the big problem was that everyone was already maxed out coding 8 hours a day.

Figuring out what to actually code and how to do it the right way seemed to always be the real time sink.


>I believe an argument can be made that the bulk of what goes for "programming" today is simply hooking up existing pieces in ways that achieve a specific goal. When the goal can be adequately specified[1] the task of hooking up the pieces to achieve that goal is fairly mechanical. Just like the business of tracking trades in markets and extracting directional flow and then anticipating the flow by enough to make a profit is something trading algorithms can do.

Right, but when Python came into popularity, it's not like we reduced the number of engineers 10-fold, even though it used to take a team 10x as long to write similar functionality in C++.


Software demand skyrocketed because of the WWW, which came out in 1991 just before Python (although Perl, slightly more mature, saw more use in the early days).


Okay, but one thing you kinda miss is that trading (e.g. investing) is still one of the largest ways for people to make money. Even passively investing in ETFs is extremely lucrative.

If LLMs become so good that everyone can just let an LLM go into the world and make them money, the way we do with our investments, won't that be good?


The financial markets are mostly a zero-sum game. This approach would not work.


You are missing that the trading I'm talking about != "e.g. investing."

And, certainly, prob a good thing for some, bad thing for the money conveyor belt of the last 20 yrs of tech careers.


I am not missing that. I understand the difference. I'm saying the economic engine behind trading is still good (investing). So while people don't do the trading as much (machines do it), the economic rewards are still there and we can still capture them. The same may be true in a potential future where software becomes automated.


The key difference between trading and coding is that code often underpins non-critical operations - think of all the CRUD apps in small businesses - and there is no money involved, at least not directly.


"See if you can match the above confidence from pre-automation traders with the comments displayed in this thread. You should plan for it aggressively, I certainly do."

Sounds like it was written by someone trying to keep a grasp on the fading reality of AI.


> LLMs don't have to be "great," they just have to be "good enough."

NO NO NO NO NO NO NO!!!! It may be that some random script you run on your PC can be "good enough", but the software my business sells can't be produced by "good enough" LLMs. I'm tired of my junior dev turning in garbage code that the latest and greatest "good enough" LLM created. I'm about to tell him he can't use AI tools anymore. I'm so thankful I actually learned how to code in the pre-LLM days, because I know more than just how to copy and paste.


You're fighting the tide with a broom.


No, I'm fighting for the software I sell my customers to be actually reliable.


I would not hire someone who eschews LLMs as a tool. That does not mean I would accept someone mindlessly shoving its code through review, as it is a tool, not a wholesale solution.


That ship seems to have sailed at the same time boxed software went extinct.

How many companies still have dedicated QA orgs with skilled engineers? How many SaaS solutions have flat out broken features? Why is SRE now a critical function? How often do mobile apps ship updates? How many games ship with a day zero patch?

The industries that still have reliable software have it because there are regulatory or profit advantages to reliability -- and that's not true for the majority of software.


Not sure I agree. Where it matters, people will prefer and buy products that "just work" over something unreliable.

People tolerate game crashes because you (generally) can't get the same experience by switching.

People wouldn't tolerate browsers, for example, crashing if they could switch to an alternative. The same would apply to a lot of software, with varying limits on how much shit will be tolerated before a switch is made.


So the majority of software we have now is unreliable piles of sh*t. Seems to check out, given how often I need to restart my browser to keep memory usage under control.


I don't think the problem is the LLM.

I think of LLMs like clay, or paint, or any other medium where you need people who know what they're doing to drive them.

Also, I might humbly suggest you invest some time in the junior dev and ask yourself why they keep on producing "garbage code". They're junior, they aren't likely to know the difference between good and bad. Teach them. (Maybe you already are, I'm just taking a wild guess)


You might care about that but do you think your sales team does?


I think you have the right mindset about the ideal approach, but how-to guides on doing this *such that* maintaining a setup like that takes only *a few days every few months* are few and far between... as in, there aren't any.

Sure, same page: digital sovereignty isn't free; if you want it, you have to work for it.

But, speaking as a technical security user myself who has worked with Ghidra, I have zero idea how to take the approach you call for in a way that cleanly strips out the bad stuff without buying me what would likely be hours upon hours of troubleshooting dependencies for core functionality I inadvertently broke in the surgery... such that I'm back to either not carrying a smart device or being OK with a flip phone triangulating me.

One approach I have thought through with effective (I think? still considering this) privacy outcomes is leveraging LLCs and related device plans as a "cloaking" mechanism. If my current phone always stays at my house, I travel locally with a flip phone, and I travel farther with a network of smartphones held under LLCs, that could be enough to throw off the tracking effectively while only (maybe?) exposing data that's already exposed in public records.


Was on a plane a few days ago watching someone do this out the window 10 yards away.

- Seemed like the baggage handler was required to do a very fast cycle of <scan, toss, align, repeat next bag>. Automation seems helpful, and certainly it’s hard labor.

- This was also a young woman in, presumably, a safe union job, working in a very pricey city (one of the mountain-west towns that exploded). Adios union job, hello robots.

Tricky ethics! Outside of picking stuff up and putting stuff down, there aren't too many union jobs left that are safe from automation. Saving workers from back pain also means saving them from a job.


We're really focused on the health and safety aspects of this job: in a repetitive-stress sense, these jobs are much more dangerous than many people imagine, and people end up with lifelong injuries.

Generally, regulators seem to be moving in this direction as well. The EU has introduced new regulations on the total amount of weight someone can move in a shift, and the Dutch government has mandated that baggage handling move away from manual processes like this in the near future.


Despite the focus difference, do you think it's unlikely that automating baggage handlers will replace their jobs?

The regulator focus seems like it'd reduce the max allowed weight of a checked bag, not automate the baggage handler handling the checked bag. So I don't see the similarity between the regulatory push and your product? Edit - to clarify: beyond what Dutch regulators say about Dutch markets, which are a very small subset of "regulator focus" internationally.


Good post. FWIW I feel much the same as what you wrote, and this jumped out at me, as it has been my main problem point: "I relaxed the rules, observed myself increasing the amount of content I consume over time, and now I'm back here."

I haven't found a solution yet, but finding one has been on my mind a lot lately.

I feel similarly to the OP: it seems this cycle actively gets in the way of my job sometimes. I can go cold turkey / flip-phone life, and it works, as in I don't miss the content and I adjust. But I work in tech, where it's hard to not engage with the platforms and still do well; engaging with the platforms is like being a drug addict at a free-drug convention with a few helpful booths about work. Rinse, repeat.

At times it's been bad enough that I put some thought into whether I had undiagnosed ADHD, because sometimes it veered a bit too close for comfort: I really needed to focus on adult stuff, and I just couldn't.

Generally, though, I speculate it's an information-diet thing first and foremost, and I'd rather sort that out first and see if it works.

Tricky topic! Interesting to see discussion on it; this is the first I've run into sentiments similar to my own experiences.


I get your point, but the literature I've read on this leans toward:

- ubiquitous surveillance is here (your point broadly)

- the data engineering to work that data isn't quite there, or isn't full-spectrum in the manner you argue (this is what blocks your theory as of now)

However, what is clunky tech today can be scaled, effective tech tomorrow, so maybe the future you argue for is possible, if not likely.

https://www.mitre[.]org/news-insights/publication/decipherin...


If you're an American and an engineer, how did you pick up work internationally when needed? I've considered this (I'm a security engineer), but I haven't made much progress figuring out how to pull it off without an established consultancy.


> the worst thing in my experience is relatives who think they know what i am doing wrong as a parent

Glad someone said it! I’m disinclined to take any parenting advice from a peer group that’s been raising kids on tablets for the last 10 years. But ya, moving around is the concern hah. God forbid they see life outside the suburbs.


Because, if you can't imagine it: I can't carry whatever cash I'd like for a legal use case without the risk of seize-first, ask-questions-later. A nonsense implication, if that's what you meant.


Did you read the first half of my comment? Could you quote me where I supported government seizure of assets without due process? Please. I'll wait.

