FitCodIa's comments | Hacker News

This approach is simplistic. People can usually direct their anger and frustration, to some extent. Most of the time, there's little reason to be angry at a coworker. Even if they mess up, it's usually not a huge deal and is relatively easy to mitigate or undo; if you need mediation, there's a manager "nearby" in the org chart to escalate to, and so on. In addition, you probably have some camaraderie from past projects and assignments, which provides a basis of resilience when they (or you) screw up. Staying relatively pleasant and positive is not a huge challenge.

Conversely, when upper management fucks up and refuses to take responsibility (for example: admit to making the wrong decision, or even reverse it), that's when cynicism runs rampant among the rank and file. And gee, what a surprise, VPs and CEOs try to avoid underlings who speak up about the screw-ups of the brass.


White lies are a necessary wrong; we just shouldn't turn them into a "modus operandi" at a company. Indeed, I cannot wrap my brain around how white lies managed to turn into a social protocol in the Anglosphere: dishonesty encoded in the most basic forms of verbal interaction. In comparison, when I say "good day" in my own language, it's truly not far-fetched that I actually wish you a good day when I'm greeting you.


> If everywhere smells like shit, it’s time to check under your own shoe.

LOL, are you kidding? The human condition is mostly shitty.


Most humans don't feel that way most of the time. Barring extreme cases of trauma, we tend to be moderately happy regardless of circumstances. If you find yourself unable to be consistently at least neutral in a first world country, that tends to be a mental health issue worth addressing.


> Barring extreme cases of trauma we tend to be moderately happy regardless of circumstances

This has been scientifically shown to be wrong. Sonja Lyubomirsky writes that people come with innate levels of happiness: apart from temporary swings (in either direction, in response to life events and activities), and apart from hugely intrusive, foundational trauma, a person's "level of happiness" tends to remain constant over their lifetime, and that level covers a huge spectrum when viewed across people.

You can train your mind and habits to increase your happiness, but still, in her famous book, she assigns 50% weight to the level you are born with, and says that however you fine-tune yourself, it only amounts to the other 50%. And since her book was published, more recent research has assigned an even higher weight to the innate level of happiness (i.e., higher than 50%). The sun does shine differently on different people, and it's not a mental health issue; it's just a given.

Think about it: taking a 60/40 split (per the more recent research), someone born with 100% happiness who never consciously thinks about their own happiness level will still be happier (1 * 0.6 + 0 * 0.4 = 0.6), roughly speaking, than a person born with 0% happiness who does everything in their power to improve (0 * 0.6 + 1 * 0.4 = 0.4).

> If you find yourself unable to be consistently at least /neutral/ in a first world country[,] that tends to be a mental health issue worth addressing.

I do agree about this; just know that the playing field is not level at all, and people who are less than moderately happy most of the time are not outliers; they are common.


> gallows humor

I think that may be a very cultural thing. I love gallows humor (I understand, enjoy, and cultivate it myself), but some cultures simply don't get it.


Yeah, probably true.


This entire subject is very culture-specific.

For example, if you try pulling US-style toxic positivity on a dev team from Poland or Russia, the result isn't going to be pretty all around.


Precisely. Fuck "yes people", and the commitment to lying to ourselves / to each other about broken things, as an institutional strategy. If we always dismiss the negatives, then responsibility and accountability have no meaning. Every organization needs a few people who act as the org's mirror and conscience.


There's no reward for it, but it is required.


> improved safety tooling (bolts) were used to increase efficiency rather than safety

"And so, amazingly, for the first 20 years of its use, the main effect of the most important lifesaving technology in the history of coal mining was to increase the efficiency of the mines while preserving existing probabilities of death and injury."

To me, this is the hardest-hitting sentence of the entire article.

Be sure to remember this whenever a new achievement in efficiency (power or otherwise) is announced, be it in computing, industry, or transportation. Such advances are rarely aimed at lessening the load on the environment; not at first, anyway. Instead, they are used to extract more profit while burdening the environment just the same; I think "more profit" is the incentive for such research and advances in the first place. I think the EU does it right by demanding progress via regulations. Whether those directives are issued after the technological advances are reported, or the directives are the motivation for the research, I cannot say; either way, advances can be steered toward public benefit only via regulation.


Thanks for the link to The Bitter Lesson.

I indeed find the lesson it describes unbearably bitter. Searching and learning, as used by the article, may discover patterns and results (thanks to effectively unlimited scaling of computation) that we humans are physically incapable of discovering; however, all those learnings will have no meaning, and they will not expose any causality. This is what I find unbearable: it implies that the real world must ultimately remain impervious to human cognizance, that our meaning- and causality-based human reasoning ultimately falls short of modeling the world, while general, computation-only methods (given ever-growing computing power) at least "converge" to a faithful, but meaningless, description of the world.

See examples like protein folding, medicine research, AI-assisted diagnosis, and self-driving cars. We're going to rely on their results, but we'll never know why those results work. We're not going to reject self-driving cars if they save lives per distance and/or time driven; however, we're going to sit in, and drive, those cars blind. To me that's an unbearable thought, even apart from the possibility that at some point the system might break down and cause a huge accident inexplicably. An inexplicable misbehavior of the system is of course catastrophic, but to me even the inexplicable proper behavior of the system is unsettling, precisely because it is inexplicable.

Edited to add: I think the phrase "how we think we think" in the essay is awesome. We don't even know how our own reasoning works, so trying to "machinize" those misconceptions is likely to fail.


Arguably, "the way our reasoning works" is probably a normal distribution but with a broad curve (and for some things, possibly a bimodal distribution), so trying to understand "why" is a fool's errand. It's more valuable to understand the input variables and then be able to calculate the likely output behaviors with error bars than to try to reduce the problem to a guaranteed if(this), then(that) equation. I don't particularly care why a person behaves a certain way in many cases, as long as 1) their behavior is generally within an expected range, and 2) doesn't harm themselves or others, and I don't see why I'd care any more about the behavior of an AI-driven system. As with most things, Safety first!


Previously on Hacker News (I had bookmarked it):

"Antibiotics damage the colonic mucus barrier in a microbiota-independent manner"

https://news.ycombinator.com/item?id=41516419


> Grub is like a turd that won't flush. It's been completely unnecessary for years, is massively overcomplicated

shim + grub suck, but bare-bones EFI sucks way more, generally speaking. Vendors of consumer-oriented EFI platforms ("client platforms") are batshit insane: they don't offer UEFI console redirection to/from the serial port even if the motherboard has one; they expose neither secure boot configuration nor boot options management to the user; and so on. A purely EFI-based boot loader such as systemd-boot or rEFInd remains the least annoying choice, IMO.
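For what it's worth, when the firmware setup UI hides boot management, you can usually do it from Linux instead; a rough sketch using standard efibootmgr and bootctl invocations (the disk, partition, and label below are placeholders):

  # list the current UEFI boot entries and boot order
  efibootmgr -v

  # install systemd-boot into the EFI system partition
  bootctl install

  # add a boot entry by hand (disk/partition/label are hypothetical)
  efibootmgr --create --disk /dev/nvme0n1 --part 1 \
      --label "Linux (systemd-boot)" \
      --loader '\EFI\systemd\systemd-bootx64.efi'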


> "This was the right way to do it forty years ago, so that's why the experience is worse" isn't a compelling reason for a user to suffer today.

On my system, "dnf repoquery --whatrequires cross-gcc-common" lists 26 gcc-*-linux-gnu packages (that is, kernel / firmware cross compilers for 26 architectures). The command "dnf repoquery --whatrequires cross-binutils-common" lists 31 binutils-*-linux-gnu packages.
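To make the modularity concrete, a sketch assuming Fedora's packaging (the aarch64 target is an arbitrary example): you install only the backend you actually need, and nothing else comes along for the ride.

  # pull in a single cross-compiler backend
  sudo dnf install gcc-aarch64-linux-gnu binutils-aarch64-linux-gnu

  # verify; these toolchains target kernel/firmware builds (no target libc)
  aarch64-linux-gnu-gcc --version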

The author writes, "LLVM and all cross compilers that follow it instead put all of the backends in one binary". Do those compilers support 25+ back-ends? And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user?

My impression is that the author does not understand the modularity of gcc cross compilers / packages because he's unaware of, or indifferent to, the scale that gcc aims at.


> And if they do, is it good design to install back-ends for (say) 23 such target architectures that you're never going to cross-compile for, in practice? Does that benefit the user?

  rustc --print target-list | wc -l
  287
I'm kinda surprised at how large that is, actually. But yeah, I don't mind having the capability to cross-compile to x86_64-wrs-vxworks even though I'm never going to use it.
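(For comparison: rustc's backends are built in, but each target's standard library is opt-in, so the per-target disk cost only shows up when you ask for it. A sketch of the usual workflow; the musl target is an arbitrary example:)

  # fetch the precompiled standard library for one extra target
  rustup target add aarch64-unknown-linux-musl

  # cross-compile a project for it
  cargo build --target aarch64-unknown-linux-musl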

I am not an expert on all of these details in clang specifically, but with rustc, we take advantage of LLVM's target specifications, so that you can even configure a backend that the compiler doesn't yet know about by simply giving it a JSON file with a description. https://doc.rust-lang.org/nightly/nightly-rustc/rustc_target...

While these built-in ones aren't defined as JSON, you can ask the compiler to print one for you:

  rustc +nightly -Z unstable-options --target=x86_64-unknown-linux-gnu --print target-spec-json
It's lengthy, so instead of pasting it here, I've put it in a gist: https://gist.github.com/steveklabnik/a25cdefda1aef25d7b40df3...
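A sketch of how a custom spec gets used, building on the nightly-only flags above (my-target.json is a hypothetical file name):

  # dump a built-in spec as a starting point
  rustc +nightly -Z unstable-options --target=x86_64-unknown-linux-gnu \
      --print target-spec-json > my-target.json

  # after editing the JSON, build core for the custom target
  cargo +nightly build -Z build-std=core --target ./my-target.json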

Anyway, it is true that gcc supports more targets than llvm, at least in theory. https://blog.yossarian.net/2021/02/28/Weird-architectures-we...

