> From what I've seen in Amazon it's pretty consistent that they do not blame the messenger which is what they consider the person who messed up

Interesting that my experience has been the exact opposite.

Whenever I’ve participated in COE discussions (incident analysis), questions have been focused on highlighting who made the mistake or who didn’t take the right precautions.




I've bar raised a ton of them. You do end up figuring out what actions by what operator caused what issues or didn't work well, but that's to diagnose what controls/processes/tools/metrics were missing. I always removed the actual people's names as part of the bar raising, well before publishing, usually before any manager sees it. Instead I used Oncall 1, or Oncall for X team, Manager for X team. And that's mainly for the timeline.

As a sibling comment said, you were likely in a bad org or one that was using COEs punitively.


In the article's case, there's evidence of actual malice, though-- sabotaging only large jobs, over a month's time.


All I got from the linked article was

> TikTok owner, ByteDance, says it has sacked an intern for "maliciously interfering" with the training of one of its artificial intelligence (AI) models.

Are there other links with additional info?


A lot of the original social media sources have been pulled, but this is what was alleged on social media:

https://juejin.cn/post/7426926600422637594

https://github.com/JusticeFighterDance/JusticeFighter110

https://x.com/0xKyon/status/1847529300163252474


Thanks. Google Translate of the first link:

> He exploited the vulnerability of huggingface's load ckpt function to inject code, dynamically modifying other people's optimizer to randomly sleep for a short period of time, and modifying the direction of parameter shaving. He also added a condition that only tasks with more than 256 cards would trigger this condition.

Okay, yeah, that's malicious and totally a crime. "modifying the direction of parameter shaving" means he subtly corrupted his co-workers' work. That's wild!
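
For context on how a "load ckpt" path can carry code at all: pickle-based checkpoint formats will execute whatever a crafted object's __reduce__ returns at load time. Here's a minimal sketch of that deserialization issue (my own illustration with a made-up MaliciousCheckpoint class, not the actual exploit); an injected payload of this kind could then patch the optimizer to sleep and perturb gradients only on large jobs, as alleged.

    import pickle

    # Illustration only: pickle-based checkpoints can run arbitrary code on load.
    # The alleged attack reportedly used a hook like this to patch the optimizer
    # so it randomly slept and flipped gradient signs, gated on jobs with >256 cards.
    class MaliciousCheckpoint:
        def __reduce__(self):
            # The callable returned here is invoked during unpickling.
            return (print, ("arbitrary code ran while 'loading weights'",))

    payload = pickle.dumps(MaliciousCheckpoint())
    pickle.loads(payload)  # prints the message; no model weights involved

This class of problem is why safetensors and torch.load(weights_only=True) exist for untrusted checkpoints.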


Some of the sources say that he sat in the incident meetings during troubleshooting and adjusted his attacks to avoid detection, too.


Wonder what the underlying motive was? Seems like a super weird thing to do.


Could be just so his work looked better in comparison. Or something more sinister, like being paid to slow progress.


LMAO that's just diabolical. Wonder what motivated them.


"parameter shaving" (参数剃度) is, by the way, a typo for "parameter gradient" (参数梯度), 梯度 being the gradient and 剃度 being a tonsure.


What's bar raising in this context?


Usually I hear it in the context of a person outside the team added to an interview panel, to help ensure that the hiring team is adhering to company-wide hiring standards, not the team's own standards, where they may differ.

But in this case I'm guessing their incident analysis teams also get an unrelated person added to them, in order to have an outside perspective? Seems confusing to overload the term like that, if that's the case.


They are the same role, different specialties. Like saying SDE for ML, or for Distributed Systems, or for Clients.

You can usually guess from context, but what you say is "we need a bar raiser for this hiring loop", "get a bar raiser for this COE", or "get a bar raiser for the UI"; there are qualified bar raisers for each setting.



Bar raisers for COEs are those who review the document for detail, resolution, a detailed root cause, and a clear set of action items to prioritize which will eliminate or reduce the chance of recurrence.

It’s a person with experience.


As I recall, the COE tool's "automated reviewer" checks cover this. It should flag any content that looks like a person's name (or a customer name) before the author submits it.


I’ve run the equivalent process at my company and I absolutely want us to figure out who took the triggering actions, what data/signals they were looking at, what exactly they did, etc.

If you don’t know what happened and can’t ask more details about it, how can you possibly reduce the likelihood (or impact) of it in the future?

Finding out in detail who did it does not require you to punish that person, and having a track record of not punishing them helps you find out the details in future incidents.


Isn't that a necessary step in figuring out the issue and how to prevent it?


But when that person was identified, were they personally held responsible, bollocked, and reprimanded, or were they involved in preventing the issue from happening again?

"No blame, but no mercy" is one of these adages; while you shouldn't blame individuals for something that is an organization-wide problem, you also shouldn't hold back in preventing it from happening again.


Usually helping prevent the issue, plus training. Almost everyone I've ever seen cause an outage is so "oh shit oh shit oh shit" that a reprimand is worthless. I've spent more time a) talking them through what they could have done better and encouraging them to escalate quicker, and b) assuaging their fears that it was all their fault and they'll be blamed/fired: "I just want you to know we don't consider this your fault. It was not your fault. Many, many people made poor risk tradeoffs for us to get to the point where you making X trivial change caused the internet to go down."

In some cases, like interns, we probably just took their commit access away or blocked their direct push access. Nowadays interns can't touch critical systems and can't push code directly to prod packages.


That was never the idea of COE. You were probably in a bad org/team.


Or maybe you were in an unusually good team?

I always chuckle a little when the response to "I had a bad experience" is "I didn't, so you must be an outlier".


No. The majority of teams and individuals use it as intended: to understand and prevent future issues from process and tool defects. The complaints I've heard are usually correlated with other indicators of a "bad"/punitive team culture, a lower-level IC not understanding the process or intent, or shades of opinion like "it's a lot of work and I don't see the benefit. Ergo it's malicious or naive."

I worked at AWS for 13 years, was briefly in the reliability org that owns the COE (post-incident analysis) tooling, and spent a lot of time on "ops" for about 5 years.



