I love the format! Short intro texts and self-explanatory code examples that I can copy/paste to fiddle with are, at least for me, a great way to actually understand things (compared to the "wall of text" books that I never manage to read properly).
> Scrum has outmarketed "Agile", and swallowed it up wholesale. Agile was about putting people before process, Scrum is all process, no-wonder the people feel downtrodden.
This.
I understand that some managers want graphs, points, velocity, etc. Those things make it easy to point at something and tell the team to improve. A good (imo) manager, on the other hand, would just ask the team how the sprint went and what problems need solving.
I usually come back to my “products vs projects” division.
Projects are discrete work efforts with a specific business objective and an end date. They’re effectively waterfall, and managing a whole program with multiple work streams is at least possible via scrum (if you have the right feedback loops into the product team and execs). I suspect this is the kind of work people really hate. This is the kind of work consulting companies generally do.
Product development should be much more free form. The product manager should own P&L for the product (or at least some proxy) because it aligns incentives. The dev team need not be 100% utilized all the time as their ability to quickly and effectively develop functionality is valued above cost efficiency (because again, product owns P&L so no business case has to be made). You can then use the “extra” time to deal with tech debt or take on special projects.
The companies that really suck to work for are the ones that confuse the two. They are two different mindsets — one requires a developer who thrives on tight deadlines and can turn around functioning code and toss it over the fence, the other requires a more deliberate approach where doing things right preserves the ability to do things quickly. Put a project developer on a product they’ll be lost without the structure; put a product developer on a project team and they’ll quit within 6 months because they hate being micromanaged.
Yes, I tell everyone who'll listen about this distinction too. I've arrived in a project-based financial institution, and the software mindset boggles my mind: get the project "done", toss it / hand it over to some poor bastard to struggle with. Repeat. I actually think handovers should be banned. They don't work.
I used to lament “they’re doing it wrong” but I’ve come around to the fact that sometimes, that op model really is best for the size and way the company operates. If they don’t have technology product management as a skill in-house, it’s a seriously hard capability to build. In those cases, being able to work with an experienced vendor is great — but usually requires either time-boxed T&M or a deliverable-based contract (which is effectively timeboxed to protect the consultant’s margin). So you need to do handovers as your MSP / internal IT may not have the expertise to execute every project, and those (expensive, specialized) resources need to go away after the project is done.
I understand what you are saying, but this is all internally built stuff. The slopey-shouldered mentality breeds a "don't give a fuck" attitude around code, testability, ease of new releases, etc. I'm used to eating your own dog food.
This crystallizes for me that I’ve been a product developer mostly living in a project world (even when nominally building products).
> the other requires a more deliberate approach where doing things right preserves the ability to do things quickly
Yes. There is a surprising nonlinearity of benefits to doing something properly.
It’s not about over-engineering to “best practices” or being a “cowboy coder” tuned only to your own preferences. These two are the Scylla and Charybdis of pursuing quality in software.
To me, Scrum is a sad story about how the road to hell is paved with good intentions.
Scrum does have a lot of processes and artifacts and things to measure. They all had a decent purpose: to give the product and project managers (Product Owner and Scrum Master in Scrum parlance) information that allowed them to do their jobs more effectively. Story points and velocity estimates were supposed to drive decisions about what to build and when by giving the product manager some sort of signal they could use to decide things like, "Let's skip this feature; it's going to cost more than it's worth," or, "Let's push this bit a few weeks forward in the schedule, because it's looking like it's bigger and more likely to go quagmire on us than I had been hoping."
There was supposed to be an understanding that generating this signal would reduce the speed at which the development team could churn out features. (Of course it would; that time spent playing planning poker had to be taken from somewhere.) But this was understood to be worthwhile in the long run, in a, "Work smarter, not harder," sort of way.
On paper, it seems like a pretty good idea. The critical flaw, I think, is that when you take a system that produces all those numbers, and even do something boneheaded like name one of them "velocity", and then drop that in the middle of a crowd of people who went to business school and have been heavily indoctrinated into Taylorist ways of thinking, well, it's like a will-o'-the-wisp to them. They'll see something that looks bright and shiny, and follow it straight into the deepest part of the swamp. And, since they're the managers, they'll be able to drag the whole team, or even the whole company, along with them when they do.
Years ago, I had an interesting "A Tale of Two Scrums" experience. I was on a product team that had been doing Scrum as their own internal thing for years. Quite successfully, too; everyone was happy, and it was possibly the highest-morale team I've ever been on. But then senior management decided they wanted the company to go Agile. So they brought in $FAMOUS_AGILE_CONSULTANT to deliver a week of workshops, which was mandatory for all the developers and skipped by all the managers, including product managers. And then we had this very top-down, Taylorist, how-much-blood-can-we-squeeze-from-this-stone brand of Scrum rammed down our throats from above. Half the team left the company within 2 years. I found out later that, not too long after I left, the company divested of the product I worked on. While I was there, it was the market leader and a cash cow.
Long story short, there's Scrum, and then there's Scrum. I can't tell you how much I loved doing Scrum, but I also can't tell you how much I hated doing Scrum.
I guess the real question is why the fuck are management courses still founded in Taylorism?
Taylorism was known to work only for short interventions even at the time it was created, and it's known to create all sorts of problems (one being that it absolutely demotivates everybody).
You know that high you get when your software finally works, with all the little cogs churning away and producing something much bigger than any one of them as a system effect? That, but with people. Feeling powerful feels good.
> I guess the real question is why the fuck are management courses still founded in Taylorism?
I'm not sure if it's just management courses; I've been under technical managers who think the same way... they like numbers and measurables... maybe to them it's like code coverage and performance metrics... but with people...
The desire for metrics is also a major driver of surveillance capitalism and why every web site and app is now larded up with telemetry.
Some of surveillance capitalism is indeed about making the user the product, but a decent chunk of it is about people having metrics to show to people higher up the chain.
People who sell advertising need metrics to sell it. Managers need metrics to show the value of feature X or Y or the application in general. Companies need metrics to woo and placate investors. Investors need metrics to show to their investors/LPs. It goes all the way up the chain.
I'd say there is a general lust for metrics on the part of everyone who answers to anyone. Markets are not DAGs, but undirected graphs of power relationships, so everyone has a boss somewhere and thus everyone wants metrics.
I'm sure there are a lot of Daniels out there. (The combination of "Nebraska" and "2003" tells me that there's at least one more, since Daniel Stenberg (a.k.a. Mr cURL) is living in Sweden.)
And I agree - we all owe him a lot. I'm sending him $5 every month via GitHub sponsorship. That by itself won't make him rich, but imagine if all users of cURL did the same... ;-)
This is awesome! I always enjoy reading about projects like this, even though I'm nowhere near a situation where I actually need to bootstrap an Amiga. :D
I guess it's the combination of computer history/nostalgia and the impressive level of knowledge that motivates me the most.
At least it would be safe to assume that they had access to several systems which received Orion updates. Given the attacker's dedication to the whole process, I'd say someone else's servers probably ran the tests for them.
> it's not like you give them much more information than they already have.
My thoughts exactly. I'm hoping that we'll see a move from Google Analytics towards more privacy friendly services as they improve.
Then again, I'm not sure marketing people will want to move away from Google Analytics. And I don't think marketing people read news from Cloudflare... so we'll have to educate them.
A move away from google to a new private entity offering seemingly friendly service? That reminds me of why I installed Google Analytics in the first place.
What's the point?
You could go with good log parsing programs like awstats.
Only on Ubuntu, where the first click must be a "0". On Windows it could, for example, be a "2", and then you have to guess which of the 8 adjacent cells contain the 2 mines...
That's not what's being discussed here; the above comments talk about whether it's guaranteed that the first click is not a mine, not that it's not adjacent to a mine. As far as I'm aware, just about every version guarantees that the first click is not a mine, but many (such as the Windows original) allow the first click to be adjacent to a mine.
There are some variants without luck. For example there was one which guaranteed that you never get a mine when you have to guess, but always get one when you guess in a situation where you can still open another field safely.
You generate the game map once it's known where the player clicked. During generation you take into account that the first clicked cell has to be mine-free. In general, when distributing mines, the algorithm already has to ensure that exactly N mines end up on the board, so it must avoid assigning a mine to the same cell twice. You treat the starting cell as just another "mine" whose position is hardcoded.
A simple algorithm would be to re-do the map generation until there is no mine collision, but with larger numbers of mines, the collision chance increases, so your runtime will go up superlinearly. A simple but very effective improvement is to keep the positions of the mines up until the collision and then only change the seed for subsequent mines. Then your algorithm should in fact scale much better. You can also use different methods, like assigning indices in a certain way so that invalid states are avoided completely. A bit more complicated but way "cleaner".
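A minimal Python sketch of this idea (the names and grid layout are illustrative, not from any particular Minesweeper implementation): the first-clicked cell is treated as pre-occupied, and a colliding draw re-rolls only that one mine rather than regenerating the whole board, which is the "keep the positions up until the collision" improvement described above.

```python
import random

def place_mines(width, height, n_mines, first_click, seed=None):
    """Place n_mines on a width x height board, never on first_click.

    The first-clicked cell is treated as a hardcoded "mine" so the
    normal collision check excludes it; each collision re-rolls only
    the current mine instead of restarting the whole generation.
    """
    rng = random.Random(seed)
    occupied = {first_click}          # pre-occupied: the safe cell
    mines = set()
    while len(mines) < n_mines:
        cell = (rng.randrange(width), rng.randrange(height))
        if cell in occupied:
            continue                  # collision: re-roll this mine only
        occupied.add(cell)
        mines.add(cell)
    return mines

mines = place_mines(9, 9, 10, first_click=(4, 4), seed=1)
```

Since a collision discards only one draw, the expected number of re-rolls stays small even on dense boards, unlike restarting the whole map.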
There is an even simpler approach: for a map with K total cells there will always be K-1 unclicked cells and a single clicked cell after the first click, so you can generate mines over K-1 cells and insert a "gap" corresponding to the first clicked cell. This works because every unclicked cell has an equal probability of becoming a mine.
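The gap trick above can be sketched like this (a hypothetical helper, assuming cells are flattened in reading order): sample mine indices from the K-1 unclicked slots, then shift any index at or past the clicked cell by one, so the clicked cell can never receive a mine and the distribution over the other cells stays uniform.

```python
import random

def place_mines_gap(width, height, n_mines, first_click, seed=None):
    """Sample mines uniformly among the K-1 unclicked cells.

    Slots are numbered 0..K-2 with a "gap" at the first-clicked cell:
    any sampled index at or past the gap is shifted up by one, mapping
    the K-1 slots bijectively onto the K-1 unclicked cells.
    """
    rng = random.Random(seed)
    k = width * height
    gap = first_click[1] * width + first_click[0]  # flatten (x, y)
    picks = rng.sample(range(k - 1), n_mines)      # K-1 slots, no repeats
    mines = set()
    for i in picks:
        if i >= gap:
            i += 1                                 # skip over the gap
        mines.add((i % width, i // width))
    return mines
```

No rejection loop at all: `random.sample` already guarantees distinct slots, and the shift is monotone, so distinctness survives the mapping.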
This is what actually happens in the Windows original (it doesn't look for a random empty square, it tries them in reading order starting from the top left).
Well you have all the cells in an array so that's easy enough. Pick a cell at random until you find an empty one, which won't take long. There's no regeneration, just moving one mine to an easy to find place.
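A sketch of that relocation idea in Python (hedged: this is my reconstruction of the described behavior, not the original code). The board is generated freely; if the first click lands on a mine, that mine is removed and re-placed at the first empty cell in reading order, as the Windows original reportedly does.

```python
import random

def generate_with_relocation(width, height, n_mines, first_click, seed=None):
    """Generate mines freely; if the first click hits a mine, move that
    one mine to the first empty cell in reading order (top-left onward).
    """
    rng = random.Random(seed)
    cells = [(x, y) for y in range(height) for x in range(width)]
    mines = set(rng.sample(cells, n_mines))
    if first_click in mines:
        mines.remove(first_click)
        for cell in cells:            # reading order: left-to-right, top-down
            if cell not in mines and cell != first_click:
                mines.add(cell)       # relocate the displaced mine here
                break
    return mines
```

One side effect worth noting: this biases the relocated mine toward the top-left corner, which is why experienced players could sometimes exploit it, whereas the random-pick variant described above avoids that bias.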