I have always felt that foo/bar demo/example snippets held me back in comprehending code, because there was no reasonable logic to them. They just mean nothing to me, other than the FUBAR reference others have mentioned.
I personally, and professionally, think it’s a horrible convention.
It's supposed to mean nothing; that's the point. You use "foo" and "bar" (and "baz" and "qux", etc) when the names of the things in your example do not matter. It's the same way you'd see examples featuring "x", "y", and "z" when learning algebra: maybe your textbook also has story problems, but most of the examples will simply show an equation in terms of x, y, and maybe z, without pretending that those abstractions refer to anything concrete.
I think meaningless abstractions are fine up to a point, but in a less trivial example they can make it harder to keep track of the relationships between different things. For example, you might see something like
trait Foo {
    fn frobulate();
}

struct Bar;
struct Baz;

impl Foo for Bar {
    fn frobulate() {
        // TODO
    }
}

fn qux<T: Foo>(...)
It ends up being too abstract. A more concrete example would help clarify the relationships between the different elements.
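For instance, here's a minimal sketch with concrete names (Shape, Circle, Square are all made up here, just to illustrate the same structure as above):

trait Shape {
    fn area(&self) -> f64;
}

struct Circle { radius: f64 }
struct Square { side: f64 }

impl Shape for Circle {
    fn area(&self) -> f64 {
        // area of a circle: pi * r^2
        std::f64::consts::PI * self.radius * self.radius
    }
}

impl Shape for Square {
    fn area(&self) -> f64 {
        self.side * self.side
    }
}

// A function generic over anything implementing the trait,
// playing the role of qux above.
fn print_area<T: Shape>(shape: T) {
    println!("area = {}", shape.area());
}

fn main() {
    print_area(Circle { radius: 1.0 });
    print_area(Square { side: 2.0 });
}

Now you can see at a glance what Bar/Baz (two implementors of the trait) and qux (a function generic over the trait) were standing in for.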
I understand your perspective, and have felt similarly at times. OTOH I appreciate having some culture and some fun things in our field and teaching materials that would otherwise be pushed out by being 100% reasonable and logical all the time.
My mind would be the source. This is actually one of my least confident comments; I almost didn't post it. I'm considering putting a longer text together later.
Regardless, I've observed, to give one example, a young girl I know wanting to buy horse toys to make YouTube horse videos, with play and interest in horses being a side motive. The adults are proud of her basically being an instrument of marketing. On the one hand, making your own videos at a young age is of course something to be proud of, and encouragement for children sounds healthy.
Yet this behaviour also has no principles outside of "status games." Even knowing that girls of a young age are particularly prone to socialization and conformity, one can compare this to the "cool new toy everyone has" trope and notice that this dark evolution is closer to "being a popular advertisement like everyone else," which the South Park movie Cred partly portrays aptly.
In the same vein, people "doing it for the 'Gram": you go on a fancy vacation or to a fancy restaurant not to enjoy the experience, but to enjoy the validation from the thumbs-ups, hearts, and comments on your friends' social media...
I wonder if that's a valid craving after all, a craving for social contact; sadly, a craving being answered not by real-life interaction, but by a mobile client hitting some API endpoint called something like /post/{$ID}/reaction/heart, ending with your phone pinging with the notification "$friend liked your post"...
Absolutely. It makes me think about the things in life that don't need "validation".
Maybe it's a cliche but my dad would say about Korea and other wars "no pics, no words, you had to be there". So that was a teenage trope in the 80s and 90s too for my generation, if you were trying to be cool just say "you had to be there". It draws a circle around a personal or group experience that explicitly does not or cannot be shared. I think maybe it somehow earns more respect and interest than a photo, and I think with ubiquitous AI image manipulation the currency of "pics or it didn't happen" and "for the Gram" is going to vanish in a puff of incredulity. Now you can just text-prompt for a picture of you and some celebrity you "randomly met" in front of Buckingham Palace or the Taj Mahal! You can probably rent some bots to "auto-like" you on social media, right? So who is fooling who now?
> Maybe it's a cliche but my dad would say about Korea and other wars "no pics, no words, you had to be there". So that was a teenage trope in the 80s and 90s too for my generation, if you were trying to be cool just say "you had to be there".
sounds like a partial retroactive justification to me. sure, you wouldn't get the full experience via a photo or verbal anecdote, but it's not like camera smartphones were ubiquitous in the 80s either.
Oh I'm not "justifying" it, because I don't need to. This isn't that conversation. I'm just remarking on a difference of culture over time for those who are interested. As you say, there were no cellphones back then, so it was quite a different world.
In the coming flood of AI slop and faked "scientific" studies, I'd say there is no better source. Real science always starts with anecdata of n=1, so trust what you see. And I'll just add: regardless of the truth of your observation, regardless of any supporting work, these kinds of observations are worthwhile as discussion in themselves, so do investigate more and write about it, please.
FWIW my interest was piqued by your claim that "learned helplessness" eclipses humane interpersonal behaviours. That sounds hard to evaluate, especially in children, but I think you may be on to something, and that ubiquitous AV technology is the cause of a reward "short circuit". Once kids get AI servants that simulate their achievements for them, I think child mental health will implode. (Which of course is Jonathan Haidt's thesis.)
I noticed many years ago that anyone who was 25 at the time would have grown up with social media and its trappings of social validation through likes and comments. Facebook was opened to everyone aged 13 and up in 2006. Instagram went big in 2012. If you were a teen in the late 00's/early 10's, you were probably on these networks, and didn't experience any time growing up without them...
Here, in a regional town in Australia, the council approved the building of a petrol station on one side of a children’s crossing from a school block.
No joke; the children’s crossing now terminates on the “island” of the petrol station, with entry and exit for the station’s vehicles on either side of the island.
It boggles my mind, truly. I fear it’s only a matter of time before someone gets hurt.
So... not hard to relate to your post.
I can’t help but notice how poorly people treat each other in the bigger cities, too. To the point that I get constantly complimented for just being a decent person, or aggressively attacked for the same.
Not sure what the answer to any of these problems is...
Interesting, can you elaborate on that? The behaviour I observed is that going into S3 behaves exactly the same as if you try to unload and reload the amdgpu driver.
As noted by another commenter, the game is eerily similar to Marble Madness, a game I have on the old Amiga somewhere in the shed. The similarity is uncanny, especially in part of the demo video here.
It has a pretty awesome soundtrack, imho...
I cannot find anything regarding this Sphere Spectacle, however.
I think the general rule of thumb is that accepting money to help other companies monopolize their markets is fine, as long as you don't try to monopolize your own.
Personally, I think all the stuff like this should also be illegal, even without a monopoly. Exclusivity contracts, and arrangements with that goal, should be illegal.
However, we'll have to pass new laws to get there.
If I make wheat and you run a bakery I might offer you a 30% discount for the next 5 years, but only if I'm your only wheat supplier.
This sort of arrangement allows us to spread the risk of market uncertainties over that period.
I can then build extra capacity, and you get a cheaper product, or one with less price uncertainty.
Maybe we should do away with this, and say that bakeries must buy wheat like airlines buy fuel, i.e. through more complex financial instruments.
Now, I may be doing this because I'm one of only two companies making wheat, and I'm looking to drive the other one out of business. That's basically what Intel was doing here.
But it's not clear to me that we should conclude that exclusivity deals in general are bad.
Say I'm a local wheat producer. I sign a deal with the local bakery. That stops a multi-national from undercutting me.
The contract cements the benefit that both I and the bakery have from mutually working together, and removes the risk to both parties of some outside party damaging both of us.
In other words, exclusivity works for small companies in spaces dominated by behemoths.
It's far more likely that the multi-national has already signed such a contract with the bakery, preventing you from entering the market in the first place.
Even if you were to get there first, when the contract comes up for renewal the multi-national can offer terms that would be unsustainable for a small producer.
The primary "risk" removed here is competition. I think exclusivity contracts as a risk hedging tool are bad for the economy. If you need to hedge price risk, we should use insurance or other financial tools for that. Additionally, you can have contracts that lock in prices and quantities without requiring exclusivity.
I would be bummed if my local taproom signed an exclusivity contract with a local brewery. These days there are often many local breweries, and new ones open regularly. It would be a shame for the local taproom to miss out on offering a wider range of beers because it got locked into an exclusivity contract.
If you are providing a discount or kickback or whatever for exclusivity, that is anticompetitive behavior, regardless of size.
If Intel comes to my computer company and offers me a sweet rebate, what do I know about what they're doing with my competitors?
So if it were illegal to receive money to help a company monopolize their market, how would the company receiving the funds be able to determine that's the case?
It's a good point, and it would be the right thing to do in this case, but I think it opens a can of worms that even the EU is not ready to open right now.
It comes down to "are exclusive deals illegal?", and I'd assume it could go as far as small companies agreeing to be bought by the market behemoth, or employees being paid under non-compete contracts not to work at rival companies.
I think there'd need to be both regulation and enforcement to make that work out. If you consider (1) the game-theoretic problem the vendors face (specifically, a prisoner's dilemma), (2) the fact that many of them seem to operate on smaller margins, and (3) that they certainly do not have the benefit of operating from the monopoly position Intel had, it may be that accepting the bribes is the way to go. Are you going to wait 10, 20, maybe 30 years for some government to finally step in on your competitors that accepted the bribes?
Not that shocking, really. They take parts, put them in boxes, and sell them to customers who don't care much whether the label on a part says Intel or AMD.
If Intel is going to say "we will literally pay you more to use our parts", that's pure profit for them, and customers are still going to buy; if they cared more about the CPU than the rest of Dell's or Lenovo's value proposition, they wouldn't buy from them in the first place.
It's not really all that surprising when you think it through. This has the same effect, without tying up their limited number of lawyers for years. Sometimes you just have to compromise.
Rebates are usually legal in the US, but go to the distributor or the final buyer. I used to sell huge amounts of computer hardware at a small loss, and a month later would be making 15-20% profit once rebates were paid to my company.
Bribes are paid by (or directed to be paid to someone else by) the person who is buying (or a sales representative selling) and their employer never sees the money. That's why direct incentives to salespeople (spiffs) should be authorized in writing, by the employer. An example of a bribe would be a purchasing agent requiring me to donate 5% of the deal value to their church to get a deal.
Offering a good deal isn't bribery. Bribery is when you give a decision-maker money to make decisions against the wishes of the organization he represents. In this case it was in these companies' interest to take the deals, since the deals were good for them; they weren't bribed into taking them.
It is comforting to know others feel this way, especially the opening paragraph. Adopting complexity to solve complexity blows my mind.
What’s ironic, too, is that the complexity is so far abstracted from the problem it’s solving (and from the problems it then introduces) that troubleshooting becomes a nightmare.
In my experience, too, few of those pushing the adoption of these technologies are system administrators, or even architects/designers; or they are fresh out of uni (which I do not mean to suggest is bad...).
The accountability becomes really muddy, and that’s not even considering the security boundaries and other multi-business-level disciplines these “solutions” impact.
I get it. There’s a place for these technologies. But almost every adoption I’ve experienced to date is more of a shiny-thing syndrome.