I think it’s health related, as the article mentions.
>One executive in the entertainment industry said younger people were less inclined to go out raving until 6am as they were more health conscious and less frivolous with money than previous generations
This is the same generation that has 12-step skincare routines, eats only organic food, chooses to vape or Zyn rather than smoke because of secondhand smoke, everyone has an Apple Watch on their wrist tracking calories, etc.
If anything I’m surprised that binge drinking and going out late has survived as long as it has.
And as far as the money comment, this generation is not less frivolous, there’s just less money to go around haha.
Having seen this generation at music festivals, I kind of disagree. I feel like the current generation go really hard on drugs.
In Australia, nightclub entry can be expensive, ranging from $20-50 per club. 10 years ago, you’d club hop, maybe going to 3-4 clubs from 11pm until 7am. These days, it’s not worth it. Drinks are like $12-16 for a basic mixed drink. A lot of patrons just drink at home, then drink (free) water and take MDMA and/or ketamine at clubs, which is significantly cheaper than a night of buying drinks.
There’s some recency bias to that for sure though - the Silent and Greatest Generations were not as big on partying as Gen X or the boomers. Of course things like smoking were more common, but the health risks weren’t as well understood.
But most generations before us also didn't have the same awareness of the health risks associated with a lot of those activities that younger people have today.
And the generation after us will probably think we were dumb about stuff as well (e.g. social media, disinfo, Delta9, etc.).
A classic example: you have a test (say, for cancer) with a false positive rate of “””only””” 5%, and the disease has an incidence of 1 in 1000.
Let’s say that you get a positive diagnosis for the disease, and you ask someone the question:
What is the probability you actually have the disease?
Most people will say 95% or 99%, but your actual probability of having the disease in this example is <2%.
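A quick back-of-the-envelope check of those numbers (this sketch assumes ~100% sensitivity, i.e. the test catches essentially every true case, which the example above doesn't state):

```python
# Base-rate calculation for the example above.
# Assumption (not stated in the example): ~100% sensitivity.
p_disease = 1 / 1000           # incidence: 1 in 1000
p_pos_given_disease = 1.0      # assumed sensitivity
p_pos_given_healthy = 0.05     # 5% false positive rate

p_positive = (p_disease * p_pos_given_disease
              + (1 - p_disease) * p_pos_given_healthy)
p_disease_given_positive = p_disease * p_pos_given_disease / p_positive
print(p_disease_given_positive)   # ~0.0196, i.e. just under 2%
```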
Unfortunately that well-worn example usually only proves that "false positive" as a technical term fails to match people's intuitions. The underlying problem about the base rate is important to teach, but it's easy for well-meaning people to try and teach the base rate lesson but fail by instead teaching a bullshit gotcha about the definition of "false positive."
Well, theoretically you could make your current "Add Link" button and your current "Remove" button just trigger a server-side request and then refresh the page.
Maybe some combination of the <noscript> tag and, if JavaScript is disabled, wrapping the buttons in <form> elements and making the buttons submit those forms?
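A minimal sketch of the server-side half, assuming a Flask backend and an in-memory list (both assumptions; the routes and the `links` variable are made up). Each button becomes a plain form POST, so it keeps working without JavaScript: the server mutates state and redirects back, which effectively refreshes the page.

```python
# Hypothetical no-JS fallback: plain form posts hitting a Flask app (all names made up).
from flask import Flask, redirect, request, url_for

app = Flask(__name__)
links: list[str] = []   # stand-in for wherever the links actually live

@app.post("/links/add")
def add_link():
    links.append(request.form["url"])   # value from the <form>'s input field
    return redirect(url_for("index"))   # "refresh" by redirecting back to the page

@app.post("/links/<int:i>/remove")
def remove_link(i: int):
    if 0 <= i < len(links):
        links.pop(i)
    return redirect(url_for("index"))

@app.get("/")
def index():
    # the real page would render one <form> per Add/Remove button
    return "<br>".join(links) or "no links yet"
```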
As a kdb+/Q programmer I would say it depends on the type of problem.
For example, when working with arrays of data it certainly is easier to think and write “avg a+b” to add two arrays together and then take the average.
In a non-array programming language you would probably first need to do some bounds checking, then a big for loop, a temporary variable to hold the sum and the count as you loop over the two arrays, etc.
Probably the difference between like 6ish lines of code in some language like C versus the 6 characters above in Q.
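A rough sketch of that contrast, using numpy as a stand-in for an array language (the arrays here are made up):

```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = np.array([4.0, 5.0, 6.0])

# Array style: the q expression `avg a+b` is essentially one call.
result = np.mean(a + b)

# Loop style, roughly what you'd write in C: bounds check, accumulator, explicit loop.
assert len(a) == len(b)
total = 0.0
for i in range(len(a)):
    total += a[i] + b[i]
result_loop = total / len(a)
```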
But every language has features that help you reason about certain types of problems better. Functional languages with algebraic data types and pattern matching (think OCaml or F#) are nicer than switch statements or big if-else-if statements. Languages with built-in syntactic sugar like async/await are better at dealing with concurrency, etc.
Well no, not in a non-array programming language. In any language that has a semi-decent type/object system and some kind of functional programming support, `avg a+b` would just be `avg(a, b)`, which is not any easier or harder, with an array type defined somewhere. Once you make your basic array operations (which they have to be made in q anyway, just in the stdlib), you can compose them just like you would in q and get the same results. All of the bounds checking and for-loops are unnecessary; all you really need are a few HKTs that do fancy maps and reduces, which the most popular languages already have.
A very real example of this is Julia. Julia is not really an array-oriented programming language; it's a general language with a strong type system and decent functional programming facilities, plus some syntactic sugar that makes it look a bit array oriented. You could write any Q/k program in Julia and it would not be any more complex. For a decently complex program Julia will be faster, and in every case it will be easier to modify and read, and not any harder to write.
I don't know what you mean by the q array operations being defined in the standard library. Yes there are things defined in .q, but they're normally thin wrappers over k which has array operations built in.
I don't consider an interpreted language having operations "built-in" be significantly different from a compiled language having basic array operations in the stdlib or calling a compiled language.
It is syntactically different, not semantically different. If you gave me any reasonable code in k/q I'm pretty confident I could write semantically identical Julia and/or numpy code.
In fact I've seen interop between q and numpy. The two mesh well together. The differences are aesthetic more than anything else.
There are semantic differences too with a lot of the primitives that are hard to replicate exactly in Julia or numpy. That's without mentioning the stuff like tables and IPC, which things like pandas/polars/etc don't really come close to in ergonomics, to me anyway.
Do you have examples of primitives that are hard to replicate? I can't think of many off the top of my head.
> tables and IPC
Sure, kdb doesn't really have an equal, though it is very niche. But for IPC I disagree. The facilities in k/q are neat and simple in terms of setup, but it doesn't have anything better than what you can do with cloudpickle, and the lack of custom types makes effective, larger-scale IPC difficult without resorting to inefficient hacks.
None of the primitives are necessarily too complicated, but off the top of my head things like /: \: (encode, decode), all the forms of @ \ / . etc, don't have directly equivalent numpy functions. Of course you could reimplement the entire language, but that's a bit too much work.
Tables aren't niche, they're very useful! I looked at cloudpickle, and it seems to only do serialisation, I assume you'd need something else to do IPC too? The benefit of k's IPC is it's pretty seamless.
I'm not sure what you mean by inefficient hacks, generally you wouldn't try to construct some complicated ADT in k anyway, and if you need to you can still directly pass a dictionary or list or whatever your underlying representation is.
> None of the primitives are necessarily too complicated, but off the top of my head things like /: \: (encode, decode), all the forms of @ \ / . etc, don't have directly equivalent numpy functions. Of course you could reimplement the entire language, but that's a bit too much work.
@ and . can be done in numpy through ufuncs. Once you turn your unary or binary function into a ufunc with foo = np.frompyfunc(f, nin, nout), you then have foo.at(a, np.s_[fancy_idxs], (b?)), which is equivalent to @[a, fancy_idxs, f, b?]. The other ones are, like, 2 or 3 lines of code to implement, and you only ever have to do it once.
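A small illustration of the ufunc route (np.add.at and np.frompyfunc are real numpy APIs; the `clamp_add` function and the data are made up):

```python
import numpy as np

a = np.array([10.0, 20.0, 30.0, 40.0])
idx = [0, 2]

# In-place analogue of q's amend-at @[a; idx; +; 5]: apply + with 5 only at idx.
np.add.at(a, idx, 5.0)            # a -> [15., 20., 35., 40.]

# A custom binary Python function can be wrapped as a ufunc (2 inputs, 1 output).
clamp_add = np.frompyfunc(lambda x, y: min(x + y, 100.0), 2, 1)
print(clamp_add(a, 90.0))         # elementwise application, object-dtype result
```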
vs and sv are just pickling and unpickling.
> Tables aren't niche,
Yes, sorry, I meant that tables are only clearly superior in the q ecosystem in niche situations.
> I looked at cloudpickle, and it seems to only do serialisation, I assume you'd need something else to do IPC too? The benefit of k's IPC is it's pretty seamless.
Python already does IPC nicely through the `multiprocessing` and `socket` modules of the standard library. The IPC itself is very nice in most use cases if you use something like multiprocessing.Queue. The thing that's less seamless is that the default pickling operation has some corner cases, which cloudpickle covers.
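For what it's worth, a minimal stdlib-only sketch of that kind of IPC (the payload is an arbitrary made-up dict; anything picklable goes through the queue):

```python
from multiprocessing import Process, Queue

def worker(q: Queue) -> None:
    # pickling of the payload happens automatically on put/get
    q.put({"sym": "AAPL", "px": [1.0, 2.0, 3.0]})

if __name__ == "__main__":
    q = Queue()
    p = Process(target=worker, args=(q,))
    p.start()
    print(q.get())   # received in the parent process
    p.join()
```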
> I'm not sure what you mean by inefficient hacks, generally you wouldn't try to construct some complicated ADT in k anyway, and if you need to you can still directly pass a dictionary or list or whatever your underlying representation is.
It's a lot nicer and more efficient to just pass around typed objects than dictionaries. Being able to have typed objects whose types allow for method resolution and generics makes a lot of code so much simpler in Python. This in turn allows a lot of libraries and tricks to work seamlessly in Python and not in q. A proper type system and colocation of code with data make it a lot easier to deal with unknown objects - you don't need nested external descriptors to tag your nested dictionary and tell you what it is.
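A toy illustration of that point (the `Quote` class and its fields are made up):

```python
from dataclasses import dataclass

@dataclass
class Quote:
    sym: str
    bid: float
    ask: float

    def mid(self) -> float:      # behaviour lives with the data
        return (self.bid + self.ask) / 2

q1 = Quote("AAPL", 99.5, 100.5)
print(q1.mid())                  # method resolution: no external descriptor needed

# The dictionary version needs an out-of-band tag to say what this thing is.
q2 = {"type": "quote", "sym": "AAPL", "bid": 99.5, "ask": 100.5}
mid = (q2["bid"] + q2["ask"]) / 2
```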
Again, I'm not saying anything is impossible to do, it's just about whether or not it's worth it. 2 or 3 lines for all types for all overloads for all primitives etc adds up quickly.
I don't see how k/q tables are only superior in niche situations; I'd much rather (and do) use them over pandas/polars/external DBs whenever I can. The speed is generally overhyped, but it is significant enough that rewriting something from pandas into q often ends up much faster.
The last bits about IPC and typed objects basically boil down to python being a better glue language. That's probably true, but the ethos of array languages tends to be different, and less dependent on libraries.
Which is why C# is the giant, ever-increasing bag of tricks that it is (unkind people might say bloat…) ;-) Personally, I’m all for this; let me express the problem in whatever way is most natural.
There are limits, of course, and it’s not without downsides. Still, if I have to code in something all day, I’d like that “something” to be as expressive as possible.
Relying on an economic system to make moral decisions is like driving in screws with a hammer.
Capitalism is not perfect, but at least to me this is more of a moral issue than an economic one.
I think your CO2 point is a great example, why would a Communist or Socialist economic system be inherently more eco-friendly than a Capitalist system? You now have a system where there is a government willing to "plan" or "intervene" in certain aspects of the economy, but unless the government actually cares about pollution nothing will happen. You could even theoretically have a centrally-planned economy that WANTS to increase emissions.
The largest polluter in the world is a Communist country, and in the last 30 years their CO2 emissions have sextupled, while the US's emissions per year have remained flat.
> The largest polluter in the world is a Communist country, and in the last 30 years their CO2 emissions have sextupled
I guess you're talking about China here, which has a supposed "socialist market economy" (with a strong private sector, stock markets and foreign investment), high wealth inequality, authoritarian governance, private property ownership and very low social welfare.
The only way China could be considered communist is that it's a one-party state run by a self-proclaimed "Communist" party, but in reality and on the ground, China isn't communist at all, so it's not a great example.
Not talking about communism or socialism here. Just that the "version" of capitalism we're running now produces inequalities that allow people to pull off that kind of stunt.
Of course it's a moral issue, but give enough money to immoral people, and see what happens.
“Smart People Should Build Things”