If the mutations were non-synonymous, resulting in different amino acids, the fact that they keep the natural function is still kinda cool. Very much a pure research result AFAICT, but worth a little something.
This is already a well known fact: protein structure (and consequently function) is much more conserved than sequence, mostly due to biophysical constraints.
If non-synonymous mutations do not change the biophysical features of the amino acid residues, then the structure is usually kept. Alternatively, it can be the case that a disruptive mutation is compensated by another one that keeps the structure/function/phenotype. This is the basis for evolutionary-coupling-based structure prediction methods, such as AlphaFold.
It's fairly common for two proteins to have almost identical structures but very different sequences (down to 30% or lower sequence identity), and it's also possible to mess up a nice protein that folds easily with a single amino acid change.
It turns out he's claiming they're different if x^2 is interpreted as squaring each element in the interval x, while x * x is interpreted as a cross product: the interval obtained by multiplying all pairs of elements in the interval. But I haven't ever seen anyone use x^2 to mean pointwise squaring on an interval x. Is that some kind of standard notation?
"Pointwise squaring on an interval x" is just a weird way of describing the usual function f(x) = x^2 with domain restricted to an interval. It's pointwise because that's how functions f : R -> R are defined: given a point, or value, of the domain, give me a new point in the codomain.
If you think of `x` as a whole interval unto itself, and not just a single point, then I think the options become more interesting. The most natural product on two sets is indeed the cross product; but for intervals, I can imagine defining a common parameterization over both intervals and then multiplying pointwise up to that parameterization.
It makes sense if instead of thinking about intervals, you think about the supports of random variables[1]. Given two independent random variables, X is not independent of itself, so supp(X) = supp(Y) does not imply supp(X * X) = supp(X * Y).
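A quick sketch of that point (my own illustration, not from the thread): take X and Y independent, each supported on [-1, 1]. The support of X * Y comes from all pairwise products, while the support of X * X only contains squares.

```python
# supp(X) = supp(Y) = [-1, 1], yet supp(X*X) != supp(X*Y).

def product_support(lo1, hi1, lo2, hi2):
    """Support of A*B when A and B vary independently:
    bounded by the extreme pairwise products of the endpoints."""
    corners = [lo1 * lo2, lo1 * hi2, hi1 * lo2, hi1 * hi2]
    return (min(corners), max(corners))

def square_support(lo, hi):
    """Support of X*X: squares of points in [lo, hi].
    If the interval contains 0, the minimum square is 0."""
    if lo <= 0 <= hi:
        return (0, max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

print(square_support(-1, 1))          # (0, 1)   -- supp(X*X)
print(product_support(-1, 1, -1, 1))  # (-1, 1)  -- supp(X*Y)
```

Same marginal support, different product supports, because X is perfectly dependent on itself.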
Yes, I see. There's a desire to map intervals pointwise through functions, but also a desire to produce intervals by all-pairs calculations, and the impossibility of representing both interpretations in one notation leads to some inconsistencies.
There's some abuse of poor notation going on in the article. I don't think the author is intending to be confusing through this imprecision, but instead is just faithfully representing the common way people discuss this kind of stuff.
But it is confusing. And it is imprecise.
(I'll use x below to mean multiplication due to HN's weird formatting rules)
Nominally, if we have two intervals A and B we might anticipate there's a difference between AxA and AxB. In normal math we expect this because we use the different letters to indicate the potential for A and B to be different. Another way of saying it is to say that AxA = AxB exactly when A = B.
The trick of language with interval math is that people often want to write things like A = (l, h). This is meaningful, the lower and upper bounds of the interval are important descriptors of the interval itself. But let's say that it's also true that B = (l, h). If A = B, then it's definitely true that their lower and upper bounds will coincide, but is the converse true? Is it possible for two intervals to have coincident bounds but still be unequal? What does equality mean now?
In probability math, the same issue arises around the concept of a random variable (rv). Two rvs might, when examined individually, appear to be the same. They might have the same distribution, but we are more cautious than that. We reserve the right to also ask things like "are the rvs A and B independent?" or, more generally, "what is the joint distribution of (A, B)?".
These questions reinforce the idea that random variables are not equivalent to their (marginal) distributions. That information is a very useful measurement of a rv, but it is still a partial measurement that throws away some information. In particular, when multiple rvs are being considered, marginal distributions fail to capture how the rvs interrelate.
We can steal the formal techniques of probability theory and apply them to give a better definition of an interval. Like an rv, we'll define an interval to be a function from some underlying source of uncertainty, i.e. A(w) and B(w). Maybe more intuitively, we'll think of A and B as "partial measurements" of that underlying uncertainty. The "underlying uncertainty" can be a stand in for all the myriad ways that our measurements (or machining work, or particular details of IEEE rounding) go awry, like being just a fraction of a degree off perpendicular to the walls we're measuring to see if that couch will fit.
We'll define the lower and upper bounds of these intervals as the minimum and maximum values they take, l(A) = min_w A(w) and u(A) = max_w A(w).
Now, when multiplying functions on the same domain, the standard meaning of multiplication is pointwise multiplication:
(A x B)(w) = A(w) x B(w)
and so the lower and upper bounds of AxB suddenly have a very complex relationship with the lower and upper bounds of A and B on their own.
l(A x B) = min_w A(w) x B(w)
u(A x B) = max_w A(w) x B(w)
So with all this additional formal mechanism, we can recover how pointwise multiplication makes sense. We can also distinguish AxA and AxB as being potentially very different intervals even when l(A) = l(B) and u(A) = u(B).
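Here's a toy rendering of the construction above (my own sketch, assuming a discrete grid stands in for the underlying uncertainty space): A and B are functions of w, both "the interval [-1, 1]" with identical bounds, yet AxA and AxB come out different.

```python
import itertools

# Underlying uncertainty: a grid over the unit square (w1, w2).
W = [(w1 / 10, w2 / 10) for w1, w2 in itertools.product(range(11), repeat=2)]

# A and B both look like the interval [-1, 1], but they read
# different coordinates of w, so they vary independently.
A = lambda w: -1 + 2 * w[0]
B = lambda w: -1 + 2 * w[1]

def bounds(f):
    """l(F) = min_w F(w), u(F) = max_w F(w)."""
    vals = [f(w) for w in W]
    return (min(vals), max(vals))

# Pointwise multiplication: (F x G)(w) = F(w) x G(w).
AxA = lambda w: A(w) * A(w)   # same coordinate of w: fully dependent
AxB = lambda w: A(w) * B(w)   # different coordinates: independent

print(bounds(A))    # (-1.0, 1.0)
print(bounds(AxA))  # (0.0, 1.0)
print(bounds(AxB))  # (-1.0, 1.0)
```

Even though l(A) = l(B) and u(A) = u(B), the bounds of AxA and AxB differ, which is exactly the distinction the endpoint notation A = (l, h) can't express.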
(As a final, very optional note, the thing that makes interval math different from probability theory is that the underlying space of uncertainty is not endowed with a probability measure, so we can only talk about things like min and max. It also seems like we can make the underlying event space much less abstract and just use a sufficiently high-dimensional hypercube.)
About the last remark: my intuition is that even though there are operational differences, any formalism for representing uncertainty should be roughly as useful as any other.
I mean, can you express Bayes' rule using interval arithmetic? Or something similar to it?
I think a more complete way to say it would be that probability theory is a refinement of interval theory. Per that last remark, I suspect that if you add any probability measure to intervals such that it has positive weight along the length of the interval then the upper and lower bounds will be preserved.
So in that sense, they're consistent, but interval theory intentionally conveys less information.
Bayes' Law arises from P(X, Y) = P(X | Y)P(Y). It seems to me in interval math, probability downgrades to just a binary measurement of whether or not the interval contains a particular point. So, we can translate it like (x, y) \in (X, Y) iff (y \in Y implies x \in X) and (y \in Y) which still seems meaningful.
I don't. I've never actually seen interval theory developed like I did above. It's just me porting parts of probability theory over to solve the same problems as they appear in talking about intervals.
Yeah it sounds like something he's made up. For matrices x^2 is just x*x, not element-wise power (which if you want to be deliberately confusing is also known as Hadamard power). The latter is apparently written like this: https://math.stackexchange.com/a/2749724/60289
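To make the matrix convention concrete (a quick sketch with plain 2x2 lists, no numpy assumed): x^2 conventionally means the matrix product x*x, while the elementwise (Hadamard) power is a different operation.

```python
def matmul(a, b):
    """Ordinary 2x2 matrix product: what x^2 conventionally means."""
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def hadamard_square(a):
    """Elementwise (Hadamard) square: each entry squared individually."""
    return [[a[i][j] ** 2 for j in range(2)] for i in range(2)]

x = [[1, 2],
     [3, 4]]

print(matmul(x, x))        # [[7, 10], [15, 22]]
print(hadamard_square(x))  # [[1, 4], [9, 16]]
```

The two results coincide only in special cases (e.g. diagonal matrices), which is why conflating the notations is confusing.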
I mean, the site is pretty blatant viral marketing for both his drop-shipped-hats-from-china side hustle and (I'm going to go out on a wild limb here and guess) his employer's ML-dataset-management-related startup.
I wish cool stuff like this wasn't always sullied by the slimy feeling from it only being done to draw attention to some startup sitting smack in the middle of the trendiest buzzwords of the month.
OpenCV was not the "AI" here, the "AI" was a computer vision model trained at the roboflow website that he mentioned multiple times and that he used in the line commented with "# Directly pass the frames to the Roboflow model".
In the past I have thought about it as you do. But it occurs to me now that this is also a property of tech products, isn't it? These products can be built by a small team and serve millions or billions of people. The most successful ones usually expand the team (Google Maps now employs over 7,000 people!) but it remains true that only a small team is really needed to keep the thing going.
So why are tech employees thought of differently than entertainers? Why is the math so different, such that tech employees have much more predictable and favorable employment prospects?
That sounds like an argument against UBI. People want the work to be done, but not enough people are able and willing to do it without significant pay? Lots of people want to make art but can't because not enough people value the product?
I haven't made up my mind about UBI, and I do think that it would be valuable to society for all persons to have room to experiment and innovate around ideas (effectively, enable society wide R&D), but I think it's an open question as to whether it is worth the costs.
I already struggle to hire anybody to help with fixing the many problems with my house, and it's very inefficient for me to need to learn to become an expert in so many different fields. I quite enjoy it, and there are positive externalities, but having specialists (or just "willing hands") in areas where there is demand is also quite important to a functioning society.
But is that "willingness to do for a low price" some kind of inherent property of artists as a group of people, or does it come from somewhere else? (The "or" here is not necessarily XOR.)
What if it all comes down to supply and demand? Maybe the supply of artists is much greater than the demand for art, while for tech products it is reversed?
People WILL try to create the next Google for free though.
Fortunately all of us temporarily embarrassed billionaires have a fallback plan. Just a few years ago, in my late twenties, I was seriously trying to apply to YC, with dreams of getting some funding.
Now that I'm a bit older and more discouraged, I've made peace with being a worker bee until I retire. I'm still hoping to get a role with an equity package that allows me to retire sooner rather than later.
I have tried solo game dev off and on for years and found game dev exactly the same as all other art forms: you either make it or you don't, and almost nobody makes it.
The difference being that a game is expected to ship all its own assets. The game's exe plays the game's audio and renders the game's graphics and runs the game's logic.
And I can only play one game at a time and listen to one song at a time, so why buy a bunch of games?
But a video codec works on millions of existing movies, so anyone watching _any_ movie needs a video codec. And a VM can run _any_ OS. And a network can transfer _any_ data.
Now we're talking about products that complement each other instead of only competing. Sometimes I've had more fun writing tools, because the sum of my own hammer and a world of nails is greater than trying to make my own nails from scratch.
So ah, don't go into game dev lol. I nearly did, then I got a generic CS degree and made lots of money doing software that solves existing problems and doesn't try to create a standalone world.
While the law allows it, in my opinion there is no real moral basis for a company to lay claim to all of an employee's time. If the employee is sufficiently responsive and productive, there is no issue. If not, the company can dismiss the employee. There is no reason for the company to have more surveillance and control power over the employee than this.
Why not go even further? Why not say that the whistleblower was wrong and Microsoft business leadership was right? Maybe their profits from ignoring this issue have been fantastic, and the externalities from e.g. mass theft of national security secrets are not Microsoft's problem.
Well, because as a security person I can only evaluate his actions from the point of security. Evaluating actions of MS business leadership is beyond my expertise.
I highly doubt that the senior leadership would willingly accept this kind of liability. But you need to put it into the right terms for them to understand. Politics plays an important role at that level as well. There are ways of putting additional pressure on the C-suite, such as making sure certain keywords are used in writing, triggering input from legal, or forcing stakeholders to formally sign off on a presented risk.
Without inside knowledge, it's impossible to figure out what went wrong here, so I'm not assigning blame to the whistleblower, just commenting that way too often techies fail to communicate risks effectively.
Context: in the 2010s, Wells Fargo management looked the other way while its sales force scammed customers, creating millions of fraudulent accounts (with associated fees) to meet performance targets and quotas [1]. The Fed imposed an asset cap as punishment in 2018, and as of today, the asset cap remains in place.
This isn't context for the article itself; it may be context on Wells Fargo as a company, but you may want to specify that it's context on the company and not the article.
That's appealing to emotion and outrage about something unrelated that happens to involve one of the parties, which is an organization made up of over 100,000 people. There could be bad food in their cafeteria as well, but it wouldn't make sense to invoke that here either.
Providing objective, accurate, relevant contextual information that reasonably makes people outraged is not in itself an appeal to emotion and outrage.
The information is relevant to how we view Wells Fargo as an ethical entity. Bad food in the cafeteria would not be relevant.
Many of you will be familiar with this story: military pilot gear was once designed for the average person, but then they realized that actually, most people deviate significantly from the average in at least one way. So they made the gear adjustable, and that greatly improved performance and reduced mistakes.
Why is it that in tech we are often told a seemingly contrary narrative -- that everything is better, or at least more profitable, when targeted to some hypothetical average person, and who cares about the diversity of individuals?
Might be that military pilots are much more engaged with the product than the average google-user with search.
Or how these digital tools pervade spaces where everyone has to be able to use them, even if they're the type that refuses to engage with the text displayed in message boxes or technical jargon like "files" and "tabs", because they have the expertise that is more valuable to the business than the peripheral software. A greater expectation and insistence that things "just work", that the tools get out of the way instead of integrating with the user.
Maybe adjusting some straps and seat positions is more intuitive than digging for advanced options. Maybe it's significantly more difficult to surface options in digital mediums without introducing friction as a side-effect, because you're always fighting over screen real estate and screen legibility, instead of being able to just add a latch on the strap that's there when you need it and invisible when you don't.
> Maybe adjusting some straps and seat positions is more intuitive than digging for advanced options. Maybe it's significantly more difficult to surface options in digital mediums without introducing friction as a side-effect, because you're always fighting over screen real estate and screen legibility, instead of being able to just add a latch on the strap that's there when you need it and invisible when you don't
You design a different car to win F1 races, to take a couch across town, to drive a family on a weekend trip, to win rally races, to haul a boat … but in software we don’t want to do that. We want everything to do everything because “niche” markets are too small for companies to keep growing into the stratosphere.
See also: Every program attempts to expand until it can read mail. Those programs which cannot so expand are replaced by ones which can. (zawinski's law)
There’s a military, and by proxy a government and a country’s populace behind a pilot who are all invested in a pilot’s success. In battle or on missions they don’t get many do-overs and pilots and planes are expensive to mobilize and to lose. Millions of dollars are on the line each time they take off, better to get it right the first time.
For ad driven search engine products the more you as a user flail the more ads you can be served on subsequent searches, so long as they ride the line of not driving you away entirely. A string of ten searches that fail you is bad because their product looks ineffective but two or three searches to get what you want is better for their bottom line than nailing it on your first attempt.
The military in general seems to be more rationally grounded than civilians, as far as work is concerned. Familiarity with death must encourage a different "work culture".
I get the feeling that there’s an inverse correlation between the number of people that think the military is a competent meritocracy and the number of people that actually served in the military.
It’s a giant government bureaucracy, with plenty of stupid internal politics, and gross incompetency. No better or worse than any other large organization.
Thinking that people that have trait X in common to also have some admirable trait Y is unfortunately wishful thinking. The military may for some be one of the last areas of such thinking.
My direct experience is very limited, but I've heard a few decent things from people better involved than I am. I suppose "the military" is a wide thing, there must be consequential differences between, say, American bureaucrats and French field soldiers in Africa for example. The former shouldn't be as close to death, or to soldiers who are, on a daily basis.
Within one organization the form is standardized, and across companies it's very similar. But sure, maybe not the right word. I'm comparing it to "just letting the interviewers chat and decide based on whatever" which is typical in industries without a skills test.
Leetcode-style tech interviews fall far short of standardized tests. In real standardized tests, the same questions are given to everyone, and effort is made to develop new questions not known to the test takers.
They're more like a standardized test than a pure shoot-the-shit interview is, but still pretty far away.