joshAg's comments | Hacker News

We built something similar for the managed DB we use, but I think it's a mistake to autogenerate the migration scripts instead of autogenerating the schema from the migrations. Things like changing an enum, adding a non-null column that shouldn't have a default to a table that already has data in it, and migrating data from one representation to another (e.g., 'oh hey, we definitely shouldn't have made our users table have an fname and an lname field. let's change to full_name and preferred_name') are easily done in a migration script but hard, if not impossible, to infer from schema changes alone.
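
As a concrete illustration of that last one, here's a minimal sketch of such a migration in Haskell (assuming Postgres via the postgresql-simple library; the table and column names are hypothetical). The backfill step is the part no schema diff could ever infer:

  {-# LANGUAGE OverloadedStrings #-}
  import Database.PostgreSQL.Simple (Connection, execute_)

  migrateUserNames :: Connection -> IO ()
  migrateUserNames conn = do
    _ <- execute_ conn "ALTER TABLE users ADD COLUMN full_name text, ADD COLUMN preferred_name text"
    -- the backfill is a judgment call only a human (or the app) can make
    _ <- execute_ conn "UPDATE users SET full_name = fname || ' ' || lname, preferred_name = fname"
    _ <- execute_ conn "ALTER TABLE users ALTER COLUMN full_name SET NOT NULL"
    _ <- execute_ conn "ALTER TABLE users DROP COLUMN fname, DROP COLUMN lname"
    return ()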


What are side-effects but undocumented arguments and returns?

Firstly, you want to ensure your functions are pure with respect to input. That is to say, they might reference a configuration or context object that is passed to them as an argument, but they'll never reference some global object/variable.

So then the Docker image inside some Docker registry? Both the image and the registry are values in the config/context argument, at the least. Maybe they're their own separate arguments, depending on whether you prefer a single big object argument or a bunch of smaller, more primitive arguments.

So then the pure function that expects the Docker image to exist in some registry is no longer

  Int -> Int
It's now

  String -> String -> Int -> Int
because it needs a registry and an image. Maybe it's

  String -> String -> String -> String -> Int -> Int
because there's a username and password required to access the registry. Icky, but if we make a few types like

  data Registry = Registry {
    user :: String,
    password :: String,
    url :: String
  }
that becomes

  Registry -> String -> Int -> Int
But we could make it better by doing something like

  data Foo = Foo {
    reg :: Registry,
    image :: String
  }
and now the function can be

  Foo -> Int -> Int
This doesn't fix the image not actually existing in the registry, but at least now we know that the two functions aren't composable, and when it fails because the image doesn't exist we can hopefully trace through to see who the caller is that's giving it incorrect data.
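
To make "not composable" concrete, here's a tiny sketch building on the types above (the function names and bodies are invented stubs, not anything from the original discussion):

  newtype ImageName = ImageName String

  buildImage :: Registry -> ImageName    -- yields only an image name
  buildImage _ = ImageName "app:latest"  -- stub

  deploy :: Foo -> Int -> Int            -- needs the registry *and* the image
  deploy _ n = n                         -- stub

  -- deploy . buildImage  -- type error: an ImageName isn't a Foo, so the
  -- mismatch surfaces at compile time instead of at runtime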

PS: sorry if I got the Haskell typing wrong. I don't know Haskell, so that's the result of what I could cobble together from googling Haskell type syntax.


It's not meaningless; the author just doesn't like it. Open source and source-available were always meant to be watered-down versions of the FSF's free software, specifically to be more palatable to businesses. That's not a bug, and it's not even a feature. It's the freaking mission statement.


The aviation industry doesn't usually reduce incidents to a single proximate or ultimate cause like that, but instead notes every decision/event that contributed to the incident occurring. Then they usually spend time on how they can reduce the likelihood of such an event recurring at every single point in that chain. It appears that the coast guard plane lined up on the runway to prepare for takeoff instead of holding short of the runway as instructed, and the JAL pilots did not see that their runway was not actually clear until it was too late to avoid a collision (if they saw the other plane at all).

From the JAL plane's perspective, automation is only as good as the sensors and logic checking whether a runway is actually clear. Regardless of whether extant solutions are better than humans right now or will be in the future, those solutions will still have the same failure mode as humans: suddenly realizing that a runway which appeared to be clear actually isn't, when it is too late to abort.

From the coast guard plane's perspective and from the controller's perspective, automation and warnings might have been able to alert or prevent. However, automation can't just be thrown at a problem without deep knowledge of the system and environment in which it will operate. The main reason is ensuring that any transition between automated control and human control is clearly evident to the humans involved, that it occurs with enough time for the human to actually avert a problem, and that the human is actually ready to take control. If the automated system silently disengages, a plane with permission to take off will instead just sit on the runway, because the pilot assumes the automation will begin the takeoff as expected, which brings us right back to a plane on the runway when it shouldn't be there and a landing plane not seeing it until it is too late.

It is actually possible to land a plane purely with automation (military drones do it all the time), but that isn't done on commercial aircraft, because the constraints are so tight that if the automation were to fail for any reason there is a large chance the human pilot would be unable to prevent a crash even while perfectly monitoring the system (and perfect monitoring can't be assumed).


> The aviation industry doesn't usually reduce incidents down to a single proximate or ultimate cause like that

Correct. Folks here might be interested in the Swiss cheese model: https://en.m.wikipedia.org/wiki/Swiss_cheese_model


There's actually a legal reason for tacking on anyone who is plausibly liable. The basic idea is to sue everyone in a single case and let the court sort out actual liability for each party as part of that single case.

Say the lawsuit is originally against just Holman Fleet Leasing and FedEx is the one legally liable (maybe FedEx is the one doing something naughty; maybe there's contractual language around FedEx assuming all legal liabilities for the vehicles sold). You're going to spend a bunch of time in court arguing with Holman about whether they're even the right party to sue, and your case is either going to get thrown out or you're going to lose. Meanwhile, the statute of limitations is still ticking, so if it takes long enough to adjudicate the case against Holman, you won't even be able to refile the same case against the correct respondent. Oops. And even if the statute of limitations miraculously hasn't run out, consider that the kind of person who would roll back an odometer would also have a punishingly short document retention policy, so all the documents that still existed when you filed against Holman have long since been shredded, and your discovery in the new case against FedEx is going to be a single email saying "yeah, we don't have anything going back that far." Oops again.

Now consider the lawsuit filed initially against both Holman and FedEx. Assuming your list of respondents is complete, the case isn't going to get thrown out because you sued the wrong person. Liability will still be adjudicated (and the case amended to drop respondents as the proper liability holder gets determined), but now you don't need to worry about the statute of limitations running out while you wait for the determination of liability against the first respondent. And the document retention clock starts with that lawsuit and covers the period where you're determining who holds liability, so they can't delete those documents even if they otherwise would have. Both of them are now legally required to retain everything you list in discovery for at least the duration of their involvement in the case. Sure, they could destroy those records anyway, but when records are destroyed in violation of discovery, courts regularly let the worst possible inferences be drawn against the respondent who destroyed them.


These things never see the inside of a courtroom. It'll end up as a settlement check, with none of the involved parties admitting to anything. The lawyers will then move on to the next low-hanging fruit.

I've learned over time that it doesn't matter how righteous your defense is - all that matters is the money it'll cost to make the issue go away. Turns out it's almost always cheaper to write a check than to defend yourself.


> Why are acquisitions legal?

It'd take at least a semester of public policy, a semester of economics, a semester of history, and a semester of legal studies to adequately answer that.

The shorter answer is that nonnatural persons and natural persons have the same rights to do what they want with their property, barring very specific exemptions. One of those exemptions is monopolies, but (and add this to the list of shit Ronald fucking Reagan and the University of Chicago screwed up, too) in the 1980s US anti-monopoly enforcement switched from focusing on ensuring a competitive marketplace to focusing on ensuring economic efficiency and consumer welfare, so it became much much much easier to merge with and acquire competitors.


".... i can't. No one can. It's a mathematical impossibility as a general solution for at least 2 separate reasons.

The first issue is that we're taking 64 bits of data and trying to squeeze them into 16 bits. Now, sure, it's not quite that bad, because we have the sign bit and NaNs and infinities, but even if you toss away the exponent entirely, that's still 53 bits of mantissa to squeeze into 16 bits of int.

The second issue is all the values not directly expressible as an integer, because they're infinity, NaN, too big, too small, or fractional.

The only way we can overcome these issues is to decide what exactly we mean by "converts", because while we might not _like_ it, casting to an int64 and then masking off our 16 favorite bits of the 64 available is a stable conversion. That might be silly, but it raises a valid question: what is our conversion algorithm?

Maybe by "convert" we meant map from the smallest float to the smallest int, then the next smallest float to the next smallest int, and so on, either wrapping around or pegging the rest to int16.max.

Or maybe we meant from the biggest float to the biggest int and so on, doing the inverse of the previous. Those are two very different results.

And we haven't even considered whether to throw on NaN or infinity, or what to do with -0, in both those cases.

Or maybe we meant translate from the float value to the nearest representable integer? We'd have a lot of values mapping to int16.max and int16.min, and we'd still have to decide how to handle infinity, NaN, and -0, but it's still possible.

Basically, until we know the rough conversion function, we can't know whether NaN, infinity, and -0 are special cases, and we can't know whether clipping will be an edge case or not. There are lots of conversions where we can happily wrap around on ourselves and there are no edge cases, lots of conversions where we have edge cases but can clip or wrap, and lots of conversions where we have both edge cases and clipping/wrapping to deal with."
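
To pin it down, here's one possible concrete answer in Haskell: a saturating (clipping) conversion. The choices here (NaN -> 0, -0 -> 0, clipping at the bounds) are exactly the assumptions the question forces you to surface:

  import Data.Int (Int16)

  doubleToInt16 :: Double -> Int16
  doubleToInt16 x
    | isNaN x                               = 0        -- chosen: NaN maps to 0
    | x >= fromIntegral (maxBound :: Int16) = maxBound -- clip high (catches +Infinity)
    | x <= fromIntegral (minBound :: Int16) = minBound -- clip low (catches -Infinity)
    | otherwise                             = round x  -- -0.0 rounds to 0

  -- ghci> map doubleToInt16 [0/0, 1/0, -1/0, -0.0, 1e9, -3.7]
  -- [0,32767,-32768,0,32767,-4]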


You are hired!!!


> With 16-bit unsigned integers, you can store anything from 0 to 65,535. If you use the first bit to store a sign (positive/negative) and your 16-bit signed integer now covers everything from -32,768 to +32,767 (only 15 bits left for the actual number). Anything bigger than these values and you’ve run out of bits.

That's, oh man, that's not how they're stored, and not how you should think of it. Don't think of it that way, because "oh, 1 bit for sign" implies the number representation has both a +0 and a -0 (which is the case for IEEE 754 floats) that are bitwise different in at least the sign bit, which isn't the case for signed ints. Plus, if you have that double zero that comes from dedicating a bit to sign, then you can't represent 2^15 or -2^15, because those patterns are spent representing -0 and +0. Except you can represent -2^15, or -32,768, by their own prose. So either there's more than just 15 bits for negative numbers, or there's not actually a "sign bit."

Like, ok, sure, you don't want to explain the intricacies of 2's complement for this, but don't say there's a sign bit. Explain signed ints as shifting the range of possible values to include negative and positive values. Something like

> With 16-bit unsigned integers, you can store anything from 0 to 65,535. If you shift that range down so that 0 is in the middle of the range of values instead of the minimum, your 16-bit signed integer now covers everything from -32,768 to +32,767. Anything outside that range and you’ve run out of bits.
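
(A quick GHCi check of the asymmetry, by the way: with GHC's two's-complement Int16, negate wraps, so -32,768 really has no positive twin.)

  ghci> import Data.Int (Int16)
  ghci> (minBound, maxBound) :: (Int16, Int16)
  (-32768,32767)
  ghci> negate (minBound :: Int16)
  -32768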


> ...If you shift that range down so that 0 is in the middle of the range of values instead of the minimum...

Not a downvoter, but: your concept of "shifting the range" is also misleading.

In the source domain of 16-bit numbers, [0...65535] can be split into two sets:

    [0...32767]
    [32768...65535]
The first set of numbers maps to [0...32767] in 2's complement.

But the second interval maps to [-32768...-1].

So it's not just a "shift" of [0...65535] onto another range. There's a discontinuous jump going from 32767 to 32768 (or -1 to 0 if converting the other direction).
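
GHC's Word16/Int16 (which are 2's complement) make that jump easy to see, since fromIntegral between them just reinterprets the same 16 bits:

  ghci> import Data.Word (Word16)
  ghci> import Data.Int (Int16)
  ghci> fromIntegral (32767 :: Word16) :: Int16
  32767
  ghci> fromIntegral (32768 :: Word16) :: Int16
  -32768
  ghci> fromIntegral (65535 :: Word16) :: Int16
  -1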

And actually, we don't know if the processor used 2's complement or 1's complement -- if it was 1's complement, they would have a signed 0!

I think they'd have to say "remapping" the range? On the whole, I think OP did about as well as you're going to do, given the audience.


> And actually, we don't know if the processor used 2's complement or 1's complement -- if it was 1's complement, they would have a signed 0!

We can infer it used two's complement, and absolutely rule out one's complement or any signed-zero system, because the stated range is [-2^15, 2^15), which contains exactly 2^16 = 65,536 distinct integers. A system with a signed zero spends two of its 65,536 bit patterns on zero, so it can only represent 65,535 distinct values: one too few for that range.


The range is of values, not of their representation in bits, which can be mapped in any order. You could specify that the bit representation for 0 was 0x1234 and the bit representation for 1 was 0x1134 and proceed accordingly, and the range of values for those 16 bits could still independently be [-32768, 32767] or [0, 65535] or [65536, 131071] if you wanted.

We know the signed int they're talking about can't be the standard 1's complement because its stated range of values is [-32768, 32767]. If the representation were 1's complement the range would be [-32767, 32767] to accommodate the -0. It could be some modified form of 1's complement where they redefine the -0 to actually be -32768, but that's not 1's complement anymore.


Everything in the three sentences you've highlighted from the article is correct. You may not like how they've chosen them, but those three sentences contain no lies.

Every negative number has 1 as its first bit; every positive number (including 0) has 0 as its first bit. Therefore the first bit encodes sign. The other 15 bits encode value. They may not encode negative integers the way you'd expect from how we encode unsigned integers, but you cannot explain every detail every time.
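
You can check exactly that claim in GHCi (bit 15 is the "first", i.e. most significant, bit of an Int16):

  ghci> import Data.Bits (testBit)
  ghci> import Data.Int (Int16)
  ghci> testBit (-1 :: Int16) 15
  True
  ghci> testBit (0 :: Int16) 15
  False
  ghci> testBit (12345 :: Int16) 15
  False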


It's not a sign bit or a range shift. Signed integers are 2-adic numbers. In an n-bit signed integer, the "left" bit b represents the infinitely repeating tail sum_{k >= n-1} b*2^k, which equals -b*2^(n-1) as a 2-adic integer.
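
The identity falls out of the geometric series, which converges 2-adically:

  (1 - 2) * (1 + 2 + 4 + 8 + ...) = 1,  so  1 + 2 + 4 + 8 + ... = -1
  sum_{k >= n-1} b*2^k = b * 2^(n-1) * (1 + 2 + 4 + ...) = -b * 2^(n-1)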

https://zjkmxy.github.io/posts/2021/11/twos-complement-2-adi...


Hey, so this is admittedly Monday morning quarterbacking, but in the future you can definitely consider moving from Google Authenticator to Twilio's Authy [1]. It lets you move devices and all your secrets come with you (it's also got other cool features, but the killer one IMO is the ability to migrate from device to device).

[1] https://authy.com/


I can’t recommend Authy enough. It’s multi-device from the start and has cloud backup.

I once broke my phone with Google Authenticator on it, and I spent 2 days locked out of my work accounts. Never risking that again.


One important note, though: the backup and multi-device support require their cloud servers*, so the threat model is a little different. They've got a blog post on how they do the cloud backup**, but since you need a password, it either needs to be something you can remember or be stored in a password vault that doesn't itself rely on getting a 2FA code from Authy for access.

* For the paranoid, there's a mode where it doesn't back up to the cloud, which makes it function the same as Google Authenticator, but that does defeat a lot of Authy's benefits.

** https://authy.com/blog/how-the-authy-two-factor-backups-work...


Sounds like they're finding out why most companies won't fuck around with outbidding competitors for talented employees just so that they can't work for a competitor.

