A basic way to head off most of the security issues is throwing it behind a VPN (eg: wireguard) - no need to put stuff on the public internet if it's just for your own consumption. You can still include your mobile devices etc.
Separately, I think k8s is a solution to much of the difficulty. I don't use it outside of work as the baseline costs are too much (my personal cloud bill is under $10 and I want to keep it in that range), but the packaging offered by well-maintained Helm charts is hard to pass up - people dunk on it for being complex, but IMO it mostly exposes inherent complexity and simplifies a lot of other stuff.
Ha, I got that same book from the public library in my early teens.
I never completed it at the time either, though it created a foundation that let me learn ActionScript (Adobe Flash) with relative ease, and ultimately go on to complete a computer science degree despite pressure from my high school teachers to go into mechanical engineering or similar.
On balance, I got to pursue something that genuinely interested me and happened to pay well. I'll always have fond memories of the Sams book, as well as the free Ubuntu CDs that got me onto Linux years before we got broadband.
My specific use case is pattern matching http status codes to an expected response type, and today I'm able to work around it with this kind of construct https://github.com/mnahkies/openapi-code-generator/blob/main... - but it's esoteric, and feels likely to be less efficient to check than what you propose / a range type.
There's runtime checking as well in my implementation, but it's a priority for me to provide good errors at build time
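For illustration, here's a rough sketch of the kind of workaround available today - the names and response shapes below are invented, not taken from the linked generator. Without numeric range types, a "2xx" has to be spelled out as an explicit literal union that TypeScript can narrow on:

```typescript
// Workaround sketch: enumerate the 2xx codes as a literal union,
// since TypeScript has no numeric range type like `200..299`.
type StatusCode2xx = 200 | 201 | 202 | 204;

interface Ok {
  status: StatusCode2xx;
  body: { id: string };
}
interface Err {
  status: 400 | 404 | 500;
  body: { message: string };
}
type ApiResponse = Ok | Err;

function handle(res: ApiResponse): string {
  // The switch on the `status` discriminant narrows the union,
  // so each branch sees the matching body type at compile time.
  switch (res.status) {
    case 200:
    case 201:
    case 202:
    case 204:
      return res.body.id; // narrowed to Ok
    default:
      return res.body.message; // narrowed to Err
  }
}
```

A proper range type would let the compiler check `status: 200..299` directly instead of forcing every code to be listed, which is presumably also cheaper for the checker than a large literal union.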
Tying legislation/compliance requirements to specific vendors (Apple/Google) that happen to be dominant today feels wild to me (as opposed to open standards).
Surely that directly entrenches their moat and raises the difficulty for any new market entrants trying to compete (leaving us with the effective duopoly we have today).
I fear this is increasingly becoming the case for most digital businesses through blanket requirements that don't taper into effect with the maturity/scale of the business - it's a legislative pulling up of the ladder behind them by creating high barriers to entry.
One way we dealt with this in the past was assigning an "affinity" to each tenant and basically routing their writes/reads to that host, excepting if that host was down.
You would still get weird replication issues/conflicts when requests failed over in some conditions, but it worked fairly well the majority of the time.
These days I'd stick to single primary/writer as much as possible though tbh.
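The routing itself can be sketched in a few lines - this is a hypothetical reconstruction, with invented names and a trivial health flag standing in for real health checks:

```typescript
// Sketch of tenant-affinity routing with failover.
type Host = { name: string; healthy: boolean };

function routeForTenant(tenantId: string, hosts: Host[]): Host {
  // Stable affinity: hash the tenant id onto the host list so the
  // same tenant always lands on the same host.
  let hash = 0;
  for (const ch of tenantId) hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  const preferred = hosts[hash % hosts.length];
  if (preferred.healthy) return preferred;
  // Fail over to any healthy host - this is exactly where the
  // replication conflicts mentioned above can creep in.
  const fallback = hosts.find((h) => h.healthy);
  if (!fallback) throw new Error("no healthy hosts");
  return fallback;
}
```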
Not a bathroom one, but the number of times I've tried to pay for public transport with my work/office fob is mental. It generally happens on days when I'm feeling sharper than average but also consumed with problem solving.
Funny thing is, I'm actually on board with that if all said data stays local and in my control. I can understand why it's beneficial for providing a more useful experience, but I'm also very concerned about the potential negative externalities.
Sadly that doesn't seem to be the direction we're headed in, but I could happily get behind proprietary models that run on my machine/infrastructure - I'd always prefer OSS but the key issue with generative AI as it stands today (for me) is data privacy/sovereignty.
I want to use these tools to accelerate my progress; I don't want to hand over my business ideas or personal life to the void, to be seen by quality checkers and so forth.
The balance I'm striking at the moment is experimenting heavily with them for my OSS work, since it's in the public domain regardless, and being very cautious around anything more commercial.
It's probably improved the past 8 years or so, but I remember Safari was particularly bad for bugs around DST and just dates in general back then, even when using valid input.
We ended up with a bunch of Safari-specific workarounds that weren't necessary on Chrome (it was mostly a webview use case, so Safari and Chrome were the two we cared about at the time).
It's a fun quiz, and there's a lot of surprising behaviour. However, in my opinion it mostly doesn't matter from a practical perspective.
Think hard about whether your use case really cares about local time, and try to find ways to make instants appropriate. Then stick to UTC ISO 8601 strings / Unix timestamps and most of the complexity goes away, or at least gets contained to a small part of your software.
I know this isn't always possible (I once had to support a feature that required the user to take a break covering two periods of 1-5am local time, which was obviously fun around DST boundaries) - but in my experience, the majority of the time you can find ways to minimise the surface area that cares.
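A minimal sketch of what "contain it to a small part of your software" looks like in practice - store and pass around the UTC instant, and only convert at the display edge (the zone and format choices here are just examples):

```typescript
// An unambiguous instant, stored and transported as UTC ISO 8601.
const instant = new Date("2024-01-15T17:00:00Z");

// This is the canonical representation that lives in the database,
// API payloads, logs, etc.
const stored = instant.toISOString(); // "2024-01-15T17:00:00.000Z"

// Only at the UI edge do we render it in a particular time zone.
const display = new Intl.DateTimeFormat("en-US", {
  timeZone: "America/New_York",
  dateStyle: "medium",
  timeStyle: "short",
}).format(instant); // e.g. "Jan 15, 2024, 12:00 PM"
```

Everything between storage and display then deals in a single, comparison-safe representation, and the timezone logic lives in one place.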
If you're passing raw/unvalidated user input to the date parser you're holding it wrong.
Given that the right way to turn user input into validated input is to _parse_ it, passing it to the language feature called the _date parser_ is a completely reasonable thing to do. That this doesn't work probably doesn't surprise JavaScript programmers much.
Yeah this is a fair take - I guess my unwritten stipulation was don't expect anything from the JavaScript standard library to behave as you'd expect, outside of fairly narrow paths.
TBH, even when working with other languages I'd skew towards doing this, possibly because I've worked with JavaScript/TypeScript too much. It's a balance, but there's a certain comfort in making constraints really explicit in code you control, over blindly trusting the standard library to keep its behaviour over the lifetime of the product.
It's not just JS. I'm familiar with a language interpreter that used C++ stream I/O for parsing numbers. When C++ expanded the syntax for numbers it broke the interpreter in some cases. This isn't too bad if you catch it quickly but if people start relying on the new, undocumented feature it can be impossible to fix without upsetting someone.
Every time someone says "just stick to UTC ISO 8601 strings / Unix timestamp", it's clear they've only worked with dates in very specific ways.
Try that tactic with FUTURE dates.
"Meet at 7pm" still means meet at 7pm when timezones change, when countries change when their summer time starts, etc. Which happens all the time.
And it's actually a more subtle problem. You actually need the context of the timezone in some applications. If your application is showing dinner reservations, for example, you want to display the time in the local time of the restaurant, not the local time of the user. You want to know when it's booked THERE, not where you happen to be right now. I want to know my booking is at 7pm, not 1pm because I happen to be in America right now.
So using GMT/UTC is not a panacea for all the date problems you face.
It's only a solution for dates in the past. And even then you might argue that sometimes it's worth also storing the local time of the user/thing the event happened to, or at the very least the timezone they were in when it happened in a separate field.
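One way to model this for future events - a hypothetical sketch, with invented field names - is to store the wall-clock time plus the IANA zone, and resolve to an instant only when you actually need one (using whatever tz rules are in effect at that point):

```typescript
// For a future event, store what the user actually meant: the
// wall-clock time at the venue, plus the venue's IANA time zone.
// A precomputed UTC instant would silently shift if the zone's
// rules change between now and the event.
interface Reservation {
  wallClock: string; // "2030-06-01T19:00" as seen at the venue
  timeZone: string;  // IANA identifier, e.g. "Europe/Paris"
}

const dinner: Reservation = {
  wallClock: "2030-06-01T19:00",
  timeZone: "Europe/Paris",
};

// Display uses the stored wall clock directly: the booking reads
// 19:00 whether the user is in Paris or America.
function describe(r: Reservation): string {
  return `${r.wallClock.slice(11)} (${r.timeZone})`;
}
```

Resolving the pair to an actual instant (for reminders, sorting, etc.) is where a tz-aware library earns its keep, since the rules can change between booking and event.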
> when timezones change, countries make changes to when their summer time starts, etc. Which happens all the time.
The frequency with which time zones are changed surprised me the first time I looked it up. For a single country it's probably a big deal that doesn't happen too often, but internationally there are several changes each year - I think it's been something like 4-6 changes per year over the past decades.
The IANA timezone database does occasionally revise historical rules when relevant information comes to light. So you might have calculated that some historical figure died at such-and-such a time UTC (based on a documented local time), but actually that's inaccurate and you might not know because you haven't redone the calculation with the new rules.
Are any other languages more sane with this stuff? Because I've also been to this particular corner of hell. Also in a web app.
But - wouldn't this be just as horrible in Go or Rust or any other language? (Or god forbid, C?) Are there better timezone APIs in other languages than what you can find in NPM that make these problems any easier to deal with?
Elixir handles all of this pretty well, for example differentiating between Date, Time, NaiveDateTime (a date and time), and DateTime (a date and time in a particular timezone). The Timex library is a great improvement too.
JS has libraries that make the problem easier too. Though you'll never find a magic library that makes all the complexity disappear, because at the end of the day you do need to tell the computer whether you mean 7 pm on the wall clock of wherever the users happen to be, 7 pm on a specific date in a specific time zone, etc., and doing math between dates is inherently complex with e.g. daylight saving changes. You might end up creating a datetime that doesn't exist (which Elixir handles well).
> If you're passing raw/unvalidated user input to the date parser you're holding it wrong.
Oh absolutely. But the difference between a sane API and other APIs is that a sane one will fail in a sane way, ideally telling me I'm holding it wrong.
Especially: the key at every turn is to fail hard if anything is even slightly wrong, rather than do something potentially incorrect. Many JS APIs seem designed to continue at any cost, which is the fundamental problem. You never really want a NaN. You don't want to coerce strings to [whatever].
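The quiet-failure mode is easy to demonstrate - invalid input yields `Invalid Date` and `NaN` rather than an error, and the `NaN` then propagates silently through later arithmetic:

```typescript
// No exception anywhere in this chain, just a NaN that surfaces
// far away from the line that actually caused it.
const d = new Date("definitely not a date"); // Invalid Date, no throw
const t = d.getTime();                       // NaN, no throw
const later = t + 60_000;                    // still NaN - arithmetic happily continues
```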
I agree with this. I do think it’s an easy trap to fall into if you’re unfamiliar, and hopefully this quiz has made a whole wave of folks more familiar. :)
I agree, and/or give an option to specify the DST offset - that is sometimes useful. I was always confused that Excel did not infer the format when opening CSVs, though.
"If you're passing raw/unvalidated user input to the date parser you're holding it wrong."
Exactly. I would never have thought about using the Date class in this way. So the behaviour is pretty much WTF, and local time can get pretty complicated, but I wouldn't expect to get the right time when passing in vague strings.
The entire point of a parser is to parse unknown values. That's the entire job of a parser: take unstructured (usually string) data and turn it into something structured, such as a date. If it can't do that reliably, with error reporting, then it's not a good parser on a very fundamental level.
There are so many valid and reasonable cases where this will bite you.
"Real-world data" from CSV files or XML files or whatnot (that you don't control) and sometimes they have errors in them, and it's useful to know when instead of importing wrong data.
You do something wrong by mistake and getting wrong/confusing behaviour that you literally never want, and then you need to debug where that's coming from.
The user gives you a date, you have no idea what they entered and you want to know if that's correct.
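A strict wrapper that fails loudly is only a few lines - this is an illustrative sketch (the function name and the exact format it accepts are my own choices), not a complete ISO 8601 validator:

```typescript
// Accept only ISO 8601 date / date-time strings and throw on
// anything else, instead of letting Date silently coerce vague input.
function parseIsoDate(input: string): Date {
  const isoPattern =
    /^\d{4}-\d{2}-\d{2}(T\d{2}:\d{2}(:\d{2}(\.\d+)?)?(Z|[+-]\d{2}:\d{2}))?$/;
  if (!isoPattern.test(input)) {
    throw new Error(`not an ISO 8601 date: ${input}`);
  }
  const d = new Date(input);
  // Catches well-formed but impossible dates like "2024-13-40".
  if (Number.isNaN(d.getTime())) {
    throw new Error(`unparseable date: ${input}`);
  }
  return d;
}
```

The caller then gets either a valid `Date` or an immediate error pointing at the bad input - exactly the failure mode you want for CSV imports and user-supplied values.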
I agree on a theoretical level, but this is JavaScript and the web we are talking about. Invalid input is rather the norm in general, with the expectation that the browser should still display something.
But I do dream of an alternative timeline where the web evolved differently.
Let's say you're setting an appointment. The user puts in nonsense, so you helpfully schedule an appointment for a nonsense date (thank you so much, we'll get right to that in -124 years). Instead of... catching a parsing error and asking the user to try again or something? It's wild that a nonsense date would be considered for any purpose at all in a user-centric system.
If you really ask me, I don't build forms that accept strings as dates from users. There is a date picker element that can be restricted and validated.
You still need to do some validation of the input because it's difficult to impossible (in many cases) to be absolutely sure the input you receive only comes from your validated form. Even code running entirely within the browser can receive broken/malicious input from an extension, user script, or even the host OS.
It can be a bit belt-and-suspenders doing validation of specific forms, but shit happens. It's much better to catch stuff before it's persisted on the back end or on disk.
And there can also be man-in-the-middle attacks or whatever; how much effort you put into validation still depends on the task at hand, and on how critical an error would be.
But even for the most trivial tasks, I would never think of passing user strings to Date and expecting to get a valid value.
Really wish that GCP had an AWS Lightsail equivalent offering. I'd happily make use of services like GCS and Pub/Sub for my personal projects if they did, but I can't justify the GCE cost.