The people purchasing these hotspots, who find their bankruptcy claims against Helium inhibited by an intentional breach of contract.
> not like a packet that's a result of sharing/reselling suddenly "weighs" more
It does, due to peering arrangements. Data transfer having value is also Helium's claimed value-add. It's awkward to claim it costs nothing to provide but has value that can be sustainably charged for.
People running the hotspots should care; they could get prosecuted for unauthorized access of their ISP's network. Sharing your connection is not authorized. Don't like it? Lobby to change the law, but what they are doing is explicitly illegal.
I suppose ISPs could try to make an example of a few customers. But I think that would be horrible PR as well as very difficult to prove.
How would they distinguish authorized traffic from unauthorized? As their customer, how much do you think I can authorize? Can I grant my guests access? Can my smart lightbulbs connect?
Frequently, at design time you don't know enough. This is exactly why SQL is great: it is incredibly flexible. It can also let you get in your own way, and this is just a feature to help detect that.
For example, imagine a system where items are soft-deleted immediately upon user action, but not actually deleted for a few days, to facilitate restoration (a recycle bin).
There is going to be some nightly/hourly/scheduled job that actually, really deletes these records. Initially it will have little work to do, but over time, as the system grows, it may become slow. Typically this would be hard to separate from other slow queries; you would have to catch it running while it causes other queries to pile up. Query time isn't necessarily useful here, as you may have enough I/O to cover the slow query replacing pages in the cache, but that I/O would be better spent serving user-facing requests than this cleanup job.
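A minimal sketch of that recycle-bin pattern, using SQLite for self-containment; the table and column names (`items`, `deleted_at`) and the 3-day retention window are made up for illustration:

```python
import sqlite3
from datetime import datetime, timedelta

# Soft deletes are just a timestamp stamp; restore = clearing the stamp.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, deleted_at TEXT)"
)

now = datetime(2024, 1, 10)
conn.executemany(
    "INSERT INTO items (name, deleted_at) VALUES (?, ?)",
    [
        ("live", None),                                     # never deleted
        ("recent", (now - timedelta(days=1)).isoformat()),  # still restorable
        ("stale", (now - timedelta(days=7)).isoformat()),   # past retention
    ],
)

# The scheduled job: purge everything soft-deleted more than 3 days ago.
# As the table grows, this scan is the query that quietly gets slow.
cutoff = (now - timedelta(days=3)).isoformat()
cur = conn.execute(
    "DELETE FROM items WHERE deleted_at IS NOT NULL AND deleted_at < ?",
    (cutoff,),
)
purged = cur.rowcount  # only "stale" is past the window
```

The `WHERE deleted_at < ?` predicate is the part whose cost grows with table size unless `deleted_at` is indexed.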
This feature allows the "work" it really takes to serve the query to cause it to error, rather than elapsed time, which may grind down for other reasons. At that point you know it's time to rethink the soft-deletion strategy, so you disable the job. Maybe you sweep more frequently? Maybe you keep a look-aside list of things to sweep to avoid scans? Maybe you sweep during a low-traffic window? Whatever. It buys you time to think.
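The look-aside idea could look something like this sketch (again SQLite, with a hypothetical `purge_queue` table): soft deletes enqueue a purge entry, so the sweeper touches a small indexed queue in bounded batches instead of scanning the main table:

```python
import sqlite3
from datetime import datetime, timedelta

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE items (id INTEGER PRIMARY KEY, name TEXT, deleted_at TEXT);
-- Look-aside queue: one row per soft-deleted item, indexed by purge time.
CREATE TABLE purge_queue (item_id INTEGER PRIMARY KEY, purge_after TEXT);
CREATE INDEX purge_queue_time ON purge_queue (purge_after);
""")

def soft_delete(item_id, when, retention_days=3):
    conn.execute("UPDATE items SET deleted_at = ? WHERE id = ?",
                 (when.isoformat(), item_id))
    conn.execute("INSERT INTO purge_queue VALUES (?, ?)",
                 (item_id, (when + timedelta(days=retention_days)).isoformat()))

def sweep(when, batch=100):
    # Bounded batch: the sweeper never scans items, only the small queue.
    due = conn.execute(
        "SELECT item_id FROM purge_queue WHERE purge_after <= ? LIMIT ?",
        (when.isoformat(), batch)).fetchall()
    for (item_id,) in due:
        conn.execute("DELETE FROM items WHERE id = ?", (item_id,))
        conn.execute("DELETE FROM purge_queue WHERE item_id = ?", (item_id,))
    return len(due)

now = datetime(2024, 1, 10)
conn.executemany("INSERT INTO items (id, name, deleted_at) VALUES (?, ?, NULL)",
                 [(1, "a"), (2, "b"), (3, "c")])
soft_delete(1, now - timedelta(days=7))  # past retention, sweepable
soft_delete(2, now - timedelta(days=1))  # still restorable
purged = sweep(now)
```

The `LIMIT` on the queue read is what keeps each sweep's I/O bounded regardless of backlog size.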
I'd love to see an explanation of the security implications of each flow. As I understand it, the "most secure" flow is OAuth 1.0a (three-legged), but it's a total pain, so it is mostly avoided. OAuth 2.0 is dramatically simpler, but there are bespoke additions (Google and Facebook come to mind) that you have to handle, typically in the name of security. I am ignorant of all the implications and would like a guide.
You don't have to re-learn puppet/chef/cfengine/bash scripts every time; you learn it once. It requires much less effort to "maintain" than manually updating more than one box.
Honest? Then, if I were you, I'd try to be a bit less agreeable from time to time. I get plenty of downvotes. I would hope even pg gets his share; that would show sane critical thinking.
[One of the first computer models I ever wrote for was a three-address machine: the first address was a source operand, the second the destination, the third the address of the next instruction. The memory was a rotating drum.]
That would have been really cool, and a true hacker's machine, but I imagine the days when LISP needs (or benefits from) custom hardware are long gone.
Worked for an employer who used it. FAAANTASTIC. I met with the entire executive team out in SFO on Friday; great company. I have never seen anyone take such care in doing massive QA (including month-plus-long regression testing on every generation of their platform).
For backend nginx instances (we use nginx to balance application servers, and nginx right in front of Unicorn on those application servers), use the Real IP module so the logs transparently show the original request IP, not the load balancer's IP.
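A minimal sketch of the backend-side config, using nginx's ngx_http_realip_module; the 10.0.0.0/8 range is a placeholder for wherever your balancer tier actually lives:

```nginx
# Trust only the load balancers to supply the client address.
set_real_ip_from 10.0.0.0/8;
real_ip_header   X-Real-IP;
# The front nginx must set that header when proxying, e.g.:
#   proxy_set_header X-Real-IP $remote_addr;
```

With this in place, `$remote_addr` (and therefore the access log) on the backend reflects the original client rather than the balancer.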