I guess it can be useful for tracking fugitive political dissidents, terrorists, etc. If you can narrow their location down to 250 miles, it's already very useful information. And without raising any suspicions.
It's not really narrowing it down to 250 miles; it's narrowing it down to a circle whose radius is at least 250 miles, which works out to roughly 196,000 mi^2.
My closest Cloudflare CDN is just listed as "DFW". The DFW metro area is about 8,700 mi^2, and I imagine I could be even further out than the "metro area" and still get the "DFW" Cloudflare datacenter.
In their little video animation, the area inside the overlap of those two circles encompasses several states. The edges of the two circles go from Washington to Florida and almost include Chicago. The target could have been in Denver or St Louis or Las Vegas or Phoenix or San Diego or San Francisco or Amarillo or El Paso.
If only we knew OBL's Discord handle then we would have known he was about where we figured he was all along...
And this whole thing gets thrown off if you use a VPN with an endpoint somewhere other than where you are. Click a button, suddenly my datacenter is AMS. Click it again, suddenly it's OTP...
>If only we knew OBL's Discord handle then we would have known he was about where we figured he was all along...
Discord is just an example, this can apparently work with many apps that store user attachments on Cloudflare.
>Click a button, suddenly my datacenter is AMS. Click it again, suddenly its OTP...
Well, if the location keeps changing, it's obvious it's not their real location. But if it’s always the same, no matter what, that’s a huge clue. Of course, this works best when you’ve got some other data to back it up. It’s kind of like playing Akinator - the more answers you get, the closer you get to figuring out the target. One answer might not tell you much, but three or four?
In their example it pinged two datacenters, one in Dallas and one in San Francisco. The target's requests might bounce between datacenters even when they aren't on a VPN.
This assumes that Osama bin Laden has poor enough opsec that he's using (e.g.) Discord without a proxy. State actors have much more sophisticated techniques available.
(It's still an interesting vector, though! But it's true that the headline and writeup are a bit sensationalized.)
Passing the current user ID/tenant ID inside ctx has been super useful for us. We’re already using contexts for cancellation and graceful termination, so our application-layer functions already have them. Makes sense to just reuse them to store user and tenant IDs too (which we pull from access tokens in the transport layer).
We have DB sharding, so the DB layer needs to figure out which shard to choose. It does that by grabbing the user/tenant ID from the context and picking the right shard. Without contexts, this would be way harder—unless we wanted to break architecture rules, like exposing domain logic to DB details, and it would generally just clutter the code (passing tenant ID and shard IDs everywhere). Instead, we just use the "current request context" from the standard lib that can be passed around freely between modules, with various bits extracted from it as needed.
What are the alternatives, though? Syntax sugar for retrieving variables from some sort of goroutine-local storage? Not good, we want things to be explicit. Force everyone to roll their own context-like interfaces, since a standard lib's implementation can't generalize well for all situations? That’s exactly why contexts were introduced—because nobody wanted to deal with mismatched custom implementations from different libs. Split it into separate "data context" and "cancellation context"? Okay, now we’re passing around two variables instead of one in every function call. DI to the rescue? You can hide userID/tenantID with clever dependency injection, and that's what we did before we introduced contexts to our codebase. But that meant allocating an individual dependency tree for each request (we embedded userID/tenantID and other request details inside request-specific service instances, to hide them from the domain layer and simplify domain logic), and it stressed the GC.
An alternative is to add all dependencies explicitly to the function argument list or object fields, instead of pulling them implicitly from the context without documentation or static typing. Including the logger.
Main problems with passing dependencies in function argument lists:
1) it pollutes the code and makes refactoring harder (a small change in one place must be propagated to all call sites in the dependency tree which recursively accept user ID/tenant ID and similar info)
2) it violates various architectural principles. For example, from the point of view of our business logic there's no such thing as a "tenant ID"; it's an implementation detail for storing data more efficiently. If we just relied on function argument lists, we'd have to litter actual business logic with infrastructure-specific references to tenant IDs and the like, just so the underlying DB layer could figure out what to do.
Sure, it can be solved with constructor-based dependency injection (i.e. request-specific service instances are generated for each request, and we store user ID/tenant ID & friends as object fields of such request-scoped instances), and that's what we had before switching to contexts, but it resulted in excessive allocations and unnecessary memory pressure for our high-load services. In complex enterprise code, those dependency trees can be quite large -- and we ended up allocating huge dependency trees for each request. With contexts, we now have a single application-scoped service dependency tree, and request-specific stuff just comes inside contexts.
Both problems can be solved by trying to group and reuse data cleverly, but eventually you'll get back to square one with an implementation that looks similar to context.Context yet isn't reusable/composable.
>Including logger.
We don't store loggers in ctx, they aren't request-specific, so we just use constructor-based DI.
I believe this problem isn't solvable under our current paradigm of programming, which I call "working directly on plaintext, single-source-of-truth codebase".
Tenant ID, cancellations, loggers, error handling are all examples of cross-cutting concerns. Depending on what any given function does, and what you (the programmer) are interested in at a given moment, any of them could be critical information or pure noise. Ideally, you should not be seeing the things you don't care about, but our current paradigm forces us to spell out all of them, at all times, hurting readability and increasing complexity.
On the readability/"clean code", our most advanced languages are operating on a Pareto frontier. We have whole math fields being employed in service of packaging up common cross-cutting concerns, as to minimize the noise they generate. This is where all the magic monads come from, this is why you have to pay attention to infectious colors of your functions, etc. Different languages make slightly different trade-offs here, to make some concerns more readable, but since it's a Pareto frontier, it always makes some other aspects of code less comprehensible.
In my not so humble opinion, we won't progress beyond this point until we give up on the paradigm itself. We need to accept that, at any given moment, a programmer may need a different perspective on the code, and we need to build tools to allow writing code from those perspectives. What we now call source code should be relegated to the role of intermediary/object code - a single source of truth for the bowels of the compiler, but otherwise something we never touch directly.
Ultimately, the problem of "context" is a problem of perspective, and should be solved by tooling. That is, when reading or modifying code, I should be able to ignore any and all context I don't care about. One moment, I might care about the happy path, so I should be able to view and edit code with all error propagation removed; at another moment, I might care about how all the data travels through the module, in which case I want to see the same code with every single goddamn thing spelled out explicitly, in the fashion GP is arguing to be the default. Etc.
Plaintext is fine. Single source of truth is fine. A single all-encompassing view of everything in a source file is fine. But they're not fine all together, all the time.
Monads, but more importantly monad transformers, so you can program in a legible fashion.
However, there's a lot of manual labour to stuff everything into a monad, and then extract it and pattern match when your libraries don't match your choice of control flow monad(s)!
This is where I'd prefer if compilers could come in.
Imagine being in the bowels of a DB lib and realising that the function you just wrote might be well positioned to terminate the TCP connection it's using to talk to the database. Oh no: now you have to update the signature and every single call site for its parent, and its parent, and...
Instead, it would be neat if the compiler could treat things you deem cross-cutting as a graph traversal problem: call a cancelable method and all callers are automatically cancelable. Decisions about whether to spawn a cancelable subtree, to 'protect' some execution, or to set a deadline are then made on an opt-in basis per function; all functions compose. The compiler can visualise the tree of cancellation (or hierarchical loggers, or OT spans, or actors, or green fibers, or ...) and it can enforce the global invariant that the entry point captures SIGINT (or sets up logging, or sets up a tracer, or ...).
So imagine the infrastructure of a monad transformer, but available per-function on an opt-in basis. If you write your function to have a cleanup on cancellation, or write logs around any asynchronous barrier, the fiddly details of stuffing the monad is done by the compiler and optionally visualised and explained in the IDE. Your code doesn't have to opt-in, so you can make each function very clean.
Yes, there's plenty of space for automation and advanced support from tooling. Hell, not every perspective is best viewed as plaintext; in particular, anything that looks like a directed graph fundamentally cannot be well-represented in plaintext at all without repeating nodes, breaking the 1:1 correspondence between a token and a thing represented by that token.
Still, I believe the core insight here is that we need different perspectives at different times. Using your example, most of the time I probably don't care whether the code is cancellable or not. Any mention of it is distracting noise to me. But other times - perhaps next day, or perhaps just five minutes later, I suddenly need to know whether the code is cancellable, and perhaps I need to explicitly opt out of it somewhere. It's highly likely that in those cases, I may not care about things like error handling logic and passing around session identifiers, and I would like that to disappear in those moments, etc.
And hell, I might need an overview of which code is or isn't protected, and that would be best served by showing me an interactive DAG of functions that I can zoom around and expand/collapse, so that's another kind of perspective. Etc.
EDIT:
And then there's my favorite example: the unending holy war of "few fat functions" vs. "lots of tiny functions". Despite the endless streams of Tweets and articles arguing for either, there is no right choice here - there's no right trade-off you can make here up front, and can never be, because which one is more readable depends strictly on why you're reading it. E.g. lots of tiny functions reduce duplication and can introduce a language you can use to effectively think about some code at a higher level - but if there's a thorny bug in there I'm trying to fix, I want all of that shit inlined into one, big function, that I can step through sequentially, following the actual execution order.
It is my firm belief that the ability to inline and uninline code on the fly, for yourself, personally, without affecting the actual execution or the work of other developers, is one of the most important missing pieces in our current tooling, and making it happen would be a good first step towards abandoning The Current Paradigm that is now suffocating us all.
Second one would be, along with inlining, the ability to just give variables and parameters fixed values when reading, and have those values be displayed and propagated through the code - effectively doing a partial simulation of code execution. Being able to do it ad hoc, temporarily, would be a huge aid in quickly understanding what some code does.
Promises are so incredibly close to being a representation of work.
The OS has such sophisticated tools for process management, but inside a process there are so many subprocesses going on, and it feels like we are flailing about with poorly managed process-like things. (Everyone except Erlang.)
I love how close zx comes to touching the sky here. It's a TypeScript library for running processes, as a tagged template function returning a promise: const hello = $`sleep 3; echo hello world`. But the promise isn't just "a future value"; it is a ProcessPromise for interacting with the running process.
I so wish promises were just a little better. It feels like such a bizarre tragedy to me that "a promise is a future value", rather than a thing unto itself, won the day in ES6/ES2015 and destroyed the possibility of a promise being more; zx has run into a significant number of ergonomic annoyances because of this small-world dogma.
How cool it would be to see this go further. I'd love for the language to show what promises, if any, this promise is awaiting! I long for that dependency graph of subprocesses to start to show itself, not just at compile time, but with the runtime able to actively observe and manage the subprocesses within it. We keep building workflow engines and robust userlands that manage their own subprocesses, but the language itself seems so close and yet so far from letting the simple promise become more of a process, and that seems like a sad shame.
> it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID"
I'm not sure I understand how hiding this changes anything. Could you just not pass "tenant ID" to doBusinessLogic function and pass it to saveToDatabase function?
That's exactly what they're talking about: "tenantId" shouldn't be in the function signature for functions that aren't concerned with the tenant ID, such as business logic.
I've worked on (and variously built and ripped out) systems like that, and I end up in the "more trouble than it's worth" camp here. Context-ish things do have considerable benefits, but the costs are also major.
If context isn't uniform and minimal, and people can add/remove fields for their own purposes, the context becomes a really sneaky point of coupling.
Adapting context-ful code from a request-response world to (for example) a parallel-batch-job world or continuous stream consumer world runs into friction: a given organization's idioms around context usually started out in one of those worlds, and don't translate well to others. If I'm a worker thread in a batch job working on a batch of "move records between tenant A and tenant B" work, but the business logic methods I'm calling to retrieve and store records are sensitive to a context field that assumes it'll be set in a web request (and that each web request will be made for exactly one tenant), what do I do? If your business is always going to be 99% request/response code, then sure, hack around the parts that aren't. But if your business does any continuous data pipeline wrangling, you rapidly end up with either a split codebase (request-response contextful vs "things that are only meant to be called from non-request-response code") or really thorny debugging around context issues in non-request-response code.
If you choose to deal with context thread-locally (or coroutine locally, or something that claims to be both but is in reality neither--looking at you, "contextlib"), that sneaky context mutation by the concurrency system multiplies the difficulties in reasoning about context behavior.
> it violates various architectural principles, for example, from the point of view of our business logic, there's no such thing as "tenant ID"
I think a lot of people lose sight of how incredibly useful explicit dependency management is because it's classed as "tight coupling" and "bad architecture" when it's nothing of the sort. I blame 2010s Java and dependency inversion/injection brainrot.
Business logic is rarely pure; most "business" code functions as transforming glue between I/O. The behavior of the business logic is fundamentally linked to _where_ (and often _how_ as well--e.g. is it in a database transaction?) it interacts with datastores and external services. "Read/write business code as if it didn't have side effects" is not a good approach if code is _primarily occupied with causing side effects_--and, in commercial software engineering, most of it is!
From that perspective, explicitly passing I/O system handles, settings, or whatnot everywhere can be a very good thing: when reading complex business logic, the presence (or absence) of those dependencies in a function call tells you what parts of the system will (or can) conduct I/O. That provides at-a-glance information into where the system can fail, where it can lag, what services or mocks need to be running to test a given piece of code, and at a high level what data flows it models (e.g. if a big business logic function receives an HTTP client factory for "s3.amazonaws.com/..." and a database handle, it's a safe bet that the code in question broadly moves data between S3 and the database).
While repetitive, doing this massively raises the chance of catching certain mistakes early. For example, say you're working on a complex businessy codebase and you see a long for-loop around a function call like "process_record(record, database_tenant_id, use_read_replica=True, timeout=5)"? That's a strong hint that there's an N+1 query/IO risk in that code, and the requirement that I/O system dependencies be passed around explicitly encodes that hint _semantically_.
That kind of visibility is vastly superior to "pure" and uncluttered business logic that relies on context/lexicals to plumb IO around. Is the pure code less noisy and easier to interpret? Sure, but the results of that interpretation are so much less valuable as to be actively misleading.
Put another way: business logic is concerned with things like tenant IDs and database connections; obscuring those dependencies is harmful. Separation of concerns means that good business code is code that avoids mutating, or making decisions based on, the dependencies it receives--not that it doesn't receive them/use them/pass them around.
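As a small illustration of that argument, here is a hedged Go sketch in which the datastore handle is an explicit parameter. The Store interface, fakeStore, and processRecord are invented for the example; the point is that the signature itself advertises the I/O, and the loop makes the "N writes" pattern visible right at the call site:

```go
package main

import "fmt"

type Record struct{ ID int }

// Store stands in for a real database handle (hypothetical interface).
type Store interface {
	Save(tenantID string, r Record) error
}

// fakeStore counts calls so the I/O volume is observable in this demo.
type fakeStore struct{ calls int }

func (s *fakeStore) Save(tenantID string, r Record) error {
	s.calls++
	return nil
}

// processRecord takes its I/O dependency explicitly: the Store parameter
// in the signature tells readers this function can write to the database.
func processRecord(s Store, tenantID string, r Record) error {
	return s.Save(tenantID, r)
}

func main() {
	s := &fakeStore{}
	records := []Record{{1}, {2}, {3}}
	// Because the dependency is explicit, a reviewer can spot the
	// one-write-per-record pattern (the "N+1" hint) in this loop.
	for _, r := range records {
		if err := processRecord(s, "tenant-42", r); err != nil {
			panic(err)
		}
	}
	fmt.Println("store calls:", s.calls) // prints "store calls: 3"
}
```

A context-based version would behave identically, but nothing in the loop's text would reveal that each iteration touches the database.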
I have a feeling that, if Context disappears, you'll just see "Context" becoming a common struct that is passed around. In Python, unlike in C# and Java, the first param of a method is the class instance itself, usually called "self", so I could see this becoming the norm in Go.
Under the hood, in both Java and C#, the first argument of an instance method is the instance reference itself. After all, instance methods imply you have an instance to work with. Having to write 'this' by hand like that is how OOP was done before OOP languages became a thing.
I agree that adopting yet another pattern like this would be on brand for Go since it prizes taking its opinionated way of going about everything in a vintage kind of way over being practical and convenient.
As a newcomer to Go, a lot of their design decisions made a lot of sense when I realized that a lot of the design is based around this idea of "make it impossible to do something that could be dumb in some contexts".
For example, I hate that there's no inheritance. I wish I could create a ContainerImage object and then a RemoteContainerImage subclass and then QuayContainerImage and DockerhubContainerImage subclasses from those. However, being able to do inheritance, and especially multiple inheritance, can lead to awful, idiotic code that is needlessly complicated for no good reason.
At a previous job we had a script that would do operations on a local filesystem and then FTP items to a remote. I thought okay, the fundamental paradigms of FTP and SFTP-over-SSH via the paramiko module are basically identical so it should be a five minute job to patch it in, right?
Turns out this Python script, which, fundamentally, consisted of "take these files here and put them over there" was the most overdesigned piece of garbage I've ever seen. Clean, effective, and entirely functional code, but almost impossible to reason about. The code that did the actual work was six classes and multiple subclasses deep, but assumptions were baked in at every level. FTP-specific functionality which called a bunch of generic functionality which then called a bunch of FTP-specific functionality. In order to add SFTP support I would have had to effectively rewrite 80% of the code because even the generic stuff inherited from the FTP-specific stuff.
Eventually I gave up entirely and just left it alone; it was too important a part of a critical workflow to risk breaking and I never had the time or energy to put my frustration aside. Golang, for all its flaws, would have prevented a lot of that because a lot of the self-gratification this programmer spent his time on just wouldn't have been possible in Go for exactly this reason.
> As a newcomer to Go, a lot of their design decisions made a lot of sense when I realized that a lot of the design is based around this idea of "make it impossible to do something that could be dumb in some contexts".
You are indeed a newcomer :) May you only ever shoot yourself in the foot in dev environments.
It sounds like you may have some friction-studded history with Go. Any chance you can share your experience and perspective with using the language in your workloads?
It's mostly deadlocked networking code. Hard to investigate, hard to track down the culprit. And of course codebases without a linter for err propagation and handling. And nil receivers on "methods".
> instead of using them implicitly from the context, without documentation and static typing
This is exactly what context is trying to avoid, and makes a tradeoff to that end. There's often intermediate business logic that shouldn't need to know anything about logging or metrics collection or the authn session. So we stuff things into an opaque object, whether it's a map, a dict, a magic DI container, "thread local storage", or whatever. It's a technique as old as programming.
There's nothing preventing you from providing well-typed and documented accessors for the things you put into a context. The context docs themselves recommend it and provide examples.
If you disagree that this is even a tradeoff worth making, then there's not really a discussion to be had about how to make it.
I disagree that it's a good approach. I think parameters should always be passed down explicitly, as parameters. It allows the compiler to detect unused parameters and it removes all implicitness.
It is verbose indeed, and maybe there should be programming-language support to reduce that verbosity. Some languages support implicit parameters, which have proved problematic, but maybe there should be more iteration in that direction.
I consider context for passing down values to do more harm than good.
Other responses cover this well, but: the idea of having to change 20 functions to accept and propagate a `user` field just so that my database layer can shard based on userid is gross/awful.
...but doing the same with a context object is also gross/awful.
"ChatGPT reveals in its responses that it is aligned with American culture and values, while rarely getting it right when it comes to the prevailing values held in other countries. It presents American values even when specifically asked about those of other countries. In doing so, it actually promotes American values among its users," explains researcher Daniel Hershcovich, of UCPH's Department of Computer Science.
>The law banning TikTok, which was scheduled to go into effect Sunday, allows the president to grant a 90-day extension before the ban is enforced, provided certain criteria are met.
Lots of American social media are banned here by the Russian government (all for the same reason of protecting citizens from foreign adversaries), and we just use VPN. We're used to it, and if a service is popular (like Instagram), it's practically impossible to ban it. Monetization provided by the service is replaced by embedding sponsors' videos directly in the video (and getting money directly from the sponsor without third parties), or by selling merchandise to fans.
I wonder how many Americans will just use VPN? Is it common to use VPN in the US? Here, almost everyone uses it now. A few weeks ago they suddenly banned Viber for some reason and I barely noticed it.
As someone in Australia, which I assume is fairly similar, we really don't use VPNs; at the very least the average person doesn't, and their use isn't common knowledge. However, I have friends in China, where, as in your case, VPNs are used by the majority.
We are used to having access to pretty much everything we want access to.
The most popular apps and services used around the world are largely readily available in the US and do not need VPNs to use.
The TikTok ban is, as far as I can remember, the first time a major platform used by the masses has been banned in the US. Because everyday people don't use VPNs, I'd say everyone will flock to Instagram rather than keep trying to access TikTok. If nobody you know is using TikTok, why use it at all?
So I opened "GPT-4o with scheduled tasks" in the mobile app and there was no hint in the UI about how to use it. I asked, "what's a scheduled task?" and it answered with a generic response about scheduled tasks in general. Then I tried my luck and said, "remind me to pet my cat in 5 minutes," and it seemed to work. I then closed the mobile app, but no push notification came after 5 minutes; however, I got an email, which I didn't expect (I expected a push notification). Clearly the feature needs more polish.