For the curious, here is what the UI looks like: you get a list of projects to choose from (I chose the goo.gl project) and a "Current project" tab that shows the project's activity.
For those in the know: is this heavy on disk usage? Should I install it on my hard drive or my SSD? I just want to avoid tons of disk writes on an SSD if it's unnecessary.
IMO it's less Google's fault and more a crappy tech education problem.
It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.
And really, it's not much different from anything else online - it can disappear on a whim. How many of those shortened links even go to valid pages anymore?
And no company is going to maintain a "free" service forever. It's easy to say, "It's only ...", but you're not the one doing the work or paying for it.
> It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors. They didn't publish a book or write an academic paper in a vacuum - somebody around them should have known better and said something.
It's a great idea, and today, in 2025, papers are pretty much the only place where these shortened URLs still make a lot of sense. In almost any other context you could just use a QR code or something, but that wouldn't fit an academic paper.
Their specific choice of URL shortener was obviously unfortunate. The real failure is DOI's: it never provided an alternative to goo.gl or tinyurl that is as easy to reach for. That's a big failure, since preserving references to things like academic papers is part of its stated purpose.
Even normal HTTP URLs aren't great. If there were ever a case for content-addressable networks like IPFS, it's this. Universities should be able to host this data in a decentralized way.
A DOI handle type of thing could certainly point to an IPFS address. I can't speak to how you'd do truly decentralized access to the DOI handle. At some point DNS is a thing and somebody needs to host the handle.
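To make that concrete, here is a minimal sketch of a handle that resolves by content hash instead of by location. The handle name, lookup table, and CID are made-up placeholders, and the gateway URL is just the common pattern for public IPFS gateways:

    import requests

    # Hypothetical handle registry: a citation handle maps to an IPFS CID
    # (a hash of the content) instead of a mutable URL. The handle and CID
    # here are placeholders, not real published records.
    HANDLE_TABLE = {
        "10.9999/example-paper": "bafybeiexamplecidexamplecidexampleci",
    }

    def resolve(handle: str) -> bytes:
        # The table lookup is the one centralized step (someone hosts it).
        cid = HANDLE_TABLE[handle]
        # Any public gateway or local IPFS node can serve the same bytes,
        # and a reader can verify them against the CID - no single host
        # has to stay alive for the reference to keep working.
        resp = requests.get(f"https://ipfs.io/ipfs/{cid}", timeout=30)
        resp.raise_for_status()
        return resp.content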
Not infrequently, being smart in one field doesn't mean someone can solve problems in another.
I know some brilliant people, but, well, putting it kindly, they're as useful as a chocolate teapot outside of their specific area of academic expertise.
Who's lost out at the end of the day? People who didn't understand the free market and lost access to these "free" services? Or people who knew what would happen and avoided them? My links are still working...
There are digital public goods (like Wikipedia) that are intended to stick around forever with free access, but Google isn't one of them.
Nope! There have in fact been education campaigns about the evils of URL shorteners for years: how they pose security risks (used for shortening malicious URLs), and how they stop working when their domain is temporarily or permanently down.
The authors just had their heads too far up their academic asses to have heard of this.
>"It wasn't a good idea to use shortened links in a citation in the first place, and somebody should have explained that to the authors"
???
DOI and ORCID sponsored link-shortening with Goo.gl. Authors did what they were told would be optimal, and ORCID was probably told by Google that it would maintain its link-shortening service for long-term reliability. What a crazy victim-blame.
I find it amusing that you are complaining about not having a computer to click a link while glossing over the fact that you need a computer to use a link at all.
This use case of "I have a paper journal and no PDF but a computer with a web browser" seems extraordinarily contrived. I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs. If we cared, we'd use a QR code.
This kind of luddite behavior sometimes makes using this site exhausting.
Perhaps times have changed, but when I was in grad school circa 2010, smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.
Reading on paper was more comfortable than reading on a screen, and it was easy to annotate, highlight, scribble notes in the margin, doodle diagrams, etc.
Do grad students today just use tablets with a stylus instead (iPad + pencil, Remarkable Pro, etc)?
Granted, post grad school I don't print much anymore, but that's mostly due to a change in use case. At work I generally read 1-5 papers a day at most, which is small enough to just do on a computer screen (with less need to annotate, etc.). Quite different from the 50-100 papers/week plus deep analysis expected in academia.
>Perhaps times have changed, but when I was in grad school circa 2010, smartphones and tablets weren't yet ubiquitous but laptops were. It was super common to sit in a cafe/library with a laptop and a stack of printed papers to comb through.
I just had a really warm feeling of nostalgia reading that! I was a pretty average student, and the material was sometimes dull, but the coffee was nice, life had little stress (in comparison) and everything felt good. I forgot about those times haha. Thanks!
> I have literally held a single-digit number of printed papers in my entire life while looking at thousands as PDFs.
This is by no means a universal experience.
People still get printed journals. Libraries still stock them. Some folks print out reference materials from a PDF to take to class or a meeting or whatnot.
I’m unconvinced the researchers acted irresponsibly. If anything, a Google-shortened link looks—at first glance—more reliable than a PDF hosted god knows where.
There are always dependencies in citations. Unless a paper comes with its citations embedded, splitting hairs over why one untrustworthy provider is more untrustworthy than another is silly.
I feel like all that is beside the point. People used goo.gl because they largely are not tech specialists and weren't really aware of link rot, or of a Google decision rendering those links inaccessible.
> People used goo.gl because they largely are not tech specialists and weren't really aware of link rot, or of a Google decision rendering those links inaccessible.
Anyone who is savvy enough to put a link in a document is well-aware of the fact that links don't work forever, because anyone who has ever clicked a link from a document has encountered a dead link. It's not 2005 anymore, the internet has accumulated plenty of dead links.
This is the answer; it turns out that plain, untransformed links are the most generic data format, with no "compression" - QR codes or a third-party intermediary - needed.
It is built for the task, and assuming the worst-case scenario of a sunset, it would be ingested into the Wayback Machine. Note that both the Internet Archive and Cloudflare are supporting partners (bottom of page).
(https://doi.org/ is also an option, but not as accessible to a casual user; the DOI Foundation pointed me to https://www.crossref.org/ for ad hoc DOI registration, although I have not had time to research further)
Crossref is designed for publishing workflows, not ad hoc DOI registration - not least because registering a persistent identifier that redirects to an ephemeral page, without arrangements for the page’s preservation and stewardship, doesn’t make much sense.
That’s not to say that DOIs aren’t registered for all kinds of URLs. I found the likes of YouTube videos among them when I researched this about 10 years ago.
It really depends on what you’re trying to do. Make something citable? Findable? A permalink?
Crossref isn’t the only DOI registration agency. DataCite may be more relevant, although both require membership. Part of this is the commitment to maintaining the content.
If Cloudflare provides the infra (thanks Cloudflare!), I am happy to have them provide the compute and network for the lookups (which, at their scale, is probably a rounding error), with the Internet Archive remaining the storage system of last resort. Is that different than the Internet Archive offering compute to provide the lookups on top of their storage system? Everything is temporary, intent is important, etc. Can always revisit the stack as long as the data exists on disk somewhere accessible.
This is distinct from Google saying "bye y'all, no more GETs for you" with no other way to access the data.
This is much better positioned for longevity than Google’s URL shortener; I’m not arguing otherwise.
My point is that 10-15 years ago, when Google’s URL shortener was being adopted for all these (inappropriate) uses, its use was supported by a public perception of Google’s ‘inevitability’. For Perma, CF serves a similar function.
You aren't responsible if things go offline. No more than if a publisher stops reprinting books and the library copies all get eaten by rats.
A reader can assess the URL for trustworthiness (is it scam.biz or legitimate_news.com?), look at the path to hazard a guess at the metadata and contents, and - finally - look it up in an archive.
I thought that was the standard in academia? I've had reviewers chastise me when I did not use the Wayback Machine to archive a citation and link to that, since listing a "date retrieved" doesn't do jack if there's no IA copy.
Short links were usually given in addition to full URLs, and more in conference presentations than in the papers themselves.
I think this is the only real answer. Shorteners might work for things like old Twitter where characters were a premium, but I would rather see the whole URL.
We’ve learned over the years that they can be unreliable, security risks, etc.
I just don’t see a major use-case for them anymore.
While this is an interesting attempt at an impact statement, 90% of the results on the first two pages for me are not references to goo.gl shorteners, but OCR errors or just gibberish. One of the papers is from 1981.
In the first segment of the very first episode of the Abstractions podcast, we talked about Google killing its goo.gl URL obfuscation service and why it is such a craven abdication of responsibility. Have a listen, if you’re curious:
Can't someone just go through programmatically right now and build a list of all these links and where they point to? And then put up a list somewhere that everyone can go look up if they need to?
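For codes already harvested from books, papers, and crawls, the lookup side is a small script - a minimal sketch, assuming Google still answers these requests with redirects (the codes below are made up):

    import requests

    def resolve_short_link(code: str) -> str | None:
        # Ask goo.gl where a short code points, without following the
        # redirect; the Location header is the original long URL.
        resp = requests.get(
            f"https://goo.gl/{code}",
            allow_redirects=False,
            timeout=10,
        )
        if resp.status_code in (301, 302):
            return resp.headers.get("Location")
        return None  # dead, deleted, or already switched off

    # The codes can't realistically be enumerated (they're case-sensitive
    # alphanumerics, so the space is enormous); they have to come from
    # crawls of the books, papers, and pages that cite them.
    for code in ["fbsS1x", "aBcDeF"]:
        print(code, "->", resolve_short_link(code))

The hard part isn't the lookup, it's collecting the codes before the service goes dark.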
When they began offering this, their rep for ending services was already so bad that I refused to consider goo.gl. It's amazing how many services with large user bases they have introduced and then ended over the years. Gmail being in "beta" for five years was, weirdly, to me, a sign they might stick with it.
I have always struggled with this. If I buy a book, I don't want an online/URL reference in it. Give the book/author/ISBN/page, etc., or refer to the magazine/newspaper/journal, issue, page, author, etc.
We are long, long past any notion that URLs are permanent references to anything. Better to cite with title, author, and publisher so that maybe a web search will turn it up later. The original URL will almost certainly be broken after a few years.
It makes me mad also, but something we have to learn the hard way is that nothing in this world is permanent. Never, ever depend on any technology to persist. Not even URLs to original hosts should be required. Inline everything.
It's almost as if, once a company becomes this big, burning it to the ground would be better for society or something. That would be the liberal position on monopolies, if they actually believed in anything.
Modern webservers are very, very fast on modern CPUs. I hear Google has some CPU infrastructure?
I don't know if GCP has a free tier like AWS does, but 10k QPS is likely within the capability of a free EC2 instance running nginx with a static redirect map. Maybe splurge for the one with a full GB of RAM? No problem.
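For illustration, the static redirect map could be as small as this nginx sketch; the codes and target URLs are made-up placeholders, and the map would be generated once from a dump of the shortener's database:

    # Lives in the http{} block: map each short path to its long URL.
    map $uri $redirect_target {
        default   "";
        /fbsS1x   "https://example.com/some/long/article/path";
        /aBcDeF   "https://example.org/another/page";
    }

    server {
        listen 80;
        server_name goo.gl;

        location / {
            # Known code: permanent redirect to the original URL.
            if ($redirect_target != "") {
                return 301 $redirect_target;
            }
            # Unknown code: nothing to serve.
            return 404;
        }
    }

Since the whole table is static configuration, there's nothing dynamic to run and nothing writable to attack.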
Exactly. I've seen goo.gl URLs in printed books. Obviously in old blog posts too. And in government websites. Nonprofit communications. Everywhere.
Why break this??
Sure, deprecate the service. Add no new entries. This is a good idea anyway, link shorteners are bad for the internet.
But breaking all the existing goo.gl URLs seems bizarrely hostile, and completely unnecessary. It would take so little to keep them up.
You don't even need HTML files. The full set of static redirects can be configured into the webserver. No deployment hassles. The filesystem can be RO to further reduce attack surface.
Google is acting like they are a one-person startup here.
Since they are not a one-person startup, I do wonder if we're missing the real issue. Like legal exposure, or implication in some kind of activity that they don't want to be a part of, and it's safer/simpler to just delete everything instead of trying to detect and remove all of the exposure-creating entries.
Or maybe that's what they're telling themselves, even if it's not real.
Countless books with irrevocably broken references - https://www.google.com/search?q=%22://goo.gl%22&sca_upv=1&sc...
And for what? The cost of keeping a few TB online and a little bit of CPU power?
An absolute act of cultural vandalism.