I'm all for a lightweight approach to software development. But I don't understand the concept here.
Looking at the first example:
First I had to switch it from TS to JS, since I don't consider something that needs compilation before it runs to be lightweight.
Then, the first line is:
import {html, css, LitElement} from 'lit';
What is this? This is not a valid import. At least not in the browser. Is the example something that you have to compile on the server to make it run in the browser?
And when I use the "download" button on the playground version of the first example, I get a "package.json" which defines dependencies. That is also certainly not something a browser can handle.
So do I assume correctly that I need to set up a webserver, a dependency manager, and a serverside runtime to use these "light weight" components?
Or am I missing something? What would be the minimal amount of steps to save the example and actually have it run in the browser?
I guess for most people the standard is to install things from NPM which explains the format of the documentation. If you want to do something completely raw, you can replace 'lit' with something like this:
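(A rough sketch; the exact CDN path and version below are an assumption, so double-check them, but the idea is to import a pre-bundled ES module by full URL so the bare 'lit' specifier is never needed.)

    <script type="module">
      // Pull Lit straight from a CDN as an ES module; no npm, no bundler.
      import {html, css, LitElement} from 'https://cdn.jsdelivr.net/gh/lit/dist@3/core/lit-core.min.js';

      class HelloWorld extends LitElement {
        static styles = css`p { color: rebeccapurple; }`;
        render() { return html`<p>Hello, world</p>`; }
      }
      customElements.define('hello-world', HelloWorld);
    </script>
    <hello-world></hello-world>

Save that as a single .html file and serve it from any static file server; no package.json, no server-side runtime.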
I estimate the vast majority of "web projects" begin with npm installing something of some sort, yes. React is dominating the web development space (judging from the average "popular web stack 2025" search result), and it and a significant portion of the competing platforms start with installing some dependencies with npm (or yarn, or what have you). Especially projects that compete in the same space as Lit.
That isn't a criticism of projects that don't use npm, and it doesn't make them less valid, but it makes sense for the documentation to match the average developer's experience.
Because that is the easiest to implement, the easiest to write, the easiest to manually test and tinker with (by typing it directly into the URL bar), and the easiest to automate (curl .../draw_point?x=7&y=20). It also makes it possible to put it into a link and into a bookmark.
This is great for APIs that only have a few actions that can be taken on a given resource.
REST APIs then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
The best APIs I've seen mix and match both patterns: RESTful endpoints for data, and "function call" endpoints for often-used actions like voting, bulk actions and other things that the client needs to be able to do, but where you want the API to stay in control of how it is applied.
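Concretely, the shape I mean is roughly this (Express-style sketch; the routes and handler names are made up):

    // Resource-style routes for data, "function call" routes for actions the
    // server stays in control of.
    app.get('/posts/:id', getPost);               // plain resource fetch
    app.put('/posts/:id', updatePost);            // plain resource update
    app.post('/posts/:id/vote', castVote);        // action endpoint: the server decides how a vote is applied
    app.post('/posts/bulk-archive', bulkArchive); // bulk action that isn't a natural single resource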
> REST APIs then are especially suited for acting as a gateway to a database, to easily CRUD and fetch lists of information.
I don't disagree, but I've found (delivering LoB applications) that it isn't so uniform: the way REST is implemented right now makes it not especially suitable for acting as a gateway to a database.
When you're free of constraints (i.e., a greenfield application) you can do better (in terms of reliability, product feature velocity, etc.) by not using a tree interchange format (XML or JSON).
Because then it's not just a gateway to a database, it's an ill-specified, crippled, slow, unreliable and ad-hoc ORM: it tries to map trees (objects) to tables (relations) and vice versa, with predictably poor results.
Bots, browsers that preload URLs, caching (both browser and backend and everything in between), the whole infrastructure of the Web that assumes GET never mutates and is always safe to repeat or serve from cache.
Using GET also circumvents browser security stuff like CORS, because again the browser assumes GET never mutates.
Then that does not conform to the HTTP spec. GET endpoints must be safe, idempotent, and cacheable. Otherwise you open a site up to cases where web crawlers/scrapers may wreak havoc.
Indeed, user-embedded pictures can fire GET requests but cannot make POST requests. But this is not a problem if you don't allow users to embed pictures, or if you authenticate the GET request somehow. Anyway, GET requests are just fine.
CORS prevents reading from a resource, not from sending the request.
If you find that surprising, consider that the JS could also have, for example, created a form with the vote page as its target and clicked the submit button. All completely unrelated to CORS.
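To make it concrete, a sketch (hypothetical URL, assuming the target sends no CORS headers):

    // The browser sends this cross-origin GET regardless, so the vote handler
    // on the server runs. What fails without CORS headers is only reading the response.
    fetch('https://example.com/vote?id=42&dir=up')
      .then(res => res.text())           // this read is what the same-origin policy blocks
      .catch(err => console.log(err));   // ...but the request already reached the server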
CORS does nothing of the sort. It does the exact opposite – it’s explicitly designed to allow reading a resource, where the SOP would ordinarily deny it.
That any bot crawling your website is going to click on your links and inadvertently mutate data.
Reading your original comment I was thinking "Sure, as long as you have a good reason of doing it this way anything goes" but I realized that you prefer to do it this way because you don't know any better.
If you rely on the HTTP method to authenticate users to mutate data, you are completely lost. Bots and humans can send any method they like. It's just a string in the request.
Use cookies and auth params like HN does for the upvote link. Not HTTP methods.
> If you rely on the HTTP method to authenticate users to mutate data, you are completely lost
I don't know where you are getting that from but it's the first time I've heard of it.
If your link is indexed by a bot, then that bot will "click" on your links using the HTTP GET method; that is the convention. And yes, a malicious bot would try to send POST and DELETE requests too; that is why you authenticate users, but it is unrelated to the HTTP verb.
> Use cookies and auth params like HN does for the upvote link
If it uses GET, this is not standard and I would strongly advise against it, unless it's your pet project and you're the only maintainer.
Follow conventions and make everyone's lives easier, ffs.
Because HTTP is a lot more sophisticated than anyone cares to acknowledge. The entire premise of "REST", as it is academically defined, is an oversimplification of how any non-trivial API would actually work. The only good part is the notion of "state transfer".
Not a REST API, but I've found it particularly useful to include query parameters in a POST endpoint that implements a generic webhook ingester.
The query parameters allow us to specify our own metadata when configuring the webhook events in the remote application, without having to modify our own code to add new routes.
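Roughly like this (Express-style sketch; the route, parameter names and the enqueue helper are hypothetical):

    // One generic ingest route; per-webhook metadata rides in the query string,
    // so a new event source doesn't need a new route or a deploy.
    app.post('/webhooks/ingest', (req, res) => {
      const { source, kind } = req.query;           // e.g. ?source=crm&kind=invoice.paid
      enqueue({ source, kind, payload: req.body }); // hypothetical queue helper
      res.sendStatus(202);
    });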
I used to do that but I've been fully converted to the REST and CRUD gang. Once you establish the initial routes and objects it's really easy to mount everything else on it and move fast with changes. Also, using tools like httpie it's super easy to test anything right in your terminal.
From a quick gander, WASM is not for talking to servers.
WASM can be used to run AI agents that talk to local LLMs from a sandboxed environment through the browser.
For example, in the next few years operating system companies and PC makers may make small local models stock standard to improve operating system functions and other services.
That local LLM engine layer could then be used by browser applications too, through WASM, without having to write JavaScript, with the WASM sandbox safely exposing the system's LLM engine layer.
They're using some Python libraries like openai-agents, so presumably it's to save on the development effort of calling/prompting/managing the HTTP endpoints. But yes, this could just be done in regular JS in the browser; they'd have to write a lot of boilerplate for an ecosystem which is mainly Python.
You never need WASM (or any other language, bytecode format, etc) to talk to LLMs. But WASM provides things people might like for agents, eg. strict sandboxing by default.
Do we still need a back-end, now that Chrome supports the File System Access API on both desktop and mobile?
I have started writing web apps that simply store the user data as a file, and I am very pleased with this approach.
It works perfectly for Desktop and Android.
iOS does not allow for real Chrome everywhere (only in Europe, I think), so I also offer to store the data in the "Origin private file system" which all browsers support. Fortunately it has the same API, so implementing it was no additional work. Only downside is that it cannot put files in a user selected directory. So in that mode, I support a backup via an old-fashioned download link.
This way, users do not have to put their data into the cloud. It all stays on their own device.
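For anyone curious, the core of it is only a few lines (sketch; the file name and MIME details are just examples, and the picker part is Chromium-only):

    // Ask the user where to save, then write the app's data there.
    async function saveData(jsonText) {
      const handle = await window.showSaveFilePicker({
        suggestedName: 'my-app-data.json',
        types: [{ description: 'JSON', accept: { 'application/json': ['.json'] } }],
      });
      const writable = await handle.createWritable();
      await writable.write(jsonText);
      await writable.close();
    }

    // The Origin Private File System fallback uses the same handle API,
    // it just skips the picker:
    //   const root = await navigator.storage.getDirectory();
    //   const handle = await root.getFileHandle('my-app-data.json', { create: true });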
What about those of us who use multiple devices, or multiple browsers? I've been using local storage for years and it's definitely hampering adoption, especially for multiplayer.
I never tried it, but from the descriptions I have read, Dropbox detects conflicting file saves (if you save on two devices while they are offline) and stores them as "conflicting copies". So the user can handle the conflict.
As a developer, you would do this in the application: "Hey, you are trying to save your data but the data on disk is newer than when you loaded it ... Here are the differences and your options for how to merge."
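A sketch of that check, assuming the file-handle approach from above (safeToSave and loadedAt are illustrative names; loadedAt is whatever timestamp you recorded when reading the file):

    // Before overwriting, compare the file's current modification time
    // with the one recorded when the data was loaded.
    async function safeToSave(handle, loadedAt) {
      const file = await handle.getFile();
      if (file.lastModified > loadedAt) {
        // Another device or tab wrote in the meantime: show a diff/merge
        // dialog instead of silently overwriting.
        return false;
      }
      return true;
    }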
> Hey, you are trying to save your data but the data on disk is newer than when you loaded it
You're suggesting an actual API-facilitated data sync via Dropbox? Sure, but at that point why? Unless the data also needs to be read by 3rd party applications, might as well host it myself.
Syncthing pls. Please try to use the open-source alternative whenever possible; even though they are not as developed as the closed-source ones, it works out better for the public.
TIL! I enjoy building cloudless apps and have been relying on localstorage for persistence with an "export" button. This is exactly what I've been looking for.
A lot of what I've read about local-first apps included solving for data syncing for collaborative features. I had no idea it could be this simple if all you need is local persistence.
At least on the Android front, I'd prefer the app allow me to write to my own storage target. The reason is because I already use Syncthing-Fork to monitor a parent Sync directory of stuff (Obsidian, OpenTracks, etc.) and send to my backup system. In effect it allows apps to be local first and potentially even without network access, but allow me to have automatic backups.
If there were something that formalized this a little more, developers could even make their apps in a... Bring Your Own Network... kinda way. Maybe there's already someone doing this?
I may have misunderstood. Does that mean that with this API, on both desktop and phone, I can point to an arbitrary drive on the system without restriction? If so, it does indeed do what I'd like.
> Do we still need a back-end, now that Chrome supports the File System Access API on both desktop and mobile?
Could this allow accessing a local DB as well? I would love something that could allow an app to talk directly to a DB that lives locally on my devices, and that could sync across the devices - that way I still get my data on all of my devices, but it always stays only on my devices.
Of course this would be relatively straightforward to do with native applications, but it would be great to be able to do it with web applications that run on the browser
Btw, does Chrome sync local storage across devices when logged in?
Like IndexedDB? It's a browser API for an internal key-value storage database.
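If you haven't used it, the shape is roughly this (sketch; the database, store and key names are arbitrary):

    // Open (and lazily create) a database with one object store, then write a value.
    const req = indexedDB.open('my-app', 1);
    req.onupgradeneeded = () => req.result.createObjectStore('kv');
    req.onsuccess = () => {
      const db = req.result;
      db.transaction('kv', 'readwrite').objectStore('kv').put({ theme: 'dark' }, 'settings');
    };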
> Btw, does Chrome sync local storage across devices when logged in?
Syncing across devices still requires some amount of traffic through Google’s servers, if I’m not mistaken. Maybe you could cook something up with WebRTC, but I can’t imagine you could make something seamless.
> Btw, does Chrome sync local storage across devices when logged in?
No, but extensions have an API to a storage which syncs itself across logged-in devices. So potentially you can have a setup where you create a website and an extension and the extension reads the website's localStorage and copies it to `chrome.storage.sync`.
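Something like this in the extension's content script (sketch; the key name is arbitrary, and the roughly 100 KB sync quota is from memory):

    // Mirror one localStorage key from the page into Chrome's synced extension storage.
    const value = localStorage.getItem('app-state');
    if (value !== null) {
      chrome.storage.sync.set({ 'app-state': value });
    }

    // ...and on another device, read it back and restore it:
    chrome.storage.sync.get('app-state', (items) => {
      if (items['app-state'] !== undefined) {
        localStorage.setItem('app-state', items['app-state']);
      }
    });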
I've been playing with Chrome extensions recently, and have made them talk directly to a local server with a DB. So using extensions, it's relatively easy to store data locally and potentially sync it across devices.
I like the idea of leveraging chrome.storage.sync though, I wonder what the limitations are
Currently, what I do is that when an IP requests insane amounts of URLs on my server (especially when it's all broken URLs causing 404s), I look up the IP and then block the whole organization.
For example today some bot from the range 14.224.0.0-14.255.255.255 got crazy and caused a storm of 404s. Dozens per second for hours on end. So I blocked the range like this:
iptables -A INPUT -m iprange --src-range 14.224.0.0-14.255.255.255 -j DROP
That's probably not the best way and might block significant parts of whole countries. But at least it keeps my service alive for now.
At git.ardour.org, we block any attempt to retrieve a specific commit. Trying to do so triggers fail2ban putting the IP into blocked status for 24hrs. They also get a 404 response.
We wouldn't mind if bots simply cloned the repo every week or something. But instead they crawl through the entire reflog. Fucking stupid behavior, and one that has cost us an extra $50/month even with just the 404.
I like rate-limiting. I know none of my users will need more than 10qps. I set that for all routes, and all bots get throttled. I can also have much higher rate-limit for authenticated users. Have not had bots slamming me - they just get 429s
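The whole thing can be a few lines of middleware (naive in-memory sketch: counters reset on restart, aren't shared across instances, and old keys are never evicted):

    // Fixed-window limiter: at most 10 requests per second per IP.
    const hits = new Map();
    app.use((req, res, next) => {
      const key = `${req.ip}:${Math.floor(Date.now() / 1000)}`;
      const count = (hits.get(key) || 0) + 1;
      hits.set(key, count);
      if (count > 10) return res.sendStatus(429);
      next();
    });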
Certificates are still a pain in the butt. One of the most cumbersome aspects of the web.
Especially domain wide certs which need DNS auth.
DNS auth would be okish if it was simply tied to a txt entry in the DNS and valid as long as the txt entry is there. Why does LetsEncrypt expire the cert while the acme DNS entry is still there? Which attack vector does this prevent?
Also, why not support file based auth in .well-known/acme-challenge/... for domain wide certs? Which attack vector does that prevent?
> Why does LetsEncrypt expire the cert while the acme DNS entry is still there?
That's like saying "why does the government expire my passport/driver's license when I haven't changed my name". That's not how it works; the document is stamped valid for a specific amount of time, and you get a new document with a new expiration time when you renew it.
The certificate from LE will expire automatically 90 days after it was provided, that's why you need to renew it before the 90 days are up.
If you hate setting up automated certificate renewal, you can still get longer-lasting certificates from paid certificate providers. It used to be that you needed to pay a company to generate a certificate for you every year, now you just get the option to have a free one every 90 days.
> Also, why not support file based auth in .well-known/acme-challenge/... for domain wide certs
An ACME challenge file on a web server proves that you control a specific server at a specific domain, so you get a certificate for a specific domain.
A DNS entry proves you control the entire domain, so you (can) get a certificate for the domain.
By uploading a file to tekmol.freewebhost.com, you haven't proven that you control either .freewebhost.com or .tekmol.freewebhost.com. You have just proven that you control tekmol.freewebhost.com.
> If you hate setting up automated certificate renewal, you can still get longer-lasting certificates from paid certificate providers. It used to be that you needed to pay a company to generate a certificate for you every year, now you just get the option to have a free one every 90 days.
I took the easier route and let Cloudflare generate and handle certs for my domains. I’m on the free tier. I secure traffic between them and my host with an origin cert. By default those are valid for 15 years.
I know CF is frequently criticised around here, but wanted to mention it as an option.
That works too, of course. You don't even need a specific certificate or even an open port by leveraging Cloudflare tunnels, which means you can host your website on a local server behind three layers of NAT if you had to.
And it's not just Cloudflare; there are plenty of other redirect-everything-through-a-CDN hosts available. If you don't mind giving Cloudflare control of your website (and barring visitors from countries like India where CGNAT makes everyone fill out CAPTCHAs every page load), this approach will take care of just about everything.
I’ve been impressed with how much I get on the free tier (my sites are small). With the DDoS protections, rate limit, WAF rules, and Turnstile, it feels like I can keep a significant amount of abusive traffic from reaching my host. It’s a pretty compelling tradeoff for me, anyway.
You have now described the status quo at length. But you have not touched on why it is supposed to make sense. You have not provided attack vectors for the easier alternatives.
> By uploading a file to tekmol.freewebhost.com, you haven't proven that you control either .freewebhost.com
Who said that putting a file on a subdomain should grant you a cert on the domain? Putting it on domain.com/.well-known/acme-challenge/ should.
They attempted to indicate wildcards there, but HN ate them. That should say "you haven't proven that you control either *.freewebhost.com or *.tekmol.freewebhost.com".
Now, I can definitely see there being a system where the owner of the root domain (eg, freewebhost.com) can set up something in their own .well-known directory that specifies that any subdomains can only declare certs for that specific subdomain, rather than being able to claim a wildcard, and then we can allow certs that sign wildcards in cases where such a limiter is not in place.
In any case, this would only solve the DNS auth hurdle, not the overall expiration hurdle.
> That's like saying "why does the government expire my passport/driver's license when I haven't changed my name". That's not how it works; the document is stamped valid for a specific amount of time, and you get a new document with a new expiration time when you renew it.
Because certificate lifetimes need to be determined when they’re issued. They aren’t dynamic and so can’t be changed in response to whether an acme challenge file exists.
The government expires your driver's license because they want to charge you for a renewal. You can tell that this is the only reason because it's the only thing they want in order to give you a new one. They do nothing to confirm that you still know how to drive.
But Let's Encrypt doesn't charge anything. All they want is to confirm that you still control the domain. So why doesn't "the DNS record they had you add to begin with is still there" satisfy that requirement and allow you to repeatedly renew the certificate until it stops being there?
Tie the DNS challenge to the public key in the certificate. Then as long as it hasn't changed you can update the certificate without giving the update process modify access to the DNS server.
Most governments primarily don’t want stolen identity documents circulating without any time bounds, especially given that they often get used improperly these days (e.g. by allowing a photo/scan of somebody’s ID to constitute “authentication” without comparing the photo to a real person, which is a bizarre notion that’s getting more and more common).
Passports and license renewals are often for periods in the nature of 10 years. Is that meaningfully different from the self-invalidation already implied by an ID that claims you're 144 years old? The mean time between mass data breaches is certainly already less than the existing renewal interval.
Meanwhile how does a stolen scan of an identity document become invalidated by requiring a renewal? The new document is identical and even contains the same ID number. The only difference is the date which anyone could trivially alter with a computer. For that matter the only thing they need from the stolen ID is the name and number, so even if you completely redesign the layout of the ID, someone with the old one can recreate a scan of the new layout using only the information on the old stolen one.
The problem here is not that you need IDs to expire, it's that you need fools to stop trying to rely on a computer image of an ID to authenticate anything.
> The new document is identical and even contains the same ID number.
Quite a few IDs contain a 2D barcode, and I believe at least some of these contain some offline-validateable signature over the basic data of the document, including the expiry date. That's not as trivial to forge.
On top of that, document expiry does help a bit with people trying to use a lost/stolen ID of somebody they happen to look similar to, adds a forcing function to make people eventually upgrade to newer/more secure document standards etc.
> Quite a few IDs contain a 2D barcode, and I believe at least some of these contain some offline-validateable signature over the basic data of the document, including the expiry date. That's not as trivial to forge.
Most government IDs don't have that, and it's still not clear what good it would be doing when data breaches happen on much shorter timescales than ID expirations. Who cares if they stop being able to use the ID after 10 years, when they can use them for 10 years and there will be another breach providing them with a new batch of IDs to use in a matter of days rather than years?
> On top of that, document expiry does help a bit with people trying to use a lost/stolen ID of somebody they happen to look similar to
That seems like an extremely narrow advantage. If they just need some ID that looks like them then they can just get another one from the batch of fresh ones. If they need the ID of a specific person, the government still isn't authenticating renewals, so wouldn't they just use the existing stolen ID and pay for a renewal in that case, in which case we're back to the only thing happening being that the government extracts money?
Or no, in that case it's worse, because then they can submit a fresh picture of themselves to renew the ID which has someone else's name on it.
> adds a forcing function to make people eventually upgrade to newer/more secure document standards etc.
Which you already have because everybody eventually dies.
Regarding "the DNS record they had you add to begin with is still there", it generally isn't. Part of the automation process for certbot using the DNS-01 challenge is the removal of the DNS record, following successful validation of said record. In any complex DNS environment, leaving TXT records around just increases the debris.
It's the Let's Encrypt people who make certbot, so that's just an implementation detail, and the premise here anyway is that you would be doing it manually (once) because the inconvenience to be avoided is when certbot can't update the DNS records automatically.
No, it's not the LetsEncrypt people who make certbot. Certbot is an EFF project, managed by separate people. Additionally, most of the DNS implementations will require the use of a specific plug-in/library for your selected DNS platform, and those, also, are developed separately.
Let's Encrypt was an EFF project to begin with. They're still the same people.
The DNS plugins only matter if you're trying to automate updating the DNS entry. The whole point is that you could have certbot spit out a DNS TXT record for the user to manually add to their DNS once, e.g. which contains the public key fingerprint of the certificate they want Let's Encrypt to renew on an ongoing basis, and then certbot would be able to renew the certificate as long as the DNS record remains in place.
No, LetsEncrypt was not an EFF project to begin with. Look, it works how it's documented to work. If you wish it worked some other way, to solve your particular suggested workflow, you're likely free to fork it and make it work that way.
Certificates have a static expiry date by design - it's not "LetsEncrypt expiring the cert". There is no way to avoid expiring a cert if the DNS entry is still there - all you can do is make it easier to renew the cert. That means it must be automated, in which case it doesn't matter if you need to re-create a DNS entry.
In my experience, it takes a little effort to set things up the first time, but from then on it just works.
I think the parent commenter would be satisfied if they could authorize their DNS by creating a DNS challenge entry one time, and then continue to renew their certificate as long as that entry still existed.
And I'm sympathetic to the concerns that automating this type of thing is hard - many of the simpler DNS tools - which otherwise more than cover the needs for 90% of users - do not support API control or have other compromises with doing so.
That said, I do think LE's requirements here are reasonable given how dangerous wildcard certs can be.
> many of the simpler DNS tools -...- do not support API control
That's on the DNS provider in my opinion. They can, if they want to, make things easy and automatic for their customers, but they choose not to. There's a whole list of provider-specific plugins (https://eff-certbot.readthedocs.io/en/stable/using.html#dns-...) with many more unofficial ones available (https://pypi.org/search/?q=certbot-dns-*). Generic ones, like the DirectAdmin one, will work for many web hosts that don't have their own APIs.
If you like to stick with whatever domain provider you picked and still want to use Let's Encrypt DNS validation, you can create a CNAME to another domain on a domain provider that does have API control. For instance, you could grab one of those free webhosting domains (rjst01.weirdfreewebhostthatputsadsinyourhtml.biz) with DirectAdmin access, create a TXT record there, and CNAME the real domain to that free web host. Janky, but it'll let you keep using the bespoke, API-less registrar.
I imagine you could set up a small DNS service offering this kind of DNS access for a modest fee ($1 per year?) just to host API-controllable CNAME DNS validation records. Then again, most of the time the people picking weird, browser-only domain registrars do so because it allows them to save a buck, so unless it's free the service will probably not see much use.
> DNS auth would be okish if it was simply tied to a txt entry in the DNS and valid as long as the txt entry is there. Why does LetsEncrypt expire the cert while the acme DNS entry is still there? Which attack vector does this prevent?
An attacker should not gain the ability to persistently issue certificates because they have one-time access to DNS. A non-technical user may not notice that the record has been added.
> Also, why not support file based auth in .well-known/acme-challenge/... for domain wide certs? Which attack vector does that prevent?
Control over a subdomain (or even control over the root-level domain) does not and should not allow certificate issuance for arbitrary subdomains. Consider the case where the root level domain is hosted with a marketing agency that may not follow security best practices. If their web server is compromised, the attacker should not be able to issue certificates for the secure internal web applications hosted on subdomains.
Of course - but that requires the owner to know they were attacked, know the attacker added a TXT verification, potentially overcome fear of deleting it breaking something unexpected, etc.
If the owner does not find out that someone got control of their DNS server, the attacker can do anything with the domain anyhow. Including issuing certs.
Yes, but once that access is revoked, that is enough to be certain that the attacker can no longer issue certs. With your proposal, I would then have to audit my TXT records and delete only attacker-created records.
(Which in general would be good practice anyway, because many services do use domain validation processes similar to what you propose.)
> Certificates are still a pain in the butt. One of the most cumbersome aspects of the web.
They will likely always be a pain and many aspects of Web security are cumbersome. It is simply a reflection of the fact that the Web, like e-mail, was not designed to be secure in the first place, being used in organisations where you can rely on trust. As a result the security stuff is just bolted on and often only in response to the previous solution failing. The previous layers stick around like zombie flesh until they are unceremoniously deprecated and cut away a decade later. A new system designed from scratch would be less cumbersome.
Since my DNS provider (IONOS) has an API and there is a plugin for my web server (Caddy), DNS certificates were completely painless, even for *.<my domain>.
The solutions exist; it depends on the provider and your client.
To me, Mac OS X looks so much better than today's macOS. It looks clear and orderly, and I feel like "Great, in this environment I can get some work done!".
Current macOS feels like "Help, I fell into a sack of candies, how do I get out of here?" to me.
I feel like I'm becoming a fan of old gray interfaces (win 95, macos 9). They feel like tools to me, like a calculator is just a tool, and it's comforting.
Honestly, no; the parts of the UI that I see and work with are limited to the menu bar (just flat text, no embellishments), three dots and sometimes the Spotlight bar but I don't actively look at it unless it's slow. Same thing with Windows. I never work with the OS and rarely with native apps, it's all browser based and/or crossplatform applications that use third party design systems.
Sad part is there's really no reason they couldn't offer this look & feel in modern MacOS, except for the obvious reason (poorly designed software that lacks modularity). I'm tired of pretending that software companies are remotely good at software.
My favourite one is 10.3 Panther with the mix of aqua and brushed metal. 10.4 Tiger is similar but it has a glossy top menu bar that didn’t age well in my opinion. 10.5 Leopard has the fancy cheesy 3D dock, transparent top menu bar, and the more modern gradients. It looked great at the time but gradients aren’t as cool as brushed metal and aqua.
I had the same reaction looking at the screenshots. Sure, it could use a new coat of paint (maybe not _everything_ needs to be gray), but the foundation is fantastically usable.
We have billions and billions of old devices with ancient batteries lying around; pretty much every house in the developed world has at least one, more likely multiple, lithium batteries that have been lying around dead for years.
There is no need to do research or dig into it: the experiment has already been running for years in every house in the nation, and random battery fires are still rare enough to be newsworthy. If you find a forbidden pillow (a swollen battery pouch), dispose of it, but even those almost never convert to fire/explosion.
A more responsible answer would've been something along the lines of: there is a very small chance, but if you take the little time to responsibly dispose of unused batteries every once in a while, then you do not even have to think about this.
The chance is already so low that it is firmly in the "you don't even have to think about it" category. Stove tops cause 160,000 home fires a year, killing 135 people annually, but when was the last time you felt unsettled when looking at your stove?
Lithium batteries are just prime fear porn for the media to run with. New and scary technology. But the statistics paint a wildly different picture.
Lithium batteries mostly burn when the devices are drawing significant current below the safe cutoff voltage. You can safely discharge a battery to zero chemical energy with a slow draw. They make discharge devices for batteries in RC cars and planes for this. Once the battery has lost enough performance you safely drain it and then dispose of it (not in the landfill, because the chemicals are still toxic) - you don't want to leave it partially charged in case it gets punctured in the disposal process.
Letting phone batteries drain naturally is pretty safe, because just leaving it in a low-power mode over time will cause it to self-drain at a pretty slow pace over several months. They should still be disposed of with electronics recycling so that the toxic stuff can be handled, but leaving a phone in a drawer is normally safe. Software bugs that try to turn the phone on into a high-power state right at the safety threshold are the biggest risks, or that you might try to turn it on yourself right at the safety threshold.
Phone fires are typically from software bugs that fail to cut power at the safety threshold, either while a user is trying to squeeze out the last few percent of battery, or if they are in luggage and presumably jostled and buttons keep getting pushed.
Yes, but they get used a lot in devices which folks consider to be collectible, e.g., Nintendo 3DS handhelds which were available in a sufficient variety of case designs that my son has at least 3....
>Even when a battery is completely drained in an electrical sense, it retains a lot of chemical energy that can be released if things go wrong.
That's basically just saying "you can burn anything", but in words that are attractive to the audience's biases.
Yeah, there's a bit more to go wrong with a dead battery than the plastic and whatnot device it's in but if it can't ignite itself it can't ignite itself and that's more or less the end of the story.
Li-ion batteries release something like 3x their advertised capacity when lit on fire. IOW, the label capacity is about 1/3rd of their worth as fuel (don't do this, it releases stuff like HF). That's not "you can burn anything".
Anecdotal story, but I don't "keep" old electronics anymore save for ones I know are in a fireproof container.
My attic and furnished top floor room adjacent to it can on a bad day get to 110 without the attic fan on. Very old NE colonial on a hill getting full sun from sunrise to sunset.
Items I cannot keep up there, as I have watched them explode or turn into puddles of goo:
Aerosol cans (oils, cleaners, sunscreen, etc.)
Normal squeeze-trigger bottles (i.e., chemical cleaners, auto detailer)
Tapes, adhesives
Electronics
Just a single anecdata point: I bought a second-hand Xbox 360 once[0] with two pads but used only one for some months. One day I tried to use the other one but smelled the odor of burned plastic - for some reason the batteries had caused a piece of the plastic to melt.
This single event made me a bit more cautious about batteries in general. There are decades of no accidents and then unexpectedly something like this happens.
[0] Yes, I still think Kinect was ahead of its time and I'm very sorry it got discontinued.
Burning isn’t as likely. But I had my old iPhone 4 or 5S battery swell and destroy itself that way while sitting in temperature controlled storage for a couple years.
I won’t be storing such items in combustible containers anymore even though the risk is pretty low and they mostly just swell.
I don't use Debian for servers nor personal computers anymore, but the fact that they themselves host a page explaining potential privacy issues with Debian makes me trust them a lot more, and feel safer recommending it to others when it fits.
Hey, I might be too late to the party, but I'd love to get some more info to your comment.
Imagine me: I'd consider myself a Linux noob, although I'm probably not one anymore. I have been using Arch Linux for about 3 years now as my daily driver. I'm not young anymore - I didn't grow up with computers - I don't have it in my blood. I don't have formal education in anything computer-related and have never worked in the field. During Covid I learnt Linux from the Arch wiki. Now I'm using it. I configured some things and can control my computer through the command line.
Every time I read comments like yours, I get the shivers. Did I miss something integral? What do I not know about? Network stuff especially is a blind spot for me. I didn't touch network stuff beyond the default wiki pages.
When I read comments like yours ("Arch is a minefield", "With Arch it is so easy to shoot yourself in the foot"), I never know what this could mean specifically. What could it look like? Can you give me something more concrete? I'm really eager to know what everyone is talking about.
Well, with packages you want various filtering steps to happen before they make it into users' systems. Or layers of security that make it harder for a system to be compromised.
Let's take a look at the xz incident, then at how fast rolling release distros get their packages in. That's part of the equation. Bottom line is: you're the first line of defense against potential malicious supply chain attacks. This is why Fedora is Red Hat's testing distro, why Debian has an unstable branch or why openSUSE Tumbleweed exists. Now, Arch isn't just a "testing distro", but it is, by design, more susceptible to these attacks. Thinking bleeding edge is more secure is a fallacy. It is but a consequence of assuming the source maintainers are on your side, which is usually the case, but not always. Or, assuming software is properly tested for bugs every release. If you are still doubting this, look at npm.
Furthermore, have you ever asked why you need to constantly update package signing keys? There is no central build server for Arch. Maintainers are building packages on whatever machine they are on, signing with whatever keys they have there and uploading the binary blobs. This isn't trustworthy. There is now a clean chroot process and all, but maintainers are still able to build the packages on their own machines and upload them.
The other problem is not having any mandatory access control security policy by default (SELinux, AppArmor, etc...). You can, of course, install your own and go through the trouble of actually creating the security profiles yourself for the various packages on the system. This is in stark contrast with other distros, where not only they provide a security policy by default but their packages also ship with security profiles when needed to make sure it actually works (Fedora and openSUSE come to mind).
Finally, the AUR is cool and all, but my god are you at the mercy of whatever is put on there. Sure the PKGBUILD is super legible, but are you really checking where things are being pulled from? There is a layer of filtering being taken away here, you are the one doing your due diligence.
Now I'm sure different people have different takes on this, some might say that security policies are dumb and useless, others might prefer to be in the bleeding edge assuming the latest and greatest is safer. But take all the layers I have mentioned here, and how their non-existence on Arch could affect security. I hope to have drawn a clearer picture.
(Edit: I must say, I like Arch, I've used it a lot and when in a pinch is my go to. But I've come to appreciate how other distros approach security, and how they layer the process so they have more time to assess vulnerabilities. It is a balancing game, and I hope Arch improves on their processes, I really do.)
Not really; sometimes it forces me to apply updates on shutdown/restart, even though I don't want to do it. None of the registry hacks seem to be able to disable this behavior. I've heard some people talking about a special distribution/version of Windows where you can disable this, but I don't really feel like re-installing the entire OS just so that when I boot into/away from Windows I don't get forced to wait for the slow update twice (once now, and again in the future when I boot Windows next time).
All because Ableton cannot be bothered to support Linux :/ I understand that though, just sucks...
I'm in the market for a decent laptop. Don't want to sideline the thread, but is Arch supported decently on, say, Dell or any "enterprise grade" laptops?
More color: I was happy running Arch on a 2012 vintage Dell Latitude (Intel, integrated graphics) for several years. I'm currently quite happy running Arch on a Lenovo Thinkpad T14s (gen2, AMD, integrated graphics).
I haven’t tried much, but as long as you avoid nvidia or fancy laptops with weird components, you will be good. My recommendation is to go for business line, as they have more standardized peripherals. Better if there’s some linux support guarantee.
If in doubt, search the Arch forums for posts about the model you consider to buy. Best case: Some threads come up, but all problems could be solved. Worst case: No threads, or a lot of threads about obscure errors.
I have a Dell Vostro 7620 currently running Arch. Even with the Nvidia graphics card I have run into very few issues (only once did an Nvidia driver update break the system), so I'd say go for it.
This policy is missing from nixpkgs, although there is a similar policy for the build process for technical reasons.
So I can add spotify or signal-desktop to NixOS via nixpkgs, and they won’t succeed at updating themselves. But they might try, which would be a violation of Debian’s guidelines.
It’s a tough line — I like modern, commercial software that depends on some service architecture. And I can be sure it will be sort of broken in 10-15 years because the company went bust or changed the terms of using their service. So I appreciate the principles upheld by less easily excited people who care about the long term health of the package system.
In the process of trying to update, Spotify on NixOS will likely display some big error message about how it's unable to install updates, which results in a pretty bad user experience when everything is actually working as intended. It seems fair to patch software to remove such error messages.
To be fair, we (Nixpkgs maintainers) do remove or disable features that phone home sometimes even though it's not policy. That said, it would be nice if it was policy. Definitely was discussed before (most recently after the devbox thing I guess.)
Discord downloads stuff every single time I start it. So there is definitely not a policy to remove this behaviour.
And yes, good point, this was indeed discussed when devbox enabled AI training by default. It somehow seems like there is more than one category of phoning home at play here, since it is obviously tolerated in other cases.
What I mean is, it definitely isn't explicit policy to remove features that phone home; but, it is sometimes still done at the package maintainer's discretion. For things that are unfree all bets are off. (Removing or interfering with such code may be against the license.)
Why can't I get GNOME to stop calling home? (on a Debian installation) Each time I fire up my Debian VM with GNOME here on my OS X host system, Little Snitch pops up because of some weird connection to a GNOME web endpoint. One major pet peeve of mine.
I was extremely disappointed to recently learn that visidata(1) phones home, and that this functionality has not been disabled in the Debian package, despite many people requesting its removal:
Infuriating. The developer is just making excuses and refusing to address the users' actual concern. And why are they phoning home in the first place? What is this critical use case that requires this intrusion?
"This daily count of users is what keeps us working on the project, because otherwise we have feel like we are coding into a void."
So, they wrote code to phone home (by default) and are now digging in and defending it... just for their feelings? You've got to be kidding me!
> So, they wrote code to phone home (by default) and are now digging in and defending it... just for their feelings? You've got to be kidding me!
Is that better or worse than phoning home to serve ads?
Also, it feels misleading to me to call fetching a motd phoning home. You know Ubuntu does this too, right? That feels more worthy of outrage than this.
If someone tells me this software phones home, and it's not transmitting anything other than a ping, it kinda feels like they're lying to me about what it's actually doing.
I'm not upset by the author wanting a bit of human connection to the people who enjoy his software. I empathize with the desire to see people enjoy the stuff I've made. Is it a privacy risk? Perhaps, but it's not even on the top 1k that I see daily. There's more important windmills to tilt at.
But... if you really just wanna be outraged: I recently wrote a DNS server that I use as the default for my home system. Currently it prints every request made; you might wanna try something like that. If you're that upset about this, you're gonna be blown away by what else is going on that you didn't even know about... and that's just DNS queries, it's not even the telemetry getting sent!
No they don't. The formulation in TFA is a bit too generic - Debian will usually not remove any code that "calls home". There are perfectly valid reasons for software to "phone home", and yes, that includes telemetry. In fact, Debian has its own "telemetry" system:
Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data
Telemetry contains personal data by definition. It just varies in how sensitive it is and how it's used. Also, it's been shown repeatedly that "anonymized" is shaky ground.
In that popcon example, I'd expect some Debian-run server to collect a minimum of data, aggregate, and Debian maintainers using it to decide where to focus effort w/ respect to integrating packages, keeping up with security updates, etc. Usually ok.
For commercial software, I'd expect telemetry to slurp whatever is legally allowed / stays under users' radar (take your pick ;), vendor keeping datapoints tied to unique IDs, and sell data on "groups of interest" to the highest bidder. Not ok.
Personal preference: eg. a crash report: "report" or "skip" (default = skip), with a checkbox for "don't ask again". That way it's no effort to provide vendor with helpful info, and just as easy to have it get out of users' way.
It's annoying the degree to which vendors keep ignoring the above (even for paying customers), given how simple it is.
Why does it have to include PII by definition? I'd say DNF Counting (https://github.com/fedora-infra/mirrors-countme) should be considered "telemetry", yet it doesn't seem to collect any personal data, at least by what I understand telemetry and personal data to mean.
I'm guessing that you'd either have to be able to argue that DNF Counting isn't telemetry, or that it contains PII, but I don't see how you could do either.
Yes, so the vendor must not store it. Something along those lines is usually said in the privacy policy. If you don't trust the vendor to do that, then do not opt-in to sending data, or even better, do not use the vendor's software at all.
Sometimes, we have to or we simply want to run software from developers we don't know or entirely trust. This just means that the software developer needs to be treated as an attacker in your threat model and mitigate accordingly.
I would argue that users can't inherently trust the average developer anymore. Ideas about telemetry, phoning home, conducting A/B tests and other experiments on users, and fundamentally, making the software do what the developer wants instead of what the user wants, have been thoroughly baked in to many, many developers over the last 20 or so years. This is why actually taking privacy seriously has become a selling point: It stands out because most developers don't.
I can't argue that you are wrong, but I can argue that, for myself, if I don't trust a developer to not screw me over with telemetry, I cannot trust the developer to not screw me over with their code. I can't think of a scenario where this trust isn't binary, either I can trust them (with telemetry AND code execution), or I can't trust them with either.
Could you describe what scenario I am missing?
You’re not missing anything. In general, I don’t think you can really trust the vast majority of software developers anymore. Incentives are so ridiculously aligned against the user.
If you take the next step: “do not use software from vendors you don’t trust,” you are severely limiting the amount of software you can use. Each user gets to decide for himself whether this is a feasible trade off.
Yeah, isn't that a shame? Wouldn't it be nice if, instead of catastrophizing that telemetry data is always only ever there to spy on us, we assumed that there are actually trustworthy projects out there? Especially for FOSS projects, which can usually not afford extensive in-house user testing, telemetry provides extremely valuable data to see how their software is used and where it can be improved, especially in the UX department, where much FOSS is severely lacking. This thread is a perfect example of the kind of black/white thinking that telemetry must be ripped out of software no matter what, usually based on the fundamental viewpoint that anonymity is impossible anyway, so why bother even trying. This is not helping. I usually turn on telemetry for FOSS that offers it, because I hope they will use it to actually improve the software.
Many corporate privacy policies per their customer contracts agree with this. Even a single packet regardless of contents is sending the IP address and that is considered by many companies to be PII. Not my opinion, it's in thousands of contracts. Many companies want to know every third party involved in tracking their employees. Deviating from this is a compliance violation and can lead to an audit failure and monetary credits. These policies are strictly followed on servers and less so on workstations but I suspect with time that will change.
I can only repeat myself from above: it's about what data you store and analyze. By your definition, all internet traffic would fall under PII regulations because it contains IP addresses, which would be ludicrous, because at least in the EU, there are very strict regulations how this data must be handled.
If you have a nginx log and store IP addresses, then yes: that contains PII. So the solution is: don't store the IP addresses, and the problem is solved. Same goes for telemetry data: write a privacy policy saying you won't store any metadata regarding the transmission, and say what data you will transmit (even better: show exactly what you will transmit). Telemetry can be done in a secure, anonymous way. I wonder how people who dispute this even get any work done at all. By your definitions regarding PII, I don't see how you could transmit any data at all.
> By your definitions regarding PII, I don't see how you could transmit any data at all.
On the server side you would not. Your application would just do the work it was intended to do and would not dial out for anything. All resources would be hosted within the data-center.
On the workstation it is up to the corporate policy and if there is a known data-leak it would be blocked by the VPN/Firewalls and also on the corporate managed workstations by IT by setting application policies. Provided that telemetry is not coded in a way to be a blocking dependency this should not be a problem.
Oh and this is not my definition. This is the definition within literally thousands of B2B contracts in the financial sector. Things are still loosely enforced on workstations meaning that it is up to IT departments to lock things down. Some companies take this very seriously and some do not care.
> Telemetry is perfectly acceptable as long as it is opt-in and does not contain personal data, and both apply to Go's telemetry, so there's no need for a fork.
This changed somewhat recently. Telemetry is enabled by default (I think as of Golang 1.23?)
Attempts to contact external telemetry servers under the default configuration are the issue. That not all of the needlessly locally aggregated data would actually be transmitted is a separate matter.
“Will remove” means that it’s one of the typical/accepted reasons why patches are applied by Debian maintainers, as in meaning 4 here [0], not that there is a guarantee of all telemetry being removed.
Between snap and the completely different network implementations between the "desktop" and "server" versions, I really fell back down the learning curve of nix.
Especially since I was a novice at best before the systemd thing, and my Ubuntu dive involved trying to navigate all 3 of these pretty drastic changes at once (oh yeah, and throw containers on top of that).
I went into it with the expectation that it was going to piss me off, and boy did it easily exceed that threshold.
God, I wish someone would do this to discord already. I'm so sick of updating it through my package manager every other day only for discord to then download its own updates anyway.
Yes, I've disabled the update check. No, it doesn't solve the problem.
Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
And there's the inclusion of non-Free software in the base install, which is completely against the Debian Social Contract.
The Debian Project drastically changed when they decided to allow Ubuntu to dictate their release schedule.
What used to be a distro by sysadmins for sysadmins, and which prized stability over timeliness has been overtaken by Ubuntu and the Freedesktop.Org people. I've been running Debian since version 3, and I used to go _years_ between crashes. These days, the only way to avoid that is to 1) rip out all the Freedesktop.Org code (pulseaudio, udisks2, etc.), and 2) stick with Debian 9 or lower.
> Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
No, it's not. Stable ships ESR, which has its update mechanism disabled. Same for Testing/Unstable: it follows standard releases, but autoupdate is disabled.
Even Official Firefox Package for Debian from Mozilla has its auto-updates disabled and you get updates from the repository.
Only auto-updating version is the .tar.gz version which you extract to your home folder.
This is plain FUD.
Moreover:
Debian doesn't ship PulseAudio anymore. It has been PipeWire since forever. Many people didn't notice this, it was that smooth. Ubuntu's changes are not allowed to permeate without proper rigor (I follow debian-devel), and it's still released when it's ready. Ubuntu follows Debian Unstable, and the Unstable suite is a rolling release, so they can snapshot it and start working on it whenever they want.
I have been using Debian since version 3 too, and I still reboot or tend to my system only at kernel changes. It's way snappier than Ubuntu with the same configuration for the same tasks, and it is the Debian we all know and like (maybe sans systemd; I'll not open that can of worms).
Firefox only updates on its own if installed outside of the package manager. This applies to Debian and its forks. If I click on Help -> About it says, "Updates disabled by your organization". I personally would like to see distributions suggest installing Betterfox [1] or Arkenfox [2] to tighten up Firefox a bit.
>Most obvious example is Firefox. The Debian Project allows Firefox to update outside the packaging system, automatically, at the whim of Firefox.
It seems likely that you personally chose to install a flatpak or tar.gz version probably because you are running an older no longer supported version of Debian.
>These days, the only way to avoid that (crashes) is...
Running older unsupported versions with known, never-to-be-fixed security holes isn't good advice, nor is ripping out the plumbing. It's almost never a great idea to start ripping out the floorboards to get at the pipes.
Pipewire seems pretty stable and if you really desire something more minimal it's better to start with something minimal than stripping something down.
Long-time Debian fan, current Devuan user. I'm sure it still has its problems, but it feels nice and stable, especially on older hardware that is struggling with the times. (Thinkpad R61i w/ Core 2 Duo T8100 swapped in and Middleton BIOS)
So that the read lock is lifted even if reader.read() throws an error.
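Roughly the equivalent of this (sketch; the wrapper is illustrative, since it assumes a runtime with explicit resource management and a reader that doesn't expose Symbol.dispose itself):

    // Wrap the reader so `using` knows how to release the lock on block exit.
    function acquireReader(stream) {
      const reader = stream.getReader();
      return { reader, [Symbol.dispose]() { reader.releaseLock(); } };
    }

    async function readFirstChunk(stream) {
      using handle = acquireReader(stream);
      const { value } = await handle.reader.read(); // even if this throws...
      return value;
    } // ...Symbol.dispose runs on the way out and releases the lock, like a finally block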
Does this only hold for long running processes? In a browser environment or in a cli script that terminates when an error is thrown, would the lock be lifted when the process exits?
The spec just says that when a block "completes" its execution, however that happens (normal completion, an exception, a break/continue statement, etc.) the disposal must run. This is the same for "using" as it is for "try/finally".
When a process is forcibly terminated, the behavior is inherently outside the scope of the ECMAScript specification, because at that point the interpreter cannot take any further actions.
So what happens depends on what kind of object you're talking about. The example in the article is talking about a "stream" from the web platform streams spec. A stream, in this sense, is a JS object that only exists within a JS interpreter. If the JS interpreter goes away, then it's meaningless to ask whether the lock is locked or unlocked, because the lock no longer exists.
If you were talking about some kind of OS-allocated resource (e.g. allocated memory or file descriptors), then there is generally some kind of OS-provided cleanup when a process terminates, no matter how the termination happens, even if the process itself takes no action. But of course the details are platform-specific.
Browser web pages are quintessential long running programs! At least for Notion, a browser tab typically lives much longer (days to weeks) than our server processes (hours until next deploy). They're an event loop like a server often with multiple subprocesses, very much not a run-to-completion CLI tool. And errors do not terminate a web page.
The order of execution for unhandled errors is well-defined. The error unwinds up the call stack running catch and finally blocks, and if it gets back to the event loop, then it's often dispatched by the system to an "uncaught exception" (sync context) or "unhandled rejection" (async context) handler function. In Node.js, the default error handler exits the process, but you can substitute your own behavior, which is common for long-running servers.
All that is to say that yes, this does work, since the termination handler is called at the top of the stack, after the stack unwinds through the finally blocks.