I am irritated that "self hosted" seems to mean "in your own house" and everyone just agrees.
To me, self-hosted also means I rent a machine at Hetzner and run the server software on it. It's cheap, stable, fast, and secure, and Hetzner won't screw me over with my data. I have a LOT less headache, and I can rent a vserver for a long time before the rental cost surpasses what the hardware for a server running at home would cost.
I can also very simply assign a domain to it, and I'm pretty sure that software like Nextcloud offers OAuth access, so my friends would NOT be required to sign up for my "weird app". Well, technically they do, but OAuth automates it.
Am I missing something?
Why do you claim that Hetzner won’t screw you over with your data?
What you’re doing with Hetzner is just a few less layers of abstraction compared to AWS or Azure. They can still theoretically take down the machine or steal your data, if they wanted to.
I don’t know what the correct definition of self hosted is, but there is a big ideological difference between what you’re doing and self-hosting actual, physical hardware in your home.
Sure, in theory Hetzner could pull the plug or access the data on my VPS. But that’s true of any infrastructure... just like someone could break into my house and steal my self-hosted server.
In fact, I’d argue the physical risk of loss, theft, or data compromise is much higher at home than in a professional datacenter with power redundancy, security controls, and constant uptime monitoring.
It’s a bit like saying, "Don’t trust the bank, they could take your money and freeze your account — keep all your money under the mattress." Technically possible, yes. But come on.
I never said any of those things; you're literally arguing against made-up points I didn't make.
But my point is that Hetzner isn't self-hosted. Similar to how storing money in a bank isn't self-banking. Self-hosting means you host your content yourself, in a server room that you have physical access to. Hetzner isn't that.
And that’s fine, self-hosting is pretty silly for most use cases nowadays. And all of the positives you mentioned are true. But I hate how we’ve turned “self-hosting” into a political dog-whistle for hosting with companies that seem more trustworthy than the big guys. And there’s nothing wrong with not wanting to trust GCP or Azure - just don’t call it self hosting.
You are not; I consider both self-hosting.
I used Hetzner for a long time and they were doing a great job. These days I run a server in my basement, because I had the hardware around. Most months of the year it also contributes to heating the house :D
I've been using LLM-based tools like Copilot and Claude Pro (though not Claude Code with Opus), and while they can be helpful – e.g. for doc lookups, repetitive stuff, or quick reminders – I rarely get value beyond that. I've honestly never had a model surface a bug or edge case I wouldn't have spotted myself.
I've tried agent-style workflows in Copilot and Windsurf (on Claude 3.5 and 4), and honestly, they often just get stuck or build themselves into a corner. They don't seem to reason across structure or long-term architecture in any meaningful way. It might look helpful at first, but what comes out tends to be fragile and usually something I'd refactor immediately.
Sure, the model writes fast – but that speed doesn't translate into actual productivity for me unless it's something dead simple. And if I'm spending a lot of time generating boilerplate, I usually take that as a design smell, not a task I want to automate harder.
So I'm honestly wondering: is Claude Code Max really that much better? Are those productivity claims based on something fundamentally different? Or is it more about tool enthusiasm + selective wins?
Sorry, but the approach is too naive and the tech isn't there yet.
You can't make up a couple of conversation topics and expect the LLMs to do the rest by just switching languages. People approach the same topics completely differently in different languages. The app looks like someone picked a couple of topics and the rest is "just" ChatGPT Advanced Voice Mode.
And the worst thing is that LLMs via TTS do not sound native and cannot teach you pronunciation or listening comprehension (which is the whole point of having a spoken conversation).
And the other way around: the STT will not notice pronunciation mistakes made by the student, so the app cannot tell you, "oh, it's pronounced like this."
I wonder about the mentioned application in mobile devices. With mobile and tablet devices one usually has a very durable glass layer between the screen and the outside world - not sure if sound would be able to pass through that.
Hm, you cannot simulate sunlight at all with an RGB LED ring. You can create something that looks cool to our human eyes, but the average plant wouldn't last long beneath it because it's basically living in the dark all the time; the important wavelengths are missing.
This is also a huge problem for people keeping a terrarium with geckos or other saurians - they see and need very different wavelengths than we humans do.
I am no expert on carnivorous plants - maybe they are fine, but seeing that there is no UV-emitting part in the lighting setup, an important part of the spectrum may be missing for the plants.
From skimming Wikipedia it seems like most absorption happens in the blue range and a bit less in red, with almost nothing in between at green. Most white LEDs are a blue emitter with a phosphor, so you usually get more blue light than the rest already.
But surely there are LEDs optimised for that task (cannabis grow lights).
Not that I am planning to travel to the US at any point, but the first thing that came to my mind was: why not just send the phone by parcel, fly without it, and pick it up later? Even though I find it embarrassing that such hacks are necessary in the first place.
International shipments, not to mention transcontinental ones, are expensive and unreliable enough. Then you have to manage receiving an international package at the hotel or in some temporary housing...
Colocation, I don't know... Without questioning people's preferences, I think at that point I'd rather look for a decent fiber connection for my home and let that Raspberry Pi run in my own cupboard. I mean, it's a RASPBERRY; you would probably do fine even without the fiber connection.
It depends what it is. I have 25/25 fiber at home for practically nothing (~70 USD a month), but I can only go so far even with a UPS. If I lose power for too long or my internet goes down, I have no backup, which I would have at a colocation facility.
If those circumstances occur, what would you be serving that can't afford to go offline for a couple days?
It's important to have an answer to that question, rather than to assume that being offline when your home internet is down is inherently a problem. You can safely estimate using "X nines will cost X digits per month":
1 nine, 36.5 days/year downtime, is $#/month. (open wifi tier)
2 nines, 3.65 days/year downtime, is $##/month. (residential tier)
4 nines, 1 hour/year downtime, is $####/month. (datacenter tier)
5 nines, 5 minutes/year downtime, is $#####/month. (carrier tier)
Speaking from experience, it's both important to decide which 'nines' you require before you invest in making things more resilient — and it's important to be able to say things to yourself like, for example, "I don't care if it's down 4 days per year, so I won't spend more than $##/month on hosting it".
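In case it helps, the downtime side of that estimate is just arithmetic; here's a quick sketch (the per-month prices are the hand-wavy part and aren't computed here):

    # Downtime budget per year for a given availability ("number of nines").
    HOURS_PER_YEAR = 365 * 24

    for nines in (1, 2, 3, 4, 5):
        availability = 1 - 10 ** -nines            # e.g. 3 nines -> 99.9%
        downtime_h = HOURS_PER_YEAR * (1 - availability)
        print(f"{nines} nines = {availability:.3%} up, ~{downtime_h:.1f} h/year down")

That reproduces the list above: 1 nine is ~876 h (36.5 days), 2 nines ~87.6 h (3.65 days), 4 nines ~53 minutes (roughly the "1 hour"), and 5 nines ~5 minutes.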
What does 25/25 mean here? Gbps feels high for the home, but Mbps feels insanely expensive at that rate. (And also I didn't know they did fiber that slow, it's really only available in 1 Gbps and sometimes 500 Mbps here)
I don't see why 25Gbps symmetric would be so surprising. My current ISP, Ziply Fiber, offers 100Mbps, 300Mbps, 1Gbps, 2Gbps, 5Gbps, 10Gbps, and 50Gbps (all of them symmetric) in most of their service areas. I'm sure there are other providers with similar offerings in some parts of the country. My previous ISP, Sonic.net, offers speeds up to 10Gbps. The reported price is pretty nice though.
Damn, that sounds nice, like the fiber in Switzerland linked to in another comment.
Though the small cost is probably overshadowed by the large infra costs at home. So now you need a 25 Gbps router, together with the rest of the topology like QSFP+ switches, and then actual computers with >= 25 Gb/s NICs to make use of it. And then all the appropriate cooling for it. It's starting to sound a lot like a home data center :P
You can get 25G switches/routers for not much nowadays; check MikroTik. Throw a couple of Intel NICs from eBay into your machines' PCIe slots and really it's not that big of a deal.
It's always a surprise to me how expensive internet access can be in the US. Here in France a 1Gb/700Mb fiber connection costs 30€/month (and this is without commitment, and includes TV stuff - "more than 180 channels", whatever that means - and a landline phone).
The EU invested pretty heavily into making sure even very remote parts of Europe, like northern Finland, have great Internet. I was very pleasantly surprised when I was able to work from home at the in-laws'!
Because of these new subsidised fiber deployments, it's not uncommon anymore for rural/semi-rural areas to have better connectivity than urban or suburban areas, which is a bit awkward.
Internet speeds and prices are all over the place in the US. I pay $60 per month for 1Gb symmetric fiber (which really performs at 1.2 symmetric, yay me) at my house and $60 per month for 500/30 cable internet at my rental. Two different areas three postal codes apart, with different vendors, prices, and products (even when the vendor is available in both).
The way we sliced up space for utilities (lots of legal shared monopolies/guided capitalism) and their desire to build the last mile in their area leads to many different prices and products within a walkable distance. Before that 500/30 service showed up the best we had was unreliable 200/15 from another provider.
And it varies widely. I pay $170 a month for 30 Mbps down and 15 up, lmao, and I have 2 options to choose from that have the exact same service for the exact same price. Telecom in the US is beyond horrifyingly bad.
These days I'm less excited about residential fiber deployments, as they are more often than not some passive optical setup, which is worlds apart from the proper active fiber you'd get in a DC or on a dedicated business line. For example, standard 10G-PON is asymmetric shared 10G down / 2.5G up (10G-EPON is even worse, 10G/1G asymmetric), with up to a 128-way split. That means that with your fancy fiber, in the worst case you might get barely 20 Mbps of upload capacity.
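That worst-case figure is just the shared upstream divided by the split ratio; a quick sanity check (nothing here beyond the numbers already mentioned above):

    # Worst-case per-subscriber upstream on a fully loaded PON segment.
    def worst_case_mbps(shared_upstream_gbps: float, split_ratio: int) -> float:
        return shared_upstream_gbps * 1000 / split_ratio

    print(worst_case_mbps(2.5, 128))   # standard 10G-PON, 128-way split: ~19.5 Mbps
    print(worst_case_mbps(1.0, 128))   # 10G-EPON upstream, same split:   ~7.8 Mbps
    print(worst_case_mbps(2.5, 32))    # a more typical 32-way split:     ~78 Mbps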
IME most new residential fiber deployments in the US are using XGS-PON which provides 10 Gbps in both directions. Typically ISPs don't put the maximum number of clients in a node that the standard allows. I've heard 32 is a common number in practice.
Obviously it'd still be a bad idea to run a high traffic server on a residential connection, but as long as you're not streaming 4K video 24/7 or something you'll probably be OK.
Here proper fiber is the norm. That doesn't mean it's not oversubscribed to the next hop, though; typical oversubscription is 30x, and it would be insanely expensive if they didn't do it.
> in the worst case you might get barely 20 Mbps upload capacity
"in the worst case" being the key point, and frankly, 20 Mbps doesn't actually sound too bad as the theoretical minimum.
In practice you're unlikely to hit situations where this is a problem even if everyone was hosting their blog/homelab/SaaS/etc.
This is only a problem (and your ISP will end up giving you hell for it) if you're hosting a media service and are maxing out the uplink 24/7. For most services (even actual SaaS) it's unlikely to be the case.
PSA: if you're having RPi SD-card-corruption issues, get a higher-amperage power supply (the official power supplies work well). Low-voltage warnings are a telltale sign.
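If you want to check for that programmatically instead of waiting for the warning icon, here's a rough sketch that reads the throttling flags on Raspberry Pi OS via "vcgencmd get_throttled" (bit 0 = under-voltage right now, bit 16 = under-voltage since boot; double-check the bit layout against the docs for your firmware):

    # Rough sketch: decode the under-voltage bits reported by the Pi firmware.
    import subprocess

    out = subprocess.run(["vcgencmd", "get_throttled"],
                         capture_output=True, text=True, check=True).stdout
    flags = int(out.strip().split("=")[1], 16)

    if flags & (1 << 0):
        print("Under-voltage right now - the power supply is too weak.")
    if flags & (1 << 16):
        print("Under-voltage has occurred since boot.")
    if flags == 0:
        print("No throttling flags set.")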
SD cards are indeed a really bad deal when it comes to reliability, especially if, like me, you tend to slap Raspis everywhere almost as a reflex: before you know it, you end up with a large-ish fleet of the things in your house.
But: Raspis these days work 100% fine with SSDs, and while a small SSD is not yet as cheap as an SD card, it's not far off.
I stopped using SD cards for my Raspis quite a while ago.
I've also had good experience setting up Pis with a read-only root filesystem. All data needs to be sent off-device (or at least onto external storage), but it wasn't too tricky and it should avoid the usual SD-card issues.
Well, it depends what's your cup of tea in terms of learning. There are a LOT of courses on Udemy on that topic (if you prefer learning from videos).
I would recommend looking around on Hugging Face, although I found it a bit intimidating at the beginning. The place is just HUGE and they assume some prior knowledge.
I would also recommend creating a platform account on OpenAI and/or Anthropic and looking through their docs. The accounts are free, but if you put a few dollars in, you can actually make requests against their APIs, which is the simplest way of playing around with LLMs imho.
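For example, a minimal sketch of a first request with the openai Python package (assumes you've put your key in the OPENAI_API_KEY environment variable; the model name is just an example, pick any current one):

    # pip install openai
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="gpt-4o-mini",   # example model name
        messages=[{"role": "user", "content": "Explain embeddings in two sentences."}],
    )
    print(response.choices[0].message.content)

Anthropic's Python SDK works very similarly, and the few cents such a request costs just come out of the credit you put in.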
Here are some topics you could do some research about:
- Foundation models (e.g., GPT, BERT, T5)
- Transformer architecture
- Natural Language Processing (NLP) basics
- Prompt engineering
- Fine-tuning and transfer learning
- Ethical considerations in AI
- AI safety and alignment
- Large Language Models (LLMs)
- Generative models for images (e.g., DALL-E, Stable Diffusion)
- AI frameworks and libraries (e.g., TensorFlow, PyTorch, Hugging Face)
- AI APIs and integration (also frameworks to build with AI, like LangChain/LangGraph)
- Vector databases and embeddings
- RAG (see the small sketch after this list)
- Reinforcement Learning from Human Feedback (RLHF)
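To make the embeddings / vector database / RAG bullets less abstract, here is a tiny sketch of the retrieval step using the same openai package plus numpy (the model name and example texts are placeholders):

    # pip install openai numpy
    import numpy as np
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    docs = ["Cats purr when they are content.",
            "A Raspberry Pi can boot from an SSD.",
            "Hetzner rents out servers in their datacenters."]
    query = "single-board computers"

    # Embed the documents and the query in one call; model name is an example.
    emb = client.embeddings.create(model="text-embedding-3-small",
                                   input=docs + [query])
    vectors = np.array([item.embedding for item in emb.data])
    doc_vecs, query_vec = vectors[:-1], vectors[-1]

    # Cosine similarity - this nearest-neighbour lookup is the core of RAG retrieval.
    scores = doc_vecs @ query_vec / (
        np.linalg.norm(doc_vecs, axis=1) * np.linalg.norm(query_vec))
    print(docs[int(np.argmax(scores))])   # should pick the Raspberry Pi sentence

A vector database is essentially this done at scale with indexing; RAG then pastes the retrieved text into the prompt.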
I recently wrote an article challenging the common urge to dive straight into coding when inspiration strikes. As developers, we often feel compelled to start coding immediately, but I've found that taking a step back for extensive planning can lead to more efficient and successful projects.
Key points:
- Mental modeling is faster than actual coding
- "Mental coding" can happen anywhere, anytime
- Tools like Obsidian help organize thoughts before coding
- LLMs can be used as brainstorming partners
- Always keep the MVP in mind, then iterate
I'm curious to hear how other developers approach the planning phase of their projects. Do you have any unique pre-coding rituals or tools you find particularly useful? How do you balance planning with the desire to start coding?