I've been running it for years; I see none of what you mention. I turned off a couple of settings (literally took <5 minutes) and I don't see anything crypto or AI related anymore.
Same. I’ve been using it exclusively on macOS and iOS for a few years now. When you first open it they do show you all those features but as ochronous said you can completely disable them. If they forced crypto and AI on me I’d be out in a flash.
What do you mean exactly by wildcard domains in the context of setting up an app in Dokploy and similar tools? Can you explain your use case and how you did or didn't get it working in Dokploy? Right now I'm trying to figure out which of these to use, and your feedback would help me. Thanks!
I'm not sure that's what you're implying, but DNS-level blocking is not more powerful than filtering in the browser, at least in terms of what it's capable of.
Content-filtering can and most definitely does block domains entirely, but it can also filter page elements served from the same domain which match a known ad "signature".
Though if you can't run an adblocker, e.g. in your Smart TV's browser, then sure, DNS-level blocking is your best bet.
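The capability difference can be sketched in a few lines. This is purely illustrative pseudologic (not any real blocker's code; the domain names and CSS selectors are made up): a DNS blocker only ever sees a domain name, while an in-browser filter also sees the page structure, so it can remove individual elements served first-party from an otherwise legitimate domain.

```python
# Illustrative sketch of DNS-level vs in-browser blocking.
# Domains and selectors below are invented for the example.

BLOCKED_DOMAINS = {"ads.example.net"}           # all a DNS blocker can act on
COSMETIC_FILTERS = {"div.sponsored-banner"}     # what a browser blocker also sees

def dns_block(domain: str) -> bool:
    # DNS-level: all-or-nothing per domain.
    return domain in BLOCKED_DOMAINS

def browser_block(domain: str, css_path: str) -> bool:
    # In-browser: can block whole domains AND individual page elements,
    # even ones served from the same domain as the content itself.
    return dns_block(domain) or css_path in COSMETIC_FILTERS

# A first-party ad slips past DNS blocking but not the content filter:
print(dns_block("news.example.com"))                              # False
print(browser_block("news.example.com", "div.sponsored-banner"))  # True
```

The asymmetry only runs one way: anything a DNS blocker can do (drop a whole domain), the in-browser filter can do too, but not vice versa.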
There's one angle which most of these arguments miss (totally not in the scope of the article, which is fine). A couple of statements I believe are true (based on my and my network's limited experience):
1. Juniors grow. Most of them grow fast, becoming solid mid-levels in 1-1.5y (in the right environment)
2. The industry depends heavily on a wide pool of mid-levels. These are the folks who can produce decent quality solutions, and don't need hand-holding anymore. They are the “velocity” of the team. Engineers tend to spend a few years there, before they might grow into seniors.
3. Seniors will age out.
4. AI doesn't grow (as it is today); it's mostly stuck at the low-junior level. This might change, but currently there are no signs of it.
5. Seniors would need to spend _a lot_ of time fixing AI's output, and course-correcting.
Now, all of this combined: the junior --> senior transition takes say, 5+ years on average (I know, depends). If we move on with the "1/2 senior + AI agents" model, how does a company form a new team? When those seniors move away / retire, who's taking their place? What happens to the velocity of the team without mid-levels now?
If we let this go on for a couple of years before a reckoning of "oh crap" happens, it'll be very hard to come back from this --> certain "muscles" of the aforementioned seniors will have atrophied (e.g. mentoring, growing others), a lot of juniors (and mediors!) will have left the industry.
I hope companies will recognize this risk in time...
The problem the industry faces isn't that juniors don't grow; it's that they spend up to 18 months being a drain on productivity, after which point they tend to leave.
Who would pay for that?
Not a lot of companies, which acts as a filter.
As it turns out, a few companies do, because they are super strapped for cash. That's why a lot of juniors' first experiences are a trial by fire in environments that are on another level of dysfunction, working with either no seniors or bottom-of-the-barrel seniors.
This acts as another filter. Some juniors give up at this point.
These filters prevent junior engineers from becoming senior. This is actually pretty good for seniors - being a rare, in demand commodity usually is.
I don't think AI changes this calculus much, except insofar as AI amplifies the capacity for juniors to build ever-bigger code Jenga towers.
> The problem the industry faces isn't that juniors don't grow; it's that they spend up to 18 months being a drain on productivity, after which point they tend to leave.
Thankfully, that's not my experience with most juniors. Again, my experience is limited (as all of ours is), but if you filter juniors well during hiring, you can get a wonderful set of high-potential learning machines, who, with the right mentors, grow like crazy.
I've worked with a bunch of great juniors too. None of this changes the fact that they can't hit the ground running, they make mistakes which have to be unpicked, and mentoring them eats up time.
> The problem the industry faces isn't that juniors don't grow; it's that they spend up to 18 months being a drain on productivity, after which point they tend to leave.
This is largely a result of the compensation behaviour of the industry. A junior that gets hired and grows does not get a raise in their salary to the market rate, the only way for them to get the compensation commensurate with their new skills is to leave and get hired somewhere else. Companies can avoid this problem by not doing this.
That's kind of my point. If they're going to subsidize a junior being trained and then pay market rate at the end, why not just avoid the subsidy bit and poach somebody else's pretrained junior?
It's kind of a tragedy-of-the-commons effect, except the "tragedy" is for tech employers - who are stroppy because other companies don't have to provide their workers with a free sushi bar, so why should they?
>I hope companies will recognize this risk in time...
I more than wholeheartedly agree with this great analysis. But we have to ask ourselves: when have most companies (by which I mean mostly those run by hired CEOs) ever really recognised the risks? And why should we hope for it? Those that don't will hopefully wither away, their place filled by those more, for lack of a better word, "agile" :)
> AI doesn't grow (as it is today), it's stuck on the low-junior level mostly. This might change, but currently there are no signs for this.
This is such a bizarre take. The entire history of AI is one of growth, to say nothing of the last few decades, or even the past few years. To say that there are no signs of AI growing is, if nothing else, proof that humans don't grow from generation to generation: we make the same logical fallacies that we did millennia ago.
I'm having a hard time identifying anything which could be labeled "AI" in my programming experience, outside of LLMs. Considering things I've also read about, that probably takes us back to 1950.
One more thing to add: AI enhances capabilities of everyone, including juniors. Juniors with LLMs can do more than juniors without them could and therefore their value proposition is greater than before.
But they don't learn from that, they turn the crank of the AI tooling and once they have something that works it goes in and they move on. I've seen this directly, you can't shortcut actual understanding.
I disagree. LLM assisted coding is yet another level of abstraction. It’s the same thing as an assembly programmer saying that OOP programmers don’t learn from OOP coding.
Today’s juniors still learn the same core skill, the abstract thinking and formalization of a problem is still there. It’s just done on a higher level of abstraction and in an explicitly formulated natural language instead of a formal one for the first time ever. They’re not even leaving out the formal one completely, because they need to integrate and fine tune the code for it to work as needed.
Does it introduce new problems? Sure. Does it mean that today’s juniors will be less capable compared to today’s seniors once they have the same amount of experience? I really doubt it.
>It’s the same thing as an assembly programmer saying that OOP programmers don’t learn from OOP coding.
Not the same. Whether you are writing Assembly or Java, you have to determine and express your intent with high accuracy - that is a learned skill. Providing problem details to an LLM until it produces something that looks correct is not the same type of skill.
If that is the skill that the business world demands in 5 years, so be it, but arguing that it's simply the next step in the Assembly, C, Java progression makes no sense.
> Providing problem details to an LLM until it produces something that looks correct
If you're using LLMs to code like this, you're using them incorrectly. You need to specify your intent precisely from the start, which is a skill you learn. You can use other tools like OOP languages incorrectly with "working" results as well; that's why code quality, clean code, and best practices are a thing.
I'm sure many junior developers use LLMs the way you described but maybe that's exactly what the universities and seniors need to teach them?
You might, just as ages ago, when people were complaining about juniors copy-pasting Stack Overflow answers, you might have said you were using them to learn.
LLMs are a turbocharged version of the same problem: rather than copying a code fragment from Stack Overflow, you can now have an LLM spit out a working solution. There is no need to understand what you're doing to be productive, and if you don't understand it, you have no model or reasoning to apply to other problems in the future. Maybe AI will save you there too for a while, but eventually it won't, and you'll find you've built your career on sand.
Or maybe I'm wrong and we're all headed for a future of being prompt engineers.
Generally the understanding happens fairly quickly just by glancing through the code.
I was a sceptic up until recently, when I failed to create a solution myself.
Since I am mostly a hobbyist programmer(for 25 years and counting) I often find I don’t have the time to sit down for prolonged periods to reason about my code.
ChatGPT helps my work-tired brain develop my code 10x quicker, easily, by removing roadblocks for me.
Indeed. With AI, juniors can create horrible software faster and in bigger quantities. In my experience, AI really only enhances your existing talent, and if you have none, it enhances the lack of it: you still need to know how your AI slop fits into the larger system, but as a junior you most likely don't. Couple that with skipping the step where you must actually understand what you copy and paste from the internet for it to work, and you also lengthen the time it takes for a developer to actually level up, since they do much less real learning.
That's at least my experience working with multiple digital agencies and seeing it all unfold. Most juniors don't last long these days precisely because they skip the part that actually makes them valuable - storing information in their head. And that's concerning, because if to make actually good use of AI you have to be an experienced engineer, but to become an experienced engineer you had to get there without AI doing all your work for you, then how are we going to get new experienced engineers?
I think what you’re describing is caused by the fact that people that would previously pursue law, medicine or some other high paying field now pursue software engineering because it pays well and is a pretty comfortable job overall.
The nerdy tinkerers stay the same and AI empowers them even more. Are they rare? Yes. But this shifts the topic from science/engineering to economics/sociology. Granted, that was the topic of the original submission, but for me that's the less interesting part.
Remembering some PRs from before... it was always about how much attention to detail/quality the particular junior dev paid. Oftentimes it's not much at all, which might be harder to notice now.
it seems pretty obvious to me that you grow as a programmer more from "sit down and write code to do a thing" than you do from "sit down, watch an LLM shit out code and give it a pretty cursory review"
I disagree. LLM assisted coding is yet another level of abstraction. It’s the same thing as an assembly programmer saying that OOP programmers don’t learn from OOP coding.
Today’s juniors still learn the same core skill, the abstract thinking and formalization of a problem is still there. It’s just done on a higher level of abstraction and in an explicitly formulated natural language instead of a formal one for the first time ever. They’re not even leaving out the formal one completely because they need to integrate and fine tune the code for it to work as needed.
Does it introduce new problems? Sure. Does it mean that today’s juniors will be less capable compared to today’s seniors once they have the same amount of experience? I really doubt it.
I'm not making any pronouncement about if LLM are good things or not for juniors but your OOP analogy doesn't track.
One can be confident that they wrote correct Java code without knowing what the JVM machine code output is. But you can't know whether the code output by an LLM is correct without understanding the code itself.
> One can be confident that they wrote correct Java code without knowing what the JVM machine code output is
I'm sure there are some pretty major bugs in production code because someone used some Java functionality intuitively, without understanding it fully, and in some edge case it behaves differently than anticipated.
Of course this issue is much more prominent in LLM-assisted coding, but we're back to square one: the higher the level of abstraction provided by the tool, the more room for mistakes it leaves, but the higher the productivity. It's easier to avoid bugs of this type when using assembly than when using Java.
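The kind of "used it intuitively until an edge case bit" failure exists in every high-level language. A Python analog (chosen purely as an illustration, not something from this thread) is the mutable default argument: the feature behaves as you'd guess on the first call and surprisingly on the second, because default values are evaluated once, at function definition time.

```python
# Illustrative edge case: a mutable default argument is created once
# and shared across all calls, not rebuilt per call.

def append_item(item, bucket=[]):   # looks like "fresh list each call"
    bucket.append(item)
    return bucket

print(append_item(1))  # [1]       - matches intuition
print(append_item(2))  # [1, 2]    - surprise: same list as the first call

# The idiomatic fix makes the intent explicit:
def append_item_fixed(item, bucket=None):
    if bucket is None:
        bucket = []                 # genuinely fresh list per call
    bucket.append(item)
    return bucket

print(append_item_fixed(1))  # [1]
print(append_item_fixed(2))  # [1]  - independent of earlier calls
```

The point stands either way: the abstraction hides detail that occasionally matters, whether the abstraction is a language runtime or an LLM.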
Idk if I learned more from actually typing code or debugging and looking at the output/logs, sometimes even running code in my mind to figure out the problems. Maybe it's just "cursory".
AI for coding tasks still needs a great deal of handholding. Someone senior needs to delete the roughly 1/4 of consistent garbage it currently adds alongside the acceptable output, and then still maintain that code.
> Now, all of this combined: the junior --> senior transition takes say, 5+ years on average (I know, depends). If we move on with the "1/2 senior + AI agents" model, how does a company form a new team? When those seniors move away / retire, who's taking their place? What happens to the velocity of the team without mid-levels now?
> If we let this go on for a couple of years before a reckoning of "oh crap" happens, it'll be very hard to come back from this --> certain "muscles" of the aforementioned seniors will have atrophied (e.g. mentoring, growing others), a lot of juniors (and mediors!) will have left the industry.
> I hope companies will recognize this risk in time...
As someone in an org with some exposure to legacy support: don't underestimate management's ability to stick its head in the sand until it's too late, in order to focus on the new shiny.
I think we're down to two mainframe developers, still have a crap-ton of dependencies on it and systems that have been "planned to be decommissioned" for decades, and at least one major ticking time bomb that has a big mainframe footprint.