Hacker News | jorl17's comments

Alec is probably my favorite YouTuber. I remember catching his videos before he really blew up and they ticked all my nerd boxes! Unlike other youtubers I enjoy, I never seem to get tired of his content — keep going!


His channel is a breath of fresh air on today's YouTube. No clickbait titles/thumbnails, no exaggeration, no drama, no filler content. That's rare these days. Everything is well organized and clearly explained. His videos are often long, but every minute is valuable. His videos are like the opposite of CNET -- you learn more after watching 2 minutes of Technology Connections compared to 20 minutes of CNET.


He and "cathode ray dude" are likely my favorite youtubers for this exact reason. When their videos come out I'm cracking out the snacks and watching them on my OPS tv


I just wish cathode ray dude would sometimes put out a 25-minute video instead of always 90-minute videos


he did, on his second channel for a little while in the little guys era


I like his humor as well.


YT recently recommended his explanation of how pre-computer pinball machines worked to me - a series of 3, hour-long videos. Gave me something to look forward to on my commute. I shared it with everyone I know, and now I'm sharing it with you:

https://www.youtube.com/watch?v=ue-1JoJQaEg

Fascinating (and insanely impressive) to see how a bunch of switches and stepper motors implement complex logic.


Not from the same Youtuber but that video reminded me of another great one about how mechanical bowling alley machines work: https://youtu.be/Iod6uwUGM2E


I find myself randomly recommending his videos to friends in the middle of conversations. Content like this is why I love YouTube.


Early arcade video games (pre Space Invaders) also didn't use universal microprocessors but relied only on circuit boards without software.


TTL logic, timers, oscillators, triggers, and more.

The circuit is the game.


I've been a huge fan of these videos. They explain electro-mechanical pinball machines incredibly well plus they're beautifully photographed. A remarkable amount of effort, thought and care went into creating them.


I have my name listed in all of his videos going back to right around when he started his Patreon. You can find me on the first "page" as it scrolls by. Love his videos.


That has me wondering, do any youtubers sell Executive Producer credits for funding like films?


Lots of youtubers with Patreons do have tiered credits, with bigger doners having separate credit sections with fancier titles, and usually their names are bigger and/or stay on the screen longer, which kind of seems similar


A big difference here is that EPs on a feature can get ROI on their money. Of course the cliche is that Hollywood accounting can play games with that, but I doubt any Patreon supporter at any level would ever start to see any kind of revenue sharing from a YouTube channel's monetization.


Well, you deal with that the same way you deal with Hollywood accounting: you negotiate a cut of the gross for any episodes you sponsor.


*donors


C&Rsenal (~hour long historic firearms documentaries) does


Defunctland offers EP credits but is currently sold out.


His videos are so interesting. I went out and bought a rice cooker after watching his explanation of its mechanism.


Same here. I used it every day during COVID


I can also recommend:

  VWestlife
  This Does Not Compute
  Michael MJD
  Tech Tangents
  Janus Cycle
  LGR
  Posy
  Cathode Ray Dude


Posy seconded. He's weird (and I'm certain he would agree), but in a fun and interesting way. The music used in his videos is composed and recorded by himself, btw.

A recommendation of mine is Bad Obsession Motorsport. Two men in a shed put a Celica engine in an Austin Mini. So far it's taken 12 years and 41 episodes. Some astonishing engineering going on there.

If you're into cars, I'll also recommend "driving 4 answers". Very well researched and presented videos about engine technology.


>He's weird

He's just Dutch :)


He's very Dutch indeed. His English is also full of Dutch-isms that maybe only Dutch people recognize. I'm Dutch, but live in Canada. Watching his videos makes me miss my home country.


Seconding Bad Obsession. Not only is their build quality outstanding, their video production efforts are top notch. Their dedication to the concept and execution of Project Binky is nothing short of amazing.


I got a little less interested in the videos once they got to trimming out the car. I liked it when they were building the car and doing (what seems to me) like excellent work. Then they got to the dashboard and it became "whatever goes", like the dashboard clock in the latest video...


I feel like some of that wonkiness (like the ridiculous clock) has to do with their drive to keep the car's feel as "80s Mini" as possible.


+ Techmoan


If the idea of just chilling out and appreciating old tech with a slick presentation sounds good to you, you might like https://youtube.com/@PosyMusic


I know it's a cliche to say a Youtuber is unique, but Posy really is quite incredible. He's certainly not the only one making videos about vintage 80s technology but his great videography, calm tone, odd manner of speech, occasional goofy humor, and beautiful custom-made audio soundtracks make for a mesmerizing presentation.


A couple more, adjacent:

Ahoy (if you like Amiga and old video games, I cannot recommend enough)

Ben Eater

Majuular

Tantacrul

And of course Veritasium with the consistently super interesting science videos.


Ahoy is essential just for how well produced their deep dive content is. The great art and music really elevate it from "Watch something for an hour and learn some computing history" to "Have an experience for an hour".


Majuular's Ultima retrospective is one of the highlights of my subscription feed


Ooh, nice to find someone who is following that too ahahah

Ultima Underworld is my favourite so far, that's an outstanding game.


It's a shame that Druaga1 stopped posting on YouTube because he should be on that list.


CelGenStudios and Usagi Electric are good channels for vintage computing stuff.


+1 for LGR. Also adding TechMoan to that list.


  Huygens Optics
  CuriousMarc
  Applied Science (<- not the journal)  
  clabretro
  xkcd's What If?
  optimum


Calum, LowSpecGamer, Mustard, Rhystic Studies


I find his content wildly good but his voice to be so grating I can barely stand it.


Don't watch Aging Wheels then. Love both of them, but my wife complains when I watch either on the living room TV.


Then you'd really be missing out, because Aging Wheels is awesome


Same!


The first time I came across his channel I felt similarly, but coupled with the dry humor, passive aggressive offhand comments, and intentionally long pauses waiting for the joke to land, I began to feel like it went with the tone of the content. I wasn’t sure at first, but he seems very self aware.

The whole thing reminds me of some 80s PBS and Wes Anderson mashup in the best way.


Yeah, he rides right up to, and sometimes crosses, the line of being a bit too hokey/jokey for me. But the other 95% of the content of his videos is so amazingly good that I can get over the eye-rolly bits. He absolutely deserves his success.


Thankfully YouTube allows you to 2x the playback. That was the only reason I watched most of this video.


I don’t know his experience with academics but if the stars aligned, he would be an amazing university lecturer.


Majored in hotel management and that was his job until the channel took off. You get the sense that he'd be good at literally anything.


I also just seem to like the guy. He comes across as knowledgeable and level-headed.


I am a big fan of his channel but in a lot of his videos lately, the tone has been somewhere between holier-than-thou and outright preachy. Just because you spent a week researching a semi-obscure topic enough to present about it on YouTube of all places doesn't make you an authority on the matter, and it absolutely doesn't mean you're suddenly qualified to dismiss people who disagree with your conclusions.

I prefer his videos where the vibe was more along the lines of, "Hey, I've been playing with this neato old technology lately, what say we nerd out about it for 38 minutes or thereabouts?"


This is only very vaguely related but this title made me think of a very touching book on the subject of learning: Flowers for Algernon.

I definitely recommend it for those who enjoy thinking about what life is like for people of different perceived "intelligence" levels.


Do you have any resources (search engines, prompts, MCP, other tools) to help with this?

I feel that it is quite obvious the next century will have China leading the pack, and I'd really like to be able to prepare for that.


I'm not sure what the parent poster is getting at about information on Chinese business, politics and culture being hard to find because that stuff is widely written about in the global media, and there are plenty of English language sources. It almost seems counterproductive to provide links to resources because it's artificially limiting what you will be exposed to, but here we go anyway...

China Media Project (media analysis) - https://chinamediaproject.org/

China Leadership Monitor (political analysis) - https://www.prcleader.org/

Made in China Journal (social analysis) - https://madeinchinajournal.com/

What's on Weibo (pop culture reporting) - https://www.whatsonweibo.com/

The China Project (formerly SupChina, general reporting) - https://thechinaproject.com/

* edit to add: seems like The China Project shut down end of 2023, but leaving the link for context

Sixth Tone (state-owned media specializing in human interest stories) - https://www.sixthtone.com/

On the state-owned media tip there are also more blatant propaganda outlets like Global Times, People's Daily etc, plus private-owned media that largely toe the party line like South China Morning Post.

There are also a set of mostly US-based thinktanks that do solid macro-level reporting on geopolitical and economic issues, guys like Jamestown, CSIS, German Marshall Fund etc.

Then there are countless blogs and newsletters and influencers who report on specific niches, everything from economic analysis to boyslove fandom... You can jump on Bilibili to watch shows and see all the "bullet chat" jargon and memes, you can rub shoulders with the upper middle class on Xiaohongshu, read millions of Steam reviews or check out the forums of games popular in China, follow ABC or expat channels on YouTube etc. I find it very hard to believe that people in 2025 can't find any information about what's going on in China.

All that said, I do share the sense that there is a bit of a gap between Chinese tech workers and foreign tech workers, and it's because most Chinese tech workers don't tend to prioritize learning English to the same degree that tech workers around the rest of the world do. There are lots of publications that report on the Chinese tech industry from an investor or economic perspective, probably written by all those MBAs who went to study overseas, but nerd-to-nerd level exchange is lacking imo. I suppose you could ask an LLM to summarize content from v2ex.com (HN-ish Reddit), tieba.baidu.com (Reddit-ish Reddit), segmentfault.com (StackOverflow) etc, but that doesn't really do much to engage in a social way so I'm not sure if it's what you're looking for. Chinese-language Github projects are one place you could explore, if you specifically want to interact with developers over there.


Thank you for the link to V2ex.com!

Their comments section has "Please do not copy and paste AI-generated content when answering technical questions" in its footer.


My "favorite" Google dark-pattern, for which the dreamy kid in me hopes they get fucking sued to oblivion for how offensive it is[1]:

1. Open safari

2. Type something so that it searches Google

3. A web results page appears

4. Immediately a popup appears with two buttons:

- They have the same size

- One is highlighted in blue and it says CONTINUE

- The other is faint and reads "Stay in browser" (but in my native language the distinction is even less clear)

5. Clicking CONTINUE means "CONTINUE in the app", so it takes me to the Google App (or, actually, to the app store, because I don't have this app), but this does not end there!

6. If I go back to the browser to try to fucking use google on my fucking browser, as I fucking wanted to, I realize that doing "Back" now constantly moves me to the app (or app store). So, in effect, I can never get the search results once I have clicked continue. The back button has been hijacked (long pressing does not help). My only option is to NEVER click continue

7. Bonus: All of this happens regardless of my iPhone having the google app installed or not

So: Big button that says "CONTINUE" does not "CONTINUE" this action (it, of course, "CONTINUES" outside).

I just want to FUCKING BROWSE THE WEB. If I use the google app, then clicking a link presumably either keeps me in its specific view of the web (outside of my browser), or it takes me out of the app. This is not the experience I want. I have a BROWSER for a reason (e.g. shared groups/tabs...)

Oh! And since this happens even if I don't have the app, it takes me to the app store. If I install the app via the app store, it then DOES NOT have any mechanism to actually "Continue". It's a fresh install. And, of course, if I go back to the browser and hit "back", I can't.

So for users who DO NOT HAVE THE APP, this will NEVER LET THEM CONTINUE. It will PREVENT THEM FROM USING GOOGLE. And it will force them to do their query AGAIN.

Did the people who work on this feature simply give up? What. The. Fuck?

This behavior seems to happen on-and-off, as if google is gaslighting me. Sometimes it happens every time I open Safari. Some other times it goes for days without appearing. Sometimes in anonymous tabs, sometimes not. Logged in or not, I've seen both scenarios.

I can't be sure, but I genuinely believe that the order of the buttons has been swapped, messing with my muscle memory.

Basically it's this image: https://www.reddit.com/r/iphone/comments/1m76elp/how_do_i_st...

Except a still image cannot describe the excruciating process of dealing with it — especially realizing "oh, wait, I clicked the wrong button, oh wait, no no no, get out of the app store, oh oh oh what did I type again? Damn I lost it all!..."

[1]I would quit before implementing this feature. It disgusts me, and we're talking about google, not some run-of-the-mill company whom you have to work for to barely survive. This is absolutely shameful.


Can't reveal for confidentiality reasons but I know several examples, and have worked and been working on a couple, too.

But my claim isn't that there's no developer involved, it's two-fold:

1. LLMs do allow for features which were not possible before, or which would require significantly more engineering, if possible at all. For example: producing a sensible analysis of a piece of poetry (or thousands of pieces of poetry) in seconds.

2. LLMs, if used correctly (not just "stick a prompt in it and pray") allow for very fast time-to-market, building quick solutions out of which you can then carve out the bits that you know you can (and should) turn into proper code.

Point 2. cannot be overstated. A smaller team (of developers!) can now get to market very quickly, as well as iterate to appropriate product-market fit fast, offloading logic to LLMs and agentic loops, while slowly and selectively coding in the features. So, slowly, we replace the LLM/agents with code.

Not only have I worked on and seen products which fit point 1. (so very hard to do without LLM's abilities), but I have seen a lot of 2.

Furthermore, I've seen a sentiment on HN (and with peers) which I find is incredibly true: LLMs and agents allow us to offload the parts we would never work on due to not enjoying them in the first place. They effectively let us "take the plunge" or "finally pull the trigger" on a project which we would have otherwise just never been able to start. We are able to try new things more often, and take more risk. As a personal example, I hate frontend development, something which always prevented me from starting a bunch of projects. Now I've been able to start a bunch of these projects. It has definitely unlocked me, allowing me to test more ideas, build projects that people actually use (the frontend only has to be "good enough" — but it has to exist), or eventually bring in more people to that project.

So LLMs have undoubtedly dramatically changed at least my life as an engineer, developer, and product guy. I can't say it has changed the industry for sure, but if I had to bet, I'd say "hell yes".

(LLMs have definitely had a very profound impact on many other aspects of my life as well, outside of work)


We have a system to which I can upload a generic video, and which captures eveeeeeerything in it, from audio, to subtitles onscreen, to skewed text on a mug, to what is going on in a scene. It can reproduce it, reason about it, and produce average-quality essays about it (and good-quality essays if prompted properly), and, still, there are so many people who seem to believe that this won't revolutionize most fields?

The only vaguely plausible and credible argument I can entertain is the one about AI being too expensive or detrimental to the environment, something which I have not looked sufficiently into to know about. Other than that, we are living so far off in the future, much more than I ever imagined in my lifetime! Wherever I go I see processes which can be augmented and improved through the use of these technologies, the surface of which we've only barely scratched!

Billions are being poured into trying to use LLMs and GenAI to solve problems, trying to create the appropriate tools that wrap "AI", much like we had to do with all the other fantastic technology we've developed throughout the years. The untapped potential of current-gen models (let alone next-gen) is huge. Sure, a lot of this will result in companies with overpriced, over-engineered, doomed-to-fail products, but that does not mean that the technology isn't revolutionary.

From producing music, to (in my mind) being absolutely instrumental in a new generation of education or mental health, or general support for the lonely (elderly and perhaps young?), to the service industry!...the list goes on and on and on. So much of my life is better just with what little we have available now, I can't fathom what it's going to be like in 5 years!

I'm sorry I hijacked your comment, but it boggles the mind how so many people so adamantly refuse to see this, to the point that I often wonder if I've just gone insane?!


People dislike the unreliability and not being able to reason about potential failure scenarios.

Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.

And lastly, you've gone to great lengths to completely air-gap the systems holding your customers' IP. Do you really want some junior dev vibing that data into the Alibaba cloud? How about aging your CFO by 20 years with a quote on an inference cluster?


I mostly agree with all your points being issues, I just don't see them as roadblocks to the future I mentioned, nor do I find them issues without solutions or workarounds.

Unreliability and difficulty reasoning about potential failure scenarios is tough. I've been going through the rather painful process of taming LLMs to do the things we want them to, and I feel that. However, for each feature, we have been finding what we consider to be rather robust ways of dealing with this issue. The product that exists today would not be possible without LLMs and it is adding immense value. It would not be possible because of (i) a subset of the features themselves, which simply would not be possible; (ii) time to market. We are now offloading the parts of the LLM which would be possible with code to code — after we've reached the market (which we have).

> Then there's the question whether a highly advanced AI is better at hiding unwanted "features" in your products than you are at finding them.

I don't see how this would necessarily happen? I mean, of course I can see problems with prompt injection, or with AIs being led to do things they shouldn't (I find this to be a huge problem we need to work on). From a coding perspective, I can see the problem with AI producing code that looks right, but isn't exactly. I see all of these, but don't see them as roadblocks — not more than I see human error as a roadblock in many cases where these systems I'm thinking about will be going.

With regards to customers' IP, this seems again more to do with the fact some junior dev is being allowed to do this? Local LLMs exist, and are getting better. And I'm sure we will reach a point where data is at least "theoretically private". Junior devs were sharing around code using pastebin years ago. This is not an LLM problem (though certainly it is exacerbated by the perceived usefulness of LLMs and how tempting it may be to go around company policy and use them).

I'll put this another way: Just the scenario I described, of a system to which I upload a video and ask it to comment on it from multiple angles is unbelievable. Just on the back of that, nothing else, we can create rather amazing products or utilities. How is this not revolutionary?


12 months ago, if I fed a list of ~800 poems with about ~250k tokens to an LLM and asked it to summarize this huge collection, they would be completely blind to some poems and were prone to hallucinating not simply verses but full-blown poems. I was testing this with every available model out there that could accept 250k tokens. It just wouldn't work. I also experimented with a subset that was at around ~100k tokens to try other models and results were also pretty terrible. Completely unreliable and nothing it said could be trusted.

Then Gemini 2.5 pro (the first one) came along and suddenly this was no longer the case. Nothing hallucinated, incredible pattern finding within the poems, identification of different "poetic stages", and many other rather unbelievable things — at least to me.

After that, I realized I could start sending in more of those "hard to track down" bugs to Gemini 2.5 pro than other models. It was actually starting to solve them reliably, whereas before it was mostly me doing the solving and models mostly helped if the bug didn't occur as a consequence of very complex interactions spread over multiple methods. It's not like I say "this is broken, fix it" very often! Usually I include my ideas for where the problem might be. But Gemini 2.5 pro just knows how to use these ideas better.

I have also experimented with LLMs consuming conversations, screenshots, and all kinds of ad-hoc documentation (e-mails, summaries, chat logs, etc) to produce accurate PRDs and even full-on development estimates. The first one that actually started to give good results (as in: it is now a part of my process) was, you guessed it, Gemini 2.5 pro. I'll admit I haven't tried o3 or o4-mini-high too much on this, but that's because they're SLOOOOOOOOW. And, when I did try, o4-mini-high was inferior and o3 felt somewhat closer to 2.5 pro, though, like I said, much much slower and...how do I put this....rude ("colder")?

All this to say: while I agree that perhaps the models don't feel like they're particularly better at some tasks which involve coding, I think 2.5 pro has represented a monumental step forward, not just in coding, but definitely overall (the poetry example, to this day, still completely blows my mind. It is still so good it's unbelievable).


Your comment warrants a longer, more insightful reply than I can provide, but I still feel compelled to say that I get the same feeling from o3. Colder, somewhat robotic and unhelpful. It's like the extreme opposite of 4o, and I like neither.

My weapon of choice these days is Claude 4 Opus but it's slow, expensive and still not massively better than good old 3.5 Sonnet


Exactly! Here's my take:

4o tends to be, as they say, sycophantic. It's an AI masking as a helpful human, a personal assistant, a therapist, a friend, a fan, or someone on the other end of a support call. They sometimes embellish things, and will sometimes take a longer way getting to the destination if it makes for what may be a more enjoyable conversation — they make conversations feel somewhat human.

OpenAI's reasoning models, though, feel more like an AI masking as a code slave. It is not meant to embellish, to beat around the bush or to even be nice. Its job is to give you the damn answer.

This is why the o* models are terrible for creative writing, for "therapy" or pretty much anything that isn't solving logical problems. They are built for problem solving, coding, breaking down tasks, getting to the "end" of it. You present them a problem you need solved and they give you the solution, sometimes even omitting the intermediate steps because that's not what you asked for. (Note that I don't get this same vibe from 2.5 at all)

Ultimately, it's this "no-bullshit" approach that feels incredibly cold. It often won't even offer alternative suggestions, and it certainly doesn't bother about feelings because feelings don't really matter when solving problems. You may often hear 4o say it's "sorry to hear" about something going wrong in your life, whereas o* models have a much higher threshold for deciding that maybe they ought to act like a feeling machine, rather than a solving machine.

I think this is likely pretty deliberate of OpenAI. They must for some reason believe that if the model is more concise in its final answers (though not necessarily in the reasoning process, which we can't really see), then it produces better results. Or perhaps they lose less money on it, I don't know.

Claude is usually my go-to model if I want to "feel" like I'm talking to more of a human, one capable of empathy. 2.5 pro has been closing the gap, though. Also, Claude used to be by far much better than all other models at European Portuguese (plus Portuguese culture and references in general), but, again, 2.5 pro seems just as good nowadays.

On another note, this is also why I completely understand the need for the two kinds of models from OpenAI. 4o is the model I'll use to review an e-mail, because it won't just try to remove all the humanity from it and make it the most succinct, bland, "objective" thing — which is what the o* models will do.

In other words, I think: (i) o* models are supposed to be tools, and (ii) 4o-like models are supposed to be "human".


> 12 months ago, if I fed a list of ~800 poems with about ~250k tokens to an LLM and asked it to summarize this huge collection, they would be completely blind to some poems and were prone to hallucinating not simply verses but full-blown poems.

for the past week claude code has been routinely ignoring CLAUDE.md and every single instruction in it. I have to manually prompt it every time.

As I was vibe coding the notes MCP mentioned in the article [1] I was also testing it with claude. At one point it just forgot that MCPs exist. It was literally this:

   > add note to mcp

   Calling mcp:add_note_to_project

   > add note to mcp

   Running find mcp.ex

   ... Interrupted by user ...

   > add note to mcp

   Running <convoluted code generation command with mcp in it>
We have no objective way of measuring the performance and behavior of LLMs.

[1] https://github.com/dmitriid/mcp_notes


This, and if you add in a voice mode (e.g. ChatGPT's Advanced Mode), it is perfect for brainstorming.

Once I decide I want to "think a problem through with an LLM", I often start with just the voice mode. This forces me to say things out loud — which is remarkably effective (see: rubber duck debugging) — and it also gives me a fundamentally different way of consuming the information the LLM provides me. Instead of being delivered a massive amount of text, where some information could be wrong, I instead get a sequential system where I can stop/pause/redirect the LLM as soon as something gets me curious or as I find problems with what it said.

You would think that having this way of interacting would be limiting, as having a fast LLM output large chunks of information would let you skim through it and commit it to memory faster. Yet, for me, the combination of hearing things and, most of all, not having to consume so much potentially wrong info (what good is it to skim pointless stuff), ensures that ChatGPT's Advanced Voice mode is a great way to initially approach a problem.

After the first round with the voice mode is done, I often move to written-form brainstorming.


This 100%. Though I think there is a personality component to this. At least, I think when I speak.


The difference is the agenda of the reader, sadly.


Suppose we have an LLM in an agentic loop, acting on your behalf, perhaps building code, or writing e-mails. Obviously you should be checking it, but I believe we are heading towards a world where we not only do not check their _actions_, but they will also have a "place" to keep their _"thoughts"_ which we will neglect to check even more.

If an LLM is not aligned in some way, it may suddenly start doing things it shouldn't. It may, for example, realize that you are in need of a break from social outings, but decide to ensure that by rudely rejecting event invitations, wreaking havoc in your personal relationships. It may see that you are in need of money and resort to somehow scamming people.

Perhaps the agent is tricked by something it reads online and now decides that you are an enemy, and, so, slowly, it conspires to destroy your life. If it can control your house appliances, perhaps it does something to keep you inside or, worse, to actually hurt you.

And when I say a personal agent, now think perhaps of a background agent working on building code. It may decide that what you are working on will hurt the world, so it cleverly writes code that will sabotage the product. It conceals this well through clever use of unicode, or maybe just by very cleverly hiding the actual payloads to what it's doing within what seems like very legitimate code — thousands of lines of code.

This may seem like science fiction, but if you actually think about it for a while, it really isn't. It's a very real scenario that we're heading very fast towards.

I will concede that perhaps the problems I am describing transcend the issue of alignment, but I do think that research into alignment is essential to ensure we can work on these specific issues.

Note that this does not mean I am against uncensored models. I think uncensored/"unaligned" models are essential. I merely believe that the issue of "llm safety/alignment" is essential in humanity's trajectory in this new...."transhuman" or "post-human" path.

