Culonavirus's comments | Hacker News

Maybe go all-in on local, on-device, low-latency LLMs? They make their own hardware, so they could probably do it, and do it well, with enough focus and investment.


The M chips are pretty much perfect for this.


8GB of RAM is not perfect for this. The M chips are only known for AI because unified memory lets you get huge amounts of "VRAM", but if you're not putting absurd amounts of RAM on the phone (and they're not going to), that advantage largely goes away: even a 7B-parameter model quantized to 4 bits needs roughly 3.5GB for the weights alone. It's not like the NPU is some world-beating AI thing.


There's no reason most apps couldn't be cloud-hosted, devoting all the significant local edge compute to such a digital companion.


That seems like the opposite of what they were suggesting? Unless by "edge compute" you meant users' devices, but I assume you intended the usual meaning of CDN edges.


No, I meant the user's device. It's the edge of the application space.


> This book, however, requires the reader to have some knowledge of probability theory and linear algebra.

This is so funny to me. I see it often, and I'm always like "yeah, right, some knowledge"... These statements always need to be taken with a grain of salt and the understanding that math nerds wrote them. Average programmers with average math skills (like me), beware ;)


This usually means that an average university-level CS or EE student should be able to follow it easily, even having never touched the topic. It's far below the level of math and physics degrees, but still somewhat above what you could expect from the average self-taught programmer.


I'm not even self-taught. It's just that when I was studying (CS degree, like 15 years ago), we did have mandatory linear algebra, graph theory, and statistics courses, but we never *actually* used any of that in practice. It was all algo this, big-O that, data structures, design patterns, languages, compilers, SQL, etc. Now that I think about it, pretty much the only course where we had to use some linear algebra was the 3D rendering one. ...

And then you work on .NET/Java/SQL Server crap for a decade and forget even the little math you used to know :D


Honestly, if long context (that doesn't start to degrade quickly) is what you're after, I would use Grok 3 (not sure when the API version releases, though). Over the last week or so I've had a massive thread of conversation with it that started with plenty of my project's relevant code (as in a couple hundred lines), and several days later, after like 20 question-answer blocks, you ask it something and it answers "since you're doing that this way, and you said you want x, y and z, here are your options blabla"... It's like thinking Gemini, but better.

Also, unlike Gemini (and others), it seems to have a much more recent data cutoff. Try asking about some language feature / library / framework that was released recently (say 3 months ago) and most models shit the bed: they use older versions of the thing or just start to imitate what the code might look like. For example, try asking Gemini if it can generate Tailwind 4 code; it will tell you that its training cutoff is like October or something, that Tailwind 4 "isn't released yet", and that it can try to imitate what the code might look like. Uhhhhhh, thanks I guess??


10? Try 30+ ...


> This is hacker news, so I'm asking for an answer without political rhetoric

LMAO. That's like going to Reddit and asking for relationship advice.


The single instance of fire that could be seen in the stream was in the hinge area of a bottom flap.


Shock diamonds are normal for first stages. Unless you're smarter than BO/SpaceX/ULA engineers and want to tell us more, I call BS on your post.


Eeeh, the Electron issue is overblown.

These days the biggest hog of memory is the browser. Not everyone does this, but a lot of people, myself included, have tens of tabs open at a time (with tab groups and all of that)... all day. The browser is the primary reason I recommend a minimum of 16GB of RAM to friends and family when they ask "the IT guy" what computer to buy.

When my Chrome is happily munching on many gigabytes of RAM, I don't think a few hundred megs taken by your average Electron app is gonna move the needle.

The situation is a bit different on mobile, but Electron is not a mobile framework so that's not relevant.

PS: Can I rant a bit about how useless the new(ish) Chrome memory saver thing is? What is the point of having tabs open if you're gonna remove them from memory and just reload them on activation? In the age of fast consumer SSDs, I'd expect you to intelligently hibernate the tabs to disk, otherwise what you have are silly bookmarks.


> Eeeh, the Electron issue is overblown.

> These days the biggest hog of memory is the browser.

That’s the problem: Electron is another browser instance.

> I don't think a few hundred megs taken by your average Electron app is gonna move the needle.

Low-end machines even in 2025 still come with single-digit GB RAM sizes. A few hundred MB is a substantial portion of an 8GB RAM bank.

Especially when it’s just waste.


And then some company says: let's push to users the installer of our brand-new app, which will sit in their tray, and which we've made in Electron. Poof. 400MB taken by a tray notifier that also incidentally loads a browser into memory.

My computer: starts 5 seconds slower.

1 million computers in the world: cumulatively start 5 million seconds slower.

Meanwhile, a Microsoft programmer whose Postgres via ssh starts 500ms slower: "I think this is a rootkit installed in ssh."


Your argument against Electron being a memory hog is that Chrome is a bigger one? You are aware that Electron is an instance of Chromium, right?


This is a good point, but it would be interesting if we had a "just enough" rendering engine for UI elements: a subset of a browser with enough functionality to provide a desktop app environment, driven by the underlying application (or by the GUI, passing events back to the underlying app).
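
Just to make the shape of that concrete, here's a toy sketch (everything below is hypothetical; Python used only for brevity): the application hands the engine a declarative widget tree plus an event handler, and the engine, stubbed out here as a print loop, renders the tree and feeds events back:

    from dataclasses import dataclass, field

    @dataclass
    class Widget:
        kind: str                        # "column", "label", "button", ...
        props: dict = field(default_factory=dict)
        children: list = field(default_factory=list)

    def view(state):
        # App side: a pure function from state to a UI description.
        return Widget("column", children=[
            Widget("label", {"text": f"count = {state['count']}"}),
            Widget("button", {"text": "+1", "on_click": "inc"}),
        ])

    def update(state, event):
        # App side: events come back from the engine and mutate state.
        if event == "inc":
            state["count"] += 1

    # Engine side, stubbed: a real "just enough" engine would rasterize
    # the tree natively; here we just print it and simulate two clicks.
    state = {"count": 0}
    for event in [None, "inc", "inc"]:
        if event:
            update(state, event)
        print(view(state))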


The problem there is that Electron devs do it for convenience. That means esbuild, npm install react, this and that. If it ain't a full browser, this won't work.


Funny thing about all of this is that it's just such oppressive overkill.

Most GUI toolkits can do layout / graphics / fonts in a much simpler (and saner) way. "Reactive" layout is not a new concept.
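
For instance (one arbitrary illustration, using Python's built-in tkinter; any classic toolkit shows the same thing): a resizable two-pane layout where a label automatically tracks a piece of state, with no DOM, CSS, or JS runtime anywhere:

    import tkinter as tk

    root = tk.Tk()
    root.title("plain toolkit layout")
    count = tk.IntVar(value=0)

    # Grid weights give "reactive" resizing: columns grow with the window.
    root.columnconfigure(0, weight=1)
    root.columnconfigure(1, weight=3)
    root.rowconfigure(0, weight=1)

    sidebar = tk.Frame(root, bg="#ddd")
    sidebar.grid(row=0, column=0, sticky="nsew")
    content = tk.Frame(root)
    content.grid(row=0, column=1, sticky="nsew")

    # The label re-renders automatically whenever the variable changes.
    tk.Label(content, textvariable=count).pack(expand=True)
    tk.Button(sidebar, text="+1",
              command=lambda: count.set(count.get() + 1)).pack(pady=8)

    root.mainloop()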

HTML/CSS/JS is not an efficient or clean way to do layout in an application. It only exists to shoehorn UI layout into a rich text DOCUMENT format.

Can you imagine if Microsoft or Apple had insisted, back in the '80s and '90s, that GUI application layout be handled the way we do it today? Straight-up C was easier to grok than this garbage we have now. The industry as a whole should be ashamed. It's not easier, it doesn't make things look better, and it wastes billions in developer time and user time, not to mention slowly boiling the oceans.

Every time I have to use a web-based application (which is most of the time nowadays), it infuriates me. The latency is atrocious. The UIs are slow. There are mysterious errors at least once or twice daily. WTF are we doing? When a Windows 95 application was faster, more responsive, and more reliable than something written 30 years later, we have a serious problem.

Here's some advice: stop throwing your web code into Electron and start using a cross-platform GUI toolkit. Use local files and/or SQLite databases for storage, then sync to the cloud in the background. Voila: non-shit applications that stop wasting everybody's effing time.
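
For what it's worth, the local-storage-plus-background-sync part fits on a page. A minimal sketch in Python (the notes schema and the upload callback are made up for illustration; sqlite3 and threading are stdlib):

    import sqlite3
    import threading
    import time

    DB_PATH = "app.db"  # a plain local file; the app works fully offline

    def init_db():
        db = sqlite3.connect(DB_PATH)
        db.execute("""CREATE TABLE IF NOT EXISTS notes (
                          id INTEGER PRIMARY KEY,
                          body TEXT NOT NULL,
                          synced INTEGER NOT NULL DEFAULT 0)""")
        db.commit()
        db.close()

    def add_note(body):
        # Called from the UI: returns instantly, never touches the network.
        db = sqlite3.connect(DB_PATH)
        db.execute("INSERT INTO notes (body) VALUES (?)", (body,))
        db.commit()
        db.close()

    def sync_loop(upload, interval=30):
        # Background thread with its own connection (sqlite3 connections
        # shouldn't be shared across threads by default). Push unsynced
        # rows to the cloud; on failure, just retry on the next pass.
        db = sqlite3.connect(DB_PATH)
        while True:
            rows = db.execute(
                "SELECT id, body FROM notes WHERE synced = 0").fetchall()
            for note_id, body in rows:
                try:
                    upload(note_id, body)  # e.g. an HTTPS POST to your backend
                    db.execute("UPDATE notes SET synced = 1 WHERE id = ?",
                               (note_id,))
                    db.commit()
                except Exception:
                    pass  # stay local-first; the row remains queued
            time.sleep(interval)

    init_db()
    add_note("hello, local-first world")
    threading.Thread(target=sync_loop, args=(print,), daemon=True).start()
    time.sleep(1)  # let the demo sync make one pass before exiting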

If your only tool is a hammer, something, something, nails...


> otherwise what you have are silly bookmarks.

My literal several hundred tabs are, in practice, silly bookmarks.


If you zoom in and squint your eyes, it does look like some kind of shiny coin.

What I'd like to know, though, is how the model can be so bad that when you tell it to "remove this artifact"... instead of looking at the surroundings and painting over it with some DoF-ed-out ocean... it slaps an even more distinct artifact in there? Makes no sense.


A lot of current inpainting models have quite a lot of "signal leak"; they're more for covering stuff up than removing it entirely.

Ironically, some older SD1/2-era models work a lot better for complete removal.


I mean, this is notable because it screwed up. It usually does a pretty good job. Usually.

In this case there are better tools for the job anyway. Generative fill shines when it's covering something that'd be hard to paint back in; out-of-focus water isn't that.


> Using AI

...

Do people enjoy this crap being put into every fkin discussion on this site? Because I've sure as hell had my fill.


This is an irate response to a perfectly fine comment. What about AI on HN is ruffling your feathers?


It is an irate response. I guess when you read the 1000th "perfectly fine comment", it gets to you. I remember this site being full of interesting stuff: really vibrant CS things, math for programmers, all kinds of new languages and frameworks and patterns and algorithms. But nowadays all of that diversity is eaten and shat out as AI AI AI AI AI... It's nauseating. I'm sorry, I know this is a rant, but damn, what the fuck happened :(


Those things do not require "AI". An expert system will do just fine.

