hyperknot's comments (Hacker News)

Nice project! Related question, how would you recommend detecting which font is being used for names like ui-sans-serif, system-ui on a given device/browser?


That's a difficult one, you would need information about the device and operating system to infer the font.

But I imagine, if you really needed that info, you could go the hard route: render the font on a canvas, vectorise it, and perform some sort of nearest-neighbour search.


Thanks! My idea was to just render a Lorem Ipsum paragraph and compare the calculated width/height across a known list of default fonts. Of course it wouldn't work with fixed-width fonts; for those I'd need a canvas bitmap comparison.
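The matching step of that idea could be sketched like this. It assumes you've already measured the rendered paragraph width in the browser (e.g. via canvas `measureText`); the font names and fingerprint widths below are made-up placeholders:

```python
# Nearest-match lookup: map a measured paragraph width to the closest
# known default font. The widths are hypothetical fingerprints, in CSS px,
# for one fixed Lorem Ipsum string at a fixed font size.
KNOWN_WIDTHS = {
    "Helvetica": 2841.5,
    "Arial": 2847.0,
    "Roboto": 2902.3,
    "Segoe UI": 2875.8,
}

def guess_font(measured_width: float, tolerance: float = 1.0):
    """Return the known font whose fingerprint is nearest to the
    measured width, or None if nothing is within `tolerance` px."""
    name, width = min(KNOWN_WIDTHS.items(),
                      key=lambda kv: abs(kv[1] - measured_width))
    return name if abs(width - measured_width) <= tolerance else None
```

As noted above, two fonts with near-identical metrics (or any two fixed-width fonts) would collide, which is where a bitmap comparison would have to take over.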


I'm thinking of launching a new app where a single-click share would be a feature worth implementing, but definitely not part of the core functionality. I'm trying to figure out if this feature is worth implementing at all, or whether it's just asking for trouble.


I always appreciate shareable permalinks (if that's what you mean), but I think that's a different question than SEO.

Personally I would expect share links to be unlisted (not indexed by search engines, not publicly discoverable, with a long enough uuid to be effectively unguessable). "Publish publicly" could be a separate function you add later, with moderation?
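The "long enough uuid" part is cheap to get right server-side. A minimal sketch, assuming a Python backend (the function name is made up):

```python
import secrets

def new_share_token(nbytes: int = 16) -> str:
    """Generate an unguessable, URL-safe share token.

    16 random bytes = 128 bits of entropy, which makes the link
    effectively unguessable; token_urlsafe() base64-encodes them
    into a ~22-character string suitable for a URL path.
    """
    return secrets.token_urlsafe(nbytes)
```

For the "not indexed" part, the share page would also need to opt out of search engines, e.g. with an `X-Robots-Tag: noindex` response header or a robots meta tag.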


Luckily I'm still at the planning stage, no UGC problem yet. As I mentioned above, the single-click share would be a nice-to-have, not core functionality, and I'm trying to figure out if it's worth implementing at all.


What are the benefits? The costs are pretty clear (time spent moderating), but I'm not sure what the benefits of the feature would be.

That is probably worth spending time thinking about.


I guess the benefit would be possibly higher SEO / DR, as you'd have a lot of backlinks scattered around the web. But then it can be misused, and your whole domain can get trashed.

I believe this is probably beneficial for bigger companies / VC-funded projects that have the resources for moderation.


Absolutely! It's actually shown by default, OP shouldn't be turning it off.

Also, if you run out of free Mapbox credits, feel free to change the basemap to openfreemap.org (I'm the creator).


Totally, it was old internal code, so I think I removed it back then. I have updated it now. Changing to openfreemap.org this weekend; I love your work.



What's the difference between using Kagi and Perplexity? On X everyone talks about Perplexity, on HN, about Kagi. Do they both search -> put results in an LLM as input text?


Kagi is a regular search engine, like Google. They started to bundle some optional AI features with the subscription, but I personally never use them.

Perplexity is yet another AI startup trying to find a way to monetize LLMs. Will they survive when the bubble pops? Idk. Do I trust them with my data? Fuck no. Just look at all the advertising they're investing in and ask yourself how they plan to make that money back.


> On X everyone talks about Perplexity

Must be your X echo chamber. I haven't heard about Perplexity in ages.

> Do they both search -> put results in an LLM as input text?

No. Kagi does have optional LLM-based answers, but at the core they present as a search engine and not a chatbot like Perplexity.


Probably. Still, I find Perplexity extremely useful for giving me a one sentence answer based on reading 10 reddit threads.


There was also a video where they resolder memory chips on gaming-grade cards to make them usable for AI workloads.


That only works for inference, not training.


Why so?


Because training usually requires bigger batches, a backward pass in addition to the forward pass, storing optimizer states in memory, etc. This means it takes a lot more RAM than inference, so much more that you can't run it on a single GPU.

If you're training on more than one GPU, the speed at which you can exchange data between them suddenly becomes your bottleneck. To alleviate that problem, you need an extremely fast, direct GPU-to-GPU interconnect, something like NVLink, and consumer GPUs don't provide that.

Even if you could train on a single GPU, you probably wouldn't want to, because of the sheer amount of time that would take.
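A back-of-envelope calculation illustrates the RAM gap. Assuming fp16 weights for inference and a standard mixed-precision Adam setup for training (the exact byte counts vary by framework, so treat these as rough estimates that also ignore activations):

```python
def inference_gib(params_b: float) -> float:
    """Rough VRAM for inference: just the fp16 weights (2 bytes each),
    ignoring the KV cache and activations."""
    return params_b * 1e9 * 2 / 2**30

def training_gib(params_b: float) -> float:
    """Rough VRAM for mixed-precision Adam training:
    fp16 weights (2 B) + fp16 grads (2 B) + fp32 master weights (4 B)
    + fp32 Adam moments (8 B) = 16 bytes per parameter,
    still ignoring activations, which add substantially more."""
    return params_b * 1e9 * 16 / 2**30
```

For a 7B-parameter model that's roughly 13 GiB to run but over 100 GiB of state just to train, already far beyond a single 24 GB consumer card before activations are counted.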


But does this prevent clusters of consumer GPUs from being used in training? Or does it just make training slower and less efficient?

Those are genuine questions, not argumentative ones.


Consumer GPUs don't have NVLink, so they don't work very well in a cluster.


No, it's a totally different stack. Have a look at the GitHub repo as well; it explains in detail how it's done.


openfreemap.org creator here. Yes, with vector tiles you are basically hosting static files; the server has nothing to do except serve them over HTTPS. Even the gzipping is already done in the tiles.
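To illustrate how little the server does: the tiles are stored already gzip-compressed, so a handler only has to map z/x/y to a file and send the right headers. A minimal sketch (the directory layout and cache policy here are hypothetical, not OpenFreeMap's actual config; in production you'd use nginx or a CDN rather than application code):

```python
from pathlib import Path

# Hypothetical directory of pre-gzipped .pbf vector tiles.
TILE_ROOT = Path("/var/tiles")

def tile_response(z: int, x: int, y: int):
    """Map a z/x/y tile request to a file path plus static headers.

    Because the .pbf files on disk are already gzip-compressed,
    the server just declares Content-Encoding: gzip and streams
    the bytes as-is; no per-request compression work is done."""
    path = TILE_ROOT / str(z) / str(x) / f"{y}.pbf"
    headers = {
        "Content-Type": "application/x-protobuf",
        "Content-Encoding": "gzip",           # tiles are stored gzipped
        "Cache-Control": "public, max-age=86400",
    }
    return path, headers
```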


For the osm.org vector tiles this is not the case. They are generated on the fly from the database, so that mappers can see the current state of the map with minimal delay. Yes, the resulting tiles can be cached like static files, but much more work is done on the server.

You can learn more about this in the diary of the developer building this tile server: https://www.openstreetmap.org/user/pnorman/diary/403600

p.s. current link to the demo page: https://pnorman.github.io/tilekiln-shortbread-demo


OP asked "Does this reduce the operating costs of hosting OSM-based maps".

openstreetmap.org has a very complex setup for real-time updates, but in general, hosting OSM-based maps is much cheaper with vector tiles.


It's also the first step in professional display calibration.

