bencevans's comments | Hacker News

I'm quite happy to read something in a paper that ChatGPT or an equivalent has written, just as I'm happy to read something written with the assistance of other tools such as Grammarly, LanguageTool, or spell check. I see it as the (human) author's responsibility to ensure that what has been written is factually correct.


I'm not sure I'm unusual in using the GitHub feed to keep an eye on commit activity on watched repos; all of a sudden there's no option to see push activity in the feed. o.O


I've recently used this approach for generating Open Graph images for display when a link to the site is shared on Twitter, WhatsApp, Facebook, etc. [1]. I was pleasantly surprised at how quickly something could be implemented. The last time I'd done something similar I was using Cairo and had to write more of the scaling logic myself, and I don't think I ever got it to adjust to dynamic content very well. This time I put together a prototype in Inkscape, converted it to a template, and rendered it to PNG with Sharp [2].

[1]: https://hntrends.net/api/og?word=twitter [2]: https://github.com/lovell/sharp
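The templating step is simple enough to sketch. This is a hypothetical minimal version (placeholder names and layout are mine, not the site's actual template), assuming a `$word`-style placeholder in the SVG; the real pipeline then rasterizes the filled-in SVG to PNG with Sharp:

```python
# Hypothetical sketch of SVG templating for OG images.
# The actual rasterization to PNG is done separately with Sharp.
from string import Template
from xml.sax.saxutils import escape

OG_TEMPLATE = Template("""\
<svg xmlns="http://www.w3.org/2000/svg" width="1200" height="630">
  <rect width="100%" height="100%" fill="#1a1a2e"/>
  <text x="60" y="330" font-size="96" fill="#ffffff">$word</text>
</svg>""")

def render_og_svg(word: str) -> str:
    """Fill the template, escaping user content so the SVG stays valid XML."""
    return OG_TEMPLATE.substitute(word=escape(word))
```

Keeping the design in SVG means text size and position come from the prototype, and only the dynamic values get substituted at request time.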


Looking forward to GPU support for workers. Cloudflare announced they were working on it in 2021 [1], but it doesn't seem generally available yet; they still have a signup page for notification [2].

I know other companies have struggled with demand, so maybe they're doing it on an invite basis.

[1]: https://blog.cloudflare.com/workers-ai/ [2]: https://www.cloudflare.com/nvidia-workers/


Good things come to those who wait :-)


I was under the assumption that, because of how Cloudflare works, it has to be globally available.

That would mean support in every data centre and easily expandable capacity.

They can't just make it available in one region and then see how it goes.

I like how Cloudflare works, but this use case for them (on the edge) seems harder to plan. It's not just the tech, it's the infrastructure in this case.

Just my 2 cents


They probably just want to see how many people would sign up if they had it, before they start building something nobody wants to pay for.


Agreed, but this is a blocker for anyone seriously considering using the product. CPU-only inference simply isn't good enough for anything besides toy workloads. If they're waiting for people to use Constellation before investing in GPU support, nobody will use Constellation because it doesn't have it, so they'll never end up investing in it, and so on…


I think people will be surprised how far CPU optimization will go for specialized inference. An example of the progress being made - http://ggml.ai/


They use it themselves to route traffic, which probably isn't GPU-intensive.

I've run face detection on CPU and it runs at roughly one inference per second, and that's probably a pretty intensive task.

Plenty of use cases don't require a GPU.

And plenty do, though.


Cheers! Yes, I already have 'it's' and 'it’s', but that one escaped! Added now.


Thanks for the suggestion, yes, a good idea! Aha, yes, I've spotted that with Reddit, and it's actually accurate as opposed to a bug: there's something like just one post, back in 2016, and it mentions it, hence 100% popularity. I'll add some other metrics to the page soon.


I also couldn't find where on your page you source the "top books" from. These are just affiliate links (which I'm completely okay with), but they don't show statistics for why each is a top book.


It's a static list ordered by all-time mention count. I plan to make this dynamic in the future and allow ordering by total score and filtering by time range. I'd also like to add separate pages for each book showing the stats and reviews, and maybe some form of sentiment analysis would be cool.
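The ordering itself is just a mention count. A rough sketch of the idea (the data shape and names here are hypothetical, not the site's actual code):

```python
# Hypothetical sketch: rank books by all-time mention count.
from collections import Counter

def top_books(mentions: list[str], n: int = 3) -> list[tuple[str, int]]:
    """Return the n most-mentioned titles with their counts."""
    return Counter(mentions).most_common(n)

# Toy input standing in for titles extracted from HN comments.
mentions = ["SICP", "Gödel, Escher, Bach", "SICP", "The Mythical Man-Month", "SICP"]
```

Ordering by total score instead would just mean summing comment scores per title rather than counting occurrences.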


Thanks for the heads up! Fixed


Thanks! Same; I've been coming to HN for books to read for years and finally got them in one place. I plan to extend the functionality there in the future :D


Also causing issues: a change has made the release source tarballs change [1] (and thus their hashes), so build systems are rejecting them [2].

1: https://gitlab.com/gitlab-org/gitlab/-/issues/402616 2: https://github.com/microsoft/vcpkg/issues/30481
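The rejection is just a pinned-checksum mismatch. A minimal sketch of what a build system does here (a hypothetical helper, not vcpkg's actual code):

```python
# Hypothetical sketch of pinned-checksum verification.
# When the generated archive bytes change, the digest no longer
# matches the recorded pin and the build is rejected.
import hashlib

def verify_download(data: bytes, pinned_sha256: str) -> bool:
    """Compare a downloaded artifact's SHA-256 against the recorded pin."""
    return hashlib.sha256(data).hexdigest() == pinned_sha256

# Toy stand-ins for the original and re-generated tarball bytes.
original = b"release-1.0 tarball bytes"
pin = hashlib.sha256(original).hexdigest()
```

The archive contents can be byte-for-byte identical once unpacked; it's enough for the compression output to differ for the pin to fail.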


Isn't that the same "bug" that happened to GitHub a few weeks ago when they updated their Git version too? It wasn't a bug per se, but they still had to revert because the new hashes were causing massive build problems. Maybe it's a different root cause, though.


Holy cow. The cardinal DevOps sin of changing the contents of a versioned file.


