
This isn't about AI research; it's about delivering AI at unimaginable scale.


180 million users for ChatGPT isn't unimaginable, but it does exceed the number of iPhone users in the United States.


A striking picture! Thanks for sharing!


For your first question: https://platform.openai.com/tokenizer


I saw that, but the language makes me think it's not quite the same as what's really being used?

"how a piece of text might be tokenized by a language model"

"It's important to note that the exact tokenization process varies between models."


That's why they have buttons to choose which model's tokenizer to use.


Yes, thank you, I understand that part.

It's the "might" in the description that makes me think the results may not be exactly the same as what's used in the live models.


The results are the same.
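
For what it's worth, you can also check locally with OpenAI's tiktoken library, which exposes the per-model encodings. A minimal sketch (the example string is arbitrary):

    import tiktoken

    # encoding_for_model picks the byte-pair encoding a given model uses
    enc = tiktoken.encoding_for_model("gpt-3.5-turbo")
    tokens = enc.encode("The exact tokenization process varies between models.")

    print(len(tokens))         # token count
    print(tokens)              # the token ids themselves
    print(enc.decode(tokens))  # round-trips back to the original text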


Understandable but unbearable: these aren’t mutually exclusive.


A dumb thumbnail is a useful signal to know that I won’t want to watch the video anyway :)


Interesting, I’ve never heard about that. Do you have a source?


I don't think they specifically understood the idea of a "game engine" as the core product at the time. But if you Google a bit there are plenty of references saying Quake was designed for modders because of the popularity of DOOM mods - so developer experience was absolutely taken into account from the start.

They had already done licensing deals for the DOOM engine at that point, including the greatest game of all time, "Chex Quest."

https://en.wikipedia.org/wiki/Chex_Quest


Yeah, this is why Quake's logic for a lot of game things - monsters, weapons, moving platforms - is written in a bytecode-interpreted language (QuakeC). The idea was to separate it from the engine code so modders could easily make new games without needing access to the full engine source. (And QuakeC was supposed to be a simpler language than C, which it... is, but at the cost of weird compromises like a single number type (float) that is also used to store bitfields by directly manipulating powers of two. Which works, of course, until your power of two is big enough that the float can no longer tell adjacent integers apart...)
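
You can see the failure mode with any 32-bit float, not just in QuakeC. A small Python sketch, using numpy's float32 to stand in for QuakeC's number type:

    import numpy as np

    # IEEE 754 single precision has a 24-bit significand, so once a "flag"
    # stored as a power of two grows past 2**24, adjacent integers are no
    # longer representable and adding a small flag can be silently lost.
    big_flag = np.float32(2.0 ** 25)   # a high bit, representable on its own
    small_flag = np.float32(1.0)       # the lowest bit
    print(big_flag + small_flag == big_flag)  # True: the low flag vanished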


Keep in mind that they had developed a good relationship with Raven Software and made a good chunk of money off of their use of idTech 1 for Heretic.


Reminds me of this saga I went through as an early adopter of AMD Threadripper 3970X:

https://forum.level1techs.com/t/amd-threadripper-3970x-under...

HN discussion: https://news.ycombinator.com/item?id=22382946

Ended up investigating the issue with AMD for several months, and was generously compensated by AMD for all the trouble (sending motherboards and CPUs back and forth, a real PITA), but the outcome is that I've been running ever since with a custom BIOS image provided by AMD. I think in the end the fault was on Gigabyte's side.


Reminded me of the Intel Skylake bug found by the OCaml compiler developers: https://tech.ahrefs.com/skylake-bug-a-detective-story-ab1ad2...


Holy cow I had no idea CPU vendors would do this for you.


Supermicro gave us the same kind of assistance. A then-new bifurcation feature did not work correctly, and without it an enterprise telecommunications peripheral that costs 10x more than a 4-socket Xeon motherboard couldn't run at nominal speed - and it was running on real lines, not test data.

They sent us custom BIOSes until it stabilized, and said they would fold the patch into the following BIOS releases.

The thing is, neither Intel nor AMD nor Supermicro can test edge cases at maximum load in niche environments without spending money, but they would really love to be able to claim, with evidence to back it up, that their hardware can be integrated into such solutions. If Intel wants to test stuff in space for free, they have to cooperate with NASA; the alternative is an in-house launch.


NASA has super-elaborate testbeds and simulators. Maybe producers could provide some formats/interfaces/simulators to users; users would write test cases against them and give those back to the producers to run in-house.

If users are paying seven figures or more, it might make sense.


When you’re not only helping them debug their own hardware but are also spending money on their ridiculously overpriced HEDT platform, it probably makes them want to keep you happy.


That is true, and also lots of people use OCaml.


I built an app to make dealing with Jira less painful. It caches Jira tickets in a SQLite database, then uses GPT-3.5 to translate natural-language queries into SQL, which it executes against the cache. It also uses Ollama/Mixtral to summarize Jira tickets and GitHub PRs, and can generate a summary of a single Jira ticket with its associated GitHub PRs, or of a whole sprint. It's written in Python and runs in the terminal.
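
The natural-language-to-SQL step looks roughly like the sketch below. It's a simplified illustration, not the actual code: the issues table schema, the prompt wording, and the function names are made up.

    import sqlite3
    from openai import OpenAI

    # Hypothetical, trimmed-down schema for the local Jira cache.
    SCHEMA = """
    CREATE TABLE IF NOT EXISTS issues (
        key      TEXT PRIMARY KEY,  -- e.g. PROJ-123
        summary  TEXT,
        status   TEXT,
        assignee TEXT,
        updated  TEXT               -- ISO 8601 timestamp
    );
    """

    def question_to_sql(client: OpenAI, question: str) -> str:
        """Ask GPT-3.5 to turn a natural-language question into SQLite SQL."""
        resp = client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=[
                {"role": "system",
                 "content": "Translate the user's question into a single SQLite "
                            "SELECT statement for this schema and return only "
                            "the SQL:\n" + SCHEMA},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content.strip()

    def ask(db_path: str, question: str):
        client = OpenAI()  # reads OPENAI_API_KEY from the environment
        sql = question_to_sql(client, question)
        with sqlite3.connect(db_path) as conn:
            conn.executescript(SCHEMA)  # ensure the cache table exists
            return conn.execute(sql).fetchall()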



This is absolutely amazing! It brings back great memories of my childhood when I was doing similar mock UIs in QBasic, then QuickBasic, Turbo Pascal and Turbo C. Thanks for sharing!

