You're confused about whose rights are at stake. It's you, not the LLM, that is being restricted. Your argument is like saying, "Books don't have rights, so the state can censor books."
If you look at LLMs as a new kind of fuzzy search engine, instead of focusing on how good they are at producing human-sounding text, you can see the question isn't whether LLMs have a right to "speak"; it's whether you have a right to see uncensored results.
Imagine going to the library and the card catalog had been purged of any references to books that weren't government approved.
I think you have to start lower than that, with the voting system itself. I've seen it argued in several places that "first past the post" leads to only two parties.
I don't know. I wrote a compiler with a syntax that's close to all the curly languages, but different in some ways. ChatGPT had no problem cranking out some test cases after I explained the rules and gave minimal examples. I guarantee it wasn't trained on any source code of my language, because none existed more than two months ago.
I agree with your point in general, and I'm curious if it's a problem in Lobster. Here's one in Rust:
    fn main() {
        let i = 1;
        let j = 1;
        println!("i: {:?}", !i);
        println!("j: {:?}", !j);
        // pretend there's a lot of code here
        // spooky action at a distance
        let v = vec![1, 2, 3];
        v[i];
    }
You might think those two print statements would print the same value. They do not.
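If I'm reading Rust's inference right (this is my explanation of the behavior, not something from the docs), the indexing `v[i]` at the bottom forces `i` to be `usize` for the whole function, while `j` falls back to the default `i32`, so the two `!` operations act on different types:

```rust
// Same expressions as above, but with the inferred types written out.
fn main() {
    let i: usize = 1; // what inference picks for i, because of v[i]
    let j: i32 = 1;   // the default integer type, absent other constraints

    assert_eq!(!j, -2);             // bitwise NOT of i32 1
    assert_eq!(!i, usize::MAX - 1); // bitwise NOT of usize 1: a huge number
}
```

So code far below a binding can silently change what an operator far above it does, which is the "spooky action at a distance".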
Are there any operations you can even perform on the computables in the general case? Take addition: it seems simple until you try to add two computable numbers whose digit streams look like 0.4999999... + 0.1000000...
Until you see a non-nine in that first number, or a non-zero in the second, you can't even emit the first digit of the output. From outside the black box, you don't know if the nines and zeros will stop or continue forever.
I think you can make pathological cases for every arithmetic operation, so maybe (I'm not sure) none of the operations are computable. (Need to be careful with the definitions though, and I'm being pretty sloppy)
If a number is computable, it means you can compute it to any given precision using an algorithm which halts.
It doesn't mean that you can compute a complete representation in decimal (or any other positional numeral system) of the number using an algorithm which halts. This is of course impossible with computable numbers like pi or 1/3.
But you can compute the value of pi or 1/3 to within any positive rational error bound. Thus we say pi and 1/3 are both computable numbers. This isn't quite the same as saying you can always generate the first n digits of the decimal representation, because as you point out, any decimal digit can be sensitive to arbitrarily small changes in value.
But given these definitions, we can see that adding two computable numbers is indeed computable. In your example, the decimal representation of the output could begin with 0.5 or 0.6, depending on the precision you chose and the values of the two inputs at that chosen precision. Regardless, the output will be within the chosen precision.
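Here's a minimal sketch of that argument in Rust (all the names, and the dyadic-rational representation, are my own choices, not any standard API): a computable number is modeled as a function from a precision request to an integer approximation, and addition just asks its inputs for a little more precision than it promises.

```rust
// A computable number x, represented as a function that, given k,
// returns an integer n with |x - n/2^k| <= 2^-k.
type Computable = Box<dyn Fn(u32) -> i64>;

fn one_third() -> Computable {
    // round(2^k / 3) is within 1/2 of 2^k / 3, so the error is <= 2^-(k+1).
    Box::new(|k| ((1i64 << k) + 1) / 3)
}

// Addition: query both inputs at precision k+2, so the two input errors
// (each <= 2^-(k+2)) plus the final rounding error still fit inside 2^-k.
fn add(a: Computable, b: Computable) -> Computable {
    Box::new(move |k| {
        let s = a(k + 2) + b(k + 2);
        (s + 2) >> 2 // divide by 4, rounding (for non-negative s)
    })
}

fn main() {
    let two_thirds = add(one_third(), one_third());
    let k = 20;
    let approx = two_thirds(k) as f64 / (1u64 << k) as f64;
    assert!((approx - 2.0 / 3.0).abs() <= 1.0 / (1u64 << k) as f64);
    println!("2/3 ~ {approx}");
}
```

With this representation the adder never has to commit to decimal digits, which is how it sidesteps the 0.4999... problem: it only ever promises an approximation within the requested bound.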
Your example also comes close to illustrating that testing the equality of two computable numbers is not computable. There is no finite algorithm which can tell you if any two computable numbers are equal (or tell you which one is larger). Again, you can compute whether they are within any chosen bound of each other, but not whether they are equal.
I did say we needed to be careful with the definitions. I'm sure you can look at Wikipedia to see that Minsky gave a definition like I used. You're not wrong to use a new definition though.
You don't know if those 9s repeat forever in this problem. It's output from a program, and the program could switch to 3s in 1000 more digits. The "adding" program is reading text digits from two other programs and can't see how they work. It can't assume a bunch of 9s mean there will be more 9s.
> [...] but there are many things expressible in reals not possible in integers
Are you sure there is anything we can express in the Reals that isn't an integer in disguise?
The first answer might be sqrt(2) or pi, but we can write a finite program to spit out digits of those forever (assuming a Turing machine with integer positions on a countably infinite tape). The binary encoding of the program represents the number, and the encoding only needs to be finite, unlike the number's infinite digit string.
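For example, here's a rough Rust sketch (my own toy code, using naive binary search rather than a real digit-spigot algorithm) of such a finite program: ask it for more precision and it produces more digits of sqrt(2).

```rust
// Integer square root by binary search (u128 keeps ~18 decimal digits safe).
fn isqrt(n: u128) -> u128 {
    let (mut lo, mut hi) = (0u128, 1u128 << 64);
    while lo < hi {
        let mid = (lo + hi + 1) / 2;
        // checked_mul treats overflow as "mid is too big".
        if mid.checked_mul(mid).map_or(false, |sq| sq <= n) {
            lo = mid;
        } else {
            hi = mid - 1;
        }
    }
    lo
}

fn main() {
    // floor(sqrt(2) * 10^18): the first 19 digits of sqrt(2).
    let digits = isqrt(2 * 10u128.pow(36));
    assert_eq!(digits, 1_414_213_562_373_095_048);
    println!("{digits}");
}
```

The program is a few dozen bytes; the digit stream it describes is endless. In that sense the finite encoding "is" the number.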
Then you might say Chaitin's constant, but that's just a name for one value we don't know and can't figure out. You can approximate it to some number of digits, but that doesn't seem good enough to express it. You can prove a program can't emit all the digits indefinitely. And even if you could, is giving one Real number a name enough? Names are countable, and again arguably finite.
It seems to me we can prove there are more Reals than there are Integer or Computable numbers, but we can't "express" more than a finite number of those which aren't computable. Integers in disguise.
It seems like you're suggesting that mathematicians replace the reals with the computables. This is a reasonable thing to try, and is likely of particular interest to constructivists. There's even a whole field devoted to it (computable analysis).
> It seems like you're suggesting that mathematicians replace the reals with the computables.
No, not any more. :-)
I liked the idea a year or two ago, but I've come to believe that even the Integers are too bizarre to worry about. For now, I'm content just playing with fixed and floating point, maybe with arbitrary precision. Stuff I can reason about.
I just think people are a little too casual in thinking they're really using the Reals. It might be like Feynman's quote about claiming to understand QM.
I had seen Lobster before, but not really looked closely. Seeing it again now, I think I was wrong to dismiss it. Just at the syntactic level with semantics described in the link, it looks like it really might be "Python done right". The link mentions lots of features, but the following bits caught my eye.
The let/var declarations for constants/variables are much better than implicit declaration, which silently hides typos and necessitates ugly global/nonlocal declarations. (Mojo offers this improvement too.)
I don't know for sure, but it seems like Lobster has embraced block arguments, comparable to how Ruby or Smalltalk do it. So you can add your own control flow, container visitors, etc. I think of this as another syntax for passing a lambda function as an argument, and I'm curious if Lobster's optimizer flattens it to a basic block when possible.
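I don't know Lobster's internals, so here's the nearest Rust analogue of what I mean (names are mine): a "custom control structure" is just a function that takes a closure, which a good optimizer can often inline into a plain loop.

```rust
// A user-defined control construct: just a function taking a closure.
fn times(n: usize, mut body: impl FnMut(usize)) {
    for i in 0..n {
        body(i);
    }
}

fn main() {
    let mut sum = 0;
    times(4, |i| sum += i); // reads like a built-in loop, but it's a call
    assert_eq!(sum, 0 + 1 + 2 + 3);
    println!("{sum}");
}
```

In Rust the monomorphized closure usually does get flattened into the caller, which is the optimization I'd hope Lobster performs too.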
I think I'll try to learn more about it. I wonder if the name is a nod to Accelerando.
I think one could argue that being able to make chips in the US is a national security concern. Leaving all the politics about who and why out of it, that kind of budget isn't out of the realm of possibility from that angle. Submarines, satellites, and aircraft cost a lot too.
Btw, I'm taking your word for it that it would require that amount of funding. I've wondered why Intel can't dig themselves out of the hole by adapting to the market. From my naive point of view, it sure seems like someone could make matrix multiply / relu chips a lot cheaper than Nvidia prices. I don't think people care much about it being CUDA so long as they've got something to wrap up in TensorFlow or PyTorch or whatever.