Using "veritas" to represent abstract truth is not out of place. Obviously it assumes a different connotation in a Christian context ("Veritas vos liberabit" from the Gospel of John being the obvious example), but that's not the only usage. See examples here: https://latinitium.com/latin-dictionaries/?t=lsn50557
Data is the past participle of the verb "do". It doesn't necessarily imply that usage.
I do agree that the construction is weird though, in particular the infinitive.
I think we're saying the same thing about "veritas." I'm saying, in essence, that if I ran into this absolutely bizarre expression in a manuscript or papyrus, and I wanted to publish an edition of the text, I might capitalize the "v."
(Edit: I just realized why. It's because the action that "tueor" describes has a strongly physical connotation. It's as though the author wrote "clean/wash the truth." You don't watch (in tueor's sense) an abstract concept; you 'watch' (tueris) something physically manifest. Using tueor this way is how you'd talk or write about a god, not a concept.)
"Tuere" isn't an infinitive. It's the second person singular present imperative active of the deponent 'tueor.' As a deponent, it has only passive-voice forms, which have active-voice sense, so (edit: infinitival) 'tuere' isn't a valid form, because it's the present infinite active, a form that a deponent verb by definition can't have.
Edit: "data" in this usage would mean "gifts." That is the idiomatic meaning of the fourth principal part of the verb when it's used in the neuter plural. This isn't debatable. It's Latin 101 basics, almost certain to appear in the very first set of exercises and vocab lists that any beginning Latin student will encounter (and it's certain to be the answer to a question on your first vocab quiz). See here[0], esp: 'Part. perf. sometimes (mostly in poets) subst.: dăta , ōrum, n., gifts, presents.'
Isn't that happening already? Half the usual CS curriculum is either math (analysis, linear algebra, numerical methods) or math in all but name (computability theory, complexity theory). There's a lot of very legitimate criticism of academia, but most of the time someone goes "academia is stupid, we should do X" it turns out X is either:
- something we've been doing since forever
- the latest trend, which can be picked up just-in-time if you ever need it
I've worked in education in some form or another for my entire career. When I was in teacher education in college . . . some number of decades ago . . . the number one topic of conversation, and the topic most of my classes were built around, was how to teach critical thinking, effective reasoning, and problem solving. Methods classes were almost exclusively based on those three things.
Times have not changed. This is still the focus of teacher prep programs.
Parent comment is literally praising an experience they had in higher education, but your only takeaway is that it must be facile ridicule of academia.
In CS, it's because the field came out of math departments in many cases and often didn't even include much programming, because there really wasn't much to program.
Right but a looot of the criticism online is based on assumptions (either personal or inherited from other commenters) that haven’t been updated since 2006.
Well, at more elite schools at least, the general assumption is that programming is mostly something you pick up on your own. It's not CS. Some folks will disagree of course but I think that's the reality. I took an MIT Intro to Algorithms/CS MOOC course a few years back out of curiosity and there was a Python book associated with the course but you were mostly on your own with it.
I won't complain about a strict upgrade, but that's a pricy boi. Interesting to see differential pricing based on size of input, which is understandable given the O(n^2) nature of attention.
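For intuition, here's a back-of-the-envelope sketch of that quadratic scaling (nothing to do with the provider's actual pricing formula; the d_model value is a made-up illustrative number):

    # Rough illustration of why long inputs cost disproportionately more:
    # naive self-attention does ~n^2 * d work per layer (QK^T plus the weighted sum of V).
    def naive_attention_flops(n_tokens: int, d_model: int) -> int:
        scores = n_tokens * n_tokens * d_model        # computing QK^T
        weighted_sum = n_tokens * n_tokens * d_model  # softmax(scores) @ V
        return scores + weighted_sum

    for n in (1_000, 10_000, 100_000):
        print(f"{n:>7} tokens -> {naive_attention_flops(n, d_model=128):.2e} mul-adds")
    # every 10x in context length is roughly 100x in attention compute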
Python still doesn't have tail-call elimination, and it uses a small stack by default (the recursion limit is about 1000 frames).
I'll note that in modern imperative languages it's harder than it looks to figure out whether calls are really in tail position; things like exception handling, destructors, etc. interfere with this. So even as a SICP fanboy, I'll admit it's fair enough that some languages don't want to bother.
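A minimal sketch of what that means in practice for CPython (the function names are just illustrative): a call in tail position still pushes a new frame, so the usual workaround is to rewrite the recursion as an explicit loop.

    import sys

    def count_down(n: int) -> int:
        # Tail-recursive in form, but CPython pushes a new frame for every call.
        if n == 0:
            return 0
        return count_down(n - 1)

    def count_down_loop(n: int) -> int:
        # Manual "tail-call elimination": the same logic as a loop.
        while n:
            n -= 1
        return 0

    print(sys.getrecursionlimit())      # 1000 by default in CPython
    print(count_down_loop(10_000_000))  # fine
    try:
        count_down(10_000)              # blows past the default limit
    except RecursionError:
        print("RecursionError: no tail-call elimination")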
Regrettable, but did it take o3 mega pro to find out about real and nominal value? Even something as trivial as an iPhone is a far bigger purchase if you're not on a Bay Area salary.
It's a phase. I used to try to customize everything: tiling window managers, custom color schemes, Arch, etc. Right now I'm on a Mac so vanilla that I didn't even change the wallpaper.
Was about to mention this. 25y+ Linux user here; we all had our ricing phase, where we'd customize our desktop and shell to oblivion. Now I'm always on an as-vanilla-as-possible Ubuntu machine, or a MacBook with the same default wallpaper it came with when I bought it.
The only thing I do to my new systems is install oh-my-zsh, because that gives me a lot of goodies for basically zero configuration (I just learned the default presets and use them as "my own").
Since we're now bragging about how vanilla our systems are: the only things I install are wezterm, nushell, helix, and nix. I've moved everything else into git repos so they're no longer system configs, but project configs.
Last week I took a repo full of notes about the sizes of building materials and made inkscape and gimp "dependencies" of that project.
Next time I install Linux I think I'm going to make the filesystem immutable so that I not only don't configure it, but can't.
I guess I am still in that phase then, after 25y+ of Linux. Not that I rice constantly, but I configure my desktop exactly how I like it and then let it stay. Usually the ricing/configuring comes when I buy new hardware, so not that often, or when a major change like Wayland comes around, which is what made me switch from Arch/X11/bspwm to Arch/Wayland/Hyprland. I have tried, but I can't use vanilla for long... I just have to adapt the system to me. I feel constrained if I have to adapt to the system.
i'm using the default macos wallpaper as well. i almost never see the desktop, anyways... on my sway desktop, i don't have gaps or anything -- doesn't matter to me, i'm too busy doing something.
No, they aren't. Most benchmarks use ground truth, not evaluation by another LLM. Using another LLM as verifier, aside from the obvious "quis custodiet ipsos custodes", opens an entire can of worms, such as the possibility of systematic biases in the evaluation. This is not in and of itself disqualifying, but it should be addressed, and the article doesn't say anything about it.
Even the math benchmarks only checked the numerical answer against ground truth, which means the LLM can output a lot of nonsense, guess the correct number, and still pass.
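As a minimal sketch of what "only checked numerical answers" grading looks like (the regex and answer format are assumptions, not any particular benchmark's harness): the grader never inspects the reasoning, so nonsense with a lucky final number scores the same as a correct derivation.

    import re

    def grade_numeric(model_output: str, ground_truth: float) -> bool:
        # Take the last number in the output and compare it to the reference answer;
        # nothing about the reasoning is checked.
        numbers = re.findall(r"-?\d+(?:\.\d+)?", model_output)
        return bool(numbers) and float(numbers[-1]) == ground_truth

    good = "The triangle's area is (1/2) * 6 * 4 = 12"
    bad = "Triangles have four sides, so by the parallelogram rule the answer is 12"
    print(grade_numeric(good, 12.0), grade_numeric(bad, 12.0))  # True True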
Ground-truth evaluation is not that simple unless you are doing multiple-choice-style tests or something similar, where the correctness of an answer can be determined by a simple process. Open-ended natural-language tasks like this one are incredibly difficult to evaluate, and LLM-as-judge is not just the current standard; it is basically the only way to do it at scale economically.
> So there's no ground truth; they're just benchmarking how impressive an LLM's code review sounds to a different LLM. Hard to tell what to make of that.
The comment I replied to was:
> That's how 99% of 'LLM benchmark numbers' circulating on the internet work.
And that's just false. SWE-Bench verified isn't like this. Aider Polyglot isn't like this. SWE-Lancer Diamond isn't like this. The new internal benchmarks used by OpenAI in GPT-5's model card aren't like this.
Maybe this benchmark is a special snowflake and needs LLM-as-a-judge, but that doesn't invalidate the original concern: setting up a benchmark this way runs into a series of problems and is prone to showing performance differences that might not be there with a different setup. Benchmarks are already hard to trust; I'm not sure how this one is any more indicative than the rest.
Benchmarks that execute code are to some degree the only thing where you can automate testing at scale without humans in the loop, but even that has its caveats [1]. Regardless, when your output is natural-language text (as it is in this case), there is simply no viable alternative for measuring accuracy economically. There is frankly no argument to be had here, because it is simply not achievable with current technology.
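For contrast, a minimal sketch of the execute-the-code style of grading (the `solve` entry point and the test format are assumptions for illustration): correctness is defined as passing reference tests, which needs neither a human nor an LLM judge, but it only works when the task's output is runnable code rather than free-form text.

    def grade_by_tests(candidate_src: str, tests: list[tuple[tuple, object]]) -> bool:
        # Run the model-written solution and check it against reference input/output pairs.
        namespace: dict = {}
        try:
            exec(candidate_src, namespace)
            solve = namespace["solve"]  # assume the task asks for a function named `solve`
            return all(solve(*args) == expected for args, expected in tests)
        except Exception:
            return False

    candidate = "def solve(a, b):\n    return a + b\n"
    print(grade_by_tests(candidate, [((1, 2), 3), ((-1, 1), 0)]))  # True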
Not massively off -- Manifold yesterday had the implied odds of a result this low at ~35%, and at 30% before Claude Opus 4.1 came out, which updated expectations of agentic coding abilities downward.
It's not surprising to AI critics, but go back to 2022, open r/singularity, and then answer: what were "people" expecting? Which people?
SamA has been promising AGI next year for three years like Musk has been promising FSD next year for the last ten years.
IDK what "people" are expecting but with the amount of hype I'd have to guess they were expecting more than we've gotten so far.
The fact that "fast takeoff" is a term I recognize indicates that some people believed OpenAI when they said this technology (transformers) would lead to sci-fi-style AI, and that is most certainly not happening.
>SamA has been promising AGI next year for three years like Musk has been promising FSD next year for the last ten years.
Has he said anything about it since last September:
>It is possible that we will have superintelligence in a few thousand days (!); it may take longer, but I’m confident we’ll get there.
This is, at an absolute minimum, 2000 days, which is already about five and a half years. And he says it may take longer.
Did he even say AGI next year any time before this? It looks like his predictions were all pointing at the late 2020s, and now he's thinking early 2030s. Which you could still make fun of, but it just doesn't match up with your characterization at all.
I would say that there are quite a lot of roles where you need to do a lot of planning to effectively manage an ~8 hour shift, but then there are good protocols for handing over to the next person. So once AIs get to that level (in 2027?), we'll be much closer to AIs taking on "economically valuable work".