refulgentis's comments

What?

I read some guy complaining that a podcast complained about his book, elevating it into some weird organized political movement that he's already declared dead, and he's happy those kinds of rancid speech-haters are gone... punchline... they're the illiberals!

Okay then!

Be honest with yourself, O Reader!

Are you sure he's not writing a satire of the same piece you've seen written every year since 1990, just with a shifting name for it?

He is a comedian after all...

Are you sure he's serious?


I was sure there was going to be a series of these books

Crap Governments, Crap Businesses, Crap Websites, Crap Engineers, Crap Media...


TL;DR: the same binary runs on Nvidia and ATI today, but it's not announced yet.

> more than half of new research is fake

You committed the same sin you are attempting to condemn, while sophomorically claiming it is obvious this sin deserves an intellectual death penalty.

It made me smile. :) Being human is hard!

Now I'm curious, will you acknowledge the elephant in this room? It's hard to, I know, but I have a strong feeling you have a commitment to honesty even if it's hard to enact all the time. (i.e. being a human is hard :) )


It's "tier 5", I've had an account since the 3.0 days so I can't be positive I'm not grandfathered in, but, my understanding is as long as you have a non-trivial amount of spend for a few months you'll have that access.

(FWIW, for anyone curious how to implement it: it's the 'moderation' parameter in the JSON request you'll send. I missed it for a few hours because it wasn't in DALL-E 3.)
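Rough sketch of what that request looks like, in case it helps anyone; the endpoint and field names are as in the public Images API docs, so treat this as illustrative and double-check against the current reference:

    import requests

    # Minimal illustration of the 'moderation' parameter mentioned above.
    # Endpoint/model names follow the public Images API docs; adjust to taste.
    resp = requests.post(
        "https://api.openai.com/v1/images/generations",
        headers={"Authorization": "Bearer YOUR_API_KEY"},
        json={
            "model": "gpt-image-1",
            "prompt": "a watercolor fox",
            "moderation": "low",  # "auto" is the default; "low" relaxes filtering
        },
    )
    print(resp.json())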


The API shows either "auto" or "low" available. Is there another secret value with even lower restrictions?

Not that I know of.

I just took any indication that the parent post meant absolutely zero moderation as them being a bit loose with their words and excitable with how they understand things. There were some signs:

1. It's unlikely they completed an API integration quickly enough to have an opinion on military / defense image generation moderation yesterday, so they're almost certainly speaking about ChatGPT. (This is additionally confirmed by image generation requiring tier 5 anyway, which they would have been aware of if they had integrated.)

2. The military / defense use cases for image generation are not provided (and the steelmanned version in other comments is nonsensical, i.e. we can quickly validate that you can still generate kanban boards or wireframes of ships).

3. The poster passively disclaims being in military / defense themself (grep "in that space")

4. It is hard to envision cases of #2 that do not require universal moderation for OpenAI's sake. I.e., let's say their thought process is along the lines of: defense/military ~= what I think of as CIA ~= black ops ~= image manipulation on social media; thus, the time I said "please edit this photo of the ayatollah to have him eating pig and say I hate allah" means it's overmoderated for defense use cases.

5. It's unlikely OpenAI wants to be anywhere near the PR resulting from #4. Assuming there is a super secret defense tier that allows this, it's, at the very least, unlikely that the poster's defense contractor friends were blabbing to the poster about the exclusive, completely unmoderated access they had, within hours of release. They're pretty serious about that secrecy stuff!

6. It is unlikely the lack of ability to generate images using GPT Image 1 would drive the military to Chinese models (there aren't Chinese LLMs that do this! Even if there were, there are plenty of good ol' American diffusion models!).


I'm Tier 4 and I'm able to use this API and set moderation to "low". Tier 4 only requires a 30 day waiting period and $1,000 spent on credits. While I as an individual was a bit horrified to learn I've actually spent that much on OpenAI credits over the life of my account, it's practically nothing for most organizations. Even Tier 5 only requires $5,000.

OP was clearly implying there is some greater ability only granted to extra special organizations like the military.

With all possible respect to OP, I find this all very hard to believe without additional evidence. If nothing else, I don't really see a military application of this API (specifically, not AI in general). I'm sure it would help them create slide decks and such, but you don't need extra special zero moderation for that.


> With all possible respect to OP, I find this all very hard to believe without additional evidence. If nothing else, I don't really see a military application of this API (specifically, not AI in general). I'm sure it would help them create slide decks and such, but you don't need extra special zero moderation for that.

I can't provide additional evidence (it's defense, duh), but the #1 use I've seen is generating images for computer vision training, mostly to feed GOFAI algorithms that have already been validated for target acquisition. Image gen algorithms have a pretty good idea of what a T72 tank and different camouflage look like, and they're much better at generating unique photos combining the two. It's actually a great use of the technology because hallucinations help improve the training data (i.e. the final targeting should be invariant to a T72 tank with a machine gun on the wrong side or with too many turrets, etc.).

That said, due to compartmentalization, I don't know the extent to which image gen is used in defense, just my little sliver of it.
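If it helps to picture the synthetic-variation idea, here's a toy sketch of the shape of it; the prompts, filenames, and use of the public OpenAI SDK are all my own assumptions for illustration, not anything from an actual pipeline:

    import base64
    import itertools
    from openai import OpenAI

    client = OpenAI()

    # Toy sketch: generate the same vehicle under different camouflage so a
    # downstream detector learns to be invariant to that variation.
    vehicles = ["T-72 main battle tank"]
    camos = ["desert tan camouflage", "woodland camouflage", "winter whitewash"]

    for i, (vehicle, camo) in enumerate(itertools.product(vehicles, camos)):
        result = client.images.generate(
            model="gpt-image-1",
            prompt=f"{vehicle} in {camo}, oblique aerial photograph",
        )
        # gpt-image-1 returns base64-encoded image data
        with open(f"synthetic_{i:03d}.png", "wb") as f:
            f.write(base64.b64decode(result.data[0].b64_json))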


We can talk about it here, they put out SBIRs for satellite imagery labeling and test set evaluation that provide a good amount of detail into how they're using it.

Tier 4 requires $250 spent. I'm tier 4 as well and I can see how they get easily mixed up, but it actually says $1,000 spent to move to the next tier.

Oops, thank you! So, even easier!

There are plenty of fairly mundane applications for this sort of thing in the military. Every base has a photography and graphic design team that makes posters, signs, PR materials, pamphlets, illustrations for manuals, you name it. Imagine a poster in the break room of a soldier in desert gear drinking from his/her canteen with a tagline of "Stay Alive - Hydrate!" and you're on the right track.

You don't need a special no moderation version to do that stuff.

I am actually talking about the OpenAI API :)

I'm not aware of the moderation parameter here, but these contractors have special API keys that unlock unmoderated access for them; they've apparently had it for weeks.


Worked on Android from 2016-2023.

Vouch. (modulo Chrome aping Edge dark patterns)

And it's not an accident, or just an unthinking corporation with big divisions accidentally working at cross purposes, or just something that looks bad when someone writes it up from the outside.


In general, AFAIK, the assumption is that every font is absurdly easy to steal, and that you'll do so before purchasing it.

So it's de facto "free unlimited trial, free for personal use, pay for business if you have a soul and shame"


Depends on the country.

I researched it for Russia recently and apparently the law is much stricter about fonts here than in the US. Both the character shapes and the "code" are copyrightable so you ain't getting away with converting it into a different format either. Companies did get sued over this and did have to pay millions of rubles in fines and licensing fees for their past usage. Not sure about individuals but I wouldn't try my luck with any non-free fonts made by Russian designers.


> I wouldn't try my luck with any non-free fonts made by Russian designers.

Depends on whether your home country cares about Russian civil courts or not.


Huh, this is interesting, given that Russia has been the hub of internet piracy for the last three decades.

That's because copyright in Russia is only enforced for companies. If you pirate something for personal use, no one would care, thankfully.

I would suggest not pushing your luck with webfonts though, because in that case you are distributing the actual copyrighted "code" of the font, not just the minimally protected shapes that it outputs. There are services which crawl the web actively looking for pirated webfonts on behalf of foundries (and their lawyers).

I had this happen to a client, and even though they had both the web and print licenses they were hit with a 50k suit because the font file was malformed somehow. I'm not sure how it shook out, but I hope they didn't pay a goddamn cent.

How robust is that identification? Does it just look for file hashes or identical character shapes? I imagine it is trivial to repackage a font file to break the hash fingerprint.

Got a link to such a service?

https://www.fontradar.com is one. They also claim to analyze apps somehow.

The older I get, the more interested I get in the tidal flows of information.

It's both misunderstood and understood.

Ex. given a 9-year-old article, we jump from "so many people freaked out" to "nobody cares about it now" --- "nobody cares" is easily falsified --- but then we confirm it and attribute it to "human nature" (which, much like astrology, people will fill in with any time they perceived others as not caring about something they care about).

But we also recognize the article is reposted and on the front page again, indicating it is novel to a large subset of people, despite the fact that it has been posted no fewer than 25 times.


If I gave OpenAI 100K engineers today, does that accelerate their model quality significantly?

I generally assumed ML was compute-constrained, not code-monkey-constrained. I.e., I'd probably tell my top N employees they had more room for experiments rather than hire employee N + 1, at some critical value N > 100 and N << 10000.


I think it depends on whether you think there's low-hanging fruit in making the ML stack more efficient, or not.

LLMs are still somewhat experimental, with various parts of the stack being new-ish, and therefore relatively un-optimised compared to where they could be. Let's say we took 10% of the training compute budget, and spent it on an army of AI coders whose job is to make the training process 12% more efficient. Could they do it? Given the relatively immature state of the stack, it sounds plausible to me (but it would depend a lot on having the right infrastructure and practices to make this work, and those things are also immature).
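Back-of-envelope version of that trade, using the 10%/12% numbers above; purely illustrative:

    # Divert 10% of the compute budget to optimisation work that makes the
    # remaining training 12% more efficient -- does it pay for itself?
    budget = 1.0                  # total training compute, normalised
    diverted = 0.10               # spent on the "army of AI coders"
    speedup = 1.12                # training becomes 12% more efficient

    effective = (budget - diverted) * speedup
    print(effective)              # ~1.008 -> roughly break-even, slightly ahead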

The bull case would be the assumption that there's some order-of-magnitude speedup available, or possibly multiple such, but that finding it requires a lot of experimentation of the kind that tireless AI engineers might excel at. The bear case is that efficiency gains will be small, hard-earned, or specific to some rapidly-obsoleting architecture. Or, that efficiency gains will look good until the low-hanging fruit is gone, at which point they become weak again.


It may sound plausible, but the actual computations are very simple, dense, and highly optimised already. The model itself has room for improvements, but this is not necessarily something that an engineer can do; it requires research.

> very simple, dense and highly optimised already

Simple and dense, sure. Highly optimized in a low-level math-and-hardware sense, but not in a higher-level information-theoretic sense when considering the model as a whole.

Consider that quantization and compression techniques can achieve on the order of 50% size reduction. That strongly suggests to me that current models aren't structured in a very efficient manner.
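The size arithmetic behind that ~50% figure is just the precision drop; illustrative numbers, not a measurement of any particular model:

    # e.g. quantising 16-bit weights to 8-bit halves the raw weight storage
    params = 7e9                        # say, a 7B-parameter model
    fp16_bytes = params * 2             # 2 bytes per weight
    int8_bytes = params * 1             # 1 byte per weight
    print(int8_bytes / fp16_bytes)      # 0.5 -> ~50% size reduction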


Is the slatestarcodex guy "well-versed in all related fields"? Isn't he a psychologist?

What would being well versed in all related fields even mean?

Especially in the context of the output, a fictional over-the-top geopolitics text that leaves the AI stuff at "at timestamp N+1, the model gets better".

It's made of the same stuff as fan fiction: layers of odd geopolitical material, no science fiction. Even at that, it is internally incoherent quite regularly (the White House looks to jail the USA Champion AI Guy for some reason while they're in the midst of declaring an existential war against China).

Titillating, in that Important Things are happening. Sophomoric, in that the important things are off camera and an excuse to talk about something else.

I say that as someone who believes people 20 years from now will say it happened somewhere between Sonnet's agentic awareness and o3's uncanny post-human ability to turn a factual inquiry about the ending of a TV show into an incisive therapy session.


The prime mover behind this project is Daniel Kokotajlo, an ex-OpenAI researcher who documented his last predictions in 2021 [1], and much of that essay turned out to be nearly prophetic. Scott Alexander is a psychiatrist, but more relevant is that he dedicated the last decade to thinking and writing about societal forces, which is useful when forecasting AI. Other contributors are professional AI researchers and forecasters.

[1] https://www.lesswrong.com/posts/6Xgy6CAf2jqHhynHL/what-2026-...


Oh my. I had no idea until now; that was exactly the same flavor, and apparently this is no coincidence.

I'm not sure it was prophetic; it was a good survey of the field, but the claim was... a plot of grade-schooler-to-PhD against year.

I'm glad he got a paycheck from OpenAI at one point in time.

I got one from Google at one point in time.

Both of these projects are puffery, not scientific claims of anything, or claims of anything at all other than "at timestamp N+1, AI will be better than timestamp N, on an exponential curve"

Utterly bog-standard boring claim going back to 2016 AFAIK. Not the product of considered expertise. Not prophetic.


Furthermore, there were so many predictions by everyone - especially people with a vested interest in making VC money flow their way - that something had to turn out to be true.

Since the people on LessWrong like Bayesian statistics: the probability that someone says the right thing, given the assumption that there are a shitton of people saying different things, is... not surprisingly, high.


I don't understand why someone who is not a researcher in that academic field (Scott and the other authors) should be taken into consideration. I don't care what he dedicated himself to; I care what the scientific consensus is. I mean, there are other researchers - actual ones, in academia - complaining a lot about this article, such as Timnit Gebru.

I know, it's a repeat of my submissions of the last few days, but it's hard not to feel like these people are making their own cult.


Personally I think Scott Alexander is overrated. His writing style is extraordinarily verbose, which lends itself well to argumentative sleights of hand that make his ideas come across as much more substantive than they really are.

Verbose? Only that? That guy did a meta-review of ivermectin, or similar things that would make anybody think that's a bad idea, but no, apparently he's so well-versed he can talk about AI and ivermectin all at once.

I also wonder why he had to defend a medicine so heavily talked up by one side of the political spectrum...

Then you read some extracts of the outgroup stuff and you see "oh, I'm just at a cafe with a Nazi sympathizer" (/s, but not too much) [1].

[1] https://www.eruditorumpress.com/blog/the-beigeness-or-how-to...


This is so damn good, thanks for sharing. I've gotten some really, really good links from HN in the last month that I never would have guessed existed and that are exactly the intellectual argument I'm missing for some of my extreme distastes.* I gotta widen out beyond Twitter.

* Few things tell me more than when someone invokes "grey tribe" as an imaginary third group of people who, of course, think ponderously and have the correct conclusions, unlike all those other people with motivated thinking.


I'm absolutely glad someone found my comments - and more than my comments, the article posted - enlightening.

In my submissions you might find some similar articles I posted in the last few days; I'll link some here:

This is an answer from an actual AI researcher to some of the AI 2027 research and the sphere in general: https://www.linkedin.com/posts/timnit-gebru-7b3b407_ive-just...

https://www.truthdig.com/articles/before-its-too-late-buddy/

https://www.truthdig.com/articles/effective-altruism-is-a-we...

https://web.archive.org/web/20130426115531/http://plover.net...

https://www.currentaffairs.org/news/2021/07/the-dangerous-id...

There's also a sarcastic subreddit called SneerClub that dunks on this sphere.


I stopped reading him ~10 years ago, so I didn't keep up with what he wrote about ivermectin.

Thanks for sharing that blog post. I think it illustrates very well what I meant by employing argumentative sleights of hand to hide hollow ideas.


And they call themselves rationalists but still believe low-quality studies about IQ (which, of course, find whites to have higher IQ than other ethnicities).

The deeper you dig, the more it's the old classism, racism, ableism, misogyny, dressed in a shiny techbro coat. No surprise Musk and Thiel like them.


Scott was enlisted to evangelize the project. My understanding is that he didn't have a hand in the forecast, and if you watch his interview with Dwarkesh [1], he states his timeline is _longer_ than the one the team published at ai-2027.com.

[1] https://www.youtube.com/watch?v=htOvH12T7mU&t=9882s&pp=ygUTZ...


> the White House looks to jail the USA Champion AI Guy for some reason

Who?


One of the distinct parts of this writing output, which you may recognize from a certain blog, is an incessant need to scream "I am thinking about no one in particular and neither should you."

We would think of it as Sam Altman.

The actual writing has some twee name like "CEO of Open-meta-gle".

