This is where things go to die. People won’t succeed with this shit, but it will clog up previously useful channels and they will be shut down. Like email has mostly devolved to “communicate with those I already communicate with”.
And phone calls... not in my contacts? To voicemail with you!
Anyways, thinking more about the topic at hand, things like ChatGPT are just pushing us further towards a world of nepotism and social back-channel connections. Like in the modern art world where it isn't about talent, it's about who you are and who you know. AI means it's not about what you can write or create, it will be about your personal connections. IMO this is a bad thing for society as it leads to entrenched stratification and eventual ossification.
It's always been like this. For a brief period over the last century, we had the illusion of social mobility, that we were headed towards a more meritocratic society. But it was always that: an illusion. AI is not even a reversion to the mean, just a disillusionment.
>I am, somehow, less interested in the weight and convolutions of Einstein’s brain than in the near certainty that people of equal talent have lived and died in cotton fields and sweatshops.
Addendum: Drowned out amongst millions of AI-augmented voices.
Yes, the industrial revolution made labor competitive enough to bargain. The computer, internet and now AI revolutions have chipped away at this since the 70s. Power and wealth accumulation has never been higher.
Only if you view it in black-and-white absolutist terms, where if every single person isn't judged on 100% merit then it doesn't count at all.
But the only thing this argument achieves is making the people actively causing or cheerleading the damage feel better about what they are doing. For everyone else, it's a massive regression.
"Nepotism" also partially exists to keep "the rabble" out of whatever it is someone finds important, out of some misguided belief they are (and always will be) inferior. The last bit is important: AI assisted content generation may be bad now, but what about 5 years from now? 10? Will it then still be bad people generate stuff automatically?
Nepotism doesn't have a defense, because its defense is essentially "tradition". I've genuinely not heard a compelling argument for that either - it always boils down to something anachronistic (e.g. racism, classism, etc.).
We have two different meanings for "nepotism" here. His is "hiring within the family because I know their qualifications better." Yours is "hiring within the family even though someone outside has better qualifications."
Those are close to the same thing, though. _Obviously_ there exist people outside a family who are more qualified for any given thing than any of the people in the family.
It might be hubris, delusion, or some combination of the two, but while reading through comments some of them jump out at me as being ChatGPT generated. I'm sure the tech will advance and eventually be able to appear as more organic, but for now it seems to me to be more of a spam issue than an issue of appearing authentic.
Absolutely not, it would produce zero value since it can't generate interesting insight. Ask it why cows' eggs are better than monkeys' eggs; will it reply "wait, are you trying to trap me here? I know I'm just software, but cows and monkeys have placentas and don't lay eggs"?
The vast majority of Hacker News comments don't generate interesting insight, either, and most conversations are banal and repetitive anyway. And that's not just here, it's everywhere - Sturgeon's Law prevails over everything.
But if ChatGPT can be trained to remain civil and avoid prejudice and conspiracy theory, the quality of Hacker News would objectively improve even if everything ChatGPT says is qualitative nonsense. And it won't all be nonsense; as the software improves, it will be less and less so. You can't say that about people, unfortunately. The quality of people tends to degrade over time; the more people, the more degradation. That's how you get Eternal September. ChatGPT, however, can maintain a high degree of quality over time, unimpeded by irrationality or emotion.
Maybe we could split the difference and have all comments filtered through ChatGPT. Run sentiment analysis to catch and remove or edit low-quality comments where feasible.
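To make that concrete, here's a minimal sketch of such a moderation pass. score_comment() is a stand-in for whatever sentiment/quality model would actually be used (here just a toy heuristic so the sketch runs); none of these names are a real, existing API:

    from dataclasses import dataclass

    @dataclass
    class Comment:
        author: str
        text: str

    def score_comment(text: str) -> float:
        # Placeholder for a real sentiment/quality classifier; as a toy
        # heuristic, penalize all-caps shouting. Returns a score in [0, 1].
        letters = [c for c in text if c.isalpha()]
        if not letters:
            return 0.0
        caps_ratio = sum(c.isupper() for c in letters) / len(letters)
        return 1.0 - caps_ratio

    def moderate(comments, threshold=0.5):
        # Keep comments scoring at or above the threshold; flag the rest
        # for removal or editing, as suggested above.
        kept, flagged = [], []
        for c in comments:
            (kept if score_comment(c.text) >= threshold else flagged).append(c)
        return kept, flagged

    kept, flagged = moderate([Comment("a", "Interesting point about GANs."),
                              Comment("b", "TOTAL NONSENSE, WAKE UP")])

The hard part, of course, is the classifier itself, not the plumbing around it.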
So is the idea here that the submitters figure they have a minuscule chance of sneaking through, but it costs them basically nothing to generate, and if by some miracle they win, they get a payout that's a significant amount of money in some parts of the world?
Or is it like wanna-be writers who can't tell the difference between ChatGPT output and really good writing, and figure that if they can just put their brilliant idea for a story into a prompt and have ChatGPT clean it up, their talent will finally be recognized?
Most likely, the mass submissions are from the former while the one-off submissions are from the latter. The first group (most likely) caused them to close.
Neil Clarke said "I worry that this path will lead to an increased number of barriers for new and international authors". So, brainstorming potential solutions, where the goal isn't "to stop AI content writers" so much as to ensure that writers have "to do the actual work".
What does proof-of-work look like for fiction? Uploading drafts, ensuring "adequate" time between them and "adequate" revisions? What else would work?
Will I need to make a git commit after every sentence? I write my essays in plain text that is backed up automatically by git once per day with a script and a generic message. There is no way to prove I wrote the stuff and did not paste in ChatGPT output.
What if the author is simply required to compose all drafts in some special web editor that streams events to the server? Multiple phases of approval? Statistical analysis of time domain signals?
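As a rough illustration of the "statistical analysis of time domain signals" idea, here's a hedged sketch: given (timestamp, chars_inserted) edit events streamed from a hypothetical editor, flag sessions that look like one big paste rather than incremental composition. All names and thresholds are invented:

    def looks_pasted(events, paste_chunk=200, min_minutes_per_kchar=2.0):
        # events: list of (timestamp_seconds, chars_inserted) tuples.
        if not events:
            return True
        total_chars = sum(n for _, n in events)
        duration_min = (events[-1][0] - events[0][0]) / 60.0
        biggest_chunk = max(n for _, n in events)
        # One giant insertion, or far too little editing time per 1000
        # characters, both suggest text produced elsewhere and pasted in.
        too_chunky = biggest_chunk > paste_chunk
        too_fast = duration_min < min_minutes_per_kchar * (total_chars / 1000)
        return too_chunky or too_fast

    print(looks_pasted([(0, 5000)]))                        # True: one big paste
    print(looks_pasted([(i * 10, 8) for i in range(600)]))  # False: steady typing

Of course, a determined spammer could replay synthetic keystrokes, so this only raises the cost; it doesn't prove authorship.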
Proof of work could just mean "do your work here".
Require a summary and/or discussion of the creative process in a cover letter. Similar to requiring software candidates to be able to intelligently discuss what's on their GitHub.
To work, it will likely have to be attached to a unique identity in costly ways, e.g. some form of ID verification such as a driver's license, utility bill, bank statement, passport, etc.
For my own entertainment, I've tried to have ChatGPT write a science fiction story a few times, but its stories weren't entertaining.
The main problem is that it sounds like it is summarizing. For example, here is a typical start to one of its stories:
"
In a world much like our own, a team of scientists were working on an experimental device known as the Gravitational Wave Interferometer. They had been working on the project for years, but had been unable to make it work. The device was designed to detect the subtle ripples in space-time caused by gravitational waves, but it required an incredible amount of precision and sensitivity."
That's fine, but it sounds like the summary of a story rather than the story itself. It's telling rather than showing. For comparison here is the intro from the popular book The Hunt for Red October:
"Captain First Rank Marko Ramius of the Soviet Navy was dressed for the Arctic conditions normal to the Northern Fleet submarine base at Polyamyy. Five layers of wool and oilskin enclosed him. A dirty harbor tug pushed his submarine’s bow around to the north, facing down the channel. The dock that had held his Red October for two interminable months was now a water-filled concrete box, one of the many specially built to shelter strategic missile submarines from the harsh elements. On its edge a collection of sailors and dockyard workers watched his ship sail in stolid Russian fashion, without a wave or a cheer."
This sets the stage in a more interesting way than ChatGPT does.
What kinds of prompt strategies can I use to make ChatGPT's stories sound more like storytelling and less like a synopsis or summary? Can someone share a specific prompt?
Stories are entertaining when they raise the expectations of the reader or viewer and make them wonder how a difficult situation could possibly resolve. Entertaining means you keep on reading because you are curious about what will happen next.
But language models just produce an average over the many texts they have been trained on. While each story they were trained on had its own "expectation loop", when you average over many such stories the expectation-raising disappears, like multiple waves interfering and cancelling each other out.
The output of ChatGPT is bland because there is no single author behind it who could "feel" whether the story was entertaining enough to be published. ChatGPT of course cannot "feel" anything. It cannot evaluate its own output as to whether it will raise curiosity in its readers. ChatGPT has a model of language and texts, but not a model of what its texts MEAN, and thus no model of how readers might react to them.
Still not nearly as beautifully written as The Hunt for Red October, but I gave ChatGPT this prompt:
“Set the scene introducing a protagonist in a Soviet missile depot in the 1980s Soviet Union in the style of Tom Clancy. Make it ominous, descriptive and imaginative.”
And got:
The Soviet missile depot was a forbidding place, set deep in the heart of the country's vast and unforgiving landscape. It was a place where men spoke in hushed tones and shadows seemed to lurk around every corner, where the hum of electricity and the beeping of computer terminals were the only sounds to be heard.
At the heart of the depot stood a solitary figure, a man with a weathered face and piercing eyes that seemed to see straight through you. He was dressed in a drab olive uniform that was stained with grease and sweat, and he wore a cap with a red star on it, the symbol of the Soviet Union.
His name was Ivan Petrov, and he was a veteran missile technician with over twenty years of experience. He had spent his entire adult life in the service of the Soviet state, and he knew every inch of this place like the back of his hand.
As he walked through the maze of tunnels and corridors, he could feel the weight of the world on his shoulders. He knew that the missiles stored here were capable of destroying entire cities, and that the fate of the world might one day rest on his shoulders.
The air was thick with tension and uncertainty, and Ivan felt a sense of foreboding that he could not shake. He knew that the Soviet Union was on the brink of collapse, and that the future was uncertain.
But he also knew that he had a duty to his country, and that he would do whatever it took to protect it, even if it meant sacrificing his own life.
As he stood before the missile control panel, his fingers poised over the launch button, he could feel the weight of history bearing down on him. He knew that this moment would define the rest of his life, and the fate of the world.
In that moment, Ivan Petrov was the most powerful man in the world, and he would do whatever it took to protect his country and its people, even if it meant unleashing the most destructive force in human history.
>> " In a world much like our own, a team of scientists were working on an experimental device known as the Gravitational Wave Interferometer. They had been working on the project for years, but had been unable to make it work. The device was designed to detect the subtle ripples in space-time caused by gravitational waves, but it required an incredible amount of precision and sensitivity."
I think the problem with ChatGPT's output in this case is that it's not really a story, or even a summary of a story. It's possibly the summary of the start of a story, but where is the ending? What happened with the Gravitational Wave Interferometer? Did the team of scientists make it work, after all? Was it eaten by space sharks? Was it engulfed by the sun going nova? Did Nurgle hold a wedding feast on the planet? How does the story end? How does it even continue?
But, if the story had an ending, it would make decent flash fiction, in my humble opinion. In fact I like to write very short stories like that, ones that sound like summaries. It can work - but it has to be a summary of something interesting to read. A story with a beginning but no middle or end doesn't cut it.
You can tell it to do dialogue-heavy stories, which helps a little, and tell it to rewrite what it just wrote to have more dialogue. Ultimately, I've found it's a very limited tool for fiction writing.
I've had success with some prompts, but for others it just falls into vague nonsense like "In a flashback, a hero fights a villain." Telling it to come up with something more specific than "a bad guy" often doesn't work.
Have you tried asking it to rewrite more in the style of a particular author, or to flesh out story elements in detail, or with a bit more vagueness, rather than writing it like a blurb?
Yes, I've tried both strategies you mention (write in style of ___; flesh out elements in detail rather than like a blurb.) It didn't really work. At best it writes like your sibling comment shows. While it's pretty good, as that example shows, it still includes lines like "The air was thick with tension and uncertainty" which is just a summary of its prompt ("ominous" in this case) rather than showing something ominous.
Anecdote: I asked ChatGPT whether it had created a particular ChatGPT-generated poem (not generated by me) and it answered yes. I asked it about one it didn't create and it answered no. I repeated this several times and it was accurate in all those cases. Of course, it may give the wrong answer any time.
The best solution will be for ChatGPT to have a way to verify that certain creations are not generated. Just like identifying duplicate content using tools like Copyscape. Of course, there are issues like user privacy which will need to be taken into consideration, but that seems to be the only fool-proof way.
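As a sketch of what that verification could look like: suppose the provider kept a registry of fingerprints for everything it generated and let publishers query it, Copyscape-style. This is an invented scheme, not an existing OpenAI feature, and exact hashes are defeated by light editing, so a real system would need fuzzier fingerprints (e.g. shingled n-gram hashes):

    import hashlib

    generated_registry = set()  # hypothetical provider-side store of outputs

    def fingerprint(text: str) -> str:
        # Normalize whitespace and case before hashing so trivial
        # formatting changes don't change the fingerprint.
        normalized = " ".join(text.lower().split())
        return hashlib.sha256(normalized.encode("utf-8")).hexdigest()

    def record_generation(text: str) -> None:
        # Called by the provider for every piece of generated text.
        generated_registry.add(fingerprint(text))

    def was_generated(text: str) -> bool:
        # Called by a publisher checking a submission.
        return fingerprint(text) in generated_registry

    record_generation("In a world much like our own, a team of scientists...")
    print(was_generated("In a World much like  our own, a team of scientists..."))  # True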
Because the problem isn't "ChatGPT can write science fiction". The problem is that ChatGPT can generate spam that's not easily filterable from genuine science fiction submissions without investing human effort, so the site is having a problem with spam and has had to close the submission channel.
What makes ChatGPT's science fiction non-genuine? Human authors get ideas from other human authors, and ChatGPT gets ideas from human authors, and maybe even from itself.
It has nothing to do with it being genuine or not. The number of submissions has increased by 100x and nearly all of it is crap, but crap that can't be easily filtered by automated tools. The bots aren't making stories that people want to read, but they are drowning out all the stories that people do. The philosophical and pragmatic implications of AI being able to replace people are interesting, but that isn't what this is - it's just spam by low-effort grifters.
>there are so many people here who don't understand the core issue here.
Sufficiently advanced AI is indistinguishable from people who don't understand the core issue here. It doesn't feel pity, or remorse, or fear. And it absolutely will not stop... ever...
ChatGPT generates cargo-cult fiction. It is imitating the low-level structure of prose, but its model can't (currently) stretch to synthesize the high-level structure of a narrative.
We currently don't have any machine-level filter that would identify whether or not a piece of prose is actually a coherent work of narrative fiction or not. And annoyingly, if we did, AFAIK we'd also have everything we needed to train an AI (specifically, a GAN) to generate narrative fiction. So we're going to be dealing with this right up until the point where it doesn't matter any more, rather than being able to do any kind of quick hack to banish it "early."
The interesting thing is that there exist techniques to generate coherent and even interesting stories that do not rely on copying large amounts of examples.
Unfortunately those are a) virtually unknown outside academic circles and b) about to die a death. The latter, the death, they're about to die it because their space is now being taken over by a much more prolific technology that generates low-grade bullshit cheaply.
That's part of my concern with all the money being dumped into OpenAI et al. after they productized their generative models.
If all the money is in generative models that give "good looking" results, who is going to spend money investing in the techniques that don't produce as "good looking" results right now, but which have potential beyond recombining their training data?
1) It takes time to review each submission and further resources to publish any that are accepted.
2) The volume of submissions has grown massively because spammers are using LLMs to generate spam submissions in the hope of getting paid.
3) The current generation of LLMs is not particularly good at generating quality literature on its own, so this massive deluge of submissions is mostly garbage.
Also, I would venture that what GPT produces passes the initial scan: it is mostly spelled correctly and grammatically correct. I think those sorts of submissions were common enough previously, but relatively fast to reject with "needs edits".
On the other hand, GPT might technically be good enough, but there isn't any actual substance - the reason why we read literature. This is a varying bar, but it does exist.
Human authors do get ideas from existing material, and they do imitate existing work, but ChatGPT does not actually have any ideas. It is technically incapable of understanding. It generates the appearance of ideas, but that appearance is a reproduction of word relationships from its source materials.
It’s not an issue of “genuine”, it’s just crap. But to determine it’s crap someone needs to read it and that’s not scalable when you can generate hundreds of crappy submissions in an instant.
> it can't be original, it will forever recycle/regurgitate past SF
I can’t see why. It could very well, on its own, use other genres’ narrative structures, stories or styles that are uncommon or nonexistent in SF.
It could also be used by a human writer to flesh out the story and concepts the writer has created. There are a few authors I find brilliant but whose writing and characters were sorely lacking.
It’s really more about what you do with it than whether or not you use it. I mean, computers have enabled people to simply copy-paste stuff and not produce anything meaningful. Some do just that. Others have thrived.
It's kind of like Bitcoin: huge amount of money and reputation sitting out there to be claimed once your algo is good enough to publish a story in a major market. Similar to beating humans at chess. As a fiction author this does scare me. I don't see why the ai won't get as good or better than us.
All artificial "intelligence" is just a glorified database crawler. That you think it means something shows how far you have strayed. Pure inference according to everything that has already been done. The definition of the past. You will trick yourselves into thinking that a robot can instantly assess all lies and truth professed by all languages committed to the universal knowledge repository. You are impressed by the automation of curiosity.
Now you do not have to pay attention to anything. This is utopia. It was a waste of time for my father to tell me to work hard.