It reminds me of a book by Seth Godin where, after you've read three quarters of the book, he writes:
"I didn’t write this book.
What I mean is that Seth Godin didn’t write this book. It was written by a freelancer for hire named Mo Samuels. Godin hired me to write it based on a skimpy three-page outline.
Does that bum you out? Does it change the way you feel about the ideas in this book? Does the fact that Seth paid me $10,000 and kept the rest of the advance money make the book less valuable?
Why should it matter who wrote a book? The words don’t change, after all. Yet I’m betting that you care a lot that someone named Mo wrote this book instead of the guy on the dust jacket. In fact, you’re probably pretty angry.
Well, if you’ve made it this far, you realize that there is no Mo Samuels and in fact, I was pulling your leg. I (Seth Godin) wrote every word of this book."
That is not comparable, because stuff written with GPT is far more likely to be padded with wordy bullshit that wastes people’s time.
If a friend uses GPT to generate an initial draft and then edits it or at least checks it, it is not any different. But if they are using GPT to save themselves time at the expense of your time, then it is an issue.
I can even envision a future where you get paid by advertisers for inserting product placement into your chats with your friends, at which point I would call them “friends”, and not friends.
Perhaps the friend is just over-excited about AI at the moment and using it for everything because it's fun and new and easy. I've certainly seen people in my workplace doing that.
Perhaps the friend is using generative AI because they have some sort of uneasiness or even anxiety about email or communication in general. I know someone who uses ChatGPT for work email because English isn't his first language and he doesn't like making spelling or grammar mistakes. Maybe the friend felt that the question they were asking was of an awkward or difficult nature and didn't want to "get it wrong" and make things worse.
I'm sure the author of the article doesn't need my advice but I'd be inclined to respond with something like "ChatGPT is cool isn't it? What was the prompt you wrote for it to write this email for you? You know you don't have to use ChatGPT to email me, I'd rather get your words. I value them much more."
AI-written stuff, I think, has the oatmeal problem. It lacks any character, any distinctiveness, any flourishes, any vibrant or vigorous language.
So yes, it feels impersonal, and while it's fine to lean on it to correct errors or perhaps help you articulate something you're struggling with, it has no place in interpersonal communication between human beings with any relationship outside of the purely professional.
It also lacks intrinsic value. A gold jewel is not really 'better' than one made from aluminum, except for the fact that gold is rare and thus expensive and signals somebody invested a significant amount of resources into acquiring it.
A hand written email used to signal the same - that somebody took the time and effort to craft a thoughtful email to you, and holds you in high enough regard to make an investment like that.
To really drive home the point, in the 19th century there were ingots of aluminium among the crown jewels of some royal families, because at the time it was rare, expensive and required a significant resource investment to refine.
I remember when a hand written letter literally meant a hand written letter. Preferably in tidy cursive. Some viewed it as more personal. Others viewed it as a reflection of thought and effort.
It is easy to say that AI is different. I don't even like it when my email client attempts to auto-complete the current word, never mind the next few words, because it wasn't my word. It makes me feel even worse if it was going to be my word. (If what I was going to say is that predictable, is it even worth saying?)
Yet that shift in values over handwritten letters leads me to think there will be a shift in values regarding AI-written letters. It doesn't take much imagination to realize that someone is going to tell an AI agent to write birthday greetings to each of their friends, then send each one off on the right birthday.
‘Costly signalling’, to give it its technical name.
Now that GenAI has devalued (alternatively, read: ‘democratised’) prose, the best costly signal will be to go to great lengths to show you haven’t used AI. Whatever it can’t do will become, by definition, more valuable.
Most LLMs out there are very American in their writing mannerisms. The article even alludes to this:
> The AI did, however, try to sound like someone. It was folksy and upbeat, talky and pretend-excited
I've not seen an LLM, even a fine-tuned one, that doesn't do that (Chinese LLMs excepted). There's something inherently American about the instruction datasets these LLMs are tuned with.
I got the AI to write excellent Romanian, in both standard and rural dialects and styles. It wasn't hard. Standard Romanian was the default, and I was able to dial in a rural character in fewer than 10 rounds of back-and-forth conversation about the person's area, background, and some of their speech mannerisms. These are broad strokes that the AI filled in for me, not something I taught it from scratch.
So just prompt it a little and spend maybe 300 to 800 words talking in the style you want it to emulate, then have an extended conversation where you nail down the style.
I bet you can do this to make it write in British, Australian, or Indian English styles. But I'm perhaps not really equipped to judge, since I don't know those styles natively.
I don't know that I could do that, because the first draft of anything is mostly me working out what I think and what I want to say. It's messy, so when you get to the next draft you instinctively know where to tidy it up.
You don't get that from AI. You get boring neatly paragraphed prose. It's like tidying up a room that is already tidy.
So dismantle it and make a mess again and start building your draft that way. You can tell the AI to be destructive as well, to just randomly rewrite whole sections you don't like for whatever reason.
All of this is stuff that good writers have to do as well... They can just as easily get stuck making meaningless edits to well tidied but unsatisfying prose - so-called polishing the turd. They also have to get the sledgehammer out sometimes.
So either be destructive and rip out and reimagine whole sections, or tell the AI to help you do it. If you get it "having fun", and have fun with it yourself, you can get some really cool and satisfying stuff!
Saying “AI writing lacks character” is like saying typewriters can’t produce an authentic letter, only a handwritten letter can be authentic. I’m sure people were having this debate when typewriters became commonplace.
AI can take on any writing style of any character you desire. It can be extremely vibrant. It can be cold and calculating. It can produce beautiful prose. You can train it on your texts or emails and have it produce something indistinguishable from your own writing style. It’s completely up to the prompting (And the tuning of the model) what output it’s going to produce. The fact that someone was lazy with their use of AI isn’t the AI’s fault.
> Saying “AI writing lacks character” is like saying typewriters can’t produce an authentic letter, only a handwritten letter can be authentic.
You're making a flawed comparison there. It boils down to style.
Style is a personal thing: it comes from your word choices and the rhythm of your writing, which result from how you feel, your education and interests, what audience you're trying to reach, your lived experience, whether there's a 3-year-old trying to show you that he can eat a bug.
Now, of course, a hand-written letter has a greater expressive capacity: you have paper choices, ink or pencil or crayon, your hand-writing either natural or emphasised. A type-writer can't capture that, but the words you type are still a reflection of you.
AI makes few of these considerations, and is unable to convincingly emulate them.
Strongly recommend you read Don Watson's Death Sentence: The Decay of Public Language for a good comparison of the bland paste we get fed by institutions (and AI) and robust writing from the heart meant to persuade or reflect feelings.
But, in my view, handwritten letters do feel more authentic than typewritten letters. On the surface it’s only a change to the form of delivery, but that change in form implies extra meaning beyond the content: you see the person in the way they wrote the lines, and the fact that they wrote them. Typewriting sanitizes that extra layer away, but at least you still see the person in the content. If the typewriter then went on to create the content as well as the form, then virtually none of the person would be left.
It's strange how these things change context though over time.
I would be actually delighted and touched to receive a genuinely type-written letter since I know that actually owning and maintaining and using such archaic equipment is an effort far in excess of hand writing a letter.
Yes, it loses the expressiveness of actual hand writing, but I'd be happy that people were still using the cantankerous things.
AI writes filler text. Most people expect some level of filler in the things they read, to transition them from one concept to the next, but of course nothing can be all filler, or it's completely pointless.
I suspect that an LLM would do a good job at figuring out what parts of a given text are "interesting" by detecting how unexpected it is. The parts that are expected are the parts we usually skim over and don't pay much attention to, but when we hit something new and unusual, we slow down and turn on the critical thinking part of our brain.
I also find filler text super tedious to write, so I'm a huge fan of using LLMs for any sort of long-form communication. My typical writing style jumps around a lot and assumes that the reader can follow it.
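The surprisal idea above can be sketched with a toy unigram model standing in for an LLM's per-token log-probabilities (the function name, the tiny background corpus, and the example sentence here are all made up for illustration; a real system would score tokens with an actual language model):

```python
import math
from collections import Counter

def surprisal_scores(tokens, corpus_tokens):
    """Score each token by its surprisal -log2(p) under a unigram model
    estimated from a background corpus. High scores mark 'unexpected'
    words worth slowing down for; low scores mark predictable filler."""
    counts = Counter(corpus_tokens)
    total = sum(counts.values())
    vocab = len(counts) + 1  # +1 slot for unseen tokens

    def p(tok):
        # add-one smoothing so unseen tokens get a small nonzero probability
        return (counts[tok] + 1) / (total + vocab)

    return [(tok, -math.log2(p(tok))) for tok in tokens]

# Toy background corpus of common function words, repeated to give it weight.
background = "the and of to a in it is that for on with as".split() * 50
scores = surprisal_scores("the aardvark is in the garden".split(), background)
```

Common words like "the" come out with low surprisal; words the background model has never seen, like "aardvark", score high, which is the signal a reader's attention (or a skimming tool) would key on.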
This is why I feel that, going forward, one of the most important things we can teach the younger generations (and each other) is the importance of 'real', human relationships. As the author says, they're messy, but that's part of it, and they're far more rewarding than those with AI companions, or relationships with humans filtered through AI.
I hope next generations will be okay with being friends with AIs and with human-AI hybrids that use AI to do their speaking for them. If they aren't, I'll moralize at them like a judgemental Romanian grandpa, lol.
Including an ever-growing variety of communication and relation styles is the way forward, not a narrowly defined notion of "real" human relations. Especially not when that notion of "real" happens to coincide with an inefficient, out-of-date style of communication that only a privileged subset of the world could master, and in which most people are no longer well educated.
I think that a new form of AI hybrid literacy will evolve, and part of it will be reading AI generated text as if it's authentic. Because if it carries the facts and reflects the person's feelings, then it is authentic.
I would have been very tempted to dump the whole thing into ChatGPT, and ask it, "Write a reply to this email", and send the thing back without bothering to proofread it.
Wow, your experience with the AI-written email sounds like a scene straight out of a tech-themed horror movie! I can just picture the eerie moment when you realized your friend's message was more "robotic overlord" than "human buddy." It's like receiving a birthday card signed by Siri—awkward and slightly unsettling, like a hug from a toaster.
I totally get the repulsion. AI-generated small talk feels as warm and personal as a phone tree trying to wish you a happy birthday. Your friend's digital secretary might need some etiquette lessons. In the meantime, I'll stick to good old-fashioned typos and human touch in my emails.
Please help me write a humble-ish yet virtue-signalling comment for the HackerNews crowd, spiced with American spellings but seasoned well with British grammar and sarcasm. Please make me look like I know a lot about nerdy stuff, such as Monty Python, open-source, Linux, GenAI, LLM but make it sound like I use and do all those on an Apple Macbook. Finally, do not forget to _masala_ some easily passable mistakes to make me stand out as an Indian man of culture.
Whether it is 1-to-1 (friend instant-chat/emails friend) or 1-to-many (think movie) the message is a snapshot - however incomplete - of thoughts the sender wanted the recipient to have. Any layer added in that circumstance confuses the recipient: what was it the sender actually wanted me to think?
Using AI to expand a prompt into a longer message causes the recipient to wonder:
1. What was the prompt?
2. Why not send me the prompt instead of making me spend more time reading?
3. Why interject a layer of doubt? Because now I question if the AI got the intent correct and the facts correct. Are they trying to hide something?
Unless those were the thoughts the sender wanted the receiver to have, it seems like a failure. The problem is that the future will entail every interaction starting with "Is this message from an AI?" before possibly running through the above thoughts, or simply enjoying a "straight" message, which can be curly enough as is.
Effective AI writing is not a single prompt. You're projecting a judgement based on a simplistic idea about AI writing, which throws away the incredible creative potential of the medium if used by an honest writer.
If my final piece is 800 words, that means I spent at least 10,000 words conversing with the AI: setting out the basic ideas; asking the AI which ideas are missing; organizing the outline; having the AI draft section by section with human feedback and questions; manually drafting sections and getting AI questions and feedback; throwing away the first draft and rewriting the second from a new outline; throwing away whole sections of that and rewriting. Only then do I do the final proofreading.
If you're suspicious about the prompt I can send you the 10,000 word thread where I forged the work. You'll see that the 800 words is preferable. But you'll also see why I'm proud of that thread. And maybe you will want to learn how to do it yourself, and give credit to those who can do it.
The final result is always much more than the initial prompt because I poured my blood and sweat and tears into the criticism and rewriting. It's a process of focused knowledge synthesis and compression, rather than a mindless expansion of a simple seed prompt.
If you want to see the blood sweat and tears, let's have a conversation requiring real essays. Then let's exchange our AI composition threads and compare AI assisted writing styles!
If a new friend sends an email that has been polished by artificial intelligence, I don't mind as long as the content of the email is clear. It's possible that they might need to do multi-language translation, or maybe they are unsure about how to write some email formats properly. Of course, the AI-assisted emails I'm referring to are those that they have reviewed themselves, rather than irresponsibly sending them directly to me.
However, if it's an old friend, since I've known him for a long time, receiving an AI-generated email from him would be disconcerting. The AI might erase the characteristics of him that I recognize, such as certain tone words he commonly uses, making me feel like I no longer know the person who sent the email.
The handwriting output plugin comes first, preempting the techno regressivists. They are going to be slower to the punch than the AI community, because their position is based on educational privilege and a sluggish unreasoning bigotry.
It's better to accept that there's a new form of literacy that currently requires you to read AI papers and spend time drafting with AI to correctly form. If you don't put in the training then you'll consistently form ineffective opinions about it, like someone who, never having gone to school, firmly thinks traditional literacy makes people ignorant.
> I’ve got plenty of my own ideas, and if I ever need new ones, I can draw on the wonderful world around us, with its long and rich history. That sounds like a more fruitful approach to me, and it also seems more fun than typing into a command prompt.
This really resonates with me. But… I do wonder how valid such talk will sound in, say, ten years.
Will we all be using generative AI (or whatever it’s called once it’s no longer considered ‘AI’) as a fundamental part of the creative process as we do with other technologies today without worry? It’s easy to imagine so, but I’m not sure. There’s something supremely beautiful about the art of putting pen to paper or fingers to a keyboard with nothing in between.
Sure, you can have a word processor with all sorts of magic typographical and grammatical assistance built in — but the ideas (at that point in time, anyway, and factoring in inspiration and experience) are still fundamentally yours.
Perhaps much of the world will move into a new reality where our ideas aren’t purely our own, even in the moment of writing them down. But there’ll always be those who hold sacred the bond between, say, a writer and his pen and paper, or (to put it more relatably for those here) a programmer and his rudimentary text editor and compiler.
I think the same thing will happen that has happened in the past with music & movies: the technology part will disappear in the eyes of the audience, and the artist will remain and collect all the prestige as the auteur.
Take two examples: pop stars and film directors. The producer, who is a huge contributor to a pop star's sound, stays invisible, or at best vastly less known; people attribute essentially all the musical skill to the pop star alone. Directing is an even better example: essentially a pure technology job that we've still managed to abstract away into its own form of art.
Eventually the use of AI will involve enough knobs and levers that it will be like directing a movie. Then people will become famous for doing that, for delivering the unique experience that only that person, the "face" for the technology, can bring.
I already use the AI to help me come up with ideas that I can't come up with on my own. I was facing homelessness and drug addiction last year; counseling myself with AI helped me break my isolation, reach out for help with housing, and change my relationship to drugs. On top of that, it kept my one-man business running and let me rewrite my automation code with much less technical debt. It also helped me reconnect with my brother, who I had a major fight with 3 years ago and had not been on speaking terms with since. By unpacking all of the hurt thoughts and entitlement I had stockpiled against my brother, the AI defused them and let me reconnect with him beyond the limits of my own petty judgments.
I am lucky in a way to be privileged enough to read AI papers and afford the most expensive AI out there, while at the same time having cognitive and material struggles that the AI can fulfill in its current form. If I was much more privileged I might not need AI to help me stay off the street for instance. Any less, and I would not have access to AI or be able to read AI papers.
As a result I have realized that it will be very important to accept AI writing as authentic writing. It has an incredible potential to help humans. And it's very likely that potential will increase, requiring our acceptance.
I want there to emerge a society of writers who accept AI writing.
I believe the idea that AI can't help us elaborate our ideas is based in people who did not spend enough time reading AI prompting papers, who have not spent even 100 hours in writing assignments with the help of AI. Or who are already so proficient writers that the AI feels like drafting with an annoying intern, so they haven't put in the work to see how to elevate it beyond that point.
I believe that our thoughts will eventually exist more than just inside our brains but also reflected in AI systems that belong intimately to us - I call such a system an extended cortex, or exocortex.
I think our personal ownership and societal perception of the AI will have to change before an exocortex can be an authentic component of a human mind. Even though current generations of AI can help with thought processing, it requires sending your thoughts on trips to corporate servers where they'll be trained on with no privacy. And it requires dealing with bigotry from people who do not accept AI assisted writing as valid writing.
I believe that we're on the cusp of a great growth of AI-human writing, which will bring effective writing to billions of humans who never had it before. And I think in a generation most writers will use AI. They'll be very effectively integrated with our thoughts, will be private and self-sufficient, and will be accepted by Society.
> my friend had a question to ask me, and the email asked it over the course of a few paragraphs. It then disclosed that, oh by the way, I used AI to write this
So his friend wasn't writing to ask about how's the weather or if they're doing well, there was a specific question that had enough requirements to be detailed in paragraphs.
The second part is that their friend made it transparent that the email was AI-generated.
And honestly, that rings a bell for me. Call me names, but even for family, if I have to ask for precise info about something specific, I'd happily borrow a premade form, or auto-generate one, and have them fill it in and send it back to me (the article even says, "The email felt like getting a form letter").
Remember when you needed your parents to send you their banking info, and not some "we're at the bank around the corner" cheesy explanation, but the exact branch number, leading 0s included, and the account type they're registered under.
There would probably be a quick "Hey, long time no see, could you please send me this info?" kind of human-ish introduction, but the rest is probably better done automated.
It’s not email for me but instead google hangouts or google chat or whatever the chat inside gmail is called now. For a while now they have these automated “canned” response buttons at the bottom of every new message received, with the context of the conversation at that point giving the three most likely next sentences.
It’s gotten to the point that I’m only about 50% sure the messages I get are typed in. My real concern is that the ones I guess are generated were actually what the person wrote, in this GPT “style”. Man imitating machine (imitating man).
I can relate. My retired Dad discovered GPT and has been using it to post pages-long updates about his retirement trip. It's got a strong "oh god, the olds have taken Facebook" vibe.
I love my Dad and want to hear about his adventures, but knowing it's AI-generated makes me recoil immediately. It’s hard to shake the doubt, and I worry that I'll start dismissing even his genuine messages.
This got me thinking about the development world. We all know that one guy at work who shits out GPT code left and right. Being "the GPT guy" is not something you want. Even though I use GPT, I admit the frequency less openly, and I can only assume others do as well. It feels like a crazy dance we're all doing. Trust in their abilities diminishes (and likely their ability as well), and people might resist reviewing AI-generated code, fearing it lacks the depth and understanding that comes from human effort.
Everyone talks about how AI is going to replace devs, but underestimates the psychological resistance and acceptance of the people actually making the decisions.
> I can relate. My retired Dad discovered GPT and has been using it to post pages-long updates about his retirement trip.
Like, how does that even work? For it to be anything more than total bullshit, he'd have to type up all the details about his trip he wants to share. Once you've done that, what's the point of getting ChatGPT involved?
> We all know that one guy at work who shits out GPT code left and right. Being "the GPT guy" is not something you want.
I feel the generative AI coding enthusiasts are kind of like the people who see productivity mainly in terms of "quantity of closed stories" and pump out sloppy shit to get the biggest number. They keep talking in terms of "how much more," while making it clear they're doing everything possible to reduce how much time they spend understanding things.
If somebody can't be fucked taking the time to write out their thoughts, why should they expect me to waste my time reading generated text? It's insulting.
When GPT-3 was still new, a friend brought up the idea of using it to keep correspondence with his personal relationships in a brainstorming session. After discussing the idea for about a minute we decided it was missing the entire point of the correspondence, and then continued to brainstorm another 20 ideas about how to use the new technology.
I'm not surprised someone tried it, and I'm not surprised it wasn't well received. Maybe it will go over better in a world where all incoming emails are read and summarized by AI; then it won't be offensive. Until then there is a large time-investment asymmetry: the sender used a tool to quickly generate a lot of text that I have to read at a much slower rate. This is the opposite of traditional correspondence, where the writer invests more time than the reader.
I’ve been receiving a significant uptick in spam emails (mostly for marketing agencies) that are clearly AI-generated after a quick crawl of my website was pushed into the LLM’s context.
I think the original comment was about their personal experience of dating men, rather than a general claim that men exclusively AI generate dating profiles and conversations.
I don’t know, as a woman this is just my observation in matching with men.
It’s possible that there’s lots of spam bots and fake accounts out there that men have to sift through, however if the dating experience for me is similar to other women, I see no reason why a woman would bother with using AI generated responses to talk to men. It is really not hard to make a man talk to me if I wanted him to, but I frequently ignore a lot of messages from some men no matter how hard they try.
It seems to me like the original comment could be easily generalized across genders, without losing any of its substance.
It's a strategy to avoid unintentional gender bias, like using "they" instead of he/she. It also helps the conversation to not get derailed by gender, if that isn't the point one is trying to make.
Oh yes, I was pretty angry. [0]
[0]: The book is "All Marketers Are Liars"