Read the whole course. There's a wealth of sourced case studies in here of people using AI and the societal pushback against it, which I find interesting to pull from.
Disappointed though that the answer is so nuanced. There aren't hard and fast rules for when to use AI and when not to, but a set of 18 or so proposed principles that should guide our usage, along with a defense of those principles. The principles are at the bottom of each chapter.
Also learned about the Eliza Effect as a term, and I found the passage in Ch14 by Ted Chiang to be really insightful from a general social perspective.
> When someone says “I’m sorry” to you, it doesn’t matter that other people have said sorry in the past; it doesn’t matter that “I’m sorry” is a string of text that is statistically unremarkable. If someone is being sincere, their apology is valuable and meaningful, even though apologies have previously been uttered.