Not just text. Deepfakes are going to ruin voice- and video-based interactions, too. You're only going to be able to trust in-person interaction. And even then, you're going to have to know where the person you're talking to got their information.
Epistemological trust is broken. We're back to "what you have seen with your own eyes".
And that's going to seriously impede progress, because it means the number of people you can learn from is now very small.
But deepfakes only make those things easier; they're not what makes them possible in the first place. Put someone in makeup with a similar voice and you could already fool someone over a video call. Prank calls from people impersonating some politician were a perennial source of entertainment for radio shows and the like.
The trust was misplaced in the first place, but we're now getting a clear demonstration of why. Like someone walking up to your car with a universal remote and unlocking it within 60 seconds.
I would argue that the trust wasn't necessarily always misplaced; one of the lessons of the book "Lying For Money" is that some level of fraud is unavoidable, because the cost-benefit tradeoffs of always checking everything mean it doesn't make sense to try to drive fraud to zero.
However, AI shifts where we sit on that cost-benefit curve.
The internet has grown massively over the last two decades, but maybe AI leading to a contraction would be a good thing. Maybe an end to Eternal September.