FDA regulation doesn't encourage manufacturers to look at long-term effects when granting the initial approval. There are various mechanisms for catching bad long-term effects, which, it must be noted, also take a long time to show up, and are often tied to uniquely vulnerable sub-populations who won't necessarily be well represented in clinical trials.
Heck, for some time I took a drug that almost certainly would never have gotten approval if anyone had realized it destroyed the livers of a small number of the people taking it, once the total number was large enough. The FDA black-boxed it and the original company stopped manufacturing it, but many like me who'd passed the danger period and showed no signs of liver problems continued taking a generic version for a while.
If I understand your question, the additional testing is done in the only way that makes sense: on the population at large (the part of it taking the drug, of course). Drug companies and the FDA are pretty sure they understand a drug's risks before the latter approves it, and I can't see it making sense to require even larger and longer formal double-blind trials. The whole process is already so expensive that it's probably killing people on net compared to backing off some.
See GFK_of_xmaspast's link to the FDA page on Vioxx, where the drug company's own monitoring effort pulled the plug before anyone else acted.
I'm asking: in cases where the FDA orders a drug company to conduct post-approval studies and the company doesn't actually conduct them, how often has the drug been pulled off the market?