Keywords? These are embedding models. CLIP maps those phrases to a point in the shared embedding space, marking the region you want to avoid. The "keywords" don't need to appear anywhere in the image dataset.
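To make that concrete, here's a minimal sketch of embedding a negative phrase without touching a single image. I'm assuming the Hugging Face transformers CLIP wrapper and the openai/clip-vit-base-patch32 checkpoint purely as an example; any CLIP variant with a text tower works the same way:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

# Example checkpoint (an assumption, not a prescription).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Embed a negative-prompt phrase directly; no dataset image is involved.
inputs = processor(text=["bad anatomy"], return_tensors="pt", padding=True)
with torch.no_grad():
    text_emb = model.get_text_features(**inputs)

# Normalise so it lives on the same unit sphere as the image embeddings.
text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)
print(text_emb.shape)  # (1, 512) for this checkpoint
```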
So the problem with that is that you're visualising the space using only points that exist in the image dataset. The language embedding carries extra information that comes from language and isn't contained in the images.
It handles "bad", and it handles "anatomy". Even if no single image covers "bad anatomy", composing the two is exactly what language embeddings solve for.
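You can check that compositionality directly: embed the parts and the phrase, then compare. Again just a sketch against the same example checkpoint:

```python
import torch
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

phrases = ["bad", "anatomy", "bad anatomy"]
inputs = processor(text=phrases, return_tensors="pt", padding=True)
with torch.no_grad():
    emb = model.get_text_features(**inputs)
emb = emb / emb.norm(dim=-1, keepdim=True)

# The phrase gets its own point in the space, related to but not
# simply the average of its parts.
avg = (emb[0] + emb[1]) / 2
avg = avg / avg.norm()
print("cos(bad anatomy, avg of parts):", (emb[2] @ avg).item())
print("cos(bad anatomy, bad):         ", (emb[2] @ emb[0]).item())
```

If the phrase were just its parts averaged, the first similarity would be 1.0; it isn't, because the text encoder contributes structure that no single training image had to supply.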