Perhaps, but language is the common denominator in a multi-model world. E.g., I pass the GPT output into other models that are fine-tuned for that subdomain. You could do embedding-to-embedding conversion, but I'm not sure it's worth the effort.
Imagine if OpenAI made GPT3's final hidden states available via an API ("GPT3 deep sequence embeddings v1.0"), next to each generated text token: [(text_token, deep_emb), (text_token, deep_emb), ...]. You and anyone else could build apps on top. Those hidden states would incorporate much more, and much richer, information than the text. Higher-level models could be trained to act on such information!
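For concreteness, here's a rough sketch of what building on such an endpoint might look like: fetch the per-token hidden states, pool them into a sequence vector, and train a small domain-specific model on top. The URL, response shape, and field names are all invented for illustration; no such API exists.

```python
# Hypothetical sketch of consuming a "(text_token, deep_emb)" stream from an
# imagined "GPT3 deep sequence embeddings v1.0" endpoint and training a
# higher-level model on the hidden states instead of the raw text.
import requests
import numpy as np
from sklearn.linear_model import LogisticRegression

API_URL = "https://api.openai.example/v1/deep-embeddings"  # made up for illustration

def fetch_token_embeddings(prompt: str, api_key: str):
    """Return a list of (text_token, deep_emb) pairs for a prompt."""
    resp = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {api_key}"},
        json={"prompt": prompt},
        timeout=30,
    )
    resp.raise_for_status()
    # Assumed response shape: {"tokens": [{"text": ..., "embedding": [...]}, ...]}
    return [(t["text"], np.array(t["embedding"])) for t in resp.json()["tokens"]]

def sequence_vector(pairs):
    """Mean-pool the per-token hidden states into one fixed-size vector."""
    return np.mean([emb for _, emb in pairs], axis=0)

def train_downstream(prompts, labels, api_key):
    """Train a simple domain-specific classifier on pooled hidden states."""
    X = np.stack([sequence_vector(fetch_token_embeddings(p, api_key)) for p in prompts])
    clf = LogisticRegression(max_iter=1000)
    clf.fit(X, labels)
    return clf
```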