I would say robots.txt is meant to govern access for interactions initiated by an automated process (i.e., automatic crawling). Since fetching a site with a language model is triggered by a manual human request, it doesn't make sense to me to use robots.txt to block that request.
If you want to keep the content you publish from passing through ClosedAI's servers, block their IP ranges instead of relying on robots.txt.
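For what it's worth, OpenAI publishes the IP ranges its bots and fetchers use (documented at https://platform.openai.com/docs/bots). A minimal sketch, assuming those published JSON endpoints and their current shape (both of which may change), that turns the ranges into nginx-style deny rules:

```python
# Sketch: convert OpenAI's published bot/fetcher IP ranges into nginx
# "deny" rules. Assumes the JSON endpoints documented at
# https://platform.openai.com/docs/bots and the shape
# {"prefixes": [{"ipv4Prefix": "..."}, {"ipv6Prefix": "..."}]}.
import json
import urllib.request

RANGE_URLS = [
    "https://openai.com/gptbot.json",        # GPTBot (training crawler)
    "https://openai.com/chatgpt-user.json",  # ChatGPT-User (user-initiated fetches)
]

def fetch_prefixes(url: str) -> list[str]:
    """Download one published range file and return its CIDR prefixes."""
    with urllib.request.urlopen(url, timeout=10) as resp:
        data = json.load(resp)
    return [
        p.get("ipv4Prefix") or p.get("ipv6Prefix")
        for p in data.get("prefixes", [])
    ]

if __name__ == "__main__":
    for url in RANGE_URLS:
        for prefix in fetch_prefixes(url):
            if prefix:
                print(f"deny {prefix};")  # paste into an nginx server block
```

The ChatGPT-User ranges are the relevant ones here: they cover exactly the human-initiated fetches that robots.txt was never really designed for.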