Nice work, but sorry, I don’t feel comfortable proxying my LLM calls through a third party (unless that third party is an LLM gateway like LiteLLM or Arch) or storing my prompts in a SaaS. For tracing, I use the OTel libraries, which are more than sufficient for my use case.
We are looking into adding an OTel Collector as the OTel semantics around LLMs mature. For now, many features that are key to LLMOps are difficult to make work with OTel instrumentation because the space is moving quickly. The main thread on this is here: https://github.com/orgs/langfuse/discussions/2509