
Can we not just point LLMs at OpenAPI documents and achieve the same result? All of the example functions in the article look like very very basic REST endpoints.


Exactly. We already have lots of standards for defining APIs (OpenAPI, GraphQL, SOAP if I'm showing my age, etc.). Part of my original "wow, this is magic" moment with AI came when OpenAI released their plugins and showed how you could just point the model at an API spec and it could figure out, on its own, how to use it.

One real beauty of AI is that it is so good at taking "semi-structured" data and structuring it. So perhaps I'm missing something, but I don't see what MCP gives you over existing API documentation formats. It seems like an "old way" of thinking, where we always wanted to define interoperation contracts and protocols, when a huge point of AI is that you shouldn't really need new protocols to begin with.

Again, I don't know all the ins and outs of MCP, so I'm happy to be corrected. It's just that whenever I see examples like in the article, I'm always left wondering what benefit MCP gives you in the first place.


Well, one benefit is the precision and focus of the protocol that can be used to train/finetune LLMs.

More focused training -> more reliable understanding in LLMs.


I hear you but what exactly about MCP is more precise or training-friendly than other approaches? I can think of at least one way that it isn't: MCP doesn't provide an API sandbox the way an Apigee or Mulesoft API documentation page could.


There's no reason it couldn't.


I understand what you're saying, but I'm still not clear why any of this is necessary, or how it benefits LLMs. Another commenter mentioned that MCP saves tokens and is more compact. So what? Then just have the LLM do a one-time pass over a more verbose spec to summarize/minify it.

Any human brainspace spent even thinking about MCP seems to go against the whole raison d'être of AI, which is that it can synthesize and use disparate information much faster, more efficiently, and more cheaply than a human can.


Don't forget HATEOAS if we're listing prior art of self-discoverable APIs!


You can; most MCP servers are just wrappers around existing SDKs or even plain REST endpoints (a minimal sketch is below).

I think it all comes down to discovery. MCP has a lot of natural language written into each of its “calls”, which lets the LLM understand the context.

MCP is also not stateless, but to keep it short: I believe it’s just a way to make these tools more discoverable for the LLM. MCP doesn’t do much that you can’t do with other options; it just makes things easier on the LLM.

That’s my take as someone who wrote a few.

Edit: I like to think of them as RPC for LLMs.
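
To make that concrete, here's roughly what one of those wrappers can look like with the Python MCP SDK's FastMCP helper. This is only a sketch; the server name, endpoint, and tool are invented, and the docstring is the natural-language context the LLM gets to see.

    # Rough sketch: an MCP tool that just wraps an existing REST endpoint.
    # The docstring is what gets surfaced to the LLM as the tool description.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("orders")  # hypothetical server name

    @mcp.tool()
    def get_order(order_id: str) -> str:
        """Look up a customer order by ID and return its status and line items."""
        resp = httpx.get(f"https://api.example.com/orders/{order_id}")  # made-up endpoint
        resp.raise_for_status()
        return resp.text

    if __name__ == "__main__":
        mcp.run()  # stdio transport by default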


OpenAPI definitions are verbose and exhaustive. In MCP tool definitions you can strip out a lot of that extra material, saving tokens.

For example, in [1] the whole `responses` schema can be eliminated; the error texts can instead be surfaced when they actually occur. You also don't need the duplicate JSON/XML/URL-encoded input formats.

Secondly, a whole lot of complexity is eliminated, since arbitrary data can't be sent and received. Finally, the tool outputs are prompts to the model too, so you can shape the output for better accuracy, which you can't do with general-purpose APIs. A rough sketch of what a trimmed-down tool definition might look like follows the link.

[1] https://github.com/swagger-api/swagger-petstore/blob/master/...
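
As a rough sketch (mine, not from the spec), here's what a stripped-down MCP tool entry for the petstore's getPetById operation could carry. The field names follow MCP's tool schema; the description text is invented.

    # Roughly what an MCP tools/list entry could carry for "getPetById".
    # Compare with the OpenAPI operation, which also enumerates the full
    # `responses` schema plus XML and form-encoded variants.
    get_pet_tool = {
        "name": "get_pet_by_id",
        "description": "Return a single pet by its numeric ID.",
        "inputSchema": {
            "type": "object",
            "properties": {
                "petId": {"type": "integer", "description": "ID of the pet to return"},
            },
            "required": ["petId"],
        },
    }
    # Error texts ("Invalid ID supplied", "Pet not found") aren't enumerated up
    # front; they come back in the tool result only when they actually occur.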


So why can't the LLM just take the verbose OpenAPI spec, summarize it and remove the unnecessary boilerplate and cruft (do that once), and only use the summarized part in the prompt?
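
For what it's worth, much of that minification doesn't even need an LLM; a one-time mechanical pass over the spec can drop responses, examples, and alternate content types. A rough sketch (the field selection is just illustrative, and $refs are left unresolved):

    # One-time pass: keep operation IDs, summaries, and parameters; drop
    # responses, examples, and alternate content types.
    import json
    import yaml  # pip install pyyaml

    def minify_openapi(spec_path: str) -> list[dict]:
        with open(spec_path) as f:
            spec = yaml.safe_load(f)
        tools = []
        for path, ops in spec.get("paths", {}).items():
            for method, op in ops.items():
                if method not in {"get", "post", "put", "delete", "patch"}:
                    continue
                tools.append({
                    "name": op.get("operationId", f"{method} {path}"),
                    "description": op.get("summary", ""),
                    "parameters": [
                        {"name": p.get("name"), "in": p.get("in"), "schema": p.get("schema")}
                        for p in op.get("parameters", [])
                    ],
                })
        return tools

    # print(json.dumps(minify_openapi("petstore.yaml"), indent=2))  # hypothetical file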


There is probably an MCP for that


author of the article here.

You can use OpenAPI as well. With MCP, however, there's an AI-native aspect: it reifies patterns that show up when building integrations (Tools, Prompts, Resources, etc.), which helps both building and adoption. It's a different layer of abstraction. There are some things, like Sampling, for which I can't easily find an OpenAPI equivalent.

I definitely barely scratched the surface with my example, but it's true that most MCP Servers I have seen and used are basic REST endpoints exposed as tool calls.

That said, the MCP server layer has design considerations of its own, since it's a different layer of abstraction from a REST API. You may not want to expose all the API endpoints, or you may want to encode specific actions in a way that is better understood by an LLM, rather than by an application that parses OpenAPI.
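
As a concrete (made-up) illustration of that last point: rather than mirroring POST /refunds and GET /orders/{id} as separate tools, an MCP server can expose one intent-level action. Just a sketch; the endpoints, names, and fields are invented.

    # Sketch: one intent-level tool composing two hypothetical REST calls,
    # instead of exposing every endpoint of the underlying API.
    import httpx
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("billing")  # hypothetical server name

    @mcp.tool()
    def refund_order(order_id: str, reason: str) -> str:
        """Refund an order in full. Only use when the customer explicitly asks for a refund."""
        order = httpx.get(f"https://api.example.com/orders/{order_id}").json()  # made-up API
        resp = httpx.post(
            "https://api.example.com/refunds",
            json={"order_id": order_id, "amount": order["total"], "reason": reason},
        )
        resp.raise_for_status()
        return f"Refunded {order['total']} for order {order_id}"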


That’s basically what we did before MCP, and what (for example) LangChain does.

It’s great to have a standard way to integrate tools but I can’t say I have much love for MCP specifically.


The docs are often pretty wrong. It's nice to formalize the glue in a server.



