It's not really stateless. How are you supposed to support SSE or "Streamable HTTP" on your Lambda? Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

The protocol is an absolute mess for both clients and servers. The whole thing could have been avoided if they had picked any sane bidirectional transport, even WebSocket.

> Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

It seems your knowledge is a little out of date. The big difference between the older SSE transport and the new "Streamable HTTP" transport is that the JSON-RPC response is supposed to be in the HTTP response body for the POST request containing the JSON-RPC request, not "some other long-running SSE stream". The response to the POST can be a text/event-stream if you want to send things like progress notifications before the final JSON-RPC response, or it can be a plain application/json response with a single JSON-RPC response message.
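
To make that concrete, here's a minimal sketch of a Streamable HTTP POST handler. This is Express-style illustration, not the official SDK API; the route, helper shapes, and the abbreviated progress params are all mine:

    // Illustrative sketch: the JSON-RPC response goes in the body of the
    // same POST, either as a single application/json message or as a
    // text/event-stream that can carry progress notifications first.
    import express from "express";

    const app = express();
    app.use(express.json());

    app.post("/mcp", (req, res) => {
      const rpc = req.body; // incoming JSON-RPC request
      const result = { jsonrpc: "2.0", id: rpc.id, result: { ok: true } };

      if (req.headers.accept?.includes("text/event-stream")) {
        // Stream: optional notifications first, then the final response.
        res.writeHead(200, { "Content-Type": "text/event-stream" });
        const progress = { jsonrpc: "2.0", method: "notifications/progress",
                           params: { progress: 50, total: 100 } }; // abbreviated
        res.write(`data: ${JSON.stringify(progress)}\n\n`);
        res.write(`data: ${JSON.stringify(result)}\n\n`);
        res.end();
      } else {
        // Plain JSON: one JSON-RPC response message in the POST response body.
        res.json(result);
      }
    });

    app.listen(3000);

Either way, the client gets its answer on the HTTP response to its own POST, with no separate long-running stream required.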

If you search the web for "MCP Streamable HTTP Lambda", you'll find plenty of working examples. I'm a little sympathetic to the argument that MCP is currently underspecified in some ways. For example, the spec doesn't currently mandate that the server MUST include the JSON-RPC response directly in the HTTP response body to the initiating POST request. Instead, it's something the spec says the server SHOULD do.

Currently, in our client-side Streamable HTTP implementation in the MCP C# SDK, we consider it an error if the response body ends without the JSON-RPC response we're expecting. We haven't gotten complaints yet, but it's still very early. For now, it seems better to raise what's likely to be an error rather than wait for a timeout. However, we might change this behavior if and when we add resumability/redelivery support.
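
In rough TypeScript (this is not the C# SDK code; the function and the buffered SSE parsing are simplified illustrations), that fail-fast rule looks something like:

    // Illustrative sketch: fail fast if the POST response body ends
    // before the JSON-RPC response we expect arrives.
    async function postAndAwaitResponse(url: string, rpc: { id: number }) {
      const res = await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json",
                   Accept: "application/json, text/event-stream" },
        body: JSON.stringify(rpc),
      });

      if (res.headers.get("content-type")?.includes("application/json")) {
        return await res.json(); // single JSON-RPC response
      }

      // text/event-stream: scan events for the response matching our id.
      const text = await res.text(); // buffered for brevity; real code streams
      for (const event of text.split("\n\n")) {
        const data = event.replace(/^data: /, "");
        if (!data.trim()) continue;
        const msg = JSON.parse(data);
        if (msg.id === rpc.id) return msg; // the response we're expecting
      }

      // Body ended without our response: raise now rather than
      // waiting on a timeout.
      throw new Error("stream ended without the expected JSON-RPC response");
    }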

I think a lot of people in the comments are complaining about the Streamable HTTP transport without reading it [1]. I'm not saying it's perfect; it's still undergoing active development. Just on the Streamable HTTP front, we've removed batching support [2], because it added a fair amount of additional complexity without much additional value, and I'm sure we'll make plenty more changes.

As someone who's implemented a production HTTP/1, HTTP/2, and HTTP/3 server [3] and also helped implement automatic OpenAPI document generation [4], I can tell you no protocol is perfect. The HTTP spec misspells "referrer", and it has a race condition when a client tries to send a request over an idle "keep-alive" connection at the same time the server tries to close it. The HTTP/2 spec lets the client open and RST streams without the server having any way to apply backpressure on new requests. I don't have big complaints about HTTP/3 yet (and I'm sure part of that is that a lot of the complexity in HTTP/2 is now handled by the transport layer, which for Kestrel means msquic), but give it more time and usage and I'm sure I'll have some. That's okay though; real artists ship.

1: https://modelcontextprotocol.io/specification/2025-03-26/bas...

2: https://github.com/modelcontextprotocol/modelcontextprotocol...

3: https://learn.microsoft.com/aspnet/core/fundamentals/servers...

4: https://learn.microsoft.com/aspnet/core/fundamentals/openapi...


The client is allowed to start a new request providing a session ID, which should maintain the state from the previous request (e.g., by rehydrating it from a shared store, as in the sketch below).
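
Purely as an illustration (the Redis choice, route, and handle() helper are all hypothetical, and this ignores the SSE side), a stateless worker could rehydrate per-session state keyed by the Mcp-Session-Id header:

    // Hypothetical sketch: any worker can serve the request because the
    // session state lives in an external store, not in worker memory.
    import express from "express";
    import { createClient } from "redis";

    const app = express();
    app.use(express.json());
    const redis = createClient();
    await redis.connect();

    // Stub dispatcher for illustration only.
    function handle(rpc: any, state: any) {
      state.lastId = rpc.id;
      return { jsonrpc: "2.0", id: rpc.id, result: {} };
    }

    app.post("/mcp", async (req, res) => {
      const sessionId = req.header("Mcp-Session-Id");
      if (!sessionId) return res.status(400).json({ error: "missing session" });

      // Rehydrate, handle, persist: state survives across random workers.
      const state = JSON.parse((await redis.get(`mcp:${sessionId}`)) ?? "{}");
      const reply = handle(req.body, state);
      await redis.set(`mcp:${sessionId}`, JSON.stringify(state));
      res.json(reply);
    });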

Where do you store this state?
