
I find this really interesting. I can see a few different ideas on GitHub for claiming IPs, but I don't see any of them reaching that scale.

https://github.com/search?q=ipv4.games%2Fclaim&type=code&p=1

While running ads is definitely a possibility, reaching 9% of all available IPs sounds like a crazy expensive campaign. I don't know what the ratio of people to public IPs is, but I doubt it's one.


20 million unique users is not that much. I don't understand the claim that this constitutes 9% of all IP addresses. It doesn't. There are about 4 billion public IPv4 addresses. 9% of that would be closer to 300 million.


You're right. As others said in the comments, the 9% is relative to the total active hosts tracked by Censys (~231 million). But I still think it's hard to get that much reach, and unlikely to be an ad campaign. Using numbers from the website below, the cost of getting 20 million impressions would be around $43,200 on the low end for YouTube ads, and can be much higher on other platforms. That also assumes perfect efficiency, where we get exactly one impression per IP, which is unlikely to be the case.

https://www.guptamedia.com/social-media-ads-cost
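
For what it's worth, here's the back-of-the-envelope math; the CPM is just the low-end rate implied by that $43,200 figure, not a number quoted anywhere:

```typescript
// Rough campaign-cost estimate: cost = (impressions / 1000) * CPM.
// The CPM value is an assumption (roughly the low end for YouTube ads);
// real rates vary a lot by platform and targeting.
function campaignCostUSD(impressions: number, cpmUSD: number): number {
  return (impressions / 1000) * cpmUSD;
}

const uniqueIPs = 20_000_000;  // unique IPs claimed on the site
const assumedLowEndCPM = 2.16; // USD per 1000 impressions (assumption)

// Best case: exactly one impression per unique IP, which is very optimistic.
console.log(campaignCostUSD(uniqueIPs, assumedLowEndCPM)); // ≈ 43,200
```

And that's before accounting for impressions that hit the same IP twice or never lead to a claim at all.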


Is it reasonable to assume these aren’t 100% static IP addresses? If so, maybe there’s some double counting going on.


Google suspended my payments account a few months ago without even notifying me. I never received a reason for the suspension, but I suspect it's related to a failed game refund from Stadia, as I see a refund error in my payment notifications. Dealing with Google Support has been a Kafkaesque loop where I have to explain the same thing over and over, only to get the same scripted instructions that take me nowhere. After this dance, they finally say they will "escalate" the issue and then close the case. At this point, I've given up and am in the process of de-googling my life.


It's not open source or self-hosted, but putting it out there: Cloudflare Zero Trust is amazing and free. In my setup, I have a cloudflared tunnel configured on my homelab machine, and I expose individual services without a VPN or opening up my firewall. You can also set up authentication with SSO, and it happens before traffic reaches the backend application, which makes it more secure. This is easy for family and friends to use, because they don't need to set up anything on their side, just go to the URL and log in. https://developers.cloudflare.com/cloudflare-one/connections...
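
For anyone curious, a minimal sketch of what the cloudflared side looks like; the tunnel ID, hostnames and ports below are placeholders, not my actual setup:

```yaml
# /etc/cloudflared/config.yml -- minimal sketch, all values are placeholders
tunnel: <tunnel-uuid>
credentials-file: /etc/cloudflared/<tunnel-uuid>.json

ingress:
  # Each hostname maps to a service running locally on the homelab box.
  # Cloudflare Access (SSO) policies are attached to these hostnames in the
  # Zero Trust dashboard, so authentication happens before traffic ever
  # reaches the backend.
  - hostname: photos.example.com
    service: http://localhost:2342
  - hostname: notes.example.com
    service: http://localhost:8080
  # cloudflared requires a catch-all rule as the last entry.
  - service: http_status:404
```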


I seriously don't understand why people would choose this over not exposing anything at all, except for a WireGuard port. I have my client automatically connect to my home LAN when I'm not on WiFi, and I get access to all my self-hosted services without risking anything. You're relying on a third-party solution that may or may not be made available to government agencies. You also need to trust that Cloudflare doesn't make mistakes, either.

Also, how do you configure Cloudflare for a road-warrior setup? How do you track ever-changing dynamic IPs? As mentioned, all I need is a WireGuard client and I'm golden.


> You're relying on a third-party solution that may or may not be made available to government agencies.

That's a fair point, but for my use case, I feel comfortable enough with CloudFlare given the trade-offs.

> You also need to trust that Cloudflare doesn't make mistakes, either.

I think the chances of Cloudflare making a mistake are much lower than the chances of me or any other individual developer making one.

> Cloudflare for a road-warrior setup? How do you track ever-changing dynamic IPs?

I think you need to read the docs. All of that works without any extra config when using tunnels.


Cloudflare Zero Trust is very good, but I thought you need to have Cloudflare as a man-in-the-middle on your domain for this authentication flow to work? I.e., the TLS certs need to live with Cloudflare.


Yeah, that is how I use it. You can technically host any TCP service, including end-to-end encrypted data, through Cloudflare tunnels, but you need the cloudflared app installed on the client side to access it (SSO still works even in that case). I find having to manage certificates and install cloudflared everywhere too much of a hassle. I understand that proxying through Cloudflare gives them a lot of visibility and control, but I find that risk acceptable for my application.


The post misses the mark on why a stateless protocol like MCP actually makes sense today. Most modern devs aren’t spinning up custom servers or fiddling with sticky sessions—they’re using serverless platforms like AWS Lambda or Cloudflare Workers because they’re cheaper, easier to scale, and less of a headache to manage. MCP’s statelessness fits right into that model and makes life simpler, not harder.

Sure, if you’re running your own infrastructure, you’ve got other problems to worry about—and MCP won’t be the thing holding you back. Complaining that it doesn’t cater to old-school setups kind of misses the point. It’s built for the way things work now, not the way they used to.


It's not really stateless. How do you want to support SSE or "Streamable HTTP" on your lambda? Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

The protocol is an absolute mess for both clients and servers. The whole thing could have been avoided if they had picked any sane bidirectional transport, even WebSocket.


> Each request will hit a new random worker, but your response is supposed to go on some other long-running SSE stream.

It seems your knowledge is a little out of date. The big difference between the older SSE transport and the new "Streamable HTTP" transport is that the JSON-RPC response is supposed to be in the HTTP response body for the POST request containing the JSON-RPC request, not "some other long-running SSE stream". The response to the POST can be a text/event-stream if you want to send things like progress notifications before the final JSON-RPC response, or it can be a plain application/json response with a single JSON-RPC response message.
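
Roughly, a client ends up doing something like this; a hand-rolled sketch for illustration only, with a placeholder endpoint and very naive SSE parsing (real SDKs handle this properly):

```typescript
// Sketch of client-side handling for the Streamable HTTP transport:
// the JSON-RPC response comes back on the HTTP response to the POST itself,
// either as plain JSON or as an SSE stream that may carry notifications
// (e.g. progress) before the final response.
async function sendJsonRpc(request: object): Promise<unknown> {
  const res = await fetch("https://example.com/mcp", { // placeholder endpoint
    method: "POST",
    headers: {
      "Content-Type": "application/json",
      // The client advertises that it can accept either response form.
      "Accept": "application/json, text/event-stream",
    },
    body: JSON.stringify(request),
  });

  const contentType = res.headers.get("Content-Type") ?? "";

  if (contentType.includes("application/json")) {
    // Simple case: the whole body is the JSON-RPC response.
    return await res.json();
  }

  if (contentType.includes("text/event-stream")) {
    // Streaming case: notifications may arrive before the final JSON-RPC
    // response. (Buffering the whole body like this is only for brevity;
    // a real client parses the stream incrementally.)
    const body = await res.text();
    const messages = body
      .split("\n")
      .filter((line) => line.startsWith("data:"))
      .map((line) => JSON.parse(line.slice("data:".length).trim()));
    // The message carrying a result or error is the JSON-RPC response.
    return messages.find((m) => "result" in m || "error" in m);
  }

  throw new Error(`Unexpected Content-Type: ${contentType}`);
}
```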

If you search the web for "MCP Streamable HTTP Lambda", you'll find plenty of working examples. I'm a little sympathetic to the argument that MCP is currently underspecified in some ways. For example, the spec doesn't currently mandate that the server MUST include the JSON-RPC response directly in the HTTP response body to the initiating POST request. Instead, it's something the spec says the server SHOULD do.

Currently, for my client-side Streamable implementation in the MCP C# SDK, we consider it an error if the response body ends without a JSON-RPC response we're expecting, and we haven't gotten complaints yet, but it's still very early. For now, it seems better to raise what's likely to be an error rather than wait for a timeout. However, we might change the behavior if and when we add resumability/redelivery support.

I think a lot of people in the comments are complaining about the Streamable HTTP transport without reading it [1]. I'm not saying it's perfect. It's still undergoing active development. Just on the Streamable HTTP front, we've removed batching support [2], because it added a fair amount of additional complexity without much additional value, and I'm sure we'll make plenty more changes.

As someone who's implemented a production HTTP/1, HTTP/2 and HTTP/3 server [3], and also helped implement automatic OpenAPI document generation [4], I can say no protocol is perfect. The HTTP spec misspells "referrer", and it has a race condition when a client tries to send a request over an idle "keep-alive" connection at the same time the server tries to close it. The HTTP/2 spec lets the client just open and RST streams without the server having any way to apply backpressure on new requests. I don't have big complaints about HTTP/3 yet (and I'm sure part of that is that a lot of the complexity in HTTP/2 was handled properly by the transport layer, which for Kestrel means msquic), but give it more time and usage and I'm sure I'll have some. That's okay though; real artists ship.

1: https://modelcontextprotocol.io/specification/2025-03-26/bas...

2: https://github.com/modelcontextprotocol/modelcontextprotocol...

3: https://learn.microsoft.com/aspnet/core/fundamentals/servers...

4: https://learn.microsoft.com/aspnet/core/fundamentals/openapi...


The client is allowed to start a new request providing a session ID, which is supposed to maintain the state from the previous request.

Where do you store this state?
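
On a serverless platform every request can land on a fresh worker, so that state has to be externalized on every call. Something like this minimal sketch, where the Mcp-Session-Id header is from the spec but the store interface is a hypothetical stand-in for Redis/DynamoDB/whatever:

```typescript
// Hypothetical sketch: per-session state has to live in an external store
// keyed by the session id, because the worker itself remembers nothing
// between requests. SessionStore is a made-up interface, not part of MCP.
interface SessionStore {
  load(sessionId: string): Promise<Record<string, unknown> | null>;
  save(sessionId: string, state: Record<string, unknown>): Promise<void>;
}

async function handleMcpPost(req: Request, store: SessionStore): Promise<Response> {
  const sessionId = req.headers.get("Mcp-Session-Id");
  if (!sessionId) {
    return new Response("Missing Mcp-Session-Id", { status: 400 });
  }

  // Load whatever previous requests left behind...
  const state = (await store.load(sessionId)) ?? {};

  // ...handle the JSON-RPC message against that state (elided)...

  // ...and write it back before returning, paying an extra round trip to
  // the store on every single request.
  await store.save(sessionId, state);
  return new Response(JSON.stringify({ ok: true }), {
    headers: { "Content-Type": "application/json" },
  });
}
```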

