Ask HN: What are the next internet infra problems?
106 points by vabahe7646 on April 10, 2022 | 223 comments
There are multiple companies that were born to solve specific infrastructural problems of the internet (e.g. Equinix, Akamai). Looking at the way internet usage has evolved, what kinds of infra challenges do we have to face, now or in the future?

[books/papers suggestions are welcome!]



The other 3.5B who aren't online yet. They're the 20 at the end of the 80:20 solution we deployed for all the metro/urban and economically exploitable agricultural locations. They're the ones with zero persistent power and infrastructure unrelated to telecoms, who share one Honda generator with a village, who have to walk 20km to get antibiotics, who pay middlemen a cut of every transaction, who live in debt bondage.

The ITU made a high-frontier claim that connecting them was its core mission over a 10-20 year window. That claim was made 10 years ago. I'm not seeing strong evidence it's being solved, aside from trickle-down re-purposing of old 3G systems and Chinese investment in not-very-good infra to offset their coltan mining in Africa.

An acquaintance lives 10km outside metro Bangkok. It's an amazing city, and it's as plugged in and switched on as anywhere else in Asia; the rat's nest of wires is a testament to ad-hoc hackery. 10km outside the city margins, it's a dead zone for high-speed service. And Thailand has a huge rural population.

Rinse and repeat for Africa.


There's also a huge difference between fiber-to-the-whatever and the 10-20 year old infra in US suburbs being slowly upgraded to gigabit DOWN only. The suburbs are often captured by anti-competitive agreements, with little enforcement oversight or forethought by the small cities. Maybe (I personally suspect) kickbacks in various forms too.

I agree that the 50% of the population with even less needs things too, but it's myopic to think we're even as good as the rosy outlook suggests. Helping to raise the local industrial and civil capacity of other nations would, in my mind and opinion, also make them nicer places to live and thus more likely to have happy citizens with a stable and responsible government who can help make the world a better place overall. (Surely there's been no study on this, but I also theorize this would be the most net-cost-effective way of battling terrorism, since if there's something positive to live for you're less likely to be angry and hateful towards others.)


I think solving the "last mile" problem in the US is our greatest problem, especially in rural areas. Too many state and local governments have been paid off by lobbyists to pass laws to block any competition from offering cheaper and better options.

Too many people are stuck with slow, expensive, and unreliable cable and/or DSL ISPs. Some have no choice whatsoever. Some others get to choose only from two equally awful options. We need legally available competition EVERYWHERE.


I think it's a problem even in urban areas. A friend of mine is working on something called Flume: https://www.flumeinternet.com/ Their funding model (at least right now) is basically to serve customers where the government will pay for it. Many of their customers are getting home broadband for the first time in their life. (They remark that it's fun when their friends come over and they can give them the wifi password.) This is all happening in New York City; some people are getting Internet access for the first time in their lives, in the largest city in the richest country in the world.

I can only imagine how fucked rural America is.


What I don't quite understand is how these rural areas got electricity. If it's so expensive to run something to a rural area, who ate the cost of providing grid access? Or is cable/fiber just significantly more expensive per mile compared to electricity?


The federal government ate the cost. FDR passed the Rural Electrification Act in 1935 as a part of the New Deal, which gave large loans to fund electrifying rural parts of America.

https://en.wikipedia.org/wiki/Rural_Utilities_Service


As another poster noted, the Feds funded a lot of the cost of electrification. There is starting to be some action on the internet front. But...

Here are the economics for a fiber-optic deployment in 2021 in a rural town of ~800 premises, where about 60% got service. They charge customers about $100/month for symmetric 75/75 Mbps internet plus phone.

It costs about $40K per mile to run the fiber down the road on existing utility poles. It costs between $2K and $4K to run the drop from the utility pole to the home.

Those two numbers (miles of road, number of premises) get you in the ballpark of the cost of deployment in a town/county. More premises per mile obviously decreases the cost per premise - that's why utilities prefer dense areas (cities).
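For anyone who wants to play with that ballpark math, here's a rough Python sketch. The per-mile and per-drop figures are the ones above; the road mileage and take rate in the example are placeholders I picked, not numbers from this particular build:

    # Back-of-the-envelope fiber build: ~$40K per mile on existing poles,
    # $2K-$4K per drop from pole to home (using $3K as a midpoint).
    def ballpark_cost(road_miles, premises, take_rate=0.6,
                      cost_per_mile=40_000, drop_cost=3_000):
        connected = int(premises * take_rate)
        backbone = road_miles * cost_per_mile      # fiber along the roads
        drops = connected * drop_cost              # drops to subscribing homes
        total = backbone + drops
        return total, total / connected            # total and per connected premise

    # Hypothetical town: ~800 premises, 60% take rate, ~40 miles of road.
    total, per_premise = ballpark_cost(road_miles=40, premises=800)
    print(f"total ~${total:,.0f}, ~${per_premise:,.0f} per connected premise")

More premises per road mile pushes the per-premise number down fast, which is exactly why dense areas get built first.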


Many areas in the US still don’t. I remember living on a farm as a child, circa 1998, and our neighbors had no power or running water. It wasn’t lack of wanting it, “the money just ran out.” This was the mountains of Virginia, near Roanoke.


> Many areas in the US still don’t.

At the scale of the US, that's a blatantly false statement.

It's < 50,000 people out of 330 million. Or less than 0.02% of the population. Most of that is because they're living in quite isolated locations far from any population.

The access figure is so high that the World Bank lists the US at 100% access to electricity, along with all the other affluent nations. Brazil is at 99.8% for reference, Vietnam at 99.4%, India at 97.8%.


Many, as in numerous.

Areas, as in places you can live.

I wasn’t saying it was whole chunks of the country. Just that there are many places, not necessarily in the same place and not necessarily enough for anyone to care about. The latter kind of being the problem…


The New Deal electrified America.


No, fiber is actually significantly cheaper.


They formed cooperatives.


As a society we had a much higher tolerance for risk when the electrical grid was being built. We were perfectly fine with the idea some people would die building it and some people would get electrocuted using it. The world we live in now has much lower tolerance for risk. You can't even screw a wall mount bracket in without carrying liability insurance.


A major reason for needing to have so much liability insurance in the United States is the insane cost of our medical care.


Society's risk tolerance varies with domain. Electrical power systems are well understood and thus we expect them to be as safe as practical.

Contrast that to the reaction to Tesla's autopilot that will happily drive you into an embankment at fatal speeds.


Electric power systems are much more dangerous than communication networks.


... and?


Unfortunately that problem space is so large (every new line of fibre has how many stakeholders?) that a single individual or organisation might not be able to have any impact here. Which I think is the root of this question.


If there's above-ground electricity, there should be rules about getting pole access. And from what I understand those poles, and anything underground, work by getting easements rather than negotiating separately with every individual property owner.


Fibre op engineer (Canadian) here - there are rules, but those are a double-edged sword; the proper paperwork to get on a pole is a large part of the total cost to attach. Ever see the mess of cables in a developing country? The laxer rules that lead to those tangles substantially reduce the cost of building out internet.


The director of the NTIA's $70B broadband expansion program is taking questions Wednesday in a fireside chat.

https://discuss.broadband.money/c/broadband-grant-events/ala...


It might be our greatest problem, but it's not a technical problem.


There’s still no good way for me to write an open source web application and have its users bear the cost of running it.

This is a major regression from open source desktop software, and IMO is the reason open source web applications haven’t taken off more.


Forget open source—there never was a simple way for users to self-host web ~servers~ applications that gained any traction whatsoever.

Opera gave it a go more than a decade ago with Opera Unite, but that went nowhere. Like a sibling comment mentioned, Sandstorm was a push in the right direction, but it also eventually failed. Docker and current container solutions are still way too complicated for the average user.

The fact that the web's founders focused so much on making the consumption of web content easy, rather than producing/serving it, is part of the reason the web is so centralized today.


This is a weird take on history and divorced from the reality we live in even now.

The first ever web browser, written by Tim Berners-Lee at CERN on a NeXT computer, was a web editor as well.

He envisioned that every web user would be a web creator. He is quoted many times as saying this.

Today: you can still stand up a web server at your house and forward the ports on your router. Unless you’re behind a carrier grade NAT (as those are becoming much more common now that IPv4 is exhausted).


Seems like you agree that there isn’t a simple way to do it.

The browser-as-editor didn’t even survive into the next generation of web software — Mosaic already was read-only. And that’s because Berners-Lee’s vision lacked any unified solution to hosting the web creations.


There was https://en.wikipedia.org/wiki/Amaya_(web_editor)

I used that until about 2010 for clicking together concepts of pages, then refining them manually, if at all.


> Today: you can still stand up a web server at your house and forward the ports on your router

And you just lost 97%+ of the Internet userbase. They have no idea what you just said.


Yea. Talk about being "divorced from reality".


Then we should really be focusing on getting IPv6 everywhere so that the firewall can be a single click "expose website from this computer".


It's not just a matter of exposing the service to the internet. That's only part of the solution. The usability difference between consuming web content via browsers and creating/exposing it via web servers is vast. Most web users wouldn't even know where to start with today's tooling, and what's worse: even if such tooling existed, you'd have to convince them why they should use it in the first place. This is partly why adoption of any decentralized platform is an uphill battle.


Idk about web hosting and web servers, but a significant portion of the population at some point used torrenting software.

I feel like at the very least it should be possible to make tutorials on how to make and seed torrents (and why you might want to), and people can send or post torrent files, or at least magnet links, in places that already exist, like Telegram (the app, not telegraphs from before phones were common), email, SMS text messages, forums, QR codes, or anywhere else you can post at least a magnet link.

I feel like if people want badly enough to share and receive large files, they can learn the basics of using torrent programs. (I've also seen more than one Android app for it - LibreTorrent being one of them, and you can get it on the free/libre/open source F-Droid store if you want to.)

Another thing to consider is that IRC and Matrix exist, and I think they claim to be peer-to-peer, but I'd suggest taking a good look at whether that's true and what their limitations are. IRC seemed particularly limited when I tried it years ago (what do you mean it doesn't keep any history? I can't see what the convo was right before I showed up? That's dumb. Oh, and I can't see any messages somebody tried to send me if I wasn't online and in that IRC room at the time? That's super dumb. And where's the edit button? You mean I can't edit what I sent if I made a typo or sent a message before it was ready? And I can't even delete a sent message so I can redo it and replace the messed-up one? What decade was this written in? Oh, decades ago? OK, maybe that makes some sense, but we have standards for our communication apps today.) It was also very obviously made by people with a command-line-first mentality: you either know what command to tell it or you don't, and if you don't, and you don't understand whatever help text there may or may not be, you're at the mercy of whoever introduced you to IRC, especially if you're not used to thinking the way hardcore/old-school programmers think (which makes the help text less than helpful).

No, I think those aren't exactly what I'd be looking for


Sandstorm was a beautiful idea. I hope its time will come.


I think we need something like micropayments. Take lichess, for example: it's the biggest open source chess website, completely ad-free, and runs on donations. If you divide the total monthly cost to run the servers by the number of games, you get a tiny fraction of a dollar cent. I don't remember the exact number, but it was on the order of 1k chess games per $1. If you could charge 0.1 cents per game, then the website could run without any extra donations.

Another idea that comes to mind is that the server side could somehow be run by the connected users. Users have storage + CPU cycles. Similar to torrents, as long as there are enough users (seeders), the game server would continue to work. Lichess, for instance, has tens of thousands of players online 24/7.
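For what it's worth, the arithmetic is easy to play with. The figures below are placeholders I chose to match the roughly 1k-games-per-dollar ratio above, not lichess's actual numbers:

    # Toy micropayment math: server cost spread over games played.
    monthly_server_cost_usd = 30_000     # hypothetical
    monthly_games = 30_000_000           # hypothetical
    cost_per_game = monthly_server_cost_usd / monthly_games
    print(f"~${cost_per_game:.4f} per game")              # ~$0.0010, i.e. 0.1 cents
    print(f"~{1 / cost_per_game:,.0f} games per dollar")  # ~1,000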


"How to pay" always seemed like the easy half of the problem to solve, "how to manage what should be paid" seems much harder as nobody wants to deal with sorting through fraction of a penny payment approvals. Is 0.01 cent per game acceptable? 0.05? If you accept at 0.01 and it goes to 0.05 do you have to re-accept? If everyone decides sets 0.1 cents as an auto-approve boundary does that mean every site is now going to try to charge 0.0999 cents? Does some centralized entity try to set these rates instead? How does the system protect against the equivalent of collect call scams? Does it protect against that in a way that doesn't limit actually using a service very quickly?

It's like the permissions problem (really easy to prompt, really annoying to do so, really really bad to just assume yes or no all the time) except worse.

In regards to distributed serving using client endpoints, it has a strong tendency to be more work, less reliable, and not as scalable (see PeerTube). What has seemed to work is offloading as much of the functionality for a given user as you possibly can to that user's device. E.g. want to play a game against the computer? Run the chess engine WASM bundle on your device instead of the server (lichess does this).


> "How to pay" always seemed like the easy half of the problem to solve

With the cut payment providers currently keep, it still seems like an unsolved problem for micropayments.


A model similar to donations that I feel has not been fully explored is charging for cosmetic items. Reddit essentially does this now by allowing people to buy Reddit gold, silver, or other awards.

This revenue model has turned out to be quite profitable in gaming and creates an experience where most people can use the software for free but a long tail of users spend lots of money to have icons next to their name or elsewhere on the site.

For chess you can imagine how this might work, for other software it is not always as clear.


https://www.businesswire.com/news/home/20220127005808/en/Fed...

Still has a 5.5 cent overhead, but that's a lot better than what's currently on offer.


The micro donations idea kind of exists for the use case you mention in the form of the service Flattr.


Sandstorm.io was basically doing that, and it’s the direction I’d want to see as a user. Unfortunately it isn’t attractive to application providers who want to monetize their applications, and without enough applications it’s also not attractive to users. Maybe it needs to be combined with a platform like NextCloud.


There's the SAFE Network, which aims to do exactly that. It's pay-on-put. You as a developer would have to pay a one-time cost to store the app on the network, but that should be quite cheap. Users would then pay to store their own private data through the (web)app. It uses its own browser and protocol, as sites are stored on a peer-to-peer network.

It's still in somewhat unstable test versions, but I think there's hope it can be out within the next year or so.



There's actually a lot of open source apps that have this property in a particular category: web3 apps. Users (or other intermediaries, not necessarily the app dev) pay the cost of transactions, and thereby pay the cost of the data storage / computation layer.

HN doesn't seem to like web3 very much yet, but one of the most positive innovations it brings is giving open-source apps a direct business model (instead of the usual "pro" and service org or hosted-version model). There are plenty of open-source apps making hundreds of millions or billions in revenue in this space!

Even utility open-source projects receive significant funding through projects like Gitcoin (https://gitcoin.co/).


web3 apps do have high transaction costs, but that's not really reducing the costs that the web3 app would have otherwise paid to have a local database. So it costs money, but the developer doesn't really earn anything because of it.


> but that's not really reducing the costs that the web3 app would have otherwise paid to have a local database

I'm not sure what you mean? It literally makes the cost to the developer 0 per user.

And the original developer can absolutely earn money this way. In fact, in web3 it's possible for frontend devs to earn money by creating better frontends to existing protocols utilizing referral fees that can be specified in calls to the protocol!


This is unfortunate because a couple of attempts get _so close_.

You've got public docker repositories, terraform, aws service catalog (or cloudformation), all of which could fill this gap directly or via their own services if they had a slightly more beginner-accessible workflow...


How do any of these things help?

"docker repositories, terraform, aws service catalog (or cloudformation)"

These are all part of the problem, and can never be the solution.


What is the problem they are a part of, and what makes them part of that problem? They don't seem problematic to me.


You can't solve problems of complexity by piling on more and more layers.

Layers need to be collapsed


Terraform is roughly the equivalent to a Makefile. If anything it reduces complexity by encouraging operations to be run in a reproducible and testable fashion. Is it a layer? Sure, but little more than a shell script is a layer.


Citation needed.


You can.

If you write it in Go and package it as a single binary.

You can't if you write it in a scripting language that requires tons of programs to already exist on the system (a specific version of the language interpreter + a database server + other servers like Redis and memcached, etc.).

This is a problem with programming languages, not with computer systems.

Computer systems already allow you to package and ship programs as self-contained units. That's their default mode of operation.

The proliferation of scripting languages that require an entire environment to be configured before they can run a program is something that programmers have done to themselves.


> If you write it in Go and package it as a single binary.

> You can't if you write it in a scripting language that requires tons of programs to already exist on the system

You are close, but you are not exactly right.

If you consider a real application that creates some value (not just money; it may be community or scientific value, for example), you will need to depend on some libraries or tools.

This is how our civilization works: we don't create everything from scratch every time; instead, we take ready-made parts and construct something new from them, possibly adding new parts of our own.

Some parts can be compiled/linked statically into your big single-file package, some only dynamically.

The difference with scripting languages is just that they usually ship with a rich standard library, and in many cases it is possible to cut out the unused parts.

For example, Python now even ships with the SQLite database included; sure, it's not a tiny package, but it is used in many applications.

To be more precise, Python for Android (p4a), when compiled as a hello-world, results in an .apk of approximately 6 MB. As I said, that includes the Python standard library compiled with Cython.

Sure, if you compile a hello-world natively, you'll get something around 200 KB, but if you add any library it will grow, and for a real application 6 MB is not too much these days.

If you add functionality to the p4a hello-world, it grows very slowly, because the compiled Python representation is relatively compact.

BTW, it could be an interesting project to add to Python/Cython the ability to create a single binary package, so that it isn't decompressed into a filesystem hierarchy but reads everything from that binary as needed.


> If you consider a real application that creates some value (not just money; it may be community or scientific value, for example), you will need to depend on some libraries or tools.

> This is how our civilization works: we don't create everything from scratch every time; instead, we take ready-made parts and construct something new from them, possibly adding new parts of our own.

This is very dubious.

If all you do is compose parts that already exist, then anyone can do it and your value add is almost nothing.

Sure, if you try to build everything from zero, it will take you forever.

But obviously, if you do what I said (what you quoted me), you are not building from scratch.

First of all, you are using the Go language and compiler. Second, you are relying on existing operating systems (likely Linux) to run your program. Third, you are relying on the internet existing so that your program can reliably serve content to users. Fourth, you rely on the fact that your users have a reliable web browser that can display this content. Fifth, you are free to use any library in your source code.


> If all you do is compose parts that already exist, then anyone can do it and your value add is almost nothing.

Looks like you believe in Marx's theories, that anybody could JUST create value. I mean, in such markets you really could start selling from zero or with very limited resources.

But this is true only in the very early stages of a market, when competition does not matter. In developed markets, competition is very significant, so to have even moderate success you have to use external resources - grants, credit, crowdfunding, etc.

What Marx didn't say is that people divide into two big groups - passionaries (mostly entrepreneurs) and ordinary humans. They differ in one very important way: passionaries have an innate need to change the world (ordinary people don't have that need at all), and at some point in a passionary's life, the earlier the better, they switch to the famous cycle: 1. accumulate resources; 2. invest those resources into some project to change the environment; 3. when the project from step 2 approaches some endpoint, with strong signs of success or failure, return to step 1.

Ordinary people just satisfy needs at mostly the lowest levels of Maslow's pyramid; they don't accumulate resources, they don't try to invest, and so they don't have any success with projects, even though I agree that many projects need very little in the way of resources to become a profitable business.

The only exception is when an ordinary human faces a very easy-to-enter and extremely profitable opportunity that involves no risk or responsibility; then greed can defeat laziness.

But such things don't happen often. As I said before, most real opportunities involve risk, require building a circle of trust (so people will give their resources for free), or mean getting resources somewhere with conditions attached, like credit from financial institutions.

So, returning to technical matters: you are right that totally free things exist that anybody could use, but in most cases they are not enough to create value; in many cases they are useless, or they are used to create a barrier to entering the business.

I must even emphasize: I've been involved in a few open source projects, and in nearly all the successful ones the lead constantly made decisions about which features would not be included in the free version, to motivate people to pay for them.

The best-known example of a free project created as a barrier is the Eclipse IDE, which destroyed a really big market that supported a lot of businesses - Borland, Watcom, Tiny C, etc. - and when Eclipse appeared, they became unprofitable.

So, life is struggle, but I repeat - technically you are absolutely right, in that nearly anything is possible if somebody else pays for it :)


I love how you make a faulty assumption and then go on a huge rant disproving it as if that matters.


There is probably room for a consumer appliance (like a souped-up NAS) that puts a fascia and marketplace around Docker, letting end users run self-hosted apps. I know QNAP (at least) tried something similar to this, but it seemed to be fairly constricted.

Or to take it further, the same thing running in the cloud, but costs could pretty easily blow out of proportion with end-users not being fully aware of the costs of their actions.


I think that running a marketplace application on DigitalOcean is pretty easy (done in a few clicks, but depends on the application). If the image contains common sense security measures, then for the average user this is safer than a 3rd party SaaS.


A theory: When declarative IaaS gets to a certain point, we can distribute the cloud hosting version of a one-click install. Then use federated auth so people can choose any instance of the app to host their identity.


Isn't the solution just to use ads to have users fund the site? What were you thinking of?


That requires running a pile of nonfree/intrusive tracking code, and many people block this kind of thing


How is peertube these days


I think s/he wants something that the users hosts together.

Ads can be blocked, reduce the value of your service, and don't pay enough to cover infra costs unless you have many users.

In comparison - the resources needed to run a desktop app is 100% provided by the user.


What's wrong with subscription fees?


One of my big realizations of 2021 is that this is exactly what blockchains can provide. You can have permanent hosting of your website on Arweave[1], and interactions with the website could save your state on Ethereum or similar. You pay once to host the website, and then it's free to browse, and users pay whenever they want to change the state (e.g. post something on your website). Ethereum is like this big world computer that anyone can publish to and users pay to interact with. You don't have to worry about hosting or uptime, and it will be up there forever.

[1]: https://www.arweave.org


How large is ETH overhead? I expect it to be atrociously massive.


It's massive right now, yes. Especially because of proof of work. It will get better, but I don't think we can expect it to be as good as everybody hosting their own services in the cloud. That is a big disadvantage of Ethereum, but I still think it is pretty interesting that it essentially handles devops for you forever.


Interesting question. Do we still have Internet infrastructural problems left?

Akamai solved POPs (points of presence). Equinix solved DCs. Both are marching towards being table stakes in the context of internet infrastructure (not business models). We have lots of under-sea cables / international expansions ongoing and planned. And it is now more of a cost-efficiency problem than an infrastructural problem.

We have a decent Ethernet roadmap [1]: Terabit Ethernet, petabit under-sea cable by 2030. If anything, I see the only internet infrastructure problems being closer to the consumer / client side of things, where fibre cables are not being deployed. But I sense the pandemic has changed a lot of perspectives on fast internet, and governments are now willing to put more pressure into making FTTH a requirement.

If we look at mobile, carriers were even a little too optimistic in their data-usage projections. 5G proved sufficient in terms of tower capacity, with enough headroom for expansion without requiring small/nano cells.

It might be a different set of infrastructural problems, but more regulation of the internet on a per-country / per-jurisdiction basis would require internet infrastructure to adapt to those scenarios.

[1] https://ethernetalliance.org/technology/roadmap/


Dealing with denial-of-service attacks in a way that doesn't require owning more bandwidth than the attacker can saturate would be something to look into.


Well, that would have to be either not having a link for them to try to saturate (ie, edge-distribute your stuff as CDNs do), or preventing them from sending packets to you (which means telling other people's routers to run your packet-filtering code, which I'd think might be a bit of a hard sell).


That sounds like a business opportunity for a service provider specializing in packet filtering.


So cloudflare?


Please prove you are human. Not what we really want from the internet, but thanks to these companies it's what we have to endure. As a consequence nothing is automation-friendly, stupid netblocks are in place on many sites, and an array of VPNs is almost a requirement now just to obtain information.


Cloudflare is working on getting rid of CAPTCHAs almost entirely - https://blog.cloudflare.com/end-cloudflare-captcha/

That blog post was released on April 1, but it isn't a joke (Cloudflare announces things that seem crazy but are real on April 1 instead of doing a joke)

Disclaimer: I'm an engineer at Cloudflare, but not on a team related to this.


The captchas are annoying, but they're not the only thing Cloudflare does. Cloudflare also has anti-DDoS products that don't involve captchas, for example their layer 3 Magic Transit product.


Of course there's always the problem of ISPs, big companies, and others being assholes; there's at least a category of laws that should be gotten rid of; people and society need to be nicer in general; and everybody needs to stop treating digital things that are abundant as if they're scarce (get rid of artificial scarcity for digital things and embrace the opportunities that abundance brings).


Tor is slow and unpopular; stuff like that and content-addressable protocols like IPFS are probably where the next problems are.


How can we get the layperson to run a "homeserver" to host all their data locally, backed by a strong PKI infrastructure?

30 years ago, people would've said the same things about routers, so I think it's possible with the right ui/incentives


>I think it's possible with the right ui/incentives

I want that, but for the masses of tech-illiterate average Joes out there, it's tough to compete against the sheer convenience of "tap next to trust big tech with all your private data and sync it all in the cloud" that you get when you unbox your iPhone/Android. And for most people their phone is their primary computing device now, so their lives are tied to those ecosystems, and we've been conditioned for over a decade to just give our private data to the phone ecosystems without asking questions, because everything is so convenient and ignorance is bliss.

Trying to get average consumers off the big-tech ecosystems at this point is like trying to unplug people from the matrix. It's nearly impossible, unless some new EU-style regulations break up these monopolies first so that third party alternatives can compete on feature parity.


> tap next to trust big tech with all your private data and sync it all in the cloud

Consumer NAS management should be way easier. Why can't I tap my phone to my NAS to pair and then go anywhere in the world with a network connection?


Because

1) most consumers have no idea what a NAS is, and even when they do, consumers have been conditioned to expect that all their data is automagically beamed to the Apple/Google cloud whenever there's an internet connection, without any user involvement beyond entering their ID when they unbox their phone for the first time, so it's impossible for a third-party device or service to compete with this level of out-of-the-box integration and convenience

2) NAS devices aren't made by Apple or Google, so NAS integration into these ecosystems is a second-class experience at best, and Apple and Google will never make NAS devices, as they're incentivized to get you to pay for their cloud storage subscriptions. Plus, as a cherry on top, this way they can silently data-mine you as well.

Basically the industry is moving, or we can argue that it has moved already, towards subscriptions, where you never really own your music/movies/data but have access to it as long as you pay your monthly/yearly fee, because this is so much more lucrative for big-tech than getting you to buy commodity HW like a NAS and physically owning your data.


EU-style regulations will dramatically decrease the likelihood of the average Joe doing it themselves: those regulations will improve privacy, data-control, and other requirements for big tech, so the average Joe will feel even safer using big tech.


The first problem to solve would be getting a symmetric fiber connection to their home so they can actually upload at more than a highly volatile 5 Mbps that is probably split amongst the whole neighborhood. Second would be an IPv6 address, to avoid dealing with CGNAT and to make the ISP as dumb a pipe as possible.


Why? Honest question.

I fully appreciate owning your own data and hosting it somewhere, but no idea why we need to host anywhere but hyper-connected data centers.

I’d like to own my social graph, my profile, my permissions for who can contact me and read my data. But no reason for that to execute on my phone or home server. Have service providers do it, compete, scale and specialise. Let me host at home if I want to, but that doesn’t feel like the default we need.

But prove me wrong. I like learning!


Because it's my data. I should decide who can access it, when, and on what terms. Relying on someone else to do it for me is an unnecessary middleman that exists only because web developers haven't built a technical solution simple, secure, and reliable enough for everyone to use. For goodness' sake, we've had to come up with laws for how companies can use our data instead of solid technical solutions that address the problem.

The focus for the past 30 years has been on simplifying web content consumption. Everyone knows how to use a web browser. Why hasn't there been a similar push to make serving web content easier? There have been some attempts (Opera Unite, Sandstorm, Docker... web3?), but none have prevailed.

A possible answer could be because it has created a huge market for 3rd party services to step in and make things easier for web users. And now it's probably too late to stop the train. But there's no reason this couldn't work while empowering the user.


But then you could run some open source server that does it, just not typically from your home. It’s the home part that won’t work, just as most people don’t serve their web apps from home. And non tech people can still outsource and move between competing providers rather than have an oligopoly of social media companies.

Sadly publishing goes towards the lowest friction service. Was websites, then blogs, now Twitter. The backgrounds and themes weren’t as important as simple content. I’m not sure home hosting has any chance to beat that trend.


* Most home connections are asymmetric and lack upload bandwidth.

* Most personal devices are battery powered and intermittently operated.

* Many are mobile in terms of physical location and logical network (home WiFi, work/school WiFi, cellular).

Having a third, highly available, high upload capacity location to stage data between production and consumption wins because it is a genuinely effective solution to problems inherent in sharing data between end users.


> asymmetric

There's no technical reason for this to be the case. ISPs adapted to the needs of the web, not the other way around.

> lack upload bandwidth

This is relative to each user. The vast majority wouldn't require much upload bandwidth, and there could be technical solutions to address this (caching nodes, data expiration, P2P, etc.).

The biggest technical challenges of large web services are because of the scale needed to support the large amount of users. If we had built services and tools around the inherent distributed nature of the web, centralization and all the problems caused by scale wouldn't be an issue.

> battery powered and intermittently operated

> mobile

Why should user data be highly available? If I'm physically unreachable or just want to be offline, shouldn't my data be unreachable as well? Besides, all of these can have technical solutions as well.

> Having a third, highly available, high upload capacity location to stage data between production and consumption wins

I'm not disagreeing, but a) this wouldn't be required by most users, and b) why couldn't this be under control of users as well? The fact no such (simple) solution exists today doesn't mean that it couldn't have gained traction back when browsers were getting adopted, and today we would've had a completely different web. Unfortunately incentives are turned on its head, users aren't educated about the harms of giving their data away because they've learned that it's the way the web works, we have to pass laws to protect user data, and a vocal minority of web developers have been swimming upstream and trying to undo the harms of centralization for decades with lackluster results.


The key functionality of the major tech platforms is to convey data to other people. Almost none of the data we give them is private or proprietary in the sense you’re suggesting. We only give it to them because we want e.g. our friends to have it. Now it is unfortunate that the service provider also gets it in the process, but that’s more an issue of end to end encryption than centralization per se. We can have highly centralized yet end to end encrypted platforms, like WhatsApp and Signal. We can also have highly decentralized platforms which are panopticons, like cryptocurrencies.

One of the main reasons we “hire” the platform companies to convey our data to other people rather than opening TCP sockets to each other directly is precisely async delivery. No one wants to use a social network where we can only see each other’s content if we’re using our laptops at the same time. Just make a phone call at that point.

Now maybe you could ask for federated, open standard queueing mechanisms. But we have one of those! It’s called SMTP, and it’s not really up to the needs of modern social applications. And maybe we could do something about that. But it’s also in the nature of federated open standards that are widely deployed to ossify. The other service that Facebook and Twitter are doing for you besides pub/sub is having the coordination and agency to update their own deployments over time, something that e.g. the set of all relevant email operators does not have.


Some websites even track exactly how you move your mouse and can make a pretty good guess at your age, and they can track how many times you click something that doesn't do anything (apparently older people tend to zero in on buttons and click on things that aren't clickable more often). I heard this on older episodes of the Level1 News on the Level1Techs YouTube channel, which I recommend listening to for tech news that might be up to a week out of date.

It's not just the data you obviously fill in yourself

and that's before considering cookies other than letting you auto login to a specific site you already logged in to before

Also, isn't WhatsApp owned by Facebook now? Don't trust Facebook or anything they own.


Apple is now policing the images you own. They intend to observe everything you upload to ensure it aligns with local legislation. Some good, a lot of bad.

Beyond that, if someone else owns your data, they can simply decide what you pay tomorrow, and it's either burn it or pay.


The main reason for me is identity. Imagine that when you first got an internet connection, it didn't come with an @comcast.net or @att.com email address, but rather as part of the setup of your internet connection, let you register your own domain.

The MX records could point to this homeserver box and you would fetch your messages from there. It would be the norm that your graph, photos, whatever would be stored there, and you would have to grant access to share them out with others. That would have led to a very different internet than today's.
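As a sketch of what that looks like at the DNS level (assuming dnspython is installed; the domain is a placeholder, not a real setup):

    # Mail for a personal domain routes to whatever the MX record points at -
    # which, in this hypothetical, is a hostname resolving to the home box.
    import dns.resolver  # pip install dnspython

    domain = "example.org"  # hypothetical personal domain
    for mx in dns.resolver.resolve(domain, "MX"):
        host = str(mx.exchange).rstrip(".")
        print(f"mail for {domain} is handled by {host} (pref {mx.preference})")
        for a in dns.resolver.resolve(host, "A"):
            print(f"  {host} -> {a.address}  (would be the home server's public IP)")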

But the main thing I'm thinking of is having a physical, secure-enclave-protected device that lets all internet users do U2F with their ISP, in order to have the option of an "IP permitted from" protocol for registration on websites. Something that can ensure that when grandma's PC gets hacked, it's not used to create 100 Twitter accounts to pump some crypto scam.

Today, even if we could get ISPs to set up such a system, people wouldn't use it, as they're already used to easy sign-ups, and companies love their growth metrics.


Routers work because ISPs require them. ISPs however aren’t fond of supporting the home server use case.

I wish for a world where battery tech wasn’t so limited. Imagine if everyone could just run a full-fledged server 24/7 on their phone, as a simple app, with a reasonable data plan.


> Routers work because ISPs require them. ISPs however aren’t fond of supporting the home server use case.

This is the obvious and simplest solution: a built-in self-hosting platform right in the router, extensible with an external USB drive if the user needs it. But ISPs are notoriously terrible at everything and certainly can't be trusted with something like this.


Why not? Free (for instance) went pretty far in this direction 11 years ago (v6):

https://en.wikipedia.org/wiki/Freebox


ISP's can't be trusted with the firmware on their current routers. Do you really want them in charge of your internet facing services in your home?


O RLY? https://floss.freebox.fr/

Edité: Liberté, égalité, cyberité!



Yes, shit happens. Use another device. Or another ISP. Or get a bizniz line where you can spec a device of your choice? Or move? Or use StarLink?

(Did you notice the source in my link?(at all?))


> Use another device.

I did. It has its own slew of problems.

> Or another ISP.

ISPs hold monopolies in many areas of the US.

> Or get a bizniz line

Not always available. Too expensive when it is.

> Or move?

Is this a joke?

> Or use StarLink?

Waiting list. Expensive. Latency.

> Did you notice the source in my link?(at all?)

Did you really expect me to read the source code from a bunch of patches for a french modem and gain some kind of magical insight?


> Did you really expect...

Err, nope? I just posted that link to show you that it doesn't have to be like you said, especially not for the device in question.

> ISPs hold monopolies...

They did, and sometimes still do elsewhere too.

I don't give a shit, since I don't live in "free 3rd-worldistan with nukez, carz, and screaming lunatics on crack, or whatever else".

> Is this a joke?

Not at all. Totally serious. It's the thing to do, to escape learned helplessness.


What about Helm?

https://thehelm.com/


I want to like Helm (mine is temporarily in storage), but there doesn't seem to be any way to run the rest of Nextcloud on it.

It's great that you don't have to start rolling your own cloud, but it would be a better offering for me if you could decide to tinker with it.


Pray that Apple makes one.

I'm only half joking. I was just fantasizing about this the other day. I'd love this to become reality but I'm worried reality diverges from this idea further with each passing day. People produce more and more data that's useless outside of a closed platform. Nobody owns media to host.


Probably some ISPs will sell a NAS or similar device and, for a monthly fee, you would have external access via a custom domain name (like john_doe.verizon.com, for example).

A lot of non-techy people already have a NAS, external hard drives, and things like that. I don't know why ISPs haven't already done this.


That's a DOA product. ISPs are notoriously terrible at everything, especially firmware. They can't be trusted with something like this.


Cheap, reliable last-mile internet. The core and edge are solved.

Around 37 percent of the world's population (2.9 billion people) have never used the Internet (1 in 3 people), per the UN’s 2021 report on the topic.

https://www.itu.int/en/mediacentre/Pages/PR-2021-11-29-Facts...


That's important for poverty-stricken people (and slaves) to be able to accept crypto donations and payments directly... but what if it becomes a scam?

Even homeless people in the west should be able to get money that way.

So an .eth domain for every person?


I think it's likely that we will see a rise in customs expectations from countries about data that is imported.

The Great Firewall is the prototype, but as the world becomes multipolar again, regional powers will want to control what kinds of data are imported/exported.


While I agree with regard to centralized services, I think in the end all information will be accessible and searchable. Since the beginning of the written word, there's been an exponential expansion of information: scrolls, books, newspapers, radio, TV, the internet, etc.

So in terms of infrastructure, I think a way to tap into and share information regardless of restraint will be the end result. It would need to be cheap, impossible to censor, searchable, and able to easily hide the devices/methods used to access said system.

I see crypto currencies as the initial stages of this.


I completely disagree. We see a continuing trend of consolidation and obfuscation. Most social media hides and scatters information intentionally to absorb user time; there's no value in providing information quickly and accessibly. I fear Google becoming worse is by design as well. It's only a matter of time until they introduce their own infinite scroll.


I agree that short term that'll be the case. For a long time, radio was viewed as a dying form of communication, until TV started censoring large sections of what people wanted to listen to. Then you had the rise of people like Limbaugh, Jones, Stern, etc.

The reality is people seek the truth when they know they're being lied to. Very few people in the West trust the news.

https://today.yougov.com/topics/politics/articles-reports/20...

Part of the issue with Google, etc has to do with the “Trusted News Initiative”.

I definitely agree the trend will continue. However, once an innovation with the properties I described occurs, it will dominate.


Preface: Gonna do my best to not add any commentary for or against the social aspect of decentralization / blockchains. Also gonna be high-level.

I can't help but feel distributed computation is a really, really fascinating problem, and if the socioeconomic wave we're going through now sustains even a fraction of this current moment, it'll be a long-term engineering focus.

It's impossible for me not to recognize that the differences between blockchains mirror those between database designs as the web scaled up from the nineties on. First, read capacity was needed to support e-commerce, followed by social platforms, where read/write needed to scale and adopt distributed models and eventual consistency.

Now we're scaling distributed computation, and all sorts of interesting problems emerge. If things turn out to be even remotely what an idealist might lead you to believe, we're at the cusp of rearchitecting every single layer of computation: networking, machine-code compilation and execution, file storage.

PS I did a couple of cmd+f for keywords to find someone answering with this context and didn't find any. That seems crazy.


Provable identity. Yes, you can do OAuth via Google or Facebook, but sooner or later we need something that isn't tied to someone else getting all your user interaction data ...

Digital notary. So a third person (digitally) signing a transaction or other document exchange.


Haha, my father actually presented me with the idea of a digital notary. He was wondering why we send everything electronically, where one party has to trust the other hasn't faked it, and the other party has to trust that the first party won't misuse it.



Could you expand on both of those? They sound really interesting but I'm not sure I understand the issues or use cases.


Provable identity: you get an email - "hi, I'm a hiring manager from company X, can we get in touch about job Y, could you please send me your (secret) phone number?"

It should be possible for you, the receiver of the email, to check if the email originated at company X.

Digital notary: this came up in several data privacy discussions. You (A) are in contact with B, but you don't want to send B something like a scan of your passport (e.g. for age-restricted services). So you disclose the passport scan to the notary, and they send B the message: the passport was disclosed to me and the person is over 21.
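A minimal sketch of that notary flow, assuming the notary holds an Ed25519 key pair and B already trusts the notary's public key (uses the Python cryptography package; the names and claim format are made up):

    import json
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    notary_key = Ed25519PrivateKey.generate()
    notary_pub = notary_key.public_key()

    # The notary inspects A's passport out of band, then signs only the claim.
    claim = json.dumps({"subject": "A", "statement": "age_over_21"}).encode()
    signature = notary_key.sign(claim)

    # B never sees the passport; it just verifies the notary's signature.
    notary_pub.verify(signature, claim)   # raises InvalidSignature if forged
    print("claim accepted:", claim.decode())

B ends up holding an attestation rather than the document itself, which is the whole point.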


Re: passport scan issue: check out what the German ID card[0] does. It's better than the passport scan sending "state of the art" on both security and privacy.

[0]: https://www.personalausweisportal.de/Webs/PA/EN/home/home-no...


> It should be possible for you, the receiver of the email, to check if the email originated at company X.

You could check the DKIM signature of the email.


Exactly. "Did this email originate at $server" is what DKIM and SPF are meant to solve and IME they work well. Setting them up is not particularly difficult and there is a wealth of open documentation about it.


The point is "proving" something without showing them the proof. E.g. someone Company X trusts looks at the documents or etc and sends a signed confirmation that they confirm X, Y, and Z about Person A.

The point being that Company X does not have a copy of the sensitive information (and neither the liability of losing it) and the Digital Notary would (in theory) have better procedures for properly deleting or storing the data as needed.


I'm pretty sure that BGP is still horribly insecure at its core, which means that all it takes for BGP hijacking to occur is for someone to forget to configure their filters properly.

(See: that time that a bunch of Google traffic started getting routed through Russia. Or the time that YouTube became inaccessible to the entire world)


BGP by itself is insecure, but an infrastructure (RPKI) has grown up around it so that it can be, and by now should be, secure.

Yet BGP injection attacks (ASN or prefix theft) happen regularly. The reason is that not everybody follows the best practice here. It may well take a massively disruptive attack before this gets any better.
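To make the RPKI part concrete: origin validation boils down to checking an announcement against signed ROAs. A toy sketch (the ROA list is made up; real validators like Routinator fetch these from the RPKI repositories):

    import ipaddress

    # (prefix, max_length, authorized origin ASN) - hypothetical ROA
    ROAS = [(ipaddress.ip_network("192.0.2.0/24"), 24, 64500)]

    def validate(prefix_str, origin_asn):
        prefix = ipaddress.ip_network(prefix_str)
        covering = [roa for roa in ROAS if prefix.subnet_of(roa[0])]
        if not covering:
            return "not-found"
        for net, max_len, asn in covering:
            if asn == origin_asn and prefix.prefixlen <= max_len:
                return "valid"
        return "invalid"

    print(validate("192.0.2.0/24", 64500))  # valid
    print(validate("192.0.2.0/24", 64666))  # invalid - wrong origin, i.e. a hijack

The hard part isn't the check; it's getting every network to actually drop the "invalid" routes.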


We've seen several major incidents caused by mistakes in the past few years. It's only a matter of time before an actively malicious attack on BGP causes major damage.


The biggest challenges are people problems. Why the hell don’t we have fiber to the home in most of America? Regulatory capture and market failures.


The cost of running new lines on poles is also insane, our infrastructure is racking up so much debt.


There was a well meaning president that wanted to solve the problem until his program was cut down to nothing. Instead, we are heading towards an economic recession. Quite amusing.


Those companies were not created to solve those problems, but to profit by them. Do you think that, say, Cloudflare would like a better web protocol which would be impossible to DDoS?


If such a thing was possible we'd be the first to roll it out. DDoS is a scourge which is why we made DDoS mitigation unmetered on all plans including free: https://blog.cloudflare.com/unmetered-mitigation/


I wonder if DDoS could be solved (for static websites at least) by using P2P as a supplementary load balancer.

This could be set to only be enabled if load is approaching a certain percent of capacity that the servers/CDN are able to handle.

Once reaching that threshold, P2P would kick in, and existing visitors could serve static content to newer visitors using something like the WebRTC + Service Worker + IndexedDB combo that www.arc.io uses for their P2P CDN.

Thoughts?


I’ve looked at P2P CDNs over the years and they seem to be solving the wrong problem. At scale, bandwidth isn’t the problem; it’s recognizing the DDoS and filtering it while letting legitimate traffic through to a dynamic website or API. That’s complex. Not saying it can’t be done in a P2P manner, but it’s hard.


Interesting, you make a good point there. Out of curiosity, do you see other areas where P2P on the client side can have a significant benefit?


By making it free though you’ve disincentivized fixing it. It’s awfully coincidental that the current leading solution to DDoS for people running websites is “use cloudflare”.

Cloudflare has absolutely no reason to invest in solving DDoS because the existence of them is one of your best sales leads. DDoSes are cancer and you run a cancer treatment center. Gotta make sure you can treat cancer well but you wouldn’t want people to avoid it in the first place.


What short term thinking that would be! That's how companies die. They get captured by their customers and markets and they can't see change coming.

Imagine if Cloudflare stubbornly stuck to providing DDoS services and never considered the idea that there might be a solution to DDoS at the protocol level. We'd die if someone else came up with the technical solution to DDoS. So, it would both be better for the Internet and better for us if we were involved in killing off DDoS at whatever level possible.

For example, on the network level we've pushed for BCP-38 over and over again to deal with spoofing. RFC 2267 is 24 years old (https://www.rfc-editor.org/info/rfc2267)! But, yeah, sure, Cloudflare that is half that age is keeping all those DDoS attacks happening because they love the smell of $$$. Give me a fucking break.


I long ago made bcp38 and fq_codel available in openwrt. It would be great if cloudflare told more customers what better home routers they could use as a base.

We also solved the bcp38-like problem ipv6 had by using source specific routing throughout openwrt. A lot of other router makers are still not doing this right. Turris gets it right, I know.

It would be good to know what else cloudflare thinks would be a good set of DDOS protection features (route 666) home routers should have? Please add requests here: https://forum.openwrt.org/t/cerowrt-ii-would-anyone-care/110...


> We'd die if someone else came up with the technical solution

This reasoning does not prevent Google from slowly making itself irrelevant by changing the web to such an extent as to make its search algorithm impossible to get any useful results from.

> For example, on the network level we've pushed for BCP-38 over and over again to deal with spoofing.

That is a point in your favor, I will concede.

> Give me a fucking break.

Cloudflare is a huge company with more and more power over the entire internet, and it is constantly urging people to only use the internet through Cloudflare, in numerous ways. You do not get a “fucking break”.


Allow me to paraphrase your comment:

“It’s not possible to solve this problem, except by centralizing all the web through us. Aren’t we generous to not punish our customers when they get hit by this problem?”


Do not “rephrase” other’s comments. Stick to your own.


I said nothing of the sort.


Perhaps not, but it is how I interpreted it. Cloudflare is a force for centralization, and has every incentive to remain that way. I don’t see how that could change.


Just like how we "centralized" everything by helping with the testing and rolling out of such Internet standards as TLS 1.3, HTTP/3, QUIC, MASQUE, ...


I don’t see how that’s relevant. New versions of TLS, etc. neither help nor hurt centralization, which was the topic at hand.


Given that they seem to be backing IPFS, I would say so.

https://developers.cloudflare.com/distributed-web/ipfs-gatew...


I see that similarly to Google backing Firefox. On the surface, it seems odd, but probably has some shrewd reason for it, and it would probably cease the moment the backed project got any real traction.


How would a web protocol solve this? Even if you were to create an internet protocol to counter DDoS attacks by allowing destination IP addresses to request hardware-accelerated bans of abusive source addresses, you are still stuck with a hardware and authorization problem.

Even if you properly implement this system, network operators will expose themselves to firewall DDoS attacks by malicious actors that are trying to fill the firewall blacklists with garbage.

We've reached counter-counter-DDoS warfare. What do you do now?


Three things spring to mind:

1. IPv4 will persist, possibly forever. There's really no compelling reason to migrate to IPv6 other than address space, and we've had decades at this point of getting around that problem with various flavours of NAT.

2. Ossification. We've taken the quite reasonable step of discarding any packets or traffic we don't understand, from the point of view of minimizing threats. For example, there were cases of bypassing security using packet fragmentation. But this makes it increasingly difficult to extend the protocols (e.g. reliable connectionless messaging, aka a reliable UDP).

3. We don't really have a good solution for roaming. If you switch hotspots and get a new external IP, it'll typically break your connections. A lot of work has been done to work around this (e.g. carrier-grade NAT for mobile IPs), but identifying an endpoint with (address, port) (or just (address) for IPv6) is less than ideal; a toy contrast is sketched below.
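On point 3, here is a minimal Python sketch (all names invented; roughly the idea QUIC uses with connection IDs, not its actual mechanics) of the difference between keying a session on (address, port) and keying it on an opaque connection ID that survives an address change:

    # Toy contrast: sessions keyed by (address, port) break when the client
    # roams to a new hotspot; sessions keyed by an opaque connection ID
    # (roughly the QUIC approach) survive the address change.
    import secrets

    sessions_by_tuple = {}   # (ip, port) -> session state
    sessions_by_connid = {}  # connection id -> session state

    def open_session(ip, port):
        conn_id = secrets.token_hex(8)
        state = {"conn_id": conn_id, "bytes_sent": 0}
        sessions_by_tuple[(ip, port)] = state
        sessions_by_connid[conn_id] = state
        return conn_id

    conn_id = open_session("203.0.113.5", 51000)

    # The client hops to another network and gets a new external address.
    new_addr = ("198.51.100.7", 40404)
    print(sessions_by_tuple.get(new_addr))   # None: the 4-tuple lookup fails
    print(sessions_by_connid[conn_id])       # still there: the ID survived roaming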


Ossification is an interesting problem. We dealt with it in the past with IE6 basically via “shaming” users/corps into switching.

A message on the Netflix/Google/Facebook home page saying "we see your network is blocking X, which may result in a suboptimal experience".


Low-carbon-footprint datacenters, but that requires better software performance. Wirth's law explains why.

Ability to do SDR for wireless networks with smartphones. 5G is not a good solution.

Better security for routers, and generally better software security regulations, which are almost non-existent right now. If cars have security regulations, software should, too.


> Low carbon footprint datacenters

My gut tells me it's fairly low per person served, and it'll only improve over time as more renewable electric sources come online.


you should not listen to your gut all the time


> Data centres contribute around 0.3% to overall carbon emissions

https://www.nature.com/articles/d41586-018-06610-y

Compare that to https://www.epa.gov/ghgemissions/sources-greenhouse-gas-emis...


0.3% is a lot for what it is, in my view.

I don't know if it takes the lifecycle of hardware into account.


>5G is not a good solution

Interesting viewpoint. Care to expand on your thoughts? To me, 5G seems like a stepping stone to UWB communication


"Ability to do SDR for wireless networks with smarphones" - you want to have SDR on the smartphone?


Yes, I don't see why not.


I think our next biggest "problem", though perhaps not an infrastructural one, is one of protocols. From my perspective we've pushed almost everything into HTTP, and I think we'll be dealing with or solving that next. Perhaps a resurgence of dedicated protocols and the routing/infrastructure to deal with them.


If/when interactive VR experiences go mainstream, network latency will have to be much better--not the average but rather the p99.99 latency. Having an immersive 90-120+ fps world stall/stutter routinely makes it unlivable.
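To illustrate why the tail percentile, not the average, is the number that matters here, a small Python sketch with an invented latency distribution (the figures are made up for the example; at 90 fps the per-frame budget is roughly 11 ms):

    # Average frame latency can look healthy while the p99.99 still implies
    # a visible stutter every few minutes at 90 fps. Sample data is invented.
    import random, statistics

    random.seed(1)
    samples = [random.gauss(12, 2) for _ in range(100_000)]   # typical frames, ms
    samples += [random.uniform(80, 150) for _ in range(20)]   # rare stalls, ms

    samples.sort()
    p9999 = samples[int(len(samples) * 0.9999)]
    print(f"mean   = {statistics.mean(samples):.1f} ms")  # looks fine
    print(f"p99.99 = {p9999:.1f} ms")                     # way past the ~11 ms budget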


How do you detect and block coordinated troll farm attacks when they use arrays of LTE modems and look like a bunch of passionate users?


That should be relatively easy for the mobile carrier, they're always in the same spot.


And how does the carrier know that someone just didn’t say to check out example.com at a conference with thousands of people in attendance?


I'm talking about their spatial position. They're literally in the exact same spot all the time, they're not attached to little robots that try and move like people at a conference.


Ah, makes sense!


I’m very excited about the challenges explored in the SCION project and recommend having a look at their site and papers https://scion-architecture.net/


I’m very skeptical about “blank slate” approaches such as this. Can you tell me what the advantages are? From a glance I see two big downsides already:

- Tightly coupled components, unlike the extremely modular design of TCP/IP, mean that further innovation will probably be difficult.

- The design seems to mandate source routing, where the entire path needs to be known by the sender. This is much less resilient than the current internet, where each hop decides how best to get a packet to its destination.


They've been at this for many years now without any traction at all. All they can point to are some collaborations with two Swiss-based ISPs and a start-up spun out of the university. They seem to be utterly lacking in business acumen, or in willingness to engage with the business community, which is surprising considering the momentum they could have had if they tried. I filed this as failed two years ago already.


I think observability has made a lot of strides but still isn't good enough. I get instant reporting of abstruse errors like API failures, but actually understanding why, to the point of being able to fix them, is still really hard.


Inter-planetary internet... How do you play a game when a few of the players are on Mars?


Even with high ping, the application layer is probably the wrong place to solve this problem. We'll likely get email working as one of the first things, and be back to correspondence chess and the like. Even Civilization 5/6 works over email.


https://gaming.stackexchange.com/questions/8111/how-do-you-p...

Apparently people mail each other save games.


TIL that Civ VI has "Play by Cloud", which automates all of this + supports webhooks to notify you when it's your turn.


Ping of 30 minutes


Turn-based strategies.


Yep. 30 minutes is super-fast compared to play-by-mail!

https://en.wikipedia.org/wiki/Play-by-mail_game


Some progress has been made on this; see http://bundleprotocol.com/


How do you even reconcile time between locations that far apart? Syncing your system clock with an off-planet NTP server will be... problematic.


This is actually a totally solved problem using a solar reference frame, and it is done every day for missions far out in space. NASA has a free tool[0] for looking up the current time for any body as seen from Earth, even correcting for light delay. It works via email, ftp, telnet, and web.

0: https://ssd.jpl.nasa.gov/horizons/
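The light-delay part is just arithmetic, as a quick back-of-the-envelope Python sketch shows (this is not what Horizons does internally; the Earth-Mars distances are rough round numbers):

    # One-way light delay for rough Earth-Mars distances (approximate figures).
    C = 299_792_458  # speed of light, m/s

    def one_way_delay_minutes(distance_m):
        return distance_m / C / 60

    print(f"closest:  {one_way_delay_minutes(55e9):.1f} min")    # ~3 min
    print(f"farthest: {one_way_delay_minutes(400e9):.1f} min")   # ~22 min

Round-trip that is roughly 6 to 44 minutes, which is where the "ping of 30 minutes" figure upthread comes from.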


entangled quantum pairs?


You can't send information with quantum entanglement.

This is one of the no-go theorems of quantum information: <https://en.wikipedia.org/wiki/No-communication_theorem>


True. But I think you can at least sync clocks.

https://www.nature.com/articles/s41534-018-0090-2


But you cannot send information. Which is information that you didn’t send. Thus, you sent information.

This is like saying you can’t send information with electricity. You couldn’t at first, but then someone invented Morse code, and the presence and absence of electricity made all the difference in the world.


You can send information by also sending additional information through classical means (and thus still limited by the speed of light). This is the basis for quantum cryptography, where you can have a shared one-time pad that is impossible to intercept without detection and impossible to copy (thanks to the no-cloning and no-teleportation theorems).

But you can't send information through quantum entanglement itself without rewriting a significant chunk of physics as we know it. If you're going to do that, all bets are off and you should be consulting with wizards and not physicists.


As buzzwordy as it sounds I think between AR/VR/metaverse there will be some infra challenges that extend beyond "just needs a fatter pipe".

A bit like games need complicated netcode to compensate for latency.


As one of the top comments mentioned, most of the people on our planet do not have a reliable Internet connection, especially in rural and remote areas.

I think the most practical and affordable way of connecting people in remote and rural areas is wirelessly, via terrestrial rather than satellite links. Regardless of satellite or terrestrial, the wireless transmission itself needs to be reliable in the face of interference, multi-path and physical obstructions (trees, foliage, buildings, limited ground clearance, etc.).

Most of the efforts to improve wireless technology have focused on urban settings rather than rural or remote areas, for obvious reasons. Ironically, the areas where there is less money to be made are the places where most of the digital divide is happening.

I'm currently working on a next-generation reliable wireless connection technology and the initial results are very encouraging. Hopefully this promising technology can significantly contribute to improving connectivity and reducing the digital divide worldwide in an affordable manner. Please contact me if you are interested in collaborating on this new technology.

For global initiatives focused on improving telecom and Internet infrastructure there is the Telecom Infra Project, supported by major players including Facebook, Intel, Nokia, NTT, etc.:

https://telecominfraproject.com/


I don't know if this is totally related to the question, but I think in the near future it will be more and more common to have 4G, 5G, or 6G in your devices (not only phones and tablets, but also laptops and desktop computers) instead of using cable, fibre or other wired connections plus a router.

I don't know about the US, but in most parts of Europe we have reached such levels of speed and low ping that before long it may be smarter and cheaper for ISPs to build more towers than wired infrastructure.


My home internet (in Seattle) is T-Mobile 5G. Only chose it because there are no high speed wired options for my address, but it works pretty well.


Separate out the "central" bit of centralisation.

I think we need DNS for all.

A "secrets service" where people can put a short encrypted message - big enough for an IPv6 address + future extendability - for all to see.

Then users can swap keys, agree on a "secrets service" (or many of them) to store their secret/IP address, and skip any other centralisation altogether.

Apps (open source or otherwise) can then leverage this service to let users simply talk directly to each other.
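A minimal in-memory sketch of that idea, with every name invented and the actual encryption, signing and network layer deliberately left out: a public bulletin board keyed by a fingerprint of the owner's public key, holding a small opaque blob (the encrypted IPv6 address).

    # Toy "secrets service": publish/lookup a small opaque blob under a
    # public-key fingerprint. Encryption, authentication and the network
    # layer are elided; this only shows the shape of the rendezvous service.
    import hashlib

    BLOB_LIMIT = 64   # bytes: an address plus room for future extensions
    board = {}

    def fingerprint(pubkey):
        return hashlib.sha256(pubkey).hexdigest()

    def publish(pubkey, encrypted_blob):
        if len(encrypted_blob) > BLOB_LIMIT:
            raise ValueError("blob too large")
        board[fingerprint(pubkey)] = encrypted_blob

    def lookup(pubkey):
        return board.get(fingerprint(pubkey))

    # Alice publishes her (already encrypted) current address; Bob, having
    # swapped keys with her out of band, fetches it and decrypts it himself.
    publish(b"alice-public-key", b"\x2a" * 16)
    print(lookup(b"alice-public-key"))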


I don't know if this is in the category you're asking for, but right now there is a ton of experimentation with "content-centric networking", e.g. "named data networking", to better optimise how we load content on the web. Instead of using an IP address to connect to some specific server of, say, Google to get content, we just say what data we want and load it from wherever it happens to be (with better prospects for caching).
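A toy sketch of that idea in Python, under the (big) simplification that a content name is just the hash of the bytes: you request data by name, any cache that holds it can answer, and the name itself gives you integrity. This is nowhere near a full NDN stack.

    # Toy content-centric fetch: ask for data by name (hash of the content)
    # and let the nearest store that has it answer, caching along the way.
    import hashlib

    def name_of(data):
        return hashlib.sha256(data).hexdigest()

    origin = {}       # the publisher's store
    edge_cache = {}   # a cache somewhere in the network

    def publish(data):
        name = name_of(data)
        origin[name] = data
        return name

    def fetch(name):
        for store in (edge_cache, origin):       # nearest copy wins
            if name in store:
                data = store[name]
                assert name_of(data) == name     # integrity comes from the name
                edge_cache[name] = data          # opportunistic caching
                return data
        raise KeyError(name)

    n = publish(b"hello, interest packets")
    fetch(n)   # served by the origin, now cached at the edge
    fetch(n)   # served by the edge cache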


I don't have time to write a substantial comment, but I would say we are going to have to figure out languages and transpilation next.


It would be nice to have a wired alternative to IEEE 802.15.4/6LoWPAN. IP seems to be the future for home automation. The problem with wireless is that you end up with devices whose batteries you have to change, and with reliability issues in high-density housing. Laying a twisted-pair ethernet cable to each temperature sensor is overkill and too expensive.


single-pair ethernet?


I'd like to see more movement away from the client-server model to a peer-to-peer model, the easiest and most obvious example being file distribution via torrents for free-to-the-end-user files and downloadables, with the company (or whoever) seeding constantly but being helped by others who also seed the same files. It would reduce server loads and make file hosting easier and cheaper for the company/host compared to the traditional client-server model.

That's not the only thing peer-to-peer could be good for, nor is it the only implementation possible, but I'm using torrenting as an example because it's a good peer-to-peer technology that works and has been working well for at least 10 years (ever since trackerless torrents and reduced reliance on trackers were standardized).
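What makes that model workable is that you don't have to trust the peers: the publisher ships a small manifest of piece hashes and every piece is verified against it on arrival. A toy Python sketch of that idea (absurdly small pieces, invented names, nothing like the real BitTorrent wire format):

    # Verify pieces received from untrusted peers against a publisher-supplied
    # list of piece hashes (the idea behind BitTorrent's piece hashing).
    import hashlib

    PIECE = 4  # unrealistically small piece size, just for the demo

    def piece_hashes(blob):
        return [hashlib.sha1(blob[i:i + PIECE]).digest()
                for i in range(0, len(blob), PIECE)]

    original = b"free-to-the-user download"
    manifest = piece_hashes(original)          # shipped by the publisher

    def accept(index, piece):
        return hashlib.sha1(piece).digest() == manifest[index]

    print(accept(0, original[0:4]))   # True: honest peer
    print(accept(0, b"evil"))         # False: corrupted or malicious piece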


Good question!

I am biased in this answer because I am building https://hotg.ai/ but I see the world going towards a more fragmented ecosystem. So:

- Portable computations - sending your workloads to any place where the data is

- Good local storage that keeps you compliant with local laws

(edited for format)


IPv4 exhaustion is still likely to happen at some point


China being better than the rest of the world at IPv6


I have a real simple proposal: Turn off (block) ipv4 for one minute a day. Next month, increase it to 2 minutes...


I am collecting suggestions for next generation edge routers here: https://forum.openwrt.org/t/cerowrt-ii-would-anyone-care/110...


We could use a very basic standard so that I don’t have to click on cookie banners for every single website…


Making the network information-aware is the next Internet infra problem.


[DISCLAIMER - I run Transcelestial which is building laser comms and we think about this question quite a lot]

Some background maybe first.

There is a massive global Internet distribution challenge, which revolves around the cost-per-bit equation. The layers are:

1. Undersea cable networks - USD 0.5-1B to deploy over a multi-year project. 10s of millions to maintain with regular cable cuts. Typically now only deployed through consortiums of Internet and Telecom companies. Carry 99% of world’s international data.

2. Inter-City Distribution - National Fiber and Copper networks which connect tier 1-3 cities, towns and villages with a backbone to the nearest Internet Exchange OR telco data center (which in-turn would have a hard line back to Undersea landing stations).

3. Last Mile or within city/urban connectivity - last & middle mile within a city/town connecting homes, offices, towers and DCs.

IMHO the challenges remain but get worse from top to bottom, with costs and complexity often jumping by orders of magnitude from one layer to the next, the last mile obviously being the craziest.

Telcos nationally in most countries still own most of the inter-city distribution and tier 2/3/4 POPs (points of presence), leasing out capacity from POPs to ISPs and enterprises. The investment in laying these cables is EXTREMELY prohibitive and is the main cause of high per-Mbps rates, high latency and onerous terms when it comes to in-country network distribution (a big example is South Africa). The numbers are orders of magnitude more expensive than undersea cables (e.g. $1.6B for Telstra Australia in Phase 1, $130-150B for the US), primarily due to right-of-way and the operational costs of deployment.

People are now moving from rural areas to tier 2-3 cities/towns, and there is also reverse migration from megacities like Manila to tier 2-3 cities/towns (as evidenced by rising cities like Cebu, Bali, Miami, Austin, Pune, etc., where housing is more affordable and remote earning potential is nearly the same). Bandwidth and latency demands are going up 100% year-on-year in tier 1-3 cities, especially in WFH COVID times. Starlink & others in LEO will definitely help with most rural unconnected places (<1-2% of the total bulk). Telcos will eventually build out tier 1 cities with fiber more robustly (since they have to deliver on 5G small cells and potentially 6G).

Mid-tier cities & towns, where by far the larger share of that bulk is accumulating, will need a LOT of attention and more latency-optimized, cost-per-bit-minimized backbones.

Finally, humanity's push to get into deep space in the next decade will require building out infra to support robotic and autonomous missions. Thinking of deep space objects as islands or continents is a helpful model and tightbeaming laser comms to them as "undersea cables but in space" could help address some bandwidth allocation problems in the early days (but local distribution will again have challenges)


Connectivity - we are verging on IoT needing a 24/7 internet connection, and coverage is still quite a way from that.


I think it's the same as the current internet infra problems. And I think latency is the biggest current problem.


I keep hoping more deploy RFC 8290 (fq_codel) by default.


Hell, the current infrastructure in the States is still a problem. I don’t even want to think about the next version!


I think that wide acceptance of IPv6 is the next big thing. A lot of ISPs still don't have it.


Electronic waste is an obvious one. The devices the internet runs on are part of that problem.


Centralization and decentralization. We need an internet that serves the people first.


The infra problem is people. Cloud is creating such value and complexity that companies need to start paying SWEs millions to keep up. (A top SWE can easily generate 8-9 figures of P&L.) But due to a lack of social capital it is still universally seen as a code-monkey class.


De-ossifying the Internet is necessary for solving many other problems like the IPv6 and RPKI transitions.

There's a lot of room to optimize latency whether it's removing bufferbloat, L4S, or cISP.


There are not many ways to get rid of bufferbloat if you want to keep packet routing/switching networks.


Have you looked at L4S? It does mostly fix the issue.


Very researchy. fq_codel and pie are deploying now.


The L4S-capable ECN treatment is deployed in recent low-latency DOCSIS.


Obviously we can't eliminate all buffering but excess buffering can be reduced.
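For anyone unfamiliar with how fq_codel/CoDel attacks the problem, here is a heavily simplified Python sketch of the core intuition: measure how long each packet sat in the queue, and once that sojourn time has stayed above a small target for a whole interval, start dropping. The real algorithm also ramps its drop rate and fq_codel adds per-flow queues; the constants below are CoDel's usual defaults, everything else is stripped down.

    # Simplified CoDel-style decision: drop once the standing queue delay has
    # exceeded TARGET continuously for at least INTERVAL.
    TARGET = 0.005     # 5 ms of standing queue is tolerated...
    INTERVAL = 0.100   # ...but only for 100 ms at a stretch

    class CodelSketch:
        def __init__(self):
            self.above_since = None

        def on_dequeue(self, enqueue_time, now):
            """Return True if this packet should be dropped."""
            sojourn = now - enqueue_time
            if sojourn < TARGET:
                self.above_since = None      # queue drained: stop dropping
                return False
            if self.above_since is None:
                self.above_since = now
            return (now - self.above_since) >= INTERVAL

    q = CodelSketch()
    print(q.on_dequeue(enqueue_time=0.00, now=0.02))  # above target, be patient
    print(q.on_dequeue(enqueue_time=0.08, now=0.10))  # still above, interval not over
    print(q.on_dequeue(enqueue_time=0.12, now=0.15))  # above for >100 ms: drop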


Some way for companies to buy a server as an appliance to serve as a homepage and cover all their internet needs: just plug it into your network, set up your domain, and that's it.


Re-writing all the software, in short.

It is all a steaming pile of garbage.


What programming language is "short"?


None so far, they're all steaming piles of garbage.

We're still typing single characters into text files as if we're on a terminal in the 1960s.


I don't think the medium (text files) is the real problem for SW. There are ways of creating SW by clicking things on a screen.

We're in the middle of a SW quality crisis, because a lot of people have not the slightest idea what they are doing, but they are encouraged by their managers to ship immediately.


The medium is part of it. A huge number of programming language features stem from the fact that we make text files the human interface. "Clicking things" is not the next step, because clearly keyboards are faster. Imagine something more like a "notion for code" where things are 1) clearly keyboard focused, 2) clearly stored as structured data, not plain UTF-8 text.

Comments, build configuration, platform-specific implementations, alternate implementations (e.g. (re)implementations of an interface used by an application), tests, etc. could all be part of the "program". In many ways we try to do this already by hacking together git repos with everything stored in there as text, and then requiring very particular versions of programs to run everything in there anyway.

I think it's all sufficiently powerful already to build awesome things, but I do not doubt that in the next 10 years something will be built that realizes the ideas which have been kicked around since Smalltalk/Self and raises the bar of productivity.
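For a small taste of "code as structure rather than characters", Python's own ast module already parses source into a tree you can store, inspect and regenerate; the text becomes just one rendering of the structure (the snippet below is only a hint at that direction, not a proposal for such a tool).

    # Source as structured data: parse it, dump the tree, regenerate the text.
    # Requires Python 3.9+ for ast.unparse and the indent argument.
    import ast

    tree = ast.parse("total = price * quantity")
    print(ast.dump(tree, indent=2))   # the structure, not the characters
    print(ast.unparse(tree))          # the text is just one rendering of it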


There’s LabVIEW, have you tried it yet?


> next internet infra problems

Not tryna be "that guy", but isn't the internet concerned with interstructure? When you get to a LAN behind a firewall or code inside a walled garden, ok, that's infrastructure.

> e.g. Akamai

that's interstructure, though it might require support from your infrastructure



