Would be curious to know how using IPFS for internal container distribution compares to using BitTorrent. IIRC BitTorrent has found similar uses in the past.
Also, how well does BitSwap work when the underlying network is congested? Do IPFS nodes do any kind of congestion control?
You can't update a torrent; if the content changes you have to create a new one. IPNS helps with that. And you can't share pieces across different torrents: peers that still have the old torrent seed it separately from the new one, even if the differences are minimal.
The article doesn't mention IPNS at all, nor does it talk about the need for mutating an image while it is being shared, so I'm not sure why you think IPNS is even desirable in this use-case?
"Inter-Planetary Name System (IPNS) is a system for creating and updating mutable links to IPFS content. Since objects in IPFS are content-addressed, their address changes every time their content does. That's useful for a variety of things, but it makes it hard to get the latest version of something."
Thanks, but I already know what IPNS is. My points were that (a) it's not needed for this use-case, and (b) it's a distinct system from IPFS. I think you agreed in your TLDR.
I would think because IPNS is part of IPFS, and you asked how IPFS compares to BitTorrent. Maybe I misunderstand your question, but the reply seems totally on topic and a valid answer to your question.
Not in my circles. IPFS is used a lot for storing and archiving files but IPNS is rarely used or mentioned. The only situations I've seen where IPNS would be useful, ENS was used instead as it's more reliable.
That's still not great. Imagine the only difference between two versions of the layer is that you updated a single jar in a 200MB app bundle. The effective difference could be a few tens of blocks, but you still need to redownload the whole thing.
If we can manage the assignment/padding to match IPFS fragments, that could result in massive savings.
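To make the dedup point concrete, here is a minimal sketch of how a content-addressed store shares unchanged blocks between two versions of a layer. The fixed-size chunking and SHA-256 hashes here are simplifications standing in for real IPFS chunkers and CIDs; it also shows why the alignment/padding mentioned above matters, since a shifted byte offset would change every downstream chunk hash.

```python
import hashlib

CHUNK = 256 * 1024  # IPFS's default chunker uses 256 KiB blocks

def block_ids(data: bytes) -> list[str]:
    # Hash each fixed-size chunk; a stand-in for real IPFS CIDs.
    return [hashlib.sha256(data[i:i + CHUNK]).hexdigest()
            for i in range(0, len(data), CHUNK)]

# Two versions of a "layer": only the last chunk differs.
v1 = b"".join(bytes([i]) * CHUNK for i in range(4))
v2 = v1[:3 * CHUNK] + b"\xff" * CHUNK

shared = set(block_ids(v1)) & set(block_ids(v2))
print(len(shared), "of", len(block_ids(v2)), "blocks reused")  # 3 of 4
```

A node that already holds v1 would only need to fetch the single changed block of v2, whereas two separate torrents would transfer both in full.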
On top of that, most of them should already be able to detect that certain files already exist; but it seems like this is more of a file-level feature at this point rather than block-level.
Does anyone use IPNS for anything real? It performs terribly whenever I've tried it. It almost turned me off entirely from using IPFS until I realized that it's just an optional extra and the rest of IPFS is still useful without it. I really wonder how many people try out IPFS, run into issues with IPNS, and then write off the whole project because they thought IPNS was a central piece to it. I think the project would do really well to strike all references to IPNS from their getting-started guides, bring up reliable alternatives like DNSLink records (or even ENS), and then maybe bring up IPNS as an optional extra.
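For anyone unfamiliar with the DNSLink alternative: it is just a DNS TXT record on a `_dnslink.` subdomain pointing at IPFS content, so updates are as reliable as your DNS provider. The domain and CID below are placeholders:

```
; DNSLink: a TXT record pointing a domain at IPFS content.
; (example.com and <your-cid> are placeholders)
_dnslink.example.com.  300  IN  TXT  "dnslink=/ipfs/<your-cid>"

; Then resolvable through the /ipns/ namespace, e.g.:
;   ipfs resolve /ipns/example.com
```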
The volume of traffic doesn't really matter that much. Developer productivity is just as important as the service you're actually selling when you're the size of Netflix. If your N thousand engineers are suddenly unable to work, or slowed down by X%, that's a huge problem. Large companies treat (or should treat) developer tooling issues as seriously as application outages.
If Netflix is using IPFS for anything worth mentioning, it's almost certainly substantive enough to be considered an endorsement.
Of course the service itself is more important. If there are outages in the service, you will lose customers. If developers lose time at most your new features risk delays. If developers are less productive over time you lose a bit of money.
I say this as a developer for a FANG company.
It’s still an endorsement, but not nearly as strong as if the broadcasting was somehow relying on IPFS. As it is, this is probably just some engineering manager that made some non-crucial tool and put that on ipfs.
One big optimization that could help in some cases for container platforms like Fargate is not downloading the entire image just to run the container. Instead read files (or even just blocks) from network storage on demand.
This is basically how booting from a disk image works on most cloud platforms too.
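As a rough illustration of the on-demand idea (this is a toy sketch, not a real Fargate or containerd API): a block-granular reader only materializes the blocks the container actually touches, instead of pulling the whole image up front.

```python
# Illustrative sketch: lazily fault in fixed-size blocks from a remote
# backend on first access, caching them locally.

BLOCK = 4096

class LazyImage:
    def __init__(self, fetch_block):
        self.fetch_block = fetch_block  # callable: block index -> bytes
        self.cache = {}                 # locally materialized blocks

    def read(self, offset: int, size: int) -> bytes:
        out = bytearray()
        for idx in range(offset // BLOCK, (offset + size - 1) // BLOCK + 1):
            if idx not in self.cache:          # fault in on first access
                self.cache[idx] = self.fetch_block(idx)
            out += self.cache[idx]
        start = offset % BLOCK
        return bytes(out[start:start + size])

# Fake "remote" backend for demonstration.
img = LazyImage(lambda i: bytes([i % 256]) * BLOCK)
data = img.read(BLOCK * 2 + 10, 8)
print(len(img.cache))  # only the single touched block was fetched
```

Cloud block devices work the same way underneath: the VM boots and reads page-in sectors as they are requested, not the whole disk image.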
I think that was a dig at IPFS's issues with real-world usage, where until recently a lot of traffic was wasted on metadata and every node used lots of bandwidth to gossip. Meanwhile the actual throughput on a non-tuned node was not great at all.
It's awesome to see these kinds of improvements on IPFS. A while back, as a side project, I created an IPFS-backed Docker registry which allows you to push and pull Docker images from IPFS [0].
Yay. I last used IPFS for leeching abandonware around November. Although it had a tough time getting started and would occasionally freeze up for several minutes, it worked well when it worked. It seems to be getting better from when I first tried it.
> The node sends out a want for each CID to several peers in the session in parallel, because not all peers will have all blocks. If the node starts receiving a lot of duplicate blocks, it sends a want for each CID to fewer peers. If the node gets timeouts waiting for blocks, it sends a want for each CID to more peers.
Trying to recall how the protocol works. Doesn’t this pattern of behavior mean that a lot of machines will end up with the beginning of a file and few will have the end?
It sounds like the start of the download would be very fast, and the end would slow down while it hunts for a source. That may be why this is only 20% faster than Docker Hub.
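The adaptation described in the quote can be sketched in a few lines. This is purely illustrative (the names are not the real go-bitswap API): the fan-out, i.e. the number of peers each want is sent to, shrinks when duplicate blocks arrive and grows on timeouts.

```python
# Hypothetical sketch of BitSwap-style adaptive want fan-out.

class WantFanout:
    def __init__(self, peers: int, lo: int = 1, hi: int = 32):
        self.peers = peers      # how many session peers each want goes to
        self.lo, self.hi = lo, hi

    def on_duplicate_block(self):
        # Many duplicates means we are asking too many peers at once.
        self.peers = max(self.lo, self.peers - 1)

    def on_timeout(self):
        # Timeouts mean too few of the asked peers had the block; widen.
        self.peers = min(self.hi, self.peers + 1)

f = WantFanout(peers=4)
f.on_timeout(); f.on_timeout()
f.on_duplicate_block()
print(f.peers)  # 5
```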
I don't see why blocks would be received in order.
If I remember correctly, a node with a full file will send out blocks in parallel, so leechers should receive blocks in effectively random order.
The only reason some machines would end up with only the start versus the end would be implementation-specific: maybe the want list is ordered, and the seeder responds by only shipping the first n blocks it reads from the want list.
If the seeder responds in random order, you'd avoid the problem of all leechers having the same blocks.
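A toy simulation makes the ordering point concrete: if seeders ship blocks in want-list order, two leechers that each get halfway hold the identical prefix and have nothing to trade; random serving leaves them with mostly complementary pieces.

```python
# Toy comparison of sequential vs random block serving.
import random

BLOCKS = list(range(100))

def partial_download(order, received=50):
    # A leecher that only received the first half of what was sent.
    return set(order[:received])

random.seed(0)
sequential = [partial_download(BLOCKS) for _ in range(2)]
shuffled   = [partial_download(random.sample(BLOCKS, len(BLOCKS)))
              for _ in range(2)]

# Sequential: both leechers hold the identical first half.
print(len(sequential[0] & sequential[1]))   # 50
# Random: overlap is roughly half that, so the leechers can trade.
print(len(shuffled[0] & shuffled[1]))
```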
I really don't understand why there is so much hype around IPFS, which is still in the development stage, when there are other options already out and working, like Sia's Skynet, which isn't getting even a fraction of the attention IPFS is getting.
Skynet is barely two weeks old, fwiw. As more people play around with it and see how strong it is, I think it'll get a lot more attention. A lot of crypto projects are already planning to add support in the coming months.
Now every website you visit, any ad/tracker, any phone app that calls home can tell what movies and content you watch and when you are at home. For years.
> Saving Netflix's bandwith costs by sacrificing your privacy.
> IPFS and bittorrent don't do anything to protect the data you are uploading and your IP address.
And Netflix are using it across AWS for distributing container images, not touching client devices, unless you know something more than what the article says.
This doesn't have anything to do with customer's privacy.
Do not visit this site. If you visit it once and then visit it again after a while, they will fill it with crap that you did not download, in order to blackmail you or something, I presume. Alternatively, they might start tracking you only once you visit it. Even if they are honest, it is extremely inaccurate (it had 8.8.8.8 torrenting anime a while ago, for example).
Bogus results can simply be the result of ISP IP address recycling which, in my case, is pretty obvious. Besides, why would they wait on an IP address visit to fill it with blackmail material? The suspicion doesn't make much sense to me.
Circumventing censorship without strong anonymity is not necessarily pointless: you can publish something sensitive from a place where you'd not be prosecuted (e.g. from abroad). The point is to bring the message to those who are denied information.