Hosting a simple static site on IPFS (ipfs.io)
162 points by giodamelio on Sept 16, 2015 | hide | past | favorite | 27 comments



Morphis, ipfs and Freenet all work similarly here - allowing hosting websites in a distributed datastore. It'll be interesting to track the different usages of the systems.

I started off mirroring my blog on Freenet, and now I'm experimenting with storing it there first and using a reverse proxy to make it available on clearnet [1]. This approach would work on IPFS and Morphis too. If the clearnet site gets taken down, it's always available on the distributed store, and it's simple to spin up a proxy somewhere else to provide clearnet access.

For systems where unpopular data goes away over time, access via the clearnet proxy makes that less likely, since the clearnet is full of crawlers, bots, search engines, etc. constantly hitting it.

[1] http://bluishcoder.co.nz/2015/09/14/using-freenet-for-static...


This is the first time I'm seeing IPFS. So when the IPFS servers are running, they register themselves with some central node to let everyone know they exist/are online? This is how they get away without a static IP address? Am I thinking about this correctly?


No, IPFS uses a distributed hash table (DHT) to find content and nodes on the network. The only centralised servers are the gateways between the regular Internet and IPFS, which exist out of necessity. However, it is possible to run your own gateway, and many do.


How are new nodes discovered or registered?


For now, there is a default bootstrap list:

https://github.com/ipfs/go-ipfs/blob/7fbfecf6fab5920317de2e9...

When new nodes enter the network, they connect to a subset of this list and use this as a starting point for discovery.

After that, queries/operations on the DHT will result in the discovery of other nodes.
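The bootstrap step described above can be sketched roughly like this (the peer addresses are made up, though they follow the multiaddr format IPFS uses; the real defaults are in the go-ipfs config linked above):

```python
import random

# Hypothetical bootstrap addresses in IPFS's multiaddr format.
BOOTSTRAP_PEERS = [
    "/ip4/104.131.131.82/tcp/4001/ipfs/QmPeerA",
    "/ip4/104.236.179.241/tcp/4001/ipfs/QmPeerB",
    "/ip4/128.199.219.111/tcp/4001/ipfs/QmPeerC",
]

def pick_bootstrap_subset(peers, k=2, seed=None):
    """Choose a random subset of bootstrap peers to dial first;
    further peers are then learned through DHT queries."""
    rng = random.Random(seed)
    return rng.sample(peers, min(k, len(peers)))

subset = pick_bootstrap_subset(BOOTSTRAP_PEERS, k=2, seed=42)
print(len(subset))  # 2
```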


The bootstrap list isn't hard coded, it's just a default. Here's how to work with the bootstrap list: https://ipfs.io/ipfs/QmTkzDwWqPbnAh5YiV5VwcTLnGdwSNsNTn2aDxd... (from ipfs.io/docs/examples)


That's a horrific URL, though. Any way to 'humanise' it?


Yes. (IPFS author here)

1. Add a TXT record to your DNS <domain> with:

   dnslink="/ipns/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG"

2. You should then be able to use https://ipfs.io/ipns/<domain>

See it in action here: https://ipfs.io/ipns/ipfs.io

WARNING: IPNS is still under development. It's not robust yet, and convergence may not be perfect.
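The two steps above can be sketched as a tiny resolver (the DNS lookup itself is stubbed out; the hash and domains are just the examples from this thread):

```python
def parse_dnslink(txt_value):
    """Step 1: extract the path from a dnslink TXT record value,
    stripping any quotes left over from zone-file syntax.
    Returns None if the record is not a dnslink."""
    prefix = "dnslink="
    if not txt_value.startswith(prefix):
        return None
    return txt_value[len(prefix):].strip('"')

def gateway_url(domain, gateway="https://ipfs.io"):
    """Step 2: the gateway URL that resolves the DNS-linked domain."""
    return f"{gateway}/ipns/{domain}"

txt = 'dnslink="/ipns/QmWGb7PZmLb1TwsMkE1b8jVK4LGceMYMsWaSmviSucWPGG"'
print(parse_dnslink(txt))
print(gateway_url("ipfs.io"))  # https://ipfs.io/ipns/ipfs.io
```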


Also - if I remember correctly - if you visit ipfs.io with the Host header set to foo.com, it will return /ipns/foo.com.


Yes, exactly. The IPFS daemon running there looks at the Host: header and uses it to build an `/ipns/<the-header>` path, which it then resolves and responds to accordingly.
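A sketch of that Host-header mapping (the function name and shape are illustrative, not the daemon's actual code):

```python
def ipns_path_from_host(headers):
    """Map the HTTP Host header to the /ipns/ path the gateway
    resolves; returns None when no Host header is present."""
    host = headers.get("Host", "").split(":")[0]  # drop any :port suffix
    return f"/ipns/{host}" if host else None

print(ipns_path_from_host({"Host": "foo.com"}))  # /ipns/foo.com
```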


Is there any place I can track IPNS completeness? I've seen issues / PRs / commits related to this, but just wondering if I can find a single-page/issue tracker.


Just made an issue for this after seeing your comment: https://github.com/ipfs/go-ipfs/issues/1716

There should be some progress on IPNS pretty soon. I'm working on a 'phase one' fix that improves on the current situation, which will later be replaced by our final implementation once its specification and requirements are complete.


How is "ownership" over a domain resolved?


Just classic DNS -- you set a TXT record on that domain, and IPFS uses standard DNS resolution to get the value of it.


From section 3.7.2 of the IPFS paper [1], on IPNS ("Human Friendly Names"):

While IPNS is indeed a way of assigning and reassigning names, it is not very user friendly, as it exposes long hash values as names, which are notoriously hard to remember. These work for URLs, but not for many kinds of offline transmission. Thus, IPFS increases the user-friendliness of IPNS with the following techniques.

[1] https://github.com/ipfs/papers/raw/master/ipfs-cap2pfs/ipfs-...


I think so, yes. IPNS can bootstrap off DNS if you want. By adding a DNS TXT record of the form "dnslink=/ipfs/longhorriblehashhere", you can have paths like /ipns/example.com/2015/09/15/hosting-a-website-on-ipfs/.

For example, the ipfs.io site is hosted on ipfs/ipns, and available through the gateway at https://gateway.ipfs.io/ipns/ipfs.io/


Isn't IPNS a centralised component? If not, how does it work?


If I remember correctly, it uses signed updates in the same DHT that is used for locating nodes with specific content. Check out the paper on ipfs.io; it explains all the components.


From section 3.7.1 of the paper

>"Because this is not a content-addressed object, publishing it relies on the only mutable state distribution system in IPFS, the Routing system. The process is (1) publish the object as a regular immutable IPFS object, (2) publish its hash on the Routing system as a metadata value"

How is the ordering done? For quickly changing content, wouldn't a blockchain be required for ordering?


Well, I would assume you could trust the holder of the private key to indicate which signed record they want treated as the most recent?

But, I might be misunderstanding some things about how the updates are sent.

I thought one just sent a message signed with the key, carrying a newer ID, to whoever was keeping track; they would verify the signature and the newer ID, and share it further.

Is this not how it works?

I mean, I guess malicious or lazy nodes could fail to forward things, but that would just mean things don't get updated; it usually couldn't cause things to revert, and they couldn't make stuff up, so...

That's how it works, right?


(IPFS author here)

You're hitting on some really hard questions :)

The gist is that you use a "record system" that has some transport guarantees. For example, relying on a DHT gives you certain properties, as opposed to relying on pub/sub over trusted nodes. IPFS keeps this part pluggable, though we're focusing on a large public DHT first. DHTs are pretty robust today, though yes, they have weaknesses we're working on.

Now, the key is that on top of that you build assurances around cryptographic freshness (i.e. "trust this record for a certain amount of time"). Of course, "a certain amount of time" varies with your notion of time (e.g. NTP vs blockchain time), so the user gets to set that.

If you're interested in how the "record system" works and will evolve, take a look at https://github.com/ipfs/specs/tree/master/records -- though admittedly this is not complete or exhaustive, as we have A TON to do and are focusing on pushing out reliable code.
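A toy model of the sequence-plus-freshness rule described above (field names and values are hypothetical; real IPNS records also carry signatures that would be verified first):

```python
import time

def best_record(records, now=None):
    """Among the records seen for one name, trust the one with the
    highest sequence number whose validity window has not expired."""
    now = time.time() if now is None else now
    live = [r for r in records if r["expires"] > now]
    return max(live, key=lambda r: r["seq"]) if live else None

records = [
    {"seq": 1, "value": "/ipfs/QmOldContent", "expires": 2000},
    {"seq": 2, "value": "/ipfs/QmNewContent", "expires": 2000},
    {"seq": 3, "value": "/ipfs/QmExpired",    "expires": 500},
]
print(best_record(records, now=1000)["value"])  # /ipfs/QmNewContent
```

Note how the highest sequence number (3) loses here because its record has expired: freshness bounds how long a stale or withheld record can be served.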


>Whenever you update your site, just do step 4 again, and IPNS will make sure anyone asking for your peedID gets the hash of your latest site.

s/peedID/peerID :)


Thanks


If you use this tutorial to publish anything other than the example index.html, then attempt to view it through the proxy, it will fail with "Path Resolve error: context deadline exceeded." You need to run "ipfs daemon" on your local machine in order for it to be visible to the world.


I have a small question for people who know IPFS better than me. Does IPFS include some equivalent of Freenet's USK (Updatable Subspace Key) concept? How can things be updated?


IPFS has IPNS for this. IPNS allows you to use your peer id (that you get by running the daemon) as a name. So if your peer id is "A", you can point "A" to hash "123" and give "A" to the people you want to share the content with. If you then add a new thing and get the hash "456", you can update "A" to point to that hash instead. That way, you can update the content seamlessly, without breaking things for other people.
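That update flow can be sketched as a tiny mutable-name table (a toy stand-in for the DHT-backed IPNS record store; the names and hashes are the made-up ones from this comment):

```python
# name -> (sequence, content hash): a toy stand-in for the
# DHT-backed IPNS record store.
ipns = {}

def publish(name, seq, content_hash):
    """Accept an update only if its sequence number is newer
    than the one currently stored."""
    current = ipns.get(name)
    if current is None or seq > current[0]:
        ipns[name] = (seq, content_hash)

publish("A", 1, "123")
publish("A", 2, "456")  # update: "A" now points at the new content
publish("A", 1, "123")  # stale update is ignored
print(ipns["A"])  # (2, '456')
```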


php -S localhost:8000



