>They have to be special, because an IP node has to be able to transmit them before it has an IP address, which is of course impossible, so it just fills the IP headers with essentially nonsense
Not nonsense! The global IP broadcast is specified as 255.255.255.255 and is used by other protocols. The source IP address for the initial discovery is indeed 0.0.0.0, which is not intuitive, but the rest of the DHCP exchange is handled with real IP addresses like normal IP traffic. DHCP is very much an IP protocol (see DHCP relay for how it transits IP networks).
>Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.
Ugh, come on! RARP doesn't provide you with a route to get out of the network or other extremely useful things like a DNS server.
>and DHCP, which is an IP packet but is really an ethernet protocol, and so on.
No, it's not an ethernet protocol. It's a layer-3 address assignment protocol that runs inside of IP, which is normally encapsulated in ethernet frames. You can have a remote DHCP server running any arbitrary L2 non-ethernet protocol and if it receives a relayed DHCP request it will reply with IP unicast perfectly fine with no ethernet involved.
I think the author did a great job of explaining why DHCP feels like a gross hack, because it crosses that boundary.
You say
> No, it's not an ethernet protocol
I mean, obviously it's not, by definition, but let me ask -- why does it have a hardware address in the protocol? Is it maybe because this protocol, like ARP, is a bridge between the layers, and thus this protocol shares more in common with ARP than with IP?
RARP did not allow a lot of the things that DHCP does, but DHCP can be done in an address-aware mode as well (and BOOTP, which is closer to contemporaneous with RARP, was much more bare-bones than DHCP; all it really adds is the ability to cross broadcast domains, which are themselves a fiction anyway, as the author points out). If we let RARP do the IP assignment, then DHCP could be used to transmit configuration information to the newly assigned host very easily, and that would let us cut the hardware-addressing aspect out of DHCP.
>"The source IP address for the initial discovery is indeed 0.0.0.0, which is not intuitive, but the rest of the DHCP exchange is handled with real IP addresses like normal IP traffic.
No it's not. The source host putting a DHCP discover request on the wire doesn't have a real IP until the complete Discover, Offer, Request, and Ack sequence has finished, which is two round trips during which the client's source IP is still 0.0.0.0. This is why DHCP clients use raw sockets.
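A minimal sketch of what that first DISCOVER looks like on the wire (Python; field layout per RFC 2131, the MAC is made up, and as noted above real clients typically use raw/packet sockets so they can also *receive* the offer before they have an address):

```python
import os
import socket
import struct

def build_discover(mac: bytes) -> bytes:
    pkt = struct.pack("!BBBB", 1, 1, 6, 0)    # op=BOOTREQUEST, htype=ethernet, hlen=6, hops=0
    pkt += os.urandom(4)                      # xid: random transaction ID
    pkt += struct.pack("!HH", 0, 0x8000)      # secs=0; flags: "please broadcast the reply"
    pkt += b"\x00" * 16                       # ciaddr/yiaddr/siaddr/giaddr: all 0.0.0.0
    pkt += mac.ljust(16, b"\x00")             # chaddr: the one address we *do* have
    pkt += b"\x00" * 192                      # sname + file: unused
    pkt += b"\x63\x82\x53\x63"                # DHCP magic cookie
    pkt += b"\x35\x01\x01"                    # option 53: message type = DISCOVER
    return pkt + b"\xff"                      # end option

s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
s.bind(("0.0.0.0", 68))                       # client port; needs root, and no real IP yet
s.sendto(build_discover(b"\x11\x22\x33\x44\x55\x66"), ("255.255.255.255", 67))
```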
>"DHCP is very much an IP protocol (see DHCP relay for how it transits IP networks)."
I would say that DHCP is very much a layer-7 protocol, as it deals with leases, renewals, etc. It uses IP, yes, because it runs over UDP and UDP must run over IP, but I don't think that makes it an IP protocol.
> No, it's not an ethernet protocol. It's a layer-3 address assignment protocol that runs inside of IP, which is normally encapsulated in ethernet frames. You can have a remote DHCP server running any arbitrary L2 non-ethernet protocol and if it receives a relayed DHCP request it will reply with IP unicast perfectly fine with no ethernet involved.
This reminds me fondly of "frottle", a project from the Perth WAFreeNet, a city-wide wireless network back in the early 2000s. The "hidden node" problem with WiFi over these long distances is that a node cannot listen before transmitting to avoid talking over other nodes (CSMA/CA), because it cannot hear the other nodes, only the central access point.
There were costly commercial solutions (these days there are less costly ones; for example, many Ubiquiti (UBNT) products implement "AirMax"), so instead they built the Frottle project, which would hold and later transmit packets using a user-space iptables QUEUE driver when it received its "token" / turn from the central AP over a TCP connection. The quote isn't on the webpage anymore, it seems, but it was something about a layer-3-and-4 solution to a layer-2 problem. A great and free hack that worked well :)
DSL usually uses PPPoA (or PPPoE in an MPoA tunnel, as below) over a subset of ATM, which is what DSL is underneath. PPP itself transports IP and provides configuration.
When modem and router are separate, the modem only provides an MPoA tunnel to provide Ethernet access to the DSL link, while the router connects to the AC via PPPoE over said tunnel.
The newer VDSL standards define an Ethernet PHY rather than ATM-based encapsulation (given everyone ran PPPoE over it anyway it's one less layer).
But to the original point: there is no reason you could not run DHCP over a T1 directly, with no Ethernet involved at all (HDLC or something similar at the data-link layer).
>In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is just a pointless intermediate step.
This is completely wrong, it's not pointless.
First, this can be used to swap out routers in a network without reconfiguring any clients or even incurring downtime. Without the intermediate gateway IP, you would either have to spoof the MAC on the second router or reconfigure all of the clients to point to the new gateway.
Second, ethernet addresses are a layer-2 construct and IP routes are a layer 3 construct. Your default gateway is a layer-3 route to 0.0.0.0/0. There are protocols for exchanging layer-3 routes like BGP/RIP/etc that should not have to know anything about the layer-2 addressing scheme to provide the next-hop address.
Third, routers still need to have an IP address on the subnet anyway to originate ICMP messages (e.g. TTL expired, MTU exceeded, etc).
Fourth, ARP is still necessary even for the router itself to know how to take incoming IP traffic from the outside and actually forward it to the appropriate device on the local network. Otherwise you would have to statically configure a mapping of local IP addresses to MAC addresses on the router.
So ARP is critical for separation of concerns between L2 and L3. We don't live in an ethernet-only world.
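To make the indirection concrete, here is a toy sketch of what a client actually stores (Python; all addresses and MACs made up). Swapping the router changes only what ARP learns at runtime; the configured next-hop IP, and hence every client, stays untouched:

```python
import ipaddress

# what the client is *configured* with: a next-hop IP, never a MAC
default_gateway = ipaddress.ip_address("192.168.1.1")

# what ARP *discovers* at runtime (never configured by hand)
arp_cache = {default_gateway: "11:22:33:44:55:66"}

# sending to an off-link destination such as 10.1.1.1: the frame is addressed
# to the router's MAC, but the IP header still says 10.1.1.1
dst_mac = arp_cache[default_gateway]

# replace the router with new hardware: only the learned ARP entry changes,
# and zero clients need reconfiguring
arp_cache[default_gateway] = "aa:bb:cc:dd:ee:ff"
```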
>excessive ARP starts becoming one of your biggest nightmares. It's especially bad on wifi.
Broadcast can become a nightmare. Excessive ARP is a drop in the bucket compared to other discovery crap that computers spew onto networks.
The pattern of most computers now is to communicate with the external world (from the LAN's perspective) and not much else. So on a network of 1000 computers (already an excessively large broadcast domain), your ARP traffic is going to be a couple of thousand ARP messages every few hours. If that is taking down your WiFi network, you have much bigger problems, considering all of it adds up to roughly one modern webpage load's worth of traffic.
Reliable rule, and not just for software. Seems like a good place to mention "Chesterton's Fence"...
In the matter of reforming things, as distinct from deforming them, there is one plain and simple principle; a principle which will probably be called a paradox. There exists in such a case a certain institution or law; let us say, for the sake of simplicity, a fence or gate erected across a road. The more modern type of reformer goes gaily up to it and says, “I don’t see the use of this; let us clear it away.” To which the more intelligent type of reformer will do well to answer: “If you don’t see the use of it, I certainly won’t let you clear it away. Go away and think. Then, when you can come back and tell me that you do see the use of it, I may allow you to destroy it.”
Sometimes, a thing is there because it's there, and nobody knows the original purpose, so by Chesterton, nobody would be able to tear it down, because nobody can see the use of it.
Chesterton's Fence neatly saves the utterly pointless fences in our lives, regardless of the damage they can cause.
I think your conclusion is a result of oversimplifying the concept of Chesterton's Fence. The idea can be restated as "Don't remove what you don't understand", but that doesn't imply "This thing is impossible to understand and may never be removed". A lack of understanding at a point in time doesn't imply a permanent inability to understand.
Applied properly, the principle of Chesterton's Fence should provide you with the impetus to observe and learn about the subject. In software, that could involve creating/improving tests, diagramming method/API invocations, monitoring network traffic, etc. As a result of your observations, you should understand the subject deeply enough to determine whether it can be removed safely. If it can't be safely removed, you now have documentation justifying its existence (which may, in some cases, form the basis for a plan to migrate, deprecate, and remove).
It's a principle, not an unbreakable rule. The point is to take some time for the due diligence of understanding why. Fences (or laws or code) don't just appear out of nowhere. Someone spent time and energy to put it there.
If you don't immediately see its use, you should do some more thinking/archeology.
A fence also requires maintenance. An intact fence means someone wants the fence bad enough to put it there and maintain it.
IPv6 designs are only similar to the first half of that statement. If it's in IPv6, it's because someone wanted it there. The people designing IPv6 had almost 2 decades of operational experience running IPv4 to draw from. Things were intentional.
How many bad situations have been prevented, though? I've seen many situations where someone tried to remove something seemingly pointless that really wasn't.
The hard part is differentiating the cases where the seemingly pointless solution is actually the best way to solve a problem, from the cases where the problem could be solved a different way that eliminates the need for the seemingly pointless solution, from the cases where the solution is actually just a pointless flourish.
What he's really arguing for is a circuit-switched network, so that connections can be persistent over moves. He just needs a unique connection ID.
One amusing possibility would be to do this at the HTTPS layer. With HTTPS Everywhere, most HTTP connections now have a unique connection ID at the crypto layer - the session key. If you could move an HTTP connection from one IP address to another on the fly, it could be kept alive over moves. HTTPS already protects against MITM attacks, and if the transfer is botched or intercepted, that will break the connection.
I'm not recommending this, but it meets many of his criteria.
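For what it's worth, the closest existing primitive is TLS session resumption, which already lets a crypto-layer identity outlive a single TCP connection. A sketch with Python's ssl module (this resumes the session on a brand-new connection, e.g. after changing IPs, rather than truly migrating a live one):

```python
import socket
import ssl

ctx = ssl.create_default_context()

def fetch(host: str, session=None):
    raw = socket.create_connection((host, 443))
    tls = ctx.wrap_socket(raw, server_hostname=host, session=session)
    tls.sendall(b"GET / HTTP/1.1\r\nHost: " + host.encode() + b"\r\n\r\n")
    data = tls.recv(4096)
    return tls.session, data

session, _ = fetch("example.com")             # full handshake
session, _ = fetch("example.com", session)    # resumed: same crypto identity,
                                              # possibly from a different source IP
```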
The trouble with low-level connection IDs that don't force routing is forgery. You can fake a source IP address, but that won't get you the reply traffic, so this is useful only for denial of service attacks. If you have connection IDs, you need to secure them somehow against replication, playback, etc.
> One amusing possibility would be to do this at the HTTPS layer. With HTTPS Everywhere, most HTTP connections now have a unique connection ID at the crypto layer - the session key.
As a network-ignoramus, who likes cryptography, I’ve long dreamt of a networking protocol where endpoints are defined, primarily, by a public key. All messages would be encrypted with the destination public key, and signed by the source private key.
When a destination node receives a packet from a neighboring node, an ACK would constitute the destination node’s signature over the received packet, thus making ACKs provable and portable (“this node has already received that packet, here’s the proof”).
Packet source addresses would no longer be fakeable, as faking one would constitute breaking asymmetric cryptography.
The protocol would have no concept of a “connection”.
Routing would be left out of this protocol completely, and networks would use whichever routing protocols they find most efficient.
I wouldn’t be surprised if there were countless issues with a protocol like this, but something about it just seems so elegant to me that I haven’t stopped considering it yet.
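A toy version of the signed-ACK idea is easy to sketch with off-the-shelf primitives (PyNaCl's Ed25519 signatures below; everything past "address = public key, ACK = signature over the packet" is my own guess at the details):

```python
from nacl.signing import SigningKey, VerifyKey   # pip install pynacl

# an endpoint *is* its keypair; its "address" is the verify (public) key
alice, bob = SigningKey.generate(), SigningKey.generate()
alice_addr = alice.verify_key.encode()           # 32 bytes, unforgeable as a source

packet = bytes(alice.sign(b"hello bob"))         # signature + payload travel together

# bob verifies the source (raises BadSignatureError on a forgery)...
payload = VerifyKey(alice_addr).verify(packet)

# ...and ACKs by signing the packet he received: a portable, provable receipt
ack = bytes(bob.sign(packet))
bob.verify_key.verify(ack)                       # anyone can check it later
```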
I’d really like to remove the idea of a “connection” from the protocol, and leave that to the routing protocol. In other words, the proposed protocol would be purely logical, and would only define what constitutes a sender (one who can sign a packet with its source public key) and a receiver (one who can sign a packet with its destination public key).
Again, I’m fairly network-ignorant, but as far as I can see this constitutes an inversion of the current architecture: routers need to be aware of signatures, such that they don’t deliver an invalid packet (bad signature) to a destination. So the logical layer would be the lowest one, and a router that delivers a packet with a bad signature would be considered defunct.
The funny thing about the current architecture, in my view, is that the correct destination of an HTTP TLS packet is hidden (encrypted) inside the application data (“Host: google.com”). So routers rely on IP addresses to figure out where the packet needs to go, while the logical destination is only visible to the receiver once it decrypts the packet.
The idea would be moving this information out of the application data, making it cryptographically sound (public key is destination, not domain name), and making routers aware of it such that they know whether a packet was delivered to the correct destination by whether it responds with a valid signature.
> the correct destination of an HTTP TLS packet is hidden (encrypted) inside the application data (“Host: google.com”).
Incorrect; you might want to read about TLS SNI. (Thought exercise: the server has to pass your packets to the correct vhost before it can decrypt them.)
You might want to Google DTLS (TLS over UDP) and then read some of the dialogue about why it's impractical on the public internet.
Consider further that by moving your presentation layer logic into the network layer, every time you want to introduce a new cipher you'll need to upgrade every network device on the internet. Think how bad the export-grade crypto problem has been, then multiply by the momentum of Tier-1 ISP install base. Instead of making the network less important, you're handcuffing yourself to Verizon.
> As a network-ignoramus, who likes cryptography, I’ve long dreamt of a networking protocol where endpoints are defined, primarily, by a public key. All messages would be encrypted with the destination public key, and signed by the source private key.
Have you looked at WireGuard? It uses public keys to identify endpoints, although it doesn't prevent spoofed source addresses on the VPN protocol packets themselves. I don't think that's possible to support at the same time as NAT and moving across Internet connections (changing IPs as you move between WiFi, cellular, etc.).
Interesting idea. One thought would be to add encryption and signing to the routing, meaning that unless you have the right permissions, your packets won't even get to the destination.
I'd much rather routing be about getting data from one known point to another.
A /session/ should be able to be serviced by multiple routes, maybe with a preference (use the cheaper ones first, the faster ones first, etc) or maybe over time (in the case of mobile).
Having connectivity based at the session level and having a single server be 'multi-homed' (many addresses, each conforming to a different outbound link) would peel complexity back from the lower layers and allow them to focus on being simple, robust, and easy to diagnose.
It would also move control and management back up to higher layers, and as recently shown with a description of Google's core network devices, back to the end points where a larger and more complete view can be used to determine the best overall solution.
I think my thought comes from the use case where you have thing1 and thing2 that you want to be able to communicate via the internet, but you would rather not be accessible from other devices.
I’m not sure I follow why this would be desirable. As a sender of a packet, why would I care who routes my (encrypted) packet to its destination? Why would I want to restrict the number of possible routes from me to the receiver?
It would be interesting to put that post into Genius and annotate its errors. At some level the premise is both true and false.
I lived through the IPv6 debate. I went to IETF meetings, I worked on network services that would be affected one way or the other, I debated with others the various ways to "improve" or "replace" v4 to get a better system. And all through that time, while everyone felt there would be billions and billions of IP addresses, I was not aware of any discussion of dynamic routing such that a network endpoint could be found anywhere in the world without configuration. Everyone at the time felt network infrastructure was fixed, and network clients moved.
In that way a network client would move from one network to another, and then in that new network it would have to establish itself and then advertise somehow its new status. Everyone agreed that there would be some disruption during this change of status but things like TCP were designed to tolerate lossy networks. The network would adapt.
That presupposes a lot of little networks, each with its own set of rules. Except that isn't the way cellular carriers think: they have one network, and your relationship to it rarely changes. If you aren't on their network you are "roaming", and there are fixed rules in place for that. So they trade a lot of tracking and management for ease of use for the customer. And it enables some annoying things, like "header injection" in Verizon's case.
Dumb networks versus smart networks. AT&T's original vision of one switched network spanning the world versus Bob Metcalfe's self-organizing collection of independent nodes following a small set of rules. Architecturally, it's a debate that has been going on for a long, long time.
>And nowadays big data centers are basically just SDNed, and you might as well not be using IP in the data center at all, because nobody's routing the packets. It's all just one big virtual bus network.
The opposite trend is true in large data centers. L3 fabrics where everything is routed have become extremely popular because BGP (or custom SDN setups) can be used to migrate IPs and you get to utilize multiple paths (rather than the single path offered by STP convergence).
Exactly. Virtually all new DCs are being built using leaf-spine architectures that leverage pure IP routing and optimize for east-west traffic internally. I'm also starting to see the rapid proliferation of L3-only SDN systems like Project Calico [1] in fairly large Fortune 100 companies, for any sort of endpoint, particularly containers these days.
But then you're tunneling your L2 across your L3, so if you provide connectivity to a customer in a datacenter, all they'll see is the L2. But you're right: underneath, everything is an IP fabric these days.
Interesting article, but it contains some weird statements.
>It is literally and has always been the software-defined network you use for interconnecting networks that have gotten too big. But the problem is, it was always too hard to hardware accelerate, and anyway, it didn't get hardware accelerated, and configuring DHCP really is a huge pain, so network operators just learned how to bridge bigger and bigger things.
IP forwarding (longest-prefix match) is more complicated than MAC forwarding, yes, but it has been done in hardware (ASICs, typically NPUs today) for a long time now.
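The complexity gap in miniature: bridging is a single exact-match lookup, while routing has to find the most specific of possibly many matching prefixes. Hardware does the latter in a clock cycle with TCAMs or pipelined tries; this toy Python (table contents made up) just scans:

```python
import ipaddress

mac_table = {"11:22:33:44:55:66": "port1"}        # bridging: one exact-match lookup

prefixes = {                                      # routing: longest-prefix match
    ipaddress.ip_network("10.0.0.0/8"): "port2",
    ipaddress.ip_network("10.1.0.0/16"): "port3",
}

def route(dst: str) -> str:
    ip = ipaddress.ip_address(dst)
    best = max((p for p in prefixes if ip in p), key=lambda p: p.prefixlen)
    return prefixes[best]

assert mac_table["11:22:33:44:55:66"] == "port1"
assert route("10.1.2.3") == "port3"               # the /16 beats the /8
```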
Operators (I assume ISPs) do not build large bridged networks, as they need their networks to scale as they grow, or they will hit a breaking point where the network collapses. ISPs typically use centralised DHCP servers (as opposed to configuring their access routers) and configure their routers to use DHCP relay. DHCP server configuration is easily automated by just reading your IPAM data; it's a non-issue.
> In truth, that really is just complicating things. Now your operating system has to first look up the ethernet address of 192.168.1.1, find out it's 11:22:33:44:55:66, and finally generate a packet with destination ethernet address 11:22:33:44:55:66 and destination IP address 10.1.1.1. 192.168.1.1 is just a pointless intermediate step.
Bollocks. The abstraction allowed by using an IP address instead of a MAC address is essential, considering that IP addresses are dynamic (even when statically configured, devices can and do get replaced) and MAC addresses are set at the factory. Can you imagine updating the routing table of every device in your network because you had to replace a core router and the MAC address changed? It’s the equivalent of publishing your website on an IP address instead of a DNS hostname...
* Yes, I know MAC addresses can be configured in software on many devices, but that’s even more of a hack than using ARP to determine a MAC address.
Some infamous examples of such Java classes (for some reason Spring always seemed to have more verbose names than other DI frameworks, which is saying something):
That is funny. I am doing some temporary consulting work in a Java shop. I created a layered and complicated solution to a complicated problem. I spent a lot of time defending the complexity of the solution as the necessity it was.
When the requirements appeared to change, I was all like, "Are you sure about that? That's great news because that means we can remove these layers from my solution and simplify the code a lot."
Apparently they had never heard about someone wanting to get rid of their own layers, because they just sat there in silence trying to come up with reasons to keep the (now) unnecessary layers.
In the end, I think they agreed that the new requirements must be incorrect, and the old ones still cover both cases. Imagine that! Throwing away new requirements solely to keep layers.
Computers don't understand concepts; only people do. Our gradual ascent from hardware to intuitively usable mechanisms involves abstracting away the repeated "okay, given these primitives, here's how we'd use them" to get to the next layer up.
But that's not necessarily a good thing. It's the easy way out of "I don't like these details, so I'll try to hide them under something". A good abstraction is worth its weight in gold. A bad abstraction (the vast majority) is worth its weight in dung.
Hopefully some of the static-strictness features in the language will offer people tools to remove layers of abstraction... if they can get permission from humans to do the refactoring :p
> Actually, RARP worked quite fine and did the same thing as bootp and DHCP while being much simpler, but we don't talk about that.
Actually, no. You can only set an IP address with RARP; it gives you no netmask (RARP comes from the pre-CIDR age) or other important stuff like a default gateway or DNS server, which DHCP does.
This proposal was basically a service which would host a static IP for you (similar to the LTE structure but with IP underneath instead of L2), and forward to whatever your "real" IP was using IP-in-IP encapsulation.
As the author states, layers are only ever added :)
That's basically just an IPIP tunnel. It works, but adds tons of latency. A "good" mobile IP (which would be possible with eg. QUIC) would add no per-packet latency.
Correspondent node functionality in Mobile IPv6 does exactly that, by sending the packets directly between the two nodes involved. Basically it works by associating the connection with a 128-bit GUID instead of with the node's current network address, which allows the network address to change without breaking connections.
So... basically exactly what the suggested solution would look like, if it was modified to work with all protocols and not just TCP/UDP.
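The mechanism is simple enough to sketch: key connection state on a stable ID instead of on the address tuple, and treat the peer's current address as a mutable cache. This is a generic toy, not QUIC or Mobile IPv6 (both of which also authenticate a move before believing it):

```python
connections = {}   # stable connection ID -> state

def handle_packet(cid: bytes, src_addr: tuple, payload: bytes) -> None:
    conn = connections.setdefault(cid, {"peer": src_addr, "data": []})
    if conn["peer"] != src_addr:
        conn["peer"] = src_addr        # the peer moved networks: update the
                                       # cache; the connection itself never broke
    conn["data"].append(payload)

handle_packet(b"\x01" * 16, ("203.0.113.5", 4433), b"hello")    # from wifi
handle_packet(b"\x01" * 16, ("198.51.100.9", 4433), b"again")   # now on LTE
```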
>Network operators basically choose bridging vs routing based on how fast they want it to go and how much they hate configuring DHCP servers, which they really hate very much, which means they use bridging as much as possible and routing when they have to.
Very rarely does a network operator use bridging to avoid configuring DHCP. All modern protocols are built on IP, so you still need an addressing scheme, and most people want the Internet, so 169.254 link-local auto-addressing is out. So even in big bridged networks you still have a DHCP server. In fact, you configure less DHCP in one big bridged network than you would for a ton of tiny networks.
The advantage of big bridged networks is that you have to set up very little routing (just the router to get in and out). If you routed between every port on the network, there would be an excessive amount of configuration involved to set up prefixes on every single interface.
OK, so QUIC, or some other common layer 4/4+5 "modern TCP over UDP for network compatibility" solution.
Let's just throw away the concept of "addresses" for authentication and actually use a cryptographic authentication identifier of some kind, combined with some mux iteration ID.
This seems really complicated. Is ZeroTier closer to a cjdns-/i2p-style system, or is it closer to CurveCP/MinimaLT/QUIC? (QUIC being the odd one out of the trio, as it grafts on some awful HTTP semantics, but that's Google for you.)
There are a number of statements like "QUIC is functionally equivalent to TCP+TLS+HTTP/2" in the "QUIC wire specification" and other documents, and this agrees with what I remember seeing in the source in the Chromium repo when I last looked.
I've not read about the IETF version; I'll look into it.
I haven't been following QUIC very closely, but from what I understand, they have put in a proper abstraction between the TCP+TLS part and the HTTP part. While the mapping of how to use HTTP over QUIC is still part of the spec, as I understand it, there shouldn't be any major problems mapping other protocols onto it.
Drop the idea of ports too. Every program gets its own IP
Mentioning ideas like that at work gets queer looks about how it'd be impossible to configure a firewall at that point.
But keep going further. End up with 128 bit CPU where every byte is IP addressable. Necessary security to block random outsiders from reading your memory, but capable of potentially running various parts remotely transparently
With NAT, this has essentially already happened. You could say IPv4 is actually 48-bit addressing, at least on the client side. For all useful purposes, NAT expanded every /24 or even /32 subnet by an extra 16 bits, which is the real reason we still haven't run out of IPv4 addresses.
That could be extended to the server side if we used something like SRV records instead of defaulting to port 80/443.
On another note, I have long wished for something like specifying protocol port numbers in a DNS record, ever since I ran my first ircd on a shared hosting server, and continuing today with multiple httpds (nginx, znc, etc.) and ircds (ZNC, bitlbee) and such on a single IP.
Not for any technical reasons, though--I'm just lazy, and specifying port numbers is annoying.
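SRV records already express exactly this: host and port per service. A sketch with dnspython, against a hypothetical domain (the extra-lookup latency mentioned downthread is one reason browsers never adopted it):

```python
import dns.resolver   # pip install dnspython (2.x)

# "where, and on which port, does example.com serve HTTPS?"
answers = dns.resolver.resolve("_https._tcp.example.com", "SRV")
for rr in sorted(answers, key=lambda r: (r.priority, -r.weight)):
    print(rr.target, rr.port)   # e.g. server1.example.com. 8443
```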
Maybe it's early and I've not drank enough coffee yet, but what benefit would this provide? You'll be in effect doubling the DNS lookups needed to connect to a website adding latency to time to first byte, with seemingly zero benefit.
The main benefit would be the ability to offer public Internet-facing services through a NAT, even a carrier-grade NAT that lumps multiple ISP customers behind a single IP address. (You would need some kind of cooperation from the NAT to be able to allow incoming connections on a port range.)
JdeBP's post provides a good answer about the SRV records. The short version is that a single DNS packet can contain both A/AAAA and SRV responses.
> But keep going further. End up with 128 bit CPU where every byte is IP addressable. Necessary security to block random outsiders from reading your memory, but capable of potentially running various parts remotely transparently
But that breaks the layering (where reachability is provided by L3 and transport by L4). Now existing internetwork routers cannot handle your new 144-bit address format.
The folks at IETF meetings are doing a wonderful job trying to keep existing tech working. That's why you always extend old standards and rarely deprecate anything. Just look at BGP, for example.
We, the application software programmers on layers 5 through 7 with the "move fast and break things" attitude, could not ever have designed anything like the internet and keep it running as long as our current one has been.
With IP-per-container we're pretty close to this already. systemd could launch every service on my system into its own network namespace with a unique IPv6 address + NAT via my IPv4.
Not even IP. Source/destinations would both only be crypto identifiers. The multiplexing part is just a convenient way of expressing different data streams to simpler protocols for QoS and other activities.
That does imply that there should be a way (possibly host to host, possibly built into some kind of service resolution system) of looking up a "name entry" and "service type".
> how it'd be impossible to configure a firewall at that point
Why is that? Wouldn't those addresses be hierarchical?
Mobile addresses for applications would make the usual case of "hey, you migrate service X to that other switch, and nothing works anymore" basically disappear.
Because of the myriad firewalls and routers that don't understand IP protocols other than TCP or UDP. IPsec tunnels were commonly encapsulated in UDP packets for this very reason: it's safer to assume that at least one device along the way won't understand protocol 50.
Multipath TCP isn't really a new protocol so much as an extension to TCP, and it wouldn't offer the benefits that UDP would, as it is still constrained by (and benefiting from) the design goals of TCP.
It's hard to find a copy of that book but oh, man, if you know the stack and you lived through the ISO/OSI proposals, it's so so good.
I got lucky and read that book after I had ported Lachman's STREAMS-based networking stack to the ETA-10 and SCO's Unix. I didn't know what I was doing, but I had to get the job done; I was just dealing with shorts that weren't 16 bits, stuff like that. So I was a grunt thrashing around. At some point I wanted to know more about what I was doing; I still have a notebook where I wrote down every packet format, all the IP stuff, TCP stuff, UDP stuff, ARP stuff, etc. Shout out to Masscomp, because I was a sysadmin on their machines in college and they had a great intro to networking that formed the basis of my limited understanding of it.
It was Padlipsky's book that brought the whole thing into focus. I dunno if you guys have had that clarity problem, I had it again when I went to Sun and was working on the kernel, had no idea what I was doing, thrashed about, and slowly, slowly, the architecture of what Sun had done came into focus. It was amazing to me when I got it. It took a lot of time just looking, reading the code, trying to see the picture. Padlipsky's book made me get networking long before I came to Sun, I was a n00b at everything and he made me get it. And it was funny as heck.
"Do you want protocols that look nice or work nice?"
"If you know what you are doing 3 layers are enough, if you don't 7 aren't"
His book was full of that stuff and it made you get the stack. If you are into the network stack, and especially if you are trying to figure it out, get that book.
In fact, if you contact me and say "that's me, but I'm broke", I'll see if I can find a copy on eBay or someplace and send it to you.
If HN doesn't upvote this to the stars, you suck. It is the essence of what HN likes, it's great tech presented well. Go find that book, go read about Mike, come on people, dig a little.
I don't care about the upvotes, I care that you go read. And read something that will help you see.
Maybe this thread is dead and I need to do a top level post about Mike.
This was a very informative article for me, but there was one thing I didn't understand. At the end he made the case that mobile routing needed essentially two layers: a fixed per-device (or per session) identifier, and then a separate routing-layer address that could change as a device moved. QUIC has session identifiers, and that's great and could solve the problem.
But earlier in that very article, he already pointed out that every device already has a globally unique identifier used in layer 2 routing ... the ethernet MAC address.
Would someone please explain to me why we can't use MAC addresses as globally unique device IDs?
In theory we can use MAC addresses, but there are problems: 0) Privacy: You don't want all traffic to be labeled with your hardware ID. 1) Flatness: MACs are essentially random, and a router would need a huge table to keep track of who's where. IP (v4/v6) assigns addresses hierarchically, making routing tables manageable.
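The flatness point is easy to demonstrate with Python's ipaddress module: hierarchical assignment lets adjacent networks collapse into a single routing entry, which randomly assigned factory MACs can never do:

```python
import ipaddress

# 256 adjacent /24s, as a hierarchical allocator would hand them out
nets = [ipaddress.ip_network(f"10.1.{i}.0/24") for i in range(256)]

print(list(ipaddress.collapse_addresses(nets)))   # [IPv4Network('10.1.0.0/16')]
# one routing entry covers all 256 networks; with flat random MACs, a
# bridging table needs one entry per individual host
```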
The idea would be to use IP addresses for all levels of routing, and MAC addresses only on the endpoints to identify the connection. So routing actually becomes simpler. However, you have a good point about privacy. One of the other commenters also mentioned non-unique MACs.
Another point against MACs is that they don't make much sense if you have a service running on multiple hosts. I mean, you could introduce "virtual MACs", but it seems better to keep the idea of "service ID" separate from "device ID". Session IDs solve the multiple-hosts problem too, by completely avoiding it.
"The problem with ethernet addresses is they're assigned sequentially at the factory, so they can't be hierarchical. That means the "bridging table" is not as nice as a modern IP routing table, which can talk about the route for a whole subnet at a time."
MACs aren't unique and can be changed. Certain fly-by-night Asian manufacturers just make up MACs randomly. It's a big problem with counterfeit gear, too.
They are supposed to be unique, but in the real world they are not.
The "Internet Mobile Host Protocol" (IMHP) was written as a draft RFC in 1994. As far as I know it was never adopted, but is it still relevant, even as an inspiration for IPv6?
A little off topic, but the TCP BBR Congestion Control they mention looks promising. I've been annoyed by "Bufferbloat" for over a decade and find different solutions to the problem pretty fascinating.
The nice part about this solution is that it doesn't require making changes to the individual nodes on the network (e.g. cable modem) in the way that other solutions have required (small and fair queues).
It also appears to be able to avoid the usual packet-drops of regular TCP congestion control.
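On Linux you can even opt individual sockets into BBR. A sketch, assuming a kernel with tcp_bbr available (socket.TCP_CONGESTION exists in Python 3.6+, Linux only):

```python
import socket

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# per-socket congestion control; requires the tcp_bbr module to be loaded
s.setsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, b"bbr")
print(s.getsockopt(socket.IPPROTO_TCP, socket.TCP_CONGESTION, 16))  # b'bbr\x00...'
```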
Part of the difficulty here is that you're not just upgrading the whole stack; you're layering on whatever stack is already there. That's a necessary part of deploying any new technology without replacing everything from the basement up. I'm not sure what this guy would do instead, however; as someone with a decent networking background, I got completely lost in the end.
The author has a verbose writing style, and may not be a genius, but they are clearly familiar with networking protocols and do a good job explaining the general scene and history.
I just disagree with some of his premise. I don't think it's feasible to get away from any and all layer-2 broadcast messages; even limiting yourself to IPv6 multicast still leaves you with some pseudo-broadcast messages for housekeeping or auto-discovery purposes. It's part of what makes IP-based networking work in general, and I'm talking about services that run at layer 4 that rely on that primitive existing to make certain functionality happen.
Routing still adds complexity, and I see no way for it not to. He talks about each switch being a router, for example; you then need a way to determine where a particular subnet you're trying to reach lies, which means a routing table (akin to a MAC address table, but for prefixes), and some way to ferret out which interface a particular device lies behind (akin to ARP, but searching for a prefix to route toward). In the end you have the same primitives and complexity, just moved up a layer in the stack, which does nothing to get rid of complexity; it only adds to it. IP networks in organizations are often a tree; ethernet is often deployed with other topologies, relying on spanning tree to give you more redundancy without extra configuration (at the expense of some delay on re-convergence).
I also think that so long as we're dragging along the legacies of the disparate layer-1 technologies (which in any case would not and could not go away), we're kinda stuck with what we have.
The point is that actually IPv6 already includes all the complexity you're talking about: complicated multicast to replace complicated broadcast, complicated routing to replace complicated bridging. The underlying problem with IPv6 is that it includes all this complexity because they expected to have to replace layer 2 bridging. But this never happened, so now we have all those features twice, which is worse.
So IPv6 in essence is the future that was stillborn - we still plan and develop our networks for an IPv4 world, and then run IPv6 on top of them, correct?
Well he doesn't seem to be aware of all the ID/Locator split discussions in the IETF (see HIP, LISP, ILNP).
Mobile QUIC will give Google even more insight about how users move from network to network..
Mobile IP didn't fail because the latency was too bad. It failed because there was no financially viable use case. The mobile providers wanted a purely network-based roaming solution to control the billing model. WiFi vendors couldn't count on OS support for Mobile IP, so they went for L2 solutions. And because there were L2 solutions, the OS vendors didn't bother to implement Mobile IP.
It has a more visual explanation of the OSI model and how it relates to routing and different kinds of hardware. I also tried to explain some of the interesting problems in actually building out a network in the second half of my talk.
If anyone is just trying to learn the basics of networking, I'd also strongly recommend the Juniper Networking Fundamentals online class. It's free at https://learningportal.juniper.net/juniper/user_activity_inf... or you can find videos of it on YouTube.
One big UX mistake of IPv6: it was not made backward compatible with IPv4. (v6)0.0.192.168.1.10 == 192.168.1.10(v4).
This simple design would have meant that planning and rolling it out was a matter of incrementally updating networking stacks to also support v6. Now it turns out v4 and v6 are completely different, and no one has a big enough reason to make the change until everyone else does. A hard chicken-and-egg problem.
Backwards compatibility cannot work. You cannot answer an IPv6 packet with IPv4; there is no room in the header for the much bigger source/return address.
You can try hacks like NAT (as you probably do in your home IPv4 network), which breaks peer-to-peer protocols. The IPv6 version is called DNS64/NAT64, and it breaks even more things, e.g. DNSSEC, because it not only requires network address translation (lying about your IP) but also rewriting DNS records (lying about signed DNS records).
The only sane way forward is get rid of IPv4 as fast as possible (even if 'fast' means a decade or two).
> You can try to do hacks like NAT (like you probably do in your home IPv4 network, which breaks/stops any peer-2-peer protocol).
That’s demonstrably untrue. Sure, p2p protocols and NAT devices need to account for NAT, but to imply that they’re impossible to use with NAT is just silly. Many p2p protocols work via NAT all the time...
Sure, you can add another layer of hacks like UPnP or NAT-PMP to work around the breakage a bit. Or rely on side effects, like NAT boxes waiting for UDP responses after they forward such a packet. And you can spend the rest of the day telling yourself this is totally fine, because it works most of the time.
Or you could step back and see that this is a terminally ill protocol on life support.
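The "side effect" in question is UDP hole punching, and it really is only a few lines. A sketch, assuming each peer has already learned the other's public ip:port out of band (e.g. from a rendezvous server):

```python
import socket

def punch(local_port: int, peer: tuple) -> socket.socket:
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    s.bind(("0.0.0.0", local_port))
    for _ in range(5):
        s.sendto(b"punch", peer)   # outbound packets open a mapping in our NAT;
                                   # the peer does the same, and the mappings meet
    return s

# peer A: sock = punch(5000, ("198.51.100.7", 5001)); sock.recvfrom(1500)
# works through full-cone and address-restricted NATs; symmetric NATs defeat
# it, which is what TURN relays are for
```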
I don't mean it that way. Yes, the on-the-wire format would have changed; maybe there would have been a required transition from v4 to a v4.1, which could then have been backward compatible with v6.
To be fair, I don't think the authors of v6 at the time realized how much friction an alternative IP stack would cause.
That .1 change has __all__ of the same problems as the move to v6 (and maybe a few more).
(Because it makes things even more complicated: now you'll have not two incompatible things but three.)
And of course the problem is/was economics. No incentive to change; v4 is good enough for now, for the incumbents. And even if large companies with new users are running out of v4, they can afford to trade netblocks and/or deploy carrier-grade NAT, both of which are cheaper than telling others to upgrade (you'd lose customers/subscribers otherwise).
> One big UX mistake of IPv6: it was not made backward compatible with IPv4.
The sin was committed when IPv4 was designed without allowing for a variable or expandable address space; it is not IPv6's fault.
Adding an IP Option to IPv4 packets that could carry extra address bits was not an option either -- IP options aren't preserved much at all on the Internet. Furthermore, even if most routers didn't drop IP options, adding "v6" address space via IP option in a packet that old/v4-only devices would nevertheless attempt to parse would have been hell operationally.
Granted, IPv6 has lots of complexity/flaws/idiosyncrasies/weirdnesses (multicast, mobility, SLAAC, NDP, the pretty-printing / the colons, extension headers, etc.) that mostly only look good through the rose-tinted glasses of the 90s, significantly slowed down deployment, and in the end mostly ended up as "difference for difference's sake". But that the transition is difficult is also IPv4's fault, for not having a robust address-space expansion mechanism.
IP options make it through the core of the net just fine; it's generally edge networks where IP option filtering occurs. This could be restored... so if someone wanted to suggest a backwards-compatible extension to IPv4, that might be cool.
There is a straight-forward mapping of the IPv4 address space into IPv6 (you can do ping6 "::ffff:127.0.0.1"), and for new development that is also by far the easiest way to support both protocols: the application does everything in IPv6, and without writing a single line of code, the OS takes care of transparently using IPv4 when needed.
There is no mapping back from IPv6 to IPv4 (so IPv6 is backwards compatible, but not forwards compatible), but it can't be because the whole point is to have a larger address space.
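That mapping is also why a single IPv6 listening socket can serve both protocols; without a line of IPv4 code, v4 clients just show up with ::ffff: addresses. A minimal sketch (the v6-only default varies by OS, hence setting it explicitly):

```python
import socket

srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)   # accept IPv4 too
srv.bind(("::", 8080))
srv.listen()

conn, addr = srv.accept()
print(addr[0])   # an IPv4 client appears as '::ffff:203.0.113.5'; the
                 # application only ever sees IPv6
```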
With the way IP Routing works, every router on the path makes a routing decision. So two people on the "new IPv4" would be unable to talk to each other if even one router on their path doesn't understand the new format. Not to mention that to be fully compatible, you'd need a way for a person with the address "0.1.192.168.1.10" to talk to an IPv4 only client on 192.168.1.10.
Then there are switches and routers that make assumptions about incoming packets, so we can't do strange bit hacks with IPv4 packets. http://seclists.org/nanog/2016/Dec/29
Switching to a new protocol is the only real choice at this point.
One case where I found IPv6 great was on my home network. All my systems have IPv6 addresses and I can ssh into them from a remote network which also supports IPv6. I don't need any NAT. But, I do have an IPv6 firewall of course.
Except people _are_ making the change. When my ISP adopted IPv6, I just started using it. And a significant chunk of my traffic was IPv6. Backwards compatibility would have driven faster adoption, but just was too much of a technical hurdle. Dual-stack however is very possible, and is working to drive adoption. IPv6 usage is increasing. The change is happening.
I would like the extension of IPv4 to be the option to also specify the internal IP you are trying to reach, so x.x.x.x:192.168.0.100, for example. That would be backwards compatible and give 8 bytes of address, which is more than enough for all futures.
I found the piece informative and entertaining. But I'm not technical enough to comment much. I would have liked to see what he thought of MPTCP as a replacement for TCP.
We have to stop the IPv6 debate; it's already reality.
Even if you don't like it, even if you think it's ugly - doesn't matter.
US IPv6 mobile traffic passed 50% some time ago https://engineering.linkedin.com/blog/2017/07/linkedin-passe...
IPv6, at least on mobile, is real, and many of us didn't even notice it's there.
The IEEE hardware and IETF software guys have been busy adding complexity to the networks, with so many legacy protocols (when everyone just uses TCP/IP) and extra ports (when everything happens on port 80 - seriously, even email is now on cloud services).
I can't get LTE because of political problems. So I just gave up trying to be online, and started caching everything possible.
Meanwhile, storage is getting larger capacity, smaller size, and cheaper. I've got a 512GB SD card in my pocket all the time, with a backup of my laptop in case my bag gets stolen.
My phone does everything offline if possible. Offline MP3 music. Offline maps. Wikipedia. StackOverflow. Hacker News. FML. UrbanDictionary. XKCD. The few YouTube videos I actually want to see again.
The only thing I need Internet for is communication. To send a message, I walk around looking for open WiFi and type my message to them on Facebook Messenger. If they need to reach me urgently, they can just use my phone number (which keeps changing every 6 months for the same political problems).
What if access points had large caches with mirrors of the content people want? Instead of asking Google's server in the US to send me a map tile, what if I could just get it from the local WiFi AP's web server? It would be much faster, and save so much trouble with networking.
Sure, there are some things that people need the network for (e.g. new content, copyrighted material). But so much else is free of licenses, and would be possible to mirror locally everywhere.
Surprising to see a recommendation for QUIC by someone who seems to acknowledge djb's contributions and incredible attention to detail. http://apenwarr.ca/log/?m=201103#28
Correct me if wrong, but QUIC was inspired by djb's CurveCP?
Would you rather have djb implement your trusted UDP congestion-controlled overlay or a company with 70,000+ employees who are paid from the sale of online ads?
@hashbreaker (Apr 15): "CurveCP's zero-padding (curvecp.org/messages.html) was designed years before ringroadbug.com, explicitly to stop that type of attack."
The Ring-Road Bug is a serious vulnerability in security protocols [e.g., QUIC but not CurveCP] that leaks the length of passwords, allowing attackers to bypass user authentication. The IETF HTTP/2 effort led by Google is working to create a patch to protect the security protocols vulnerable to Ring-Road.
Researchers at Purdue University identified this as a major security issue with Google's QUIC protocol (Quick UDP Internet Connections, pronounced "quick").
Is anyone else shocked at the low level of adoption of IPv6? I remember how in the late 90s people were saying we were going to run out of addresses and everyone need to migrate to IPv6 ASAP. Now, it seems that IPv4 is going to be around for a long while.
What if the server needs to send you a packet while you're mobile but you haven't sent it a packet yet so it can update its cache? That packet will be lost in his scheme. Nice try.
IP is best effort. Packets get lost all the time. Higher protocols, like TCP and QUIC, all handle packet loss--typically by trying again. Losing a packet is better than losing all open connections.
I think it may be worse than a lost packet, though. If the destination host is truly gone from that network, the nearest router might reply that no such computer exists, causing the socket to be disconnected. (It's been a while since I've studied TCP/IP; I'm not sure what the exact correct reply would be...)
Perhaps this could be mitigated by adding a timeout that keeps sockets alive for a while in case the destination shows up somewhere else.
I am very glad IPv6 didn't catch on. The world in which it was designed was not a world in which everyone (NSA, Google, Facebook) was trying to document and correlate every tiny thing you do, whether it is related to them or not.
If IPv6 eventually becomes widespread, I hope it comes with ISPs that will let you replace your prefix, and phones/hardware that will randomize your suffix - otherwise, the internet becomes completely pseudonymous.
IPv6 did catch on. Every consumer of the UK's largest broadband services (BT, Sky) now has access to the IPv6 internet. Many, many people across the world have access to it, with clients that prefer connecting to IPv6. And much of the world, especially on mobile but now even on broadband, doesn't even have an IPv4 address of their own - they're NATed along with their ISP's other subscribers through a handful of IPs for a whole ISP. An IPv6 address is the only address they actually have.
IPv6 is the only way we're ever going to create working peer-to-peer infrastructure. If you intend to keep anonymous, integrate Tor or HORNET into your protocols.
I am not familiar with the UK situation these days, but in every country I've visited in the last year (the US, quite a few European ones, and a couple of Asian ones), IPv6 wasn't more than a small, irrelevant thing.
> And much of the world, especially on mobile, doesn't even have an IPv4 address of their own - they're NATed along with their ISP's other subscribers through a handful of IPs for a whole ISP. An IPv6 address is the only address they actually have.
And that's a great thing if you care about privacy (and I do). And yet, peer-to-peer on these things works reasonably well using ICE, STUN, TURN and friends, and if you want a public IPv4 address, the going rate wherever I look is about $1/month.
> IPv6 is the only way we're ever going to create working peer-to-peer infrastructure
We practically have less-than-perfect-but-still-working peer-to-peer already; the lack of an immediately direct connection is not, I believe, what's stopping "working peer-to-peer" from happening. The vast majority of ISPs in the US, for example, block incoming port 80 and outgoing port 25, and for good reasons: most users cannot be trusted to run an addressable peer. So with IPv6 it would be technically easier to do p2p, but practically the same, as it will still be firewalled by the ISPs.
And the price for this improvement will be complete traceability of your actions across every website. Right now, Google and Facebook can only (easily) exchange info about you if you gave enough of it to them, or if they decide to share cookies (which you can see and stop). On IPv6, it would be enough for them (and Wikipedia, and ISPs, and everyone else) to just trade access logs.
> every country I've visited in the last year (US, quite a few european and a couple of asian) IPV6 wasn't more than a small irrelevant thing
Well, yes. It works so long as you're connecting to someone else who has an IPv6 address; you don't really care about it unless it's broken.
> And that's a great thing, if you care about privacy (and I do).
It's really not. Your ISP can quickly deanonymize you, and there are regular "misconfigurations" which do. Facebook et al. have no problem tracking you between sites pretty much no matter what you do; your browser cache can be used for that without even touching JavaScript.
Again, if you want to be anonymous on the internet, use Tor. It accomplishes what you're looking for in a NAT to a much better degree. If you want to keep other users' privacy, encourage the use of onion routing in new protocols, and encourage the use of Tor to access the legacy internet.
> And yet, peer to peer on these things works reasonably well using ICE, STUN, TURN and friends
Which require, of course, somebody running a centralised server and willing to pay for the bandwidth of TURN. This outright prevents proper peer-to-peer infrastructure from happening - the people running these services need to pay for them somehow. Even working around it via e.g. Skype's "supernodes" is expensive in terms of developer cost and the amount of expertise needed to create such a system.
> the vast majority of the ISPs in the US, for example, block incoming port 80 and outgoing port 25
And allow all other ports, hopefully? Peer-to-peer infrastructure is not going to run over HTTP and email. It's going to run over brand new protocols and ecosystems, many of which are sitting in a variety of research papers waiting to be implemented.
FTR, they block incoming port 80 because they want to maintain an artificial differential between "consumer" and "business", not for any security rationale; most of the rest of the world doesn't do that, they just have a firewall blocking everything incoming on the ISP-provided router by default, and you can unfirewall port 80 if you want to. Blocking outgoing port 25, OTOH, is done because SMTP is a terrible protocol that by default assumes every node on the internet is trustworthy, and ISPs were roped in to ensure nobody ever had to change it.
> It's really not. Your ISP can quickly deanonymize you, and there's regular "misconfigurations" which do. Facebook et al have no problem tracking you between sites pretty much no matter what you do - your browser cache can be used for that without even touching javascript.
Actually, Facebook has great problems tracking me between sites, because I make sure that it does (by using different VMs for different aspects of my work and life, none with access to hardware acceleration, and by using proper web filtering at both the browser and gateway level). They have it easy with the vast majority of the population, no doubt, but for now my actions get mixed with everyone else's in such a way that Facebook would actually have to assign a person to deanonymize me. Similarly Google.
My ISP can quickly deanonymize me, but at this point in time they don't unless they get a government request (I'd be surprised if they actually demand a warrant). Switching to IPv6 would effectively deanonymize me constantly.
> And allow all other ports, hopefully? Peer-to-peer infrastructure is not going to run over HTTP and email. It's going to run over brand new protocols and ecosystems, many of which are sitting in a variety of research papers waiting to be implemented.
That's a great ideal. No, they don't allow all other ports, but what they allow or block varies a lot by service class, area and ISP, and you'd know for sure only after you tried (it used to also change often, but I heard it's converged; I'm not living in the US anymore)
> Which require, of course, somebody running a centralised server and willing to pay for the bandwidth of TURN. This outright prevents proper peer-to-peer infrastructure from happening - the people running these services need to pay for them somehow. Even working around it via e.g. Skype's "supernodes" is expensive in terms of developer cost and the amount of expertise needed to create such a system.
Supernodes were retired because they do not work well anymore (and haven't in a few years). I do not find "pay $1/month to provide the service" too onerous; there are also public ICE/STUN/TURN servers.
I find it disingenuous that you completely dismiss the societal cost (privacy) and the engineering costs (the reason IPv6 is still not dominant despite being "in the works" for 20 years now) because of some future protocol that has not been shown useful over those 20 years ("research papers waiting to be implemented"). There is enough IPv6 out there to make the case for the need, and the ONLY case that has been made is "we're running out of IPv4", which is not wrong but far from dire, as I can still get 100 IPv4 addresses for $50, the same price I paid ten years ago.
> My ISP can quickly deanonymize me, but at this point in time they don't unless they get a government request
http://www.bbc.co.uk/news/technology-16721338 - something I remember from recent-ish history. That data is, of course, still passed to O2's partner organisations (which don't seem to actually be listed anywhere), and you have no control over it.
> I find it disingenuous that you completely dismiss the societal cost (privacy)
I don't. I think there are other, significantly better solutions for it. I don't think NAT provides reasonable privacy in and of itself.
> the engineering costs
In practice, the fact that it's been spread out over 20 years so far is because that's how long it takes to get round to replacing an entire nation-wide deployment of carrier-grade infrastructure at all unless there's other reasons to do so. Smaller/regional ISPs have been on IPv6 for years now, partially because buying enough IPv4 space would be prohibitively expensive and partially because there's no reason not to. The technical details of IPv6 support were resolved in pretty much all networking kit a long, long time ago - it's a marginal cost at this point. The rest of it is primarily planning, testing, and replacing ancient consumer routers.
> the ONLY case that has been made is "we're running out of IPv4" which is not wrong, but far from dire as I can still get 100 IPv4 addresses for $50, which is the same price I've paid for it 10 years ago
And yet I can't get a real IP address for most of the things I'd like to. My ISP tries its hardest not to sell IPv4 addresses to anyone (it can't buy them quickly enough, and buying them is a huge resource drain - they lose money on every address sold, which is then made back up in subscription costs), let alone "home" users. On the other hand, it literally gives out static IPv6 ranges if you ask nicely.
> That data is, of course, still passed to O2's partner organisations (which don't seem to actually be listed anywhere), and you have no control over it.
Verizon was also doing this for mobile customers in the US, and perhaps still does. I vote with my wallet against these ISPs. You did have some control over it, for example by using HTTPS. But IPv6 prefixes are so plentiful that they are assigned one per customer, which makes correlating logs trivial. Even schemes like O2's or Verizon's required some per-ISP effort; with IPv6 there's no such effort and no need to inject headers. The prefix is your undeletable cookie.
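To illustrate the "undeletable cookie": if a log contains full client addresses, collapsing them to the delegated prefix recovers the household no matter how the suffix is randomized. A sketch, assuming a /56 delegation (common, though real delegations range from /48 to /64; addresses are made up):

```python
# Collapsing logged client addresses to the delegated prefix recovers the
# household regardless of suffix randomization. Assumes a /56 delegation.
import ipaddress

def household_id(addr, prefix_len=56):
    return str(ipaddress.ip_network(f"{addr}/{prefix_len}", strict=False))

print(household_id("2001:db8:1234:ab00:49c2:7f11:8a2e:370"))   # laptop
print(household_id("2001:db8:1234:ab01:dead:beef:1234:5678"))  # kid's iPad
# both print 2001:db8:1234:ab00::/56 - the same "cookie"
```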
> I don't. I think there's other, significantly better solutions for it. I don't think NAT provides reasonable privacy in and of itself.
It's not the NAT that affords privacy - it's the size of the address space: large enough that there are addresses to go around, but not so large that an ISP can avoid reassigning them.
The NAT only affords as much privacy as suffix randomization (as has been noted in this thread), which is "very little" to "not at all".
What are those other "significantly better" solutions you are aware of? I've been looking for them and found none.
> And yet I can't get a real IP address for most of the things I'd like to.
Likely because you are on a residential ISP and it's not their business (my ISP will gladly sell me one if I switch to the "business class" service, which is exactly the same except it costs about twice as much; I'd pay more to NOT have a fixed IP address).
Get an Amazon free tier and tunnel through it. Or pay $2 for a lowly VPS to tunnel through.
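For the tunneling route, a reverse SSH tunnel is usually all it takes. A sketch, assuming a VPS reachable at vps.example.com (placeholder host) and a web server on your home machine:

```
# Run from the home machine: exposes home port 80 as port 8080 on the VPS.
# -N = no remote shell, just the tunnel.
ssh -N -R 0.0.0.0:8080:localhost:80 you@vps.example.com
# The VPS's sshd_config needs GatewayPorts yes (or clientspecified).
```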
I don't think your wish to experiment is somehow more important than my wish for privacy. Neither of us gets to actually vote (except with our wallets), though.
> It's not the NAT that affords privacy - it's the size of the address space which does have enough IP addresses, but not so many that an ISP can avoid reassigning them.
Again, we live in a world where CGNAT is a thing. My own ISP puts all IPv4 connections through CGNAT by default unless you explicitly opt out. Many smaller ISPs do the same - one of the new gigabit broadband services in my country will not allocate IPv4 addresses to customers, instead going for CGNAT and requiring an additional payment of £5 a month for an IPv4 address.
Mobile ISPs all implement CGNAT on IPv4 at this point - if they attempted to buy enough address space for every active mobile phone to have an IP, there'd be a serious problem.
Every single user on each of these networks does not have a routable IPv4 address. You cannot make a direct connection to these devices. IPv6 solves that problem.
> What are those other "significantly better" solutions you are aware of?
Tor. Future protocols should integrate HORNET or similar. If you really want a NAT without onion routing, use a VPN that'll do it.
> Likely because you are on a residential ISP
That's literally the point here. There's a differentiation between a "residential ISP" which can only ever consume and never participate as an equal part of the network, and a "business ISP" which is significantly more expensive because it comes with an SLA that I don't need or want.
IPv6 allows me to be an equal part of the network at the same cost as my current broadband service. I can run a website off my raspberry pi without paying anyone a penny. I can SSH/remote desktop into my home machine without having to create a "jump server". I can participate in peer-to-peer networks without depending on the hope that some other people on the network have machines that I can directly connect to, so that nobody else has to directly connect to me.
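As a minimal sketch of the "website off my raspberry pi" case: with a globally routable address, Python's stdlib server bound to the IPv6 wildcard is genuinely the whole setup (port and handler illustrative):

```python
# Stdlib HTTP server bound to the IPv6 wildcard address - no tunnels,
# no jump hosts, provided your address is routable and the port is open.
import http.server
import socket

class HTTPServer6(http.server.ThreadingHTTPServer):
    # v6 socket; may also accept v4 connections depending on OS defaults
    address_family = socket.AF_INET6

HTTPServer6(("::", 8080), http.server.SimpleHTTPRequestHandler).serve_forever()
```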
Ok, just to clear up the confusion (because not all posts in this thread use the same terminology):
Home NAT, which is equivalent to suffix randomization, does NOT afford any privacy.
Carrier Grade NAT, which would be equivalent to prefix randomization (if such a thing existed), DOES afford some privacy, provided that care is taken not to leak other data (through cookies, browser fingerprinting, stylometrics, etc.).
I am not currently behind a CGNAT at home, because my ISP is apparently IPv4-rich, but they are planning to switch at some point. I am behind a CGNAT on my mobile. I have no problem doing peer-to-peer on either, using a STUN server I run on a $2 VPS that comes with an IPv4 address. I also tunnel SSH to my home through it when I want to.
The same ISP, if I request IPv6, will give me the prefix it assigned to me the day I signed up. That's how they roll. (They actually pitch it as a feature: "you pay for a fixed IPv4, but you get a fixed IPv6 for free, without even asking!")
IPv6 allows you to play "equal part" - it's routable, yes, but if everyone were equal we would have mob rule by DDoS attacks way worse than we do now (perhaps everyone is equal and we will have them; if so, it will stop being the case after a few high-profile attacks).
Also, 99.9% of people do not know how to secure their networks or devices. If everything were routable, as you seem to desire, I think we'd be worse off. As it is, the local home NATs provide a bit of security (which no one would have designed in; we got lucky it was there because of address scarcity), and the CGNATs/random v4 assignment provide a bit of privacy (which got lip service, but would not have been as effective if not for address scarcity).
My threat model includes "$company can track my whereabouts online regardless of what I do about it". Your threat model seems to be "I can't route to my server without another hop". It's not that one is valid and one is invalid - it's just that they are incompatible with each other.
> "Because the default wi-fi password formats are known, it's not difficult to crack them," said Mr Munro.
> Once an attacker has access to your wi-fi network, they can seek out further vulnerabilities.
I'm well aware of it, but that just means all those amazing peer-to-peer protocols[0] that are waiting to be implemented are hyperbole, doesn't it? You know, "default deny" and stuff. Oh sure, there will be a protocol, probably called "Universal hole-Punch aNd-get Pwned" or some acronym thereof, to relax that "default deny".
v6 with privacy addresses is not very much different to current v4 with NAT on the privacy front. You'll still be tracked with cookies and browser fingerprinting either way.
Out of curiosity... without cheating, what do you reckon v6 deployment is at for clients in the US -- that is, what percentage of clients do you think use v6 to connect to v6-enabled sites?
> v6 with privacy addresses is not very much different to current v4 with NAT on the privacy front.
Are you aware of an ISP that will give you a new v6 prefix on demand (say, once every hour or day or week)? Or one that mixes all customers? Otherwise, the NAT you do on your own behind that prefix is of very little (though not strictly zero) practical use; it just means that if someone gets access logs from two websites, they don't know whether two requests were made from my laptop, or one was from mine and the other from my kid's.
I am not living in the US these days and have no knowledge on which to base an estimate... but I haven't received an AAAA DNS record for any request I've made across several countries.
edit: added: My specific browser setup, described somewhere else around here, makes it hard to track or fingerprint. IPv6 takes that ability away from me (and everyone else).
> I haven't received an AAAA DNS record to any request I've made through several countries
Wait, whether you receive an AAAA DNS record has nothing to do with whether you're in IPv6 - it's to do with whether you're requesting AAAA records. How exactly are you testing this? What does `dig google.com AAAA @8.8.8.8` get you?
I actually never asked for Google, because I knew it worked; I never asked for Amazon, because I assumed it worked - only for the not-top-10 sites I use. However, I just tried Amazon for the first time ever and got this:
Maybe they hate my ISP, but also anything behind Cloudflare (e.g. news.ycombinator.com, and about 80% of the sites I regularly visit) doesn't seem to have an AAAA record.
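(For reference, this is roughly how I check - getaddrinfo restricted to IPv6 fails when a name publishes no AAAA record, at least on typical configurations; the site list is just illustrative:)

```python
# Does a hostname publish an AAAA record? AF_INET6 lookups fail with
# gaierror when there is none (on most setups, which don't map v4).
import socket

for host in ["google.com", "amazon.com", "news.ycombinator.com"]:
    try:
        addrs = {ai[4][0] for ai in socket.getaddrinfo(host, 443, socket.AF_INET6)}
        print(host, "->", ", ".join(sorted(addrs)))
    except socket.gaierror:
        print(host, "-> no AAAA record")
```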
> Maybe they hate my ISP, but also anything behind Cloudflare (e.g. news.ycombinator.com, and about 80% of the sites I regularly visit) doesn't seem to have an AAAA record.
We provide IPv6 for all our customers by default. Some customers choose to disable IPv6 (the most typical reason appears to be they have anti-abuse systems that require the client IP to be v4).
Thanks. I was aware of that, and still every Cloudflared website I've ever checked was IPv4-only, for whatever reason - I assumed it was off by default; it's surprising that it's on by default and still so many turn it off.
I'm even more surprised at Amazon lacking an AAAA record, though. They surely have the data to tell, and IPv6 won't improve their retail business (or they have IPv6 fraud problems that would negate whatever improvement).
Can you share what percentage of customers have IPv6 turned off explicitly?
Can you share what percentage of hits to Cloudflared sites come from IPv6-capable clients (even if they arrive over IPv4)?
Cloudflare does support IPv6 (it has since 2012), but requires that you actually set up the AAAA record. Unfortunately many sites can't be bothered to do that despite it being a 5-minute job, since not doing it doesn't currently break anything in most cases, while doing it might break things for the minority with broken IPv6 configurations.
CloudFront now supports IPv6, so the reasoning for not enabling it on Amazon.com is likely similar.
Because a) statements in your comment are false, b) some proposals in your comment have already been implemented, and c) the ones that haven't are bad ideas.
IPv6 has caught on (I'm commenting from an IPv6-only connection right now, on a residential US ISP).
Most clients do perform RFC4941 suffix randomization.
Replacing the prefix destroys one of the most useful features of IP addresses and in particular the larger IPv6 address space - routability (the property that getting to two addresses that share a prefix usually uses the same next-hop router).
Thank you for taking the time to explain. Let me try to word things a little better:
What is the percentage of US homes who are on an IPv6?
What is the percentage of websites on IPv6?
What fraction of web site hits are IPv6-to-IPv6 (in the US? in the world?)?
The highest estimate I've ever seen for any of these is less than 20%, which, 20 years into IPv6, is in my opinion "not caught on". The mobile world, when 3G arrived, preferred carrier-grade NAT to IPv6 (even though IPv6 was technically the better solution), which is in my opinion "not caught on".
Suffix randomization has been implemented, but is not universal in my experience; and it is essentially useless for privacy in one's home. It slightly blurs the distinction between my laptop and my son's iPad. And that's ALL it does.
Right now I enjoy getting an address from a pool of 16K addresses every time I reset my cable modem; And it is likely to transition soon to a carrier-grade NAT which would give me even more privacy.
It is likely that I should be taking crazy pills - I seem to remember the Snowdens of yesteryear and Facebook shadow profiles, which either I'm hallucinating or no one else seems to care about.
On the other hand, I have a startup idea to profit from the impending v6-complete-lack-of-privacy that I should probably start working on. If you can't beat them, profit off them.
Google puts global native IPv6 adoption among their users (i.e. proportion of incoming connections that are IPv6) at 17%-ish and exponentially/logistically increasing; the US numbers are much higher, at around 35% [https://www.google.com/intl/en/ipv6/statistics.html#tab=per-...]. These numbers do not distinguish between mobile and fixed clients; I suspect that mobile deployment is higher than residential.
The server-side IPv6 adoption is not so great; see [http://www.delong.com/ipv6_alexa500.html] for deployment numbers. Luckily, most ISPs providing IPv6 (or at least, my personal one) provide a carrier-grade NAT64 gateway to allow access to IPv4 services from IPv6-only clients.
As for web site hits, I have no idea - I don't know where to find those numbers.
IPv6 suffix randomization is enabled by default on Windows, OSX, and iOS. For Android, it probably varies (like everything else) by vendor, but my personal Android phone is using a random suffix. What are the machines you're using that aren't doing this?
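(If you want to check a given machine, the knobs are easy to query; these are the standard commands, though exact output varies by OS version:)

```
# Linux: 0 = no temporary addresses, 1 = available, 2 = preferred
sysctl net.ipv6.conf.all.use_tempaddr
# OS X / macOS
sysctl net.inet6.ip6.use_tempaddr
# Windows
netsh interface ipv6 show privacy
```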
Yes, suffix randomization doesn't hide which home connection you're on; but neither did old-school IPv4 + NAT. Sure, IPv6 didn't add that feature, but hiding it is enough of a performance killer that it should be relegated to a separate system like Tor. The IPv6 prefix you are assigned by your carrier is a feature of whatever DHCPv6 setup they have; if they're assigning you the same prefix every time you power-cycle your modem on IPv6, and they were not doing so with DHCPv4, that's super weird.
> IPv6 suffix randomization is enabled by default on Windows, OSX, and iOS. For Android, it probably varies (like everything else) by vendor, but my personal Android phone is using a random suffix. What are the machines you're using that aren't doing this?
I think it was Win7 last I tested it, probably an early service pack; according to https://superuser.com/questions/243669/how-to-avoid-exposing... it should already have had privacy addressing, but perhaps it was somehow turned off on the machine I tested (or perhaps my expectation that it would change on reboot was wrong?).
> The IPv6 prefix you are assigned by your carrier is a feature of whatever DHCPv6 setup they have; if they're assigning you the same prefix for every time you power-cycle your modem on IPv6 and they were not doing so with DHCPv4, that's super weird.
They were allocating from a pool on DHCPv4, where reservations were for a few hours (so immediate power cycle would get same address, but if you wait a couple of hours or release and request, you'd get a new one). They are not using DHCPv6 in the same way - they assign a prefix-per-customer. That was the case with all the local IPv6 carriers I inquired with. I guess it means that the prefix is /56 or even /60 - I didn't even ask.
You are technically correct, the best kind of correct. It is possible that my sampling bias is giving me a distorted view of where the world is going, but:
All the local ISPs I've asked don't give out the same IPv4 address (they all charge for a fixed IP, so there's no guarantee you'll get the same one unless you pay; at least 3 of the 6 actively change your IP whenever they can, to force you to pay even if you only need a fixed IP for a short time).
All local ISPs I've asked provide the same IPv6 prefix to a customer.
I assumed that was common practice - at the very least, more common than the other way around (fixed IPv4 when you didn't ask for it, random IPv6).
Quite likely that it varies a lot between markets, true. From what I've seen, cable providers often give you a "static" IP (I think it's often tied to the modem's hardware ID?). ADSL providers here in Germany commonly reset your connection and IP every 24 hours (or on manual reconnect). From what I hear not all of them do that anymore, but e.g. Deutsche Telekom seems to change the IPv6 as well (probably since you're supposed to pay for business class if you want static).
And some run the connection as IPv6-only and then CG-NAT the IPv4, which of course gives you a random IP again, but is even worse for P2P applications and means you can't use DynDNS etc. anymore.
My personal connection has had the same IP for the past 5 years, and I think changing it would mean asking my ISP and coming up with an answer for why I need that.