20 Sep

IPv6 allocation to downstream machines with just one /64


A friend of mine went for a VM with a German hosting provider. He got a single IPv4 (quite common) and a /64 of IPv6. A /64 per VM/end server used to be OK until a few years back, but these days running applications inside LXC containers (OS-level virtualization) makes more sense. This gives the option of maintaining a separate hosting environment for each application. I personally do that a lot, and in fact the blog which you are reading right now is itself on an LXC container.


So my friend tried to do a similar setup, but it got tricky for him because of the single /64 from upstream. I, on the other hand, have a /32 and originate a /48 from this location, giving me over 65k /64s of IPv6 for any testing and random fun application deployments.


The challenge in his setup was the following:

  1. One can use the 18 quintillion IPv6 addresses available in the /64 by bridging the internal container interface with it. That’s OK for IPv6 but fails terribly for IPv4, as many people do not need a dedicated IPv4 per container, while it’s fun to have that for IPv6 and it gives so much flexibility. For IPv4 a custom setup makes more sense, with specific DST NAT and a reverse proxy for port 80 and port 443 traffic.
  2. For NATing IPv4, a separate virtual interface (veth) makes sense so that one can run private IPv4 addressing. Now, subnetting the /64 sounds stupid and weird, but even if one does it, it won’t work, because the main /64 allocation is via layer 2 and not a routed pool. Read further on why this doesn’t work.
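A minimal sketch of the IPv4 side described in point 1, assuming a private container subnet of 10.0.3.0/24 (the LXC default) and a container at 10.0.3.10 — all addresses and the interface name are placeholders, not the actual setup:

```shell
# Masquerade outbound IPv4 from the container subnet (SRC NAT)
iptables -t nat -A POSTROUTING -s 10.0.3.0/24 -o eth0 -j MASQUERADE

# Forward inbound HTTP/HTTPS arriving on the host's single public IPv4
# to one container (DST NAT); a reverse proxy could instead split this
# traffic per hostname rather than per port.
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 80  -j DNAT --to-destination 10.0.3.10:80
iptables -t nat -A PREROUTING -i eth0 -p tcp --dport 443 -j DNAT --to-destination 10.0.3.10:443
```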



So after our discussion my friend decided to use a /112 for containers (ugly, I know, but the datacenter provider quoted 4-5 Euro/month for an additional /64!). A /112 out of the 128-bit IPv6 address space gives 2^16, i.e. 65k, IPv6 addresses to use on containers, which is a good number of IPv6 addresses, with a few limitations:

  1. Many things support a /64 only; for instance, IPv6 in OpenVPN sticks with that due to the Linux kernel implementation.
  2. IPv6 autoconf heavily depends on it. In my own personal setup I have a dedicated /64 for the container interfaces and radvd takes care of autoconfiguration via router advertisements. With anything smaller than a /64 that’s not possible.
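The 2^16 figure is just the host-bit arithmetic, easy to sanity-check in a shell:

```shell
# A /112 leaves 128 - 112 = 16 host bits,
# i.e. 2^16 addresses available for containers.
echo $(( 1 << (128 - 112) ))   # prints 65536
```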


So we broke the allocated /64 into a /112, allocated the first IP out of it on a veth-based interface, and used the second IP on a container. IPv4 was working fine on the container with SRC NAT, but IPv6 connectivity wasn’t. The container was able to reach the host machine but nothing beyond that. I realised the issue was the layer 2 based allocation, which relies on IPv6 NDP. The container’s IPv6 had internal reachability with the host machine, but whenever any packet came from the internet, the VM provider’s L3 device wasn’t able to send it further because of the missing entry for that IP in its NDP table. The problem wasn’t just with the container’s IPv6 but with any IPv6 used on any interface of the VM (whether the virtual veth or even loopback). Adding an IPv6 address on eth0 (which was connected to the upstream device) made IPv6 work, but such an address can’t be used further on a downstream device like a container.

The datacenter provider offered to split the /64 into /65s and route the second /65 for a monthly charge (ugly!!!). So we ended up with a nasty workaround – proxy NDP. This is very similar in concept to proxy ARP in IPv4. It required enabling proxy NDP in sysctl.conf and then adding a proxy NDP entry for each specific IPv6 address using: ip -6 neigh add proxy xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx:xxxx dev eth0
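For reference, the workaround sketched as commands, with 2001:db8::10 as a documentation-range placeholder for one container address (not the actual allocation):

```shell
# Enable NDP proxying on the upstream-facing interface
# (persist via /etc/sysctl.conf: net.ipv6.conf.eth0.proxy_ndp = 1)
sysctl -w net.ipv6.conf.eth0.proxy_ndp=1

# Enable forwarding so the host routes packets on to the container
sysctl -w net.ipv6.conf.all.forwarding=1

# Answer neighbor solicitations for the container's address on eth0,
# so the provider's L3 device learns where to send its traffic.
# One such entry is needed per IPv6 address in use.
ip -6 neigh add proxy 2001:db8::10 dev eth0
```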


Hurricane Electric datacenter


This works, albeit with the extra step of adding a proxy NDP entry for each IPv6 address in use. In general a routed pool is way better, and if I had to make the choice on behalf of his datacenter provider, I would have gone for a /64 for point-to-point connectivity and a routed /48. At Hurricane Electric (the company I work for) we offer IPv6 free of charge so that networks can grow without having to worry about address space or doing nasty things like the one I described above. 😉

Haven’t deployed IPv6 on your application yet? Do it now!

Time to get back to work and do more IPv6 🙂

26 Aug

Multiple IPs on Linux servers

One of the things people have often asked me about is how to have multiple IPs on a Linux machine under various circumstances. I know there are a ton of blog posts about this, but very few explain how it works and the possible options under different use cases.


I will share both router side and server side config, with focus on how it should be done from the server end. I am assuming the server side config is for Ubuntu/Debian; you can find similar concepts for CentOS.


Say you have a router and a server connected over a /24 subnet, with the entire /24 available for the server’s connectivity. The setup would be like:

R1 - Server 01 connectivity

Configuration so far is super simple. The router’s IP is placed on R1’s interface (g1/0), which connects to server01, and server01 has its own IP on eth0.


and the server’s config is:


Now let’s say you want to add additional IPs to the server. There can be a few ways:

  1. You may add more IPs from this same pool, i.e. unused IPs from within the same /24.
  2. You may add more IPs from an altogether different pool.


When adding new/additional IPs to the server, you must understand that they will either be via layer 2 (i.e. those IPs will generate ARP packets on the interface connected to the router) or via layer 3, i.e. routed IPs which are routed “behind” an existing working IP. Another case you can have is additional IPs which are eventually NATed behind public IPs, which I will also discuss at the end.


Layer 2 based addition

When IPs are added via layer 2, they are supposed to be present on the interface so that they show up in ARP, and hence machines on the LAN can do IP-to-MAC conversion and deliver packets destined for those IPs. The currently connected interface here is eth0, and hence new IPs should go on eth0 only. You can add those IPs by creating a so-called “alias interface”: eth0 can have eth0:1, eth0:2 etc. as aliases. IPs can also be added on the same single eth0 interface.

Since the entire pool is available for use between R1 and server01, this doesn’t need any change at R1’s end. On the server end, we can add an IP as:


Temporary addition (will get removed after reboot):
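With placeholder addressing (192.0.2.0/24 is a documentation range standing in for the real pool), the temporary additions would look like:

```shell
# Two more IPs from the same on-link pool, added to eth0 directly
# (placeholder addresses; temporary, gone after a reboot)
ip addr add 192.0.2.11/24 dev eth0
ip addr add 192.0.2.12/24 dev eth0

# Verify both now show on eth0
ip -4 addr show dev eth0
```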


So there we go – two IPs added on eth0 itself.



Let’s try to ping them from the router:


And so it works. Now, if we examine the ARP table of the router’s g1/0 interface (which connects to server01), we will find all three of the IPs in use by the server.


Another way of doing the same thing is by creating alias interfaces and adding IPs to them. So we can add the following to /etc/network/interfaces:
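A sketch of such stanzas, with placeholder addresses from the 192.0.2.0/24 documentation range standing in for the real pool:

```
# /etc/network/interfaces — persistent alias stanzas (placeholder addresses)
auto eth0:1
iface eth0:1 inet static
    address 192.0.2.11
    netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
    address 192.0.2.12
    netmask 255.255.255.0
```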


Bring those interfaces up using: ifup eth0:1 and ifup eth0:2. A logical question – where to put the gateway – often comes up and confuses people. Keep in mind that as of now all IPs are coming from the same single device, R1, and hence a single gateway in the eth0 config is enough to ensure that traffic to any IP outside the local pool is routed via R1. Now let’s say you want to add an IP from a completely different pool (for some reason) on the server. Here you can do it via layer 2 by first defining an IP as secondary on R1 and then adding the IP as an alias on the server.


On Server01 end:
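Sketched with placeholder addressing: 198.51.100.0/24 (a documentation range) stands in for the new pool, with R1 carrying e.g. 198.51.100.1 as its secondary address, and the server side being:

```shell
# Add an address from the new pool as an alias on the server
# (placeholder address; add the matching stanza to
# /etc/network/interfaces to make it persistent)
ip addr add 198.51.100.2/24 dev eth0 label eth0:1
```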


This simply ensures that both R1 and Server01 sit in a single broadcast domain and hence can speak to each other. Again, in this case the router learns via ARP how to reach the new IP: the ARP table does IP-to-MAC conversion, and forwarding of packets is based on MAC (MAC table: MAC >> interface conversion).


Another layer 2 option is to patch in an unused extra port and run a separate network on it (separate IP/subnet mask). You can also have a setup where you send a tagged VLAN to the server and untag it on the server. I will put up a blog post about that later.


Layer 3 based addition

Due to scalability issues as well as the scarcity of IPv4 addresses, the layer 2 based method isn’t the best when it comes to adding additional IPs. A layer 3 setup is simply one where additional IPs are “routed” behind a single working public IP.

So e.g. though it’s better to use a /30 for point-to-point links (in fact a /31!), let’s keep the same case going: R1 and Server01 sit in the same /24. Now, to allocate an additional IP to the server, we can route that IP behind the server’s existing address.


So setup on R1:
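A hedged sketch of what the R1 side could look like in IOS-style syntax, with placeholders (192.0.2.10 as the routed IP, 192.0.2.2 as server01’s existing on-link address):

```
! Route the single extra IP (/32) behind server01's working address
ip route 192.0.2.10 255.255.255.255 192.0.2.2
```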


This will ensure that all packets going towards that single IP are routed to server01’s existing address. Next, we can add this IP on the existing loopback interface lo, or on a new loopback alias, lo:1.

ip -4 addr add dev lo for temporary addition (removed after reboot) and

auto lo:1
iface lo:1 inet static


So how exactly does this work? It’s important to understand, as it explains the key difference between IPs added on an interface vs. IPs routed. Let’s see the “sh ip route” output for the on-link IP and for the routed IP:


Here there’s clearly a “directly connected” route for the on-link IP, while for the routed IP there’s a static route towards server01’s address.


Some key comparison points for the layer 2 vs. layer 3 based setup:

  1. With the layer 3 method you can have as many IPs as you want on the server without getting into CIDR cuts. With layer 2, e.g. if you want to add an entirely new pool to the server, you need at least 2 IPs (a /31); if you want just 3 IPs then you need a /29 (consuming 8 IPs) and so on. This approach wastes a lot of IPs, which becomes critical when we are almost out of IPv4. In IPv6 this is no issue at all.
  2. With layer 3 you can have a setup where adding IPs doesn’t create any layer 2 noise (ARP packets). E.g. you can use just a single on-link IP and then route an entire /24 behind the server. This ensures that the server can use the whole /24 without generating any ARP for it, and the router has just one routing table entry for that entire /24. ARP happens only for the single IP used to connect R1 with the server.
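Point 2 sketched end to end, with placeholder ranges (203.0.113.0/24 as the routed pool, 192.0.2.2 as the server’s single on-link IP):

```
# On R1 (IOS-style) — one route for the whole pool:
ip route 203.0.113.0 255.255.255.0 192.0.2.2

# On server01 — bind the pool to loopback so it is answered locally:
ip addr add 203.0.113.0/24 dev lo
```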



I hope this helps you!