26 Aug

Multiple IPs on Linux servers

One of the things people have often asked me about is how to have multiple IPs on a Linux machine under various circumstances. I know there are tons of blog posts about this, but very few explain how it actually works and what the possible options are under different use cases.


I will share both the router-side and server-side config, with a focus on how it should be done on the server end. I am assuming the server-side config is for Ubuntu/Debian; the same concepts apply to CentOS.


Say you have a router and a server, each with an IP on a shared /24 subnet, and assume the entire subnet is available for the server's connectivity. The setup would be:

R1 - Server 01 connectivity

Configuration so far is super simple. The router-side IP is placed on R1's interface (g1/0), which connects to server01, and server01 has its own IP on its connected interface,


and the server's config is:
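The original config snippet didn't survive in this copy. A minimal sketch of what a static stanza in /etc/network/interfaces typically looks like, using hypothetical addresses from the 203.0.113.0/24 documentation range in place of the post's actual IPs:

```shell
# /etc/network/interfaces on server01 — static config for the primary NIC
# (all addresses below are hypothetical stand-ins from 203.0.113.0/24)
auto eth0
iface eth0 inet static
    address 203.0.113.2      # server01's IP
    netmask 255.255.255.0    # the /24 shared with R1
    gateway 203.0.113.1      # R1's g1/0 interface IP
```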


Now let's say you want to add additional IPs to the server. There can be a few ways:

  1. You may add more IPs from this same pool, i.e. unused IPs from within the existing subnet.
  2. You may add more IPs from an altogether different pool.


When adding new/additional IPs to a server, you must understand that they will be added either via layer 2 (i.e. those IPs will generate ARP packets on the interface connected to the router) or via layer 3, i.e. routed IPs which are routed “behind” an existing working IP. Another case you can have is additional IPs which are eventually NATed behind public IPs, which I will also discuss at the end.


Layer 2 based addition

When IPs are added via layer 2, they are supposed to be present on the interface so that they show up in ARP, and hence machines on the LAN can do IP-to-MAC conversion and deliver packets destined for those IPs. The currently connected interface here is eth0, and hence the new IPs should be on eth0 only. You can add those IPs by creating so-called “alias interfaces” — eth0 can have eth0:1, eth0:2 etc. as aliases. IPs can also be added directly on the same single eth0 interface.

Since the entire pool is available for use between R1 and server01, this doesn't need any change at the R1 end. On the server end, we can add the IPs as follows.


Temporary addition (will get removed after reboot):
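The exact commands didn't survive in this copy; a minimal sketch with iproute2, using hypothetical addresses from the 203.0.113.0/24 documentation range in place of the originals:

```shell
# Add two extra IPs from the same /24 directly on eth0 (needs root).
# These are temporary and do not survive a reboot.
ip -4 addr add 203.0.113.11/24 dev eth0
ip -4 addr add 203.0.113.12/24 dev eth0

# Verify: all three IPs should now show on eth0
ip -4 addr show dev eth0
```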


So there we go – two IPs added on eth0 itself.



Let's try to ping them from the router:


And so it works. Now, if we examine the ARP table for R1's g1/0 interface (which connects to server01), we will find all three IPs which are in use by the server.
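The router-side output didn't survive extraction; on a Cisco IOS router the verification would look roughly like this (commands only, with hypothetical stand-in addresses):

```shell
! From R1's CLI: ping the newly added server IPs
ping 203.0.113.11
ping 203.0.113.12

! Inspect the ARP table learnt on the interface facing server01 —
! all three server IPs should resolve to the same MAC address
show ip arp GigabitEthernet1/0
```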


Another way of doing the same thing is by creating alias interfaces and adding the IPs on them. So we can add the following in /etc/network/interfaces:
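A sketch of the alias-interface stanzas, again with hypothetical documentation-range addresses standing in for the originals:

```shell
# /etc/network/interfaces — persistent alias interfaces on eth0
# (addresses are hypothetical stand-ins from 203.0.113.0/24)
auto eth0:1
iface eth0:1 inet static
    address 203.0.113.11
    netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
    address 203.0.113.12
    netmask 255.255.255.0
```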


Bring those interfaces up using: ifup eth0:1 and ifup eth0:2. A logical question that often comes up and confuses people is where to put the gateway. Keep in mind that as of now all IPs are coming from the same single device R1, and hence a single gateway in the eth0 config is good enough to ensure that traffic to any IP outside the pool is routed via R1. Now let's say you want to add an IP from a completely different pool (for some reason) on the server. Here you can do it via layer 2 by first defining an IP as a secondary on R1 and then adding the IP as an alias on the server.
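The R1-side snippet was lost in this copy; a sketch on Cisco IOS, assuming the new pool is 198.51.100.0/24 (a documentation range standing in for the post's actual pool):

```shell
! R1 (Cisco IOS): add an address from the new pool as a secondary
! on the interface facing server01
interface GigabitEthernet1/0
 ip address 198.51.100.1 255.255.255.0 secondary
```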


On Server01 end:
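A sketch of the server side, assuming the same hypothetical 198.51.100.0/24 pool with 198.51.100.1 as the secondary address on R1:

```shell
# server01: add an IP from the new pool as another alias of eth0,
# either temporarily (needs root, gone after reboot):
ip -4 addr add 198.51.100.2/24 dev eth0

# or persistently via /etc/network/interfaces:
# auto eth0:3
# iface eth0:3 inet static
#     address 198.51.100.2
#     netmask 255.255.255.0
```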


This simply ensures that both R1 and Server01 are in a single broadcast domain and hence can speak to each other. Again, in this case the router learns via ARP how to reach the new IP: the ARP table handles the IP-to-MAC conversion, and forwarding of frames is then done based on MAC (MAC table: MAC >> interface conversion).


Another layer 2 setup can be done by patching an unused extra port and having a separate network on it (separate IP / subnet mask). You can also have a setup where you send a tagged VLAN to the server and untag it on the server. I will put up a blog post about that later.


Layer 3 based addition

Due to scalability issues as well as the scarcity of IPv4 addresses, the layer 2 based method isn't the best one when it comes to adding additional IPs. A layer 3 setup is simply one where the additional IPs are “routed” behind a single working public IP.

So e.g. though it's better to use a /30 for point-to-point links (in fact a /31!), let's keep the same case going: R1 and Server01 each have an IP in the same /24. Now, to allocate an additional IP to the server, we can route that IP behind the server's existing IP.


So the setup on R1:
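The R1 snippet was lost in extraction; a sketch on Cisco IOS, with 203.0.113.50 standing in for the routed IP and 203.0.113.2 for server01's existing IP (both hypothetical documentation-range addresses):

```shell
! R1 (Cisco IOS): route the single additional IP (a /32)
! behind server01's existing, directly connected IP
ip route 203.0.113.50 255.255.255.255 203.0.113.2
```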


This will ensure that all packets going towards the additional (single) IP are routed to the existing IP on server01. Next, we can add this IP on the existing loopback interface lo, or on a new alias of loopback as lo:1.

ip -4 addr add <additional-IP>/32 dev lo for a temporary addition (removed after reboot), and for a permanent one, in /etc/network/interfaces:

auto lo:1
iface lo:1 inet static
    address <additional-IP>
    netmask 255.255.255.255

So how exactly does this work? It's important to understand, as it explains the key difference between IPs added on an interface vs routed IPs. Let's see the “sh ip route” output for both:


Here there's clearly a “directly connected” route for the IPs added on the interface, while for the routed IP there's a static route pointing towards server01's primary IP.
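Illustratively, with stand-in addresses (a /24 on the connected interface, and 203.0.113.50 routed behind 203.0.113.2), the two kinds of entries would look roughly like this on Cisco IOS:

```shell
! Hypothetical "show ip route" entries on R1 (addresses are stand-ins):
! the /24 covering IPs added directly on eth0 appears as a connected route
C    203.0.113.0/24 is directly connected, GigabitEthernet1/0
! the routed /32 appears as a static route via server01's primary IP
S    203.0.113.50/32 [1/0] via 203.0.113.2
```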


Some key comparison points for layer 2 vs layer 3 based setups:

  1. With the layer 3 method you can have as many IPs as you want on the server without getting into CIDR cuts. With layer 2, if you want to add an entirely new pool to the server, you need at least 2 IPs (a /31); if you want just 3 IPs, you need a /29 (consuming 8 IPs), and so on. That approach wastes a lot of IPs, which becomes critical when we are almost out of IPv4. In IPv6 that's no issue at all.
  2. With layer 3 you can have a setup where the addition of IPs doesn't really create any layer 2 noise (ARP packets). E.g. you can use just a single P2P IP and then route an entire /24 behind the server. This ensures that the server can use the whole /24 without generating any ARP for it, and the router will have just one single routing-table entry for that entire /24. ARP happens just for the single IP which connects R1 with the server.
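The address-consumption math in point 1 is easy to check: a prefix of length n contains 2^(32 − n) addresses. A quick sketch in shell:

```shell
#!/bin/sh
# Number of IPv4 addresses in a prefix of the given length: 2^(32 - n)
addrs_in_prefix() {
    echo $(( 1 << (32 - $1) ))
}

addrs_in_prefix 31   # smallest pool beyond a single IP: 2 addresses
addrs_in_prefix 29   # needed for just 3 extra IPs: 8 addresses
addrs_in_prefix 24   # a full /24: 256 addresses
```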



I hope this will help you!

16 May

Backend of Google’s Public DNS

And finally the academic session is over. Done with all vivas and related stuff.

Next up are exams, likely in June. Time for me to get ready for travel. 🙂


Anyway, an interesting topic for today's post – Google Public DNS. A lot of us are familiar with popular (and free) DNS resolvers, and I have covered in previous posts why they tend to fail with content delivery networks like Akamai, which rely on anycasting at the bottom DNS layer and simple unicasting on the application servers. Anycasted DNS nodes point users to application servers based on various factors like distance, load, cost etc., via the interesting algorithms these CDN networks use for load and cost management.


Anyway, today's focus is not CDN issues with these resolvers but Google Public DNS itself. Are these servers located in India and everywhere else Google has PoPs?


Let's do a simple trace to get the forward path from Airtel to Google's public DNS:


Type escape sequence to abort.
Tracing the route to google-public-dns-a.google.com (

1 [MPLS: Label 550027 Exp 0] 0 msec [MPLS: Label 550027 Exp 0] 4 msec [MPLS: Label 354133 Exp 0] 0 msec
2 0 msec 0 msec 0 msec
3 44 msec 44 msec 48 msec
4 [AS 15169] 52 msec 56 msec 52 msec
5 google-public-dns-a.google.com ( [AS 15169] 52 msec * 116 msec


Around 50 ms latency. Clearly the destination is within India and, based on my experience with latency values, I strongly suspect that's Chennai.


Location of Google Public DNS servers

So does that mean Google's DNS server is within India?


A clear answer is no. This is just a DNS caching server; Google does not use it for originating the actual queries sent onwards to the root, TLD and authoritative DNS servers. This seems like an interesting distributed setup.

As per the Google Public DNS FAQ page, there are quite a few locations from which these servers originate queries, but India is not on the list yet. Google has PoPs in Delhi, Mumbai and Chennai, and peers with pretty much every Indian ISP from there.


We can actually test which node is serving us here in India.
This can be achieved in multiple ways:

  1. Running an authoritative zone on a server with a basic BIND installation. I tried this with my own Linux server by creating a testing-google-dns.anuragbhatia.com. DNS zone. I delegated the NS for this zone on the authoritative DNS servers for the “anuragbhatia.com” zone. Next I sent a DNS query with dig @  testing-google-dns.anuragbhatia.com. a +short to ask my DNS server for the IP, and this gave me the source IP of Google's resolver.
  2. The other, easier way out is to simply use Akamai's “whoami.akamai.net” service. It is designed to return an A record containing the IP of the DNS resolver which queries it. This gives the IP of Google's server which sent the DNS query for resolution.


Anurags-MacBook-Pro:~ anurag$ dig whoami.akamai.net a @ +short
Anurags-MacBook-Pro:~ anurag$



In both cases I saw the same source IP. It belongs to a prefix announced by Google's AS15169, and as per Google's FAQ page (which lists the IPs too!) that prefix belongs to Kuala Lumpur, Malaysia. So that's the actual DNS resolver node which serves users here in India. The machines answering on Google's anycast IPs within India are just caching replies and, moreover, keeping the IP traffic to Google within India.


Now one can ask: why does Google not have a DNS resolver within India?




Guess work time!

I don't know exactly, but I can make a strong guess here. Google is a tier 1, transit-free network. It relies on paying at layer 2, building PoPs and connecting them together; it does not pay any ISP for bandwidth at layer 3. So Google's routers in India learn routes just from peering sessions with all major telcos (except BSNL). Google peers with Tata-VSNL AS4755, Reliance AS18101, Airtel AS9498, MTNL AS17813, Spectranet AS10029 etc. One interesting thing here is that these are all tier 2 networks. Tata Communications is a tier 1 network, but their domestic backbone VSNL AS4755 is technically not a tier 1 network and sits downstream of Tata AS6453 (which is their tier 1 IP backbone). Thus Google does not get a full global table feed from any of these links, and possibly the nearest Google PoP which does get a full table feed from tier 1 networks is in Malaysia.


What I am not able to answer from my guesswork is this: if Google is relying on East Asian PoPs for such things and maintaining a backbone between East Asia and India directly, why could they not feed the Indian routers a routing table with routes learnt from outside? It could be just to ensure direct delivery in India and avoid routing loops. E.g. BSNL has an IP port from Tata-VSNL AS4755 within India and an IPLC port from Tata AS6453 to PoPs outside India. Thus if the tables were combined, Google might see paths like AS6453 > AS9829 and AS4755 > AS9829, which look identical as per AS path, but one is direct India-to-India traffic while the other goes India > Singapore > India or India > US > India. It's not just BSNL; Sify has also lately had weird routing loops, going outside India for Indian destinations.


That's about it. I can't do any guesswork beyond this point unless someone gives me access to a router of AS15169 to see the table! 🙂