13 Jan

Vyatta based VyOS – Linux based network OS

VyOS is quite an interesting OS. It's an open source, Linux-based network operating system derived from Vyatta. Its config style feels a bit like JunOS in terms of hierarchy and the set/edit/delete options while editing the configuration.


Can one use it in a small ISP or a Corporate LAN setup? 

Someone asked me recently whether we can have a completely open source router doing basic stuff in a smaller network – not with a not-so-streamlined Linux shell, but with a networking OS where the network engineer's favourite tool "?" works in the CLI and shows the available options.

Let's take a possible case with a bunch of routers, a server with speedtest-mini running on it, and an end desktop running Ubuntu, along with a VyOS based router. The goal here is to get basic features working (to start with!). I am conducting this test and setup on the VM infrastructure at home, but that has zero impact on the configuration of the network devices, and hence I am not going to focus on that part. All devices, including the server, desktop and routers, are running as virtual machines or KVM containers.



To configure and test:

  1. Configuration of interfaces and basic static routing with reachability
  2. Source-NAT on RTR01 for LAN side connectivity with DHCP
  3. Traffic shaping on RTR01 on per user basis
  4. IPv6 autoconfig
  5. Basic firewalling to protect server01


Note: I am treating the 10.x.x.x range here as the public WAN range in this demo, while the LAN side uses private IPs which are NATted behind the WAN IP.


1. Static routing with reachability

Configuring VyOS is very much like JunOS – edit to go into the hierarchical levels, set to add to the config, and delete to remove from it.


Configuration on transit router’s interface


Configuration on RTR01 interfaces


The transit router is without any default route, while server01 and RTR01 have their default routes towards the transit router, and, as usual, the end user desktops have their default towards RTR01.
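Roughly, the relevant pieces look like the sketch below, with 10.0.0.x / 10.0.1.x used purely as sample addressing for this illustration (followed by a commit and save on each box):

# Transit router: eth0 towards server01, eth1 towards RTR01
set interfaces ethernet eth0 address 10.0.0.1/24
set interfaces ethernet eth1 address 10.0.1.1/24

# RTR01: eth0 as WAN towards the transit router, plus a default route
set interfaces ethernet eth0 address 10.0.1.2/24
set protocols static route 0.0.0.0/0 next-hop 10.0.1.1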


This gives basic connectivity from RTR01 all the way up to server01.


Connectivity test


2. Source-NAT on RTR01 for LAN side connectivity

So we have eth1 on RTR01 facing the LAN side. Just like any other LAN, it needs a combination of DHCP plus NAT of the private pool against the WAN IP for outgoing packets.


DHCP configuration
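As a sketch in VyOS 1.1 style – the 192.168.1.0/24 LAN pool, the LAN pool name and the lease range are assumptions for illustration only:

set interfaces ethernet eth1 address 192.168.1.1/24
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 default-router 192.168.1.1
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 dns-server 192.168.1.1
set service dhcp-server shared-network-name LAN subnet 192.168.1.0/24 start 192.168.1.100 stop 192.168.1.200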


Note: here the subnet and the default-router IP belong to the network used on the eth1 interface. Now we have the LAN pool, and we need to NAT it against the WAN IP, which is on eth0 –

NAT configuration
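Again as a sketch only, assuming the same 192.168.1.0/24 LAN pool as above and masquerading behind whatever address eth0 carries:

set nat source rule 10 source address 192.168.1.0/24
set nat source rule 10 outbound-interface eth0
set nat source rule 10 translation address masquerade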

This ensures that all packets with a source IP from the LAN pool, going to the outside world via eth0 (WAN), get "re-written" to the source IP address of the WAN interface.

Once this is done, the Ubuntu desktop shows the IP which it learnt from RTR01.

Ubuntu Desktop - learnt IP via DHCP



Testing its connectivity to the end destination server01, which is on a WAN IP:

Connectivity test from desktop


Quick check on NAT table on RTR01


3. Traffic shaping on RTR01 on per user basis

All works well. Now, I am running the speedtest-mini script on server01, so let's browse to it from the end desktop to test speeds.

(Note: as I said, this is a virtualized environment, and hence it is extremely likely that I will hit resource limits well before the actual theoretical speeds of the port. The aim of this test is just to get a sample and to assist with bandwidth shaping.)


Speedtest on Desktop




This shows quite decent speeds. Now, coming to traffic shaping, let's shape everyone on the LAN to 10Mbps symmetric. Here we can make use of the "traffic shaper". One key thing with the traffic shaper is that it works only in the OUT direction, i.e. it can only shape packets going out of a port. Thus, to get a symmetric effect, we can apply it on both eth0 – WAN (for capping uploads) and eth1 – LAN (for capping downloads).
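A sketch of the two shaper policies; the exact class layout can be tuned further, this simply caps everything passing through each policy at 10 Mbps:

set traffic-policy shaper User-Upload-Cap bandwidth 10mbit
set traffic-policy shaper User-Upload-Cap default bandwidth 100%
set traffic-policy shaper User-Upload-Cap default ceiling 100%
set traffic-policy shaper User-Download-Cap bandwidth 10mbit
set traffic-policy shaper User-Download-Cap default bandwidth 100%
set traffic-policy shaper User-Download-Cap default ceiling 100%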


Next, we need to call the appropriate shaper on the interfaces: "User-Upload-Cap" on eth0 (WAN) in the outbound direction and "User-Download-Cap" on eth1 (LAN), also in the outbound direction.
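Roughly, that would be:

set interfaces ethernet eth0 traffic-policy out User-Upload-Cap
set interfaces ethernet eth1 traffic-policy out User-Download-Cap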


Speedtest after cap



And capping works! One can fine-tune it further as per requirements, e.g. capping only specific traffic for a specific set of users.


4. IPv6 autoconfig

So the next goal is to have full IPv6 auto configuration. I will also dual stack the interfaces of server01 to make IPv6 work end to end between server01 and the desktop.

Following RFC 3849, I will use 2001:DB8::/32 for this sample configuration / documentation purpose.

2001:DB8::/32 – Main allocation

2001:DB8:1::/48 – Transit Router Pool
2001:DB8:1:1::/64 – Connectivity with server01
2001:DB8:1:2::/64 – Connectivity between Transit Router and RTR01
2001:DB8:2::/48 – IPv6 pool for RTR01 routed to it
2001:DB8:2:a::/64 – LAN pool for allocation to end customers


Dual stacking the transit router's eth0 interface (facing server01) and eth1 (facing RTR01)
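As a sketch, assuming the transit router takes ::1 in each /64 (the ::1 host part is an assumption; the prefixes come from the plan above):

set interfaces ethernet eth0 address 2001:db8:1:1::1/64
set interfaces ethernet eth1 address 2001:db8:1:2::1/64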


Next, routing 2001:DB8:2::/48 behind RTR01 (2001:DB8:1:2::2)
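Roughly:

set protocols static route6 2001:db8:2::/48 next-hop 2001:db8:1:2::2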



Configuring RTR01 interfaces with IPv6


Setting up router advertisements
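Sketching both steps on RTR01 – the ::1 host part on the LAN side and the default route next-hop are assumptions, while the prefixes and RTR01's 2001:db8:1:2::2 address come from the plan above (router-advert syntax as on the VyOS version used at the time):

set interfaces ethernet eth0 address 2001:db8:1:2::2/64
set interfaces ethernet eth1 address 2001:db8:2:a::1/64
set protocols static route6 ::/0 next-hop 2001:db8:1:2::1
set interfaces ethernet eth1 ipv6 router-advert send-advert true
set interfaces ethernet eth1 ipv6 router-advert prefix 2001:db8:2:a::/64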

Checking now on end user desktop

IPv6 connectivity test

(This shows IPv6 getting automatically configured on the end user’s desktop)


IPv6 connectivity test


Speedtest on IPv6


Speedtest over IPv6 is also capped, since the cap is based on interfaces and traffic flow and hence works in both the IPv4 and IPv6 worlds.


5. Basic firewalling to protect server01

In the current configuration, server01 has all ports open. Let's try to allow only TCP port 80 traffic and drop everything else. Outgoing connections from server01 to the outside should still work.


Step 1 – Create a firewall ruleset that drops all traffic by default and allows only TCP port 80 plus established and related connections, so that return traffic (for connections initiated by the server) still works.
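A sketch of such a ruleset; the name PROTECT-SERVER01 is made up for this illustration:

set firewall name PROTECT-SERVER01 default-action drop
set firewall name PROTECT-SERVER01 rule 10 action accept
set firewall name PROTECT-SERVER01 rule 10 protocol tcp
set firewall name PROTECT-SERVER01 rule 10 destination port 80
set firewall name PROTECT-SERVER01 rule 20 action accept
set firewall name PROTECT-SERVER01 rule 20 state established enable
set firewall name PROTECT-SERVER01 rule 20 state related enable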


Step 2 – Next, apply this policy in the "out" direction on the interface connecting to server01, so packets going towards the server are filtered.
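Assuming, as above, that eth0 of the transit router is the interface facing server01, that would be roughly:

set interfaces ethernet eth0 firewall out name PROTECT-SERVER01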


And finally testing and looking at firewall statistics



All works pretty well. We can surely use such a box in a small network.


Next logical step – using it for more cool features like full redundancy using VRRP, OpenVPN tunnels, BGP with route-maps etc.



Note: I will be presenting a paper on "Disconnected Network Islands" at SANOG 27 on the 25th. Meet and greet if you are around! 🙂

26 Aug

Multiple IPs on Linux servers

One of the things people have often asked me about in the past is how to have multiple IPs on a Linux machine under various circumstances. I know there are a ton of blog posts about this, but very few explain how it works and the possible options under different use cases.


I will share the router side and server side config, with a focus on how it should be done from the server end. I am assuming the server side config is for Ubuntu/Debian; you can apply the same concept to CentOS.


Say you have a router and a server in a /24 subnet, and assume that the entire /24 is available for the server's connectivity. The setup would be like:

R1 - Server 01 connectivity

The configuration so far is super simple. You have an IP placed on R1's interface (g1/0), which connects to server01, and server01 has an IP from the same subnet on eth0.


The server's config is:
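For illustration, assume 10.10.10.0/24 as the subnet, with 10.10.10.1 on R1's g1/0 and 10.10.10.2 on the server (all made-up values); /etc/network/interfaces on the server would then look roughly like:

auto eth0
iface eth0 inet static
    address 10.10.10.2
    netmask 255.255.255.0
    gateway 10.10.10.1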


Now let's say you want to add additional IPs to the server. There can be a few ways:

  1. You may add more IPs from this same pool, i.e. unused IPs from within the existing /24
  2. You may add more IPs from an altogether different pool


When adding new/additional IPs to a server, you must understand that they will be added either via layer 2 (i.e. those IPs will generate ARP packets on the interface connected to the router) or via layer 3, i.e. routed IPs which are routed "behind" an existing working IP (as in this case). Another case you can have is additional IPs which are eventually NATted behind public IPs, which I will also discuss at the end.


Layer 2 based addition

When IPs are added via layer 2, they are supposed to be present on the interface so that they show up in ARP, and hence machines on the LAN can do the IP to MAC conversion and forward packets destined for those IPs. The currently connected interface here is eth0, and hence the new IPs should be on eth0 only. You can add those IPs by creating a so-called "alias interface" – eth0 can have eth0:1, eth0:2, etc. as aliases. The IPs can also be added on the same single eth0 interface.

Since the entire pool is available for use between R1 and server01, this doesn't need any change at the R1 end. On the server end, we can add the IPs as follows.


Temporary addition (will get removed after reboot):
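Continuing with the made-up 10.10.10.0/24 example addressing, it would be something like:

ip -4 addr add 10.10.10.3/24 dev eth0
ip -4 addr add 10.10.10.4/24 dev eth0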


So there we go – two IPs added on eth0 itself.



Let's try to ping them from the router:


And so it works. Now, if we examine the ARP table for the g1/0 interface of the router (which connects to server01), we will find all three IPs which are in use by the server.


Another way of doing the same thing is by creating alias interfaces and adding the IPs on them. So we can add the following to /etc/network/interfaces:
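With the same made-up addressing, the stanzas would look roughly like:

auto eth0:1
iface eth0:1 inet static
    address 10.10.10.3
    netmask 255.255.255.0

auto eth0:2
iface eth0:2 inet static
    address 10.10.10.4
    netmask 255.255.255.0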


Bring those interfaces up using ifup eth0:1 and ifup eth0:2. A logical question – where to put the gateway – often comes up and confuses people. Keep in mind that, as of now, all the IPs are coming from the same single device, R1, and hence a single gateway in the eth0 config is good enough to ensure that traffic to any IP outside the pool is routed via R1. Now let's say you want to add an IP from a completely different pool (for some reason) on the server. Here you can do it via layer 2 by first defining an IP from that pool as a secondary address on R1 and then adding an IP from the same pool as an alias on the server.
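On the R1 end, a sketch in IOS-style syntax, with 172.16.0.0/24 standing in for the "completely different pool" (both the pool and the .1/.2 host choices are made up):

interface GigabitEthernet1/0
 ip address 10.10.10.1 255.255.255.0
 ip address 172.16.0.1 255.255.255.0 secondary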


On Server01 end:
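And the matching alias on the server, again with made-up values:

auto eth0:3
iface eth0:3 inet static
    address 172.16.0.2
    netmask 255.255.255.0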


This simply ensures that both R1 and Server01 sit in a single broadcast domain and hence can speak to each other. Again, in this case as well, the router end learns an ARP entry for the new IP and that tells it how to reach it: the ARP table handles the IP to MAC address conversion, and forwarding of frames is based on MAC (MAC table: MAC >> interface conversion).


Another layer 2 setup can be done by patching an unused extra port and having a separate network on it (separate IP / subnet mask). You can also have a setup where you send a tagged VLAN to the server and untag it on the server. I will put up a blog post about that later on.


Layer 3 based addition

Due to scalability as well as the scarcity of IPv4 addresses, the layer 2 based method isn't the best one when it comes to adding additional IPs. A layer 3 setup is simply one where the additional IPs are "routed" behind a single working public IP.

So, e.g., though it's better to use a /30 for point-to-point links (in fact a /31!), let's keep the same case going: R1 and Server01 each have an IP in the /24. Now, to allocate one more IP to the server, we can route that IP behind the server's existing IP.


So the setup on R1:
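As a sketch, with 10.10.10.100 standing in for the routed IP and 10.10.10.2 for the server's primary IP (both made up, as before):

ip route 10.10.10.100 255.255.255.255 10.10.10.2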


This will ensure that all packets going towards that single IP are routed to the server's existing IP on server01. Next, we can add this IP on the existing loopback interface lo, or on a new loopback alias such as lo:1.

For a temporary addition (removed after reboot), with <routed-IP> being the IP that is routed to the server (10.10.10.100 in the made-up example above):

ip -4 addr add <routed-IP>/32 dev lo

And for a permanent one, in /etc/network/interfaces:

auto lo:1
iface lo:1 inet static
    address <routed-IP>
    netmask 255.255.255.255

So how exactly does this work? It's important to understand, as it explains the key difference between IPs added on an interface vs. IPs that are routed. Let's look at the "sh ip route" output for the connected subnet and for the routed IP.
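With the made-up addressing above, the relevant lines would look roughly like this (exact formatting varies by platform):

C    10.10.10.0/24 is directly connected, GigabitEthernet1/0
S    10.10.10.100/32 [1/0] via 10.10.10.2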


Here, clearly, the interface subnet shows up as a directly connected route, while the routed IP has a static route pointing towards the server's primary IP.


Some key comparison points for layer 2 vs. layer 3 based setups:

  1. With the layer 3 method you can have as many IPs as you want on the server without getting into CIDR cuts. With the layer 2 approach, e.g., if you want to add an entirely new pool to the server, you need at least 2 IPs (a /31); if you want just 3 IPs, you need a /29 (consuming 8 IPs), and so on. This wastes a lot of IPs, which becomes critical when we are almost out of IPv4. In IPv6 this is no issue at all.
  2. With layer 3 you can have a setup where the addition of IPs doesn't really create any layer 2 noise (ARP packets). So, e.g., you can use just the single point-to-point IP and then route an entire pool behind the server. This ensures that the server can use that pool without generating any ARP for it, and the router has just one single routing table entry for the entire pool. ARP happens only for the single IP which is used to connect R1 with the server – see the sketch below.
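A sketch of that idea, using 203.0.113.0/24 (a documentation range) as the pool routed behind the server's made-up primary IP 10.10.10.2:

On R1:

ip route 203.0.113.0 255.255.255.0 10.10.10.2

On server01, any address from that /24 can then simply be added on the loopback:

ip -4 addr add 203.0.113.1/32 dev lo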



I hope this helps!