07 Apr

Manage Wireguard users using Ansible

Day 16 of lockdown here in Haryana due to Covid19. Time for some distraction.


Last week it was reported that Wireguard will be added in the next version of the Linux kernel. I have been using Wireguard for over a year and it has been working great. I replaced OpenVPN with Wireguard for both site-to-site VPN as well as client-server VPN. If you are looking for a free, open source VPN for remote employees or just for connecting to your own remote servers, Wireguard can be a really good candidate.

Recently I created a client-server VPN at home so that I can get inside the home network whenever travelling (which is a little uncommon due to the Covid19 lockdown!).

Somehow I did not find any good automated script to generate keys. I tried a few projects and either they did not work or they tended to re-write everything inside the /etc/wireguard directory. I presently run 5 different VPN daemons on my Raspberry Pi. It does site-to-site VPNs to two locations over two different uplinks, and OSPF running over FRR takes care of the dynamic routing. For the 5th one, which is the client-server VPN, I put together an Ansible playbook. The idea is to run the playbook each time I want to add a user, provide it with the client name and client IP (I didn’t automate client IP allocation since it’s just 4-5 devices max), and the playbook takes care of generating the keys, the config (which can be copy-pasted into Wireguard running on a laptop) and also a QR code which can be scanned to import the config along with the keys on iOS devices. Ideally, I should package this as a more detailed Ansible role but that’s just me being lazy and settling for a playbook instead.

Here goes the playbook!

---
  - hosts: ## Put server hostname here ##
    gather_facts: no
    become: yes
    vars: 
      client_name: anurag-phone
      client_ip: 10.0.0.10 
      client_mask: 24
      client_dns: 10.1.0.5
      wgname: wg5
      wgport: 5005
      work_dir: "/home/anurag/config"
      server_ip: ## Put server IP here ##


    tasks: 
      - name: Ensure {{ work_dir }} exists
        file: 
          path: '{{ work_dir }}'
          state: directory

      - name: Generate client keys for {{ client_name }}
        shell:
          cmd: wg genkey | tee privatekey | wg pubkey > publickey
          chdir: "{{ work_dir }}"

      - name: Read client privatekey and register into variable
        shell: cat {{ work_dir }}/privatekey
        register: privatekey    
      
      - name: Read client publickey and register into variable
        shell: cat {{ work_dir }}/publickey
        register: clientpublickey    
  
      - name: Read server publickey and register into variable
        shell: cat /etc/wireguard/publickey
        register: serverpublickey    

      - name: Add {{ client_name }} to the server
        blockinfile:
          path: '/etc/wireguard/{{ wgname }}.conf'
          marker: "## Added by Ansible"
          block: |
              # {{ client_name }}
              [Peer]
              PublicKey = {{ clientpublickey.stdout }}
              AllowedIPs = {{ client_ip }}/32

      - name: Stop wireguard for {{ wgname }}
        command: wg-quick down {{ wgname }}
        register: wireguardstop 
        tags: wireguardrestart

      - debug: 
          var: wireguardstop.stderr_lines
        tags: wireguardrestart 

      - name: Start wireguard for {{ wgname }}
        command: wg-quick up {{ wgname }}
        register: wireguardstart
        tags: wireguardrestart

      - debug: 
          var: wireguardstart.stderr_lines
        tags: wireguardrestart  

      - name: Generate client config for {{ client_name }} for full internet access
        blockinfile:
          path: "{{ work_dir }}/{{ client_name }}-full.conf"
          block: |
              [Interface]
              PrivateKey = {{ privatekey.stdout }}
              Address = {{ client_ip }}/{{ client_mask }}
              DNS = {{ client_dns }}
          
              [Peer]
              PublicKey = {{ serverpublickey.stdout }}
              AllowedIPs = 0.0.0.0/0
              Endpoint = {{ server_ip }}:{{ wgport }}       

          state: present    
          create: yes

      - name: Generate QR code for {{ client_name }}
        shell: qrencode -t ansiutf8  < {{ work_dir }}/{{ client_name }}-full.conf  > {{ work_dir }}/{{ client_name }}-qr-full
        tags: qr
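
To add a user, I just run the playbook with the client details passed as extra vars (the playbook filename and client values below are only an illustration):

# filename and client values are hypothetical examples
ansible-playbook add-wireguard-user.yml -e "client_name=anurag-tablet client_ip=10.0.0.11"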

Some limitations of this playbook:

  1. It cannot be used to delete users. I don’t do that often and thus I am OK with deleting those manually, though one could make it a little smarter to handle that. Probably define the users within vars and add a check to not re-write keys on each run – see the sketch after this list.
  2. It will keep appending keys to the server-side config, so if run twice for the same user and IP it will add junk. Again, this was more of a quickly written solution and not an extensively written playbook meant to tackle that.
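
For example, a minimal sketch of such a check, assuming per-client key filenames (my own naming, not what the playbook above uses), so that the creates argument lets Ansible skip key generation when the keys already exist:

      # hypothetical variant of the key-generation task; skipped if the per-client key file already exists
      - name: Generate client keys for {{ client_name }} unless already present
        shell:
          cmd: wg genkey | tee {{ client_name }}-privatekey | wg pubkey > {{ client_name }}-publickey
          chdir: "{{ work_dir }}"
          creates: "{{ work_dir }}/{{ client_name }}-privatekey"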

The key objective here was just to generate the keys, insert the client’s public key into the server-side config and the server’s public key into the client-side config. And of course, making the config available both as text and as a QR code so that one can import it and delete it afterwards.

08 Nov

Why does airport wifi suck?

Sitting at Kolkata airport, I noticed the usual “Free Wifi in the area!” message and connected to the Tata Docomo free wifi. Performance was quite poor.

Two key issues with the wifi:

  1. Only 2.4GHz is in use (802.11b/g/n with a 20MHz channel); there is no 5GHz AP in sight, even though 5GHz is what should have been used.
  2. The entire traffic is being tunnelled via Mumbai, i.e. West India (while I am sitting on the eastern side), adding significantly to latency and hurting performance.

Here are some traces to random locations:

traceroute anuragbhatia.com
traceroute to anuragbhatia.com (178.238.225.14), 64 hops max, 52 byte packets
 1  100.96.128.1 (100.96.128.1)  74.141 ms  55.771 ms  83.987 ms
 2  10.124.109.130 (10.124.109.130)  60.473 ms  56.363 ms  56.885 ms
 3  * 10.124.111.158 (10.124.111.158)  57.123 ms  60.577 ms
 4  10.117.225.90 (10.117.225.90)  62.529 ms  57.420 ms  57.032 ms
 5  14.141.63.185.static-mumbai.vsnl.net.in (14.141.63.185)  57.206 ms  57.201 ms  59.841 ms
 6  * * *
 7  ix-0-100.tcore1.mlv-mumbai.as6453.net (180.87.38.5)  60.127 ms *  59.179 ms
 8  if-9-5.tcore1.wyn-marseille.as6453.net (80.231.217.17)  163.571 ms  163.083 ms  165.671 ms
 9  if-8-1600.tcore1.pye-paris.as6453.net (80.231.217.6)  165.586 ms *  168.976 ms
10  if-2-2.tcore1.pvu-paris.as6453.net (80.231.154.17)  164.356 ms  160.600 ms  167.841 ms
11  80.231.153.66 (80.231.153.66)  204.567 ms  170.125 ms  164.025 ms
12  ae-1-19.bar1.munich1.level3.net (4.69.153.245)  187.130 ms  176.954 ms  175.734 ms
13  ae-1-19.bar1.munich1.level3.net (4.69.153.245)  173.793 ms  180.293 ms  175.585 ms
14  gw03.contabo.net (62.140.24.126)  174.955 ms * *
15  anuragbhatia.com (178.238.225.14)  179.955 ms *  179.185 ms

traceroute google.com
traceroute: Warning: google.com has multiple addresses; using 173.194.36.97
traceroute to google.com (173.194.36.97), 64 hops max, 52 byte packets
 1  100.96.128.1 (100.96.128.1)  57.058 ms  56.659 ms  55.847 ms
 2  10.124.109.130 (10.124.109.130)  56.825 ms  58.513 ms  55.854 ms
 3  10.124.111.158 (10.124.111.158)  56.682 ms  60.542 ms  59.486 ms
 4  10.117.225.90 (10.117.225.90)  58.176 ms  57.624 ms  58.444 ms
 5  14.141.63.185.static-mumbai.vsnl.net.in (14.141.63.185)  58.806 ms  57.714 ms  59.340 ms
 6  * * *
 7  115.113.165.98.static-mumbai.vsnl.net.in (115.113.165.98)  58.810 ms  65.872 ms  69.436 ms
 8  209.85.241.52 (209.85.241.52)  58.748 ms  60.547 ms
    72.14.232.202 (72.14.232.202)  58.878 ms
 9  209.85.252.142 (209.85.252.142)  77.188 ms  80.828 ms  78.031 ms
10  209.85.240.17 (209.85.240.17)  82.458 ms  77.529 ms  79.603 ms
11  del01s07-in-f1.1e100.net (173.194.36.97)  77.242 ms *  76.067 ms

traceroute cloudaccess.net
traceroute to cloudaccess.net (199.116.78.60), 64 hops max, 52 byte packets
 1  100.96.128.1 (100.96.128.1)  65.006 ms  73.056 ms  57.290 ms
 2  * 10.124.109.130 (10.124.109.130)  55.313 ms  55.498 ms
 3  10.124.111.158 (10.124.111.158)  62.335 ms  58.146 ms  65.322 ms
 4  10.117.225.90 (10.117.225.90)  58.307 ms  64.118 ms  60.188 ms
 5  14.141.63.185.static-mumbai.vsnl.net.in (14.141.63.185)  67.951 ms  58.059 ms  57.658 ms
 6  * * *
 7  ix-0-100.tcore1.mlv-mumbai.as6453.net (180.87.38.5)  60.601 ms  58.711 ms  58.611 ms
 8  if-9-5.tcore1.wyn-marseille.as6453.net (80.231.217.17)  170.234 ms  163.890 ms *
 9  if-8-1600.tcore1.pye-paris.as6453.net (80.231.217.6)  163.956 ms  165.691 ms  174.445 ms
10  if-2-2.tcore1.pvu-paris.as6453.net (80.231.154.17)  161.027 ms  165.970 ms  179.712 ms
11  80.231.153.202 (80.231.153.202)  164.602 ms  164.395 ms  163.093 ms
12  xe-1-2-2.chi11.ip4.gtt.net (89.149.187.85)  271.367 ms
    xe-8-2-2.chi11.ip4.gtt.net (141.136.105.253)  273.996 ms  265.810 ms
13  ip4.gtt.net (173.205.48.130)  266.941 ms  265.019 ms  265.221 ms
14  173-225-176-89.core2.sfld2.r256.net (173.225.176.89)  275.479 ms  272.507 ms  272.840 ms
15  border-router02-detroit.static.cloudaccess.net (173.225.188.138)  280.231 ms  268.907 ms  286.357 ms
16  199.116.78.60 (199.116.78.60)  269.869 ms !Z  270.031 ms !Z  270.207 ms !Z

So no matter where I push packets to, they hit hop 5 – a Mumbai / VSNL AS4755 router – likely because that is where the core L3 device (MSC/central authentication box) for this network sits. This is a big issue: Tata Docomo is likely tunnelling the entire wifi traffic, from anywhere in India to anywhere globally, via Mumbai because that is where they placed their central wifi box. What we need in India are simpler deployments and more open source, so that cost does not become the reason for keeping such devices centralised. And most importantly, we need networks to peer at internet exchanges so that at least East-region traffic stays within the East and doesn’t have to travel thousands of kilometres to Mumbai just to hop onto another network.

Overall speeds seem to be capped at 1Mbps, which is too low these days, and here’s a 100-packet ping to the first hop (100.96.128.1) showing how poor the wireless signal performance is.

ping -c 100 100.96.128.1
PING 100.96.128.1 (100.96.128.1): 56 data bytes
64 bytes from 100.96.128.1: icmp_seq=0 ttl=255 time=52.365 ms
64 bytes from 100.96.128.1: icmp_seq=1 ttl=255 time=51.391 ms
64 bytes from 100.96.128.1: icmp_seq=2 ttl=255 time=48.985 ms
64 bytes from 100.96.128.1: icmp_seq=3 ttl=255 time=264.974 ms
64 bytes from 100.96.128.1: icmp_seq=4 ttl=255 time=252.179 ms
Request timeout for icmp_seq 5
64 bytes from 100.96.128.1: icmp_seq=6 ttl=255 time=51.491 ms
64 bytes from 100.96.128.1: icmp_seq=7 ttl=255 time=81.809 ms
64 bytes from 100.96.128.1: icmp_seq=8 ttl=255 time=49.312 ms
64 bytes from 100.96.128.1: icmp_seq=9 ttl=255 time=55.065 ms
64 bytes from 100.96.128.1: icmp_seq=10 ttl=255 time=52.825 ms
64 bytes from 100.96.128.1: icmp_seq=11 ttl=255 time=49.899 ms
64 bytes from 100.96.128.1: icmp_seq=12 ttl=255 time=59.585 ms
64 bytes from 100.96.128.1: icmp_seq=13 ttl=255 time=262.916 ms
64 bytes from 100.96.128.1: icmp_seq=14 ttl=255 time=55.734 ms
64 bytes from 100.96.128.1: icmp_seq=15 ttl=255 time=49.476 ms
64 bytes from 100.96.128.1: icmp_seq=16 ttl=255 time=48.953 ms
Request timeout for icmp_seq 17
64 bytes from 100.96.128.1: icmp_seq=18 ttl=255 time=299.531 ms
64 bytes from 100.96.128.1: icmp_seq=19 ttl=255 time=315.367 ms
64 bytes from 100.96.128.1: icmp_seq=20 ttl=255 time=49.276 ms
64 bytes from 100.96.128.1: icmp_seq=21 ttl=255 time=48.629 ms
64 bytes from 100.96.128.1: icmp_seq=22 ttl=255 time=59.231 ms
64 bytes from 100.96.128.1: icmp_seq=23 ttl=255 time=54.367 ms
64 bytes from 100.96.128.1: icmp_seq=24 ttl=255 time=49.607 ms
64 bytes from 100.96.128.1: icmp_seq=25 ttl=255 time=62.368 ms
64 bytes from 100.96.128.1: icmp_seq=26 ttl=255 time=50.263 ms
64 bytes from 100.96.128.1: icmp_seq=27 ttl=255 time=167.378 ms
Request timeout for icmp_seq 28
64 bytes from 100.96.128.1: icmp_seq=29 ttl=255 time=316.048 ms
64 bytes from 100.96.128.1: icmp_seq=30 ttl=255 time=325.624 ms
64 bytes from 100.96.128.1: icmp_seq=31 ttl=255 time=463.967 ms
64 bytes from 100.96.128.1: icmp_seq=32 ttl=255 time=469.114 ms
64 bytes from 100.96.128.1: icmp_seq=33 ttl=255 time=292.147 ms
64 bytes from 100.96.128.1: icmp_seq=34 ttl=255 time=522.468 ms
64 bytes from 100.96.128.1: icmp_seq=35 ttl=255 time=713.133 ms
64 bytes from 100.96.128.1: icmp_seq=36 ttl=255 time=110.451 ms
Request timeout for icmp_seq 37
64 bytes from 100.96.128.1: icmp_seq=38 ttl=255 time=342.196 ms
Request timeout for icmp_seq 39
64 bytes from 100.96.128.1: icmp_seq=40 ttl=255 time=269.410 ms
64 bytes from 100.96.128.1: icmp_seq=41 ttl=255 time=252.759 ms
64 bytes from 100.96.128.1: icmp_seq=42 ttl=255 time=406.372 ms
64 bytes from 100.96.128.1: icmp_seq=43 ttl=255 time=222.788 ms
64 bytes from 100.96.128.1: icmp_seq=44 ttl=255 time=228.961 ms
64 bytes from 100.96.128.1: icmp_seq=45 ttl=255 time=205.769 ms
64 bytes from 100.96.128.1: icmp_seq=46 ttl=255 time=177.845 ms
Request timeout for icmp_seq 47
64 bytes from 100.96.128.1: icmp_seq=48 ttl=255 time=347.503 ms
64 bytes from 100.96.128.1: icmp_seq=49 ttl=255 time=285.772 ms
64 bytes from 100.96.128.1: icmp_seq=50 ttl=255 time=428.171 ms
64 bytes from 100.96.128.1: icmp_seq=51 ttl=255 time=306.871 ms
64 bytes from 100.96.128.1: icmp_seq=52 ttl=255 time=246.806 ms
64 bytes from 100.96.128.1: icmp_seq=53 ttl=255 time=213.304 ms
64 bytes from 100.96.128.1: icmp_seq=54 ttl=255 time=175.060 ms
64 bytes from 100.96.128.1: icmp_seq=55 ttl=255 time=262.179 ms
64 bytes from 100.96.128.1: icmp_seq=56 ttl=255 time=421.965 ms
64 bytes from 100.96.128.1: icmp_seq=57 ttl=255 time=339.597 ms
64 bytes from 100.96.128.1: icmp_seq=58 ttl=255 time=334.415 ms
64 bytes from 100.96.128.1: icmp_seq=59 ttl=255 time=461.400 ms
64 bytes from 100.96.128.1: icmp_seq=60 ttl=255 time=439.854 ms
64 bytes from 100.96.128.1: icmp_seq=61 ttl=255 time=475.714 ms
64 bytes from 100.96.128.1: icmp_seq=62 ttl=255 time=269.855 ms
64 bytes from 100.96.128.1: icmp_seq=63 ttl=255 time=223.720 ms
64 bytes from 100.96.128.1: icmp_seq=64 ttl=255 time=190.660 ms
64 bytes from 100.96.128.1: icmp_seq=65 ttl=255 time=70.555 ms
64 bytes from 100.96.128.1: icmp_seq=66 ttl=255 time=51.592 ms
64 bytes from 100.96.128.1: icmp_seq=67 ttl=255 time=57.906 ms
64 bytes from 100.96.128.1: icmp_seq=68 ttl=255 time=54.205 ms
64 bytes from 100.96.128.1: icmp_seq=69 ttl=255 time=250.238 ms
64 bytes from 100.96.128.1: icmp_seq=70 ttl=255 time=62.416 ms
64 bytes from 100.96.128.1: icmp_seq=71 ttl=255 time=51.538 ms
64 bytes from 100.96.128.1: icmp_seq=72 ttl=255 time=48.953 ms
64 bytes from 100.96.128.1: icmp_seq=73 ttl=255 time=193.173 ms
64 bytes from 100.96.128.1: icmp_seq=74 ttl=255 time=183.505 ms
64 bytes from 100.96.128.1: icmp_seq=75 ttl=255 time=371.035 ms
64 bytes from 100.96.128.1: icmp_seq=76 ttl=255 time=77.897 ms
64 bytes from 100.96.128.1: icmp_seq=77 ttl=255 time=51.497 ms
64 bytes from 100.96.128.1: icmp_seq=78 ttl=255 time=54.808 ms
Request timeout for icmp_seq 79
64 bytes from 100.96.128.1: icmp_seq=80 ttl=255 time=60.320 ms
64 bytes from 100.96.128.1: icmp_seq=81 ttl=255 time=48.887 ms
64 bytes from 100.96.128.1: icmp_seq=82 ttl=255 time=49.610 ms
Request timeout for icmp_seq 83
64 bytes from 100.96.128.1: icmp_seq=84 ttl=255 time=51.179 ms
64 bytes from 100.96.128.1: icmp_seq=85 ttl=255 time=64.214 ms
64 bytes from 100.96.128.1: icmp_seq=86 ttl=255 time=64.161 ms
64 bytes from 100.96.128.1: icmp_seq=87 ttl=255 time=168.550 ms
64 bytes from 100.96.128.1: icmp_seq=88 ttl=255 time=163.238 ms
64 bytes from 100.96.128.1: icmp_seq=89 ttl=255 time=164.712 ms
64 bytes from 100.96.128.1: icmp_seq=90 ttl=255 time=62.995 ms
64 bytes from 100.96.128.1: icmp_seq=91 ttl=255 time=65.484 ms
64 bytes from 100.96.128.1: icmp_seq=92 ttl=255 time=50.530 ms
64 bytes from 100.96.128.1: icmp_seq=93 ttl=255 time=54.615 ms
Request timeout for icmp_seq 94
64 bytes from 100.96.128.1: icmp_seq=95 ttl=255 time=57.349 ms
64 bytes from 100.96.128.1: icmp_seq=96 ttl=255 time=59.677 ms
64 bytes from 100.96.128.1: icmp_seq=97 ttl=255 time=191.897 ms
64 bytes from 100.96.128.1: icmp_seq=98 ttl=255 time=49.441 ms

--- 100.96.128.1 ping statistics ---
100 packets transmitted, 90 packets received, 10.0% packet loss
round-trip min/avg/max/stddev = 48.629/182.123/713.133/147.560 ms

Since the minimum latency is 48ms, quite clearly the L3 end is far off in Mumbai and is likely running IPsec or some other kind of VPN tunnel to the APs. This is the ground-level performance behind the “wifi business strategies” we hear about in the media. Wifi as a technology is excellent but it takes decent homework to deploy properly. Just hanging up a bunch of boxes and routing traffic via one MSC/central server placed far away doesn’t really help. Wifi can offload stress from 3G/4G significantly as long as it is done the right way, keeping in mind that wifi runs on “unlicensed spectrum” and interference can very much happen.

Time to catch the flight to the next hop!

23 Oct

Night fun task: OpenVPN, Quagga, Raspberry Pi and more!

I have been using OpenVPN for quite some time and like it very much. Earlier I was running the OpenVPN client on a TP-Link 1043ND router and that worked great. But recently I switched home routing to a MikroTik Map2N, which has much better VLAN & IPv6 support. Since then I have had trouble getting the VPN back up. I can always use a VPN client on the laptop, but that’s ugly for daily use, especially when this is my primary work location!

It’s hard to cover in this blog, but there are a number of issues and limitations which come up if I try to use the MikroTik itself as an OpenVPN client. Eventually I gave up on that. Enter the Raspberry Pi, which is just a tiny Debian box running quite nicely. I configured OpenVPN on it and connected it as a client to my server. It works fine. Next, I pointed a static route for a sample test IP from the core router to the Raspberry Pi, and it did not work. I realized IPv4 forwarding isn’t enabled on it by default. 🙂
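
For reference, enabling it is a quick sysctl change (persisting it in /etc/sysctl.conf so it survives reboots; the exact file layout may vary by distro):

# enable IPv4 forwarding right away
sysctl -w net.ipv4.ip_forward=1
# make it persistent across reboots
echo 'net.ipv4.ip_forward=1' >> /etc/sysctl.conf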

Once IPv4 forwarding is enabled, connectivity still doesn’t work because of the usual private-addressing issues. Both the home LAN and the VPN tunnel work over private IPs, so packets do hit my server but cannot return since the server is unsure about the return path. This brings me to the usual source NAT on the home LAN pool going towards the VPN server via the tunnel interface. And voila! It works.

iptables -t nat -A POSTROUTING -s 172.16.18.0/24 -o tun0 -j SNAT --to-source 10.200.105.10

Now, a big issue here is that while this works fine for the private addresses I use for random applications and testing, there are certain public IPs as well which I route via OpenVPN. In this setup, I have to put a static route at the router end pointing towards the Raspberry Pi for each such case, and worse – if the VPN tunnel breaks, connectivity goes down completely for those pools.

Thus I decided to rather use dynamic routing here. My lovely Quagga comes to the rescue. I configured an iBGP session between Quagga and the core router and passed “redistribute kernel” in the BGP config to redistribute all the routes the Raspberry Pi learns while connecting to the VPN (in the form of routes pushed from the OpenVPN config on the server). And it works! 🙂
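
Those kernel routes come from the routes the server pushes to the client; on the OpenVPN server side that looks roughly like this (the prefixes below are placeholders, not my actual pools):

# example only: TEST-NET prefixes standing in for the public pools routed via the VPN
push "route 203.0.113.0 255.255.255.0"
push "route 198.51.100.0 255.255.255.0"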

This way, as long as the VPN is up it is preferred for the public pools, and when the VPN is down they take their usual path without me having to manually disable any static routes.

So here comes a “state of the art” site-to-site VPN using OpenVPN, Quagga and a Raspberry Pi. Time taken to set up: ~5 mins (less than the time it took to write this post 😛 😉 )

Quagga Config:

raspberrypi# sh run
Building configuration...

Current configuration:
!
log syslog
!
service integrated-vtysh-config
!
interface eth0
 ipv6 nd suppress-ra
!
interface lo
!
interface tun0
 ipv6 nd suppress-ra
!
router bgp 58901
 bgp router-id 172.16.18.5
 redistribute kernel
 neighbor 172.16.18.1 remote-as 58901
 neighbor 172.16.18.1 next-hop-self
!
ip forwarding
!
line vty
!
end
raspberrypi#
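
For completeness, the matching iBGP peer on the MikroTik core router would look roughly like this (RouterOS v6 syntax assumed; addresses and AS taken from the Quagga config above):

# assumed RouterOS v6 commands for the core-router side of the iBGP session
/routing bgp instance set default as=58901 router-id=172.16.18.1
/routing bgp peer add name=raspberrypi remote-address=172.16.18.5 remote-as=58901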

Have fun!