Cisco IOS to Azure Tunnel within a VRF

Posted by Michael Palmer on Sunday, September 6, 2020
Last Modified on Thursday, September 1, 2022

Background

I’ve got a customer with an MPLS network spread across several providers, tied together through my company’s network. All of their sites tie back to my customer access router (CAR01) over hub-and-spoke links coming in from the major providers. All the customer’s sites sit in their own VRF, with their own OSPF instance, which handles almost all of the routing, including each site’s private LAN routing.

The customer uses a “hosted firewall” company, which I’ll keep to myself, but they have trouble getting that vendor to deliver, so they asked us: could we bridge their MPLS network over to Azure through our “hub” instead of at one of the sites? Of course, sales tells them “sure, we can do that!” So I’m tasked with making it happen.

I figured it would be easy: build a tunnel interface to Azure via IKEv2 and drop the tunnel into the customer VRF. Right, easy! For some reason, on our IOS 15.2 router, it just wouldn’t work. I spent at least 20 hours messing with it.

Initial Config

Configuring a route-based VPN tunnel from a Cisco router to Azure is pretty easy.

My Network Info:

Azure VM Network: 10.30.0.0/16 (netmask: 255.255.0.0)
Local LAN Subnets: 10.22.9.0/24 10.10.110.0/24 10.40.0.0/24
Loopback 0: 199.22.101.12 (*internet routable loopback IP*)
Azure Endpoint: 12.23.62.113

Phase 1

car01# conf terminal
car01(config)# crypto ikev2 proposal cust-Azure-prop
car01(config-ikev2-proposal)# encryption aes-cbc-256 aes-cbc-128 3des 
car01(config-ikev2-proposal)# integrity sha1 
car01(config-ikev2-proposal)# group 2 
car01(config-ikev2-proposal)# exit
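
If you want a quick sanity check at this point (not strictly necessary, just how I double-check myself), the router will list the proposal back to you:

car01# show crypto ikev2 proposal cust-Azure-prop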

Add the above proposal to an IKEv2 policy:

car01(config)#crypto ikev2 policy cust-ike-policy
car01(config-ikev2-policy)# proposal cust-Azure-prop
car01(config-ikev2-policy)# exit

Create your keyring. The pre-shared key comes from the Azure portal when you create the connection.

car01(config)#crypto ikev2 keyring cust-keyring-azure
car01(config-ikev2-keyring)# peer 12.23.62.113
car01(config-ikev2-keyring-peer)# address 12.23.62.113
car01(config-ikev2-keyring-peer)# pre-shared-key feedmecandybars
car01(config-ikev2-keyring-peer)# exit
car01(config-ikev2-keyring)# exit

My Phase 1 profile. Note that I use a loopback address for my internet-facing endpoint.

car01(config)#crypto ikev2 profile cust-azure-profile
car01(config-ikev2-profile)# match address local 199.22.101.12
car01(config-ikev2-profile)# match identity remote address 12.23.62.113 255.255.255.255
car01(config-ikev2-profile)# authentication remote pre-share
car01(config-ikev2-profile)# authentication local pre-share
car01(config-ikev2-profile)# keyring cust-keyring-azure
car01(config-ikev2-profile)# exit
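
To double-check that the profile picked up the keyring and both match statements, something like this does the job:

car01# show crypto ikev2 profile cust-azure-profile
car01# show running-config | section crypto ikev2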

My Transform set

car01(config)#crypto ipsec transform-set azure-transform esp-aes 256 esp-sha-hmac
car01(cfg-crypto-trans)# mode tunnel
car01(cfg-crypto-trans)# exit

Phase 2 glue

car01(config)#crypto ipsec profile cust-ipsec-azure-profile
car01(ipsec-profile)# set transform-set azure-transform
car01(ipsec-profile)# set ikev2-profile cust-azure-profile
car01(ipsec-profile)# exit
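
It’s worth confirming the transform-set name in the profile matches exactly what you defined above; a typo here is easy to miss:

car01# show crypto ipsec transform-set
car01# show crypto ipsec profile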

VRF Aware Tunnel

car01(config)#int tunnel 20
car01(config-if)# ip vrf forwarding CUST
car01(config-if)# ip address 169.254.0.1 255.255.255.252
car01(config-if)# ip tcp adjust-mss 1350
car01(config-if)# tunnel source 199.22.101.12
car01(config-if)# tunnel mode ipsec ipv4
car01(config-if)# tunnel destination 12.23.62.113
car01(config-if)# tunnel protection ipsec profile cust-ipsec-azure-profile
car01(config-if)# exit
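
A couple of quick checks that the tunnel interface exists and actually landed in the customer VRF (keep in mind the line protocol won’t come up until the IPsec SA is established):

car01# show interfaces tunnel 20
car01# show ip vrf interfaces CUST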

Add my route to the Azure network inside the VRF:

car01(config)#ip route vrf CUST 10.30.0.0 255.255.0.0 tunnel 20
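
And confirm the VRF really has that route pointing at the tunnel:

car01# show ip route vrf CUST 10.30.0.0 255.255.0.0
car01# show ip cef vrf CUST 10.30.1.4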

On to testing

You can check that your tunnel is up:

car01# show crypto session
car01# show crypto ikev2 sa
car01# show crypto ipsec sa

You can now ping your VM in Azure (make sure the VM’s firewall isn’t blocking ICMP):

car01# ping vrf CUST 10.30.1.4
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 10.30.1.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 12/12/12 ms
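
If you want to test from the customer side of the VRF instead of from the router itself, you can source the ping from the customer-facing sub-interface (shown further down); the Azure side does need a route back to that subnet for the reply:

car01# ping vrf CUST 10.30.1.4 source 10.22.9.117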

In case you’re wondering, my customer-end links look like this:

interface GigabitEthernet0/1.508
 description Cust: Customer Name Dallas [10Mbit] {9E/KEFN/310212/LVLC}
 bandwidth 20000
 encapsulation dot1Q 508
 ip vrf forwarding CUST
 ip address 10.22.9.117 255.255.255.252
 service-policy output voip-20mbps

Traffic should now flow from Azure, into the VRF, and on to the customer’s site… or it should. Mine didn’t. I spent two days messing with phase 2, messing with the routes, the tunnel, changing interfaces; I even rolled out my own Azure VPN gateway and VM to test with. We tried switching to a policy-based VPN, but that’s even more of a nightmare: the example configs talk about IKEv2, but then we found out that even though Azure is set for IKEv2, it really only connects via IKEv1 when policy-based is set up! It was a nightmare.

This setup is “by the book” and should work. I think my gear is just being wonky, with some sort of issue in my router firmware. I’ve had several people tell me it looks fine. Am I missing something? I hesitated to publish this post, since it didn’t work for me, but it’s still useful: if you leave the VRF off the tunnel and the route, you’ll have a working setup. If you see this and want to make fun of me, or lend a hand, you can DM me on Twitter, email me, or leave a comment below (if I have comments enabled)!

I’m going to have to build this whole setup in GNS3, because everything I’ve been through tells me it should work. I get really weird results: I can ping the customer sub-interface IP on the router, but not their far end (in my example above, 10.22.9.117 is pingable, but 10.22.9.118 is not).
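
If you want to chase the same symptom, these are the kinds of checks I’d start with: does the VRF actually have a path to the far end, and are packets getting encrypted and decrypted at all? The addresses below are from my example above.

car01# show ip route vrf CUST 10.22.9.118
car01# show ip cef vrf CUST 10.22.9.118
car01# show crypto ipsec sa peer 12.23.62.113 | include pkts
car01# debug crypto ikev2 error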

