Simple Cisco MPLS Core w/ L3VPN (Part 1)

Introduction

I decided to create a few follow-up posts on the Multiprotocol Label Switching (MPLS) topic, but instead of focusing on Juniper configurations, I am going to re-create the topology using Cisco configurations.  In all honesty, the basics of how the topology works are the same: an interior routing protocol (again, Open Shortest Path First (OSPF)), a label distribution protocol (in this case, the Label Distribution Protocol (LDP)), and internal peering via Border Gateway Protocol (BGP) so that Virtual Routing and Forwarding (VRF) prefixes can be exchanged between the provider edge routers.

In this first post, we are going to go over the topology (90% of it is the same as the Simple Juniper MPLS Core w/ L3VPN (Part 1)) and bring up the core of the network.  In the next post, we will bring up the very same VRF for Customer A, inject some prefixes, and verify that routing across the MPLS core works.

Topology

As mentioned earlier, the topology is largely the same as the Juniper diagram, with a few differences.  First, I have added a loopback on each Provider Edge (PE) router that belongs to the Customer A VRF and changed the addresses used for these loopbacks.  Second, since I am using the Cisco Cloud Services Router 1000v (CSR1000v) for this lab, I need to configure all the interfaces as subinterfaces using 802.1Q (dot1q) encapsulation.  This doesn’t have a big impact on the design; it is more of a footnote in case you have not used CSR1000v’s in a lab environment.  Let’s take a look at the topology diagram:

Simple Cisco MPLS core diagram

As before, we have four PE routers, two P routers, and two CE routers.  We are not going to define any explicit Label Switched Paths (LSPs) or apply any resiliency techniques, so that we can concentrate on the basics from a Cisco point of view.  With that said, let’s get into the configuration.

Transits, Loopbacks, and Routing Configuration

The first step is the basic configuration of the transit links, loopback interfaces, and OSPF routing.  It does not cover the management interfaces (GigabitEthernet2); I will assume that any interface used for out-of-band management has already been configured.  The following are the configurations for each device:

R1 (PE)

hostname R1
interface GigabitEthernet1
  no shutdown
!
interface GigabitEthernet1.16
  encapsulation dot1q 16
  ip vrf forwarding CUSTOMER-A
  ip address 172.16.1.1 255.255.255.252
!
interface GigabitEthernet1.100
  encapsulation dot1q 100
  ip address 10.0.0.1 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.104
  encapsulation dot1q 104
  ip address 10.0.0.5 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.108
  encapsulation dot1q 108
  ip address 10.0.0.9 255.255.255.252
  mpls ip
!
interface Loopback0
  ip address 100.1.1.1 255.255.255.255
!
router ospf 100
  network 0.0.0.0 255.255.255.255 area 0
  passive-interface GigabitEthernet2
!

R2 (PE)

hostname R2
interface GigabitEthernet1
  no shutdown
!
interface GigabitEthernet1.108
  encapsulation dot1q 108
  ip address 10.0.0.10 255.255.255.252 
  mpls ip
!
interface GigabitEthernet1.112
  encapsulation dot1q 112
  ip address 10.0.0.13 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.116
  encapsulation dot1q 116
  ip address 10.0.0.17 255.255.255.252
  mpls ip
!
interface Loopback0 
  ip address 100.1.1.2 255.255.255.255
!
router ospf 100
  network 0.0.0.0 255.255.255.255 area 0
  passive-interface GigabitEthernet2
!

R3 (P)

hostname R3
interface GigabitEthernet1
  no shutdown
!
interface GigabitEthernet1.100
  encapsulation dot1q 100
  ip address 10.0.0.2 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.112
  encapsulation dot1q 112
  ip address 10.0.0.14 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.120
  encapsulation dot1q 120
  ip address 10.0.0.21 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.124
  encapsulation dot1q 124
  ip address 10.0.0.25 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.128
  encapsulation dot1q 128
  ip address 10.0.0.29 255.255.255.252
  mpls ip
!
interface Loopback0
  ip address 100.1.1.3 255.255.255.255
!
router ospf 100
  network 0.0.0.0 255.255.255.255 area 0
  passive-interface GigabitEthernet2
!

R4 (P)

hostname R4
interface GigabitEthernet1
  no shutdown
!
interface GigabitEthernet1.104
  encapsulation dot1q 104
  ip address 10.0.0.6 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.116
  encapsulation dot1q 116
  ip address 10.0.0.18 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.120
  encapsulation dot1q 120
  ip address 10.0.0.22 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.132
  encapsulation dot1q 132
  ip address 10.0.0.33 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.136
  encapsulation dot1q 136
  ip address 10.0.0.37 255.255.255.252
  mpls ip
!
interface Loopback0
  ip address 100.1.1.4 255.255.255.255
!
router ospf 100
  network 0.0.0.0 255.255.255.255 area 0
  passive-interface GigabitEthernet2
!

R5 (PE)

hostname R5
interface GigabitEthernet1
  no shutdown
!
interface GigabitEthernet1.124
  encapsulation dot1q 124
  ip address 10.0.0.26 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.132
  encapsulation dot1q 132
  ip address 10.0.0.34 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.140
  encapsulation dot1q 140
  ip address 10.0.0.41 255.255.255.252
  mpls ip
!
interface Loopback0
  ip address 100.1.1.5 255.255.255.255
!
router ospf 100
  network 0.0.0.0 255.255.255.255 area 0
  passive-interface GigabitEthernet2
!

R6 (PE)

hostname R6
interface GigabitEthernet1
  no shutdown
!
interface GigabitEthernet1.128
  encapsulation dot1q 128
  ip address 10.0.0.30 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.136
  encapsulation dot1q 136
  ip address 10.0.0.38 255.255.255.252
  mpls ip
!
interface GigabitEthernet1.140
  encapsulation dot1q 140
  ip address 10.0.0.42 255.255.255.252
  mpls ip
!
interface Loopback0
  ip address 100.1.1.6 255.255.255.255
!
router ospf 100
  network 0.0.0.0 255.255.255.255 area 0
  passive-interface GigabitEthernet2
!

As you can see, the configurations are very straightforward if you are used to configuring Cisco devices.  One item worth noting is the mpls ip command configured on each of the transit interfaces; this is what enables MPLS on those interfaces.  Also, if you are using the CSR1000v in a lab environment and haven’t already done so, you need to enable the premium license on each router in order to use MPLS features (and others, such as IPsec VPN).  To do this, use the command license boot level premium in global configuration mode.  You will get a prompt to confirm changing the license level; answer yes, save the configuration, and reboot.
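
As a rough sketch, the process looks like the following (the exact confirmation prompt varies by software release, so the dialog below is illustrative rather than verbatim device output):

R1#configure terminal
R1(config)#license boot level premium
! accept the license-change confirmation prompt when it appears
R1(config)#end
R1#write memory
R1#reload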

Now that we have the basics configured, we can perform some testing to verify everything looks good.  First, let’s check the routing table on one of the P routers to verify that we see all the loopbacks and transit networks, and then run a quick set of test pings from the IOS Tcl shell (tclsh):

R3#sh ip route os
Codes: L - local, C - connected, S - static, R - RIP, M - mobile, B - BGP
       D - EIGRP, EX - EIGRP external, O - OSPF, IA - OSPF inter area
       N1 - OSPF NSSA external type 1, N2 - OSPF NSSA external type 2
       E1 - OSPF external type 1, E2 - OSPF external type 2
       i - IS-IS, su - IS-IS summary, L1 - IS-IS level-1, L2 - IS-IS level-2
       ia - IS-IS inter area, * - candidate default, U - per-user static route
       o - ODR, P - periodic downloaded static route, H - NHRP, l - LISP
       a - application route
       + - replicated route, % - next hop override

Gateway of last resort is 10.207.123.1 to network 0.0.0.0

      10.0.0.0/8 is variably subnetted, 19 subnets, 3 masks
O        10.0.0.4/30 [110/2] via 10.0.0.22, 03:54:34, GigabitEthernet1.120
                     [110/2] via 10.0.0.1, 03:53:56, GigabitEthernet1.100
O        10.0.0.8/30 [110/2] via 10.0.0.13, 03:54:20, GigabitEthernet1.112
                     [110/2] via 10.0.0.1, 03:53:56, GigabitEthernet1.100
O        10.0.0.16/30 [110/2] via 10.0.0.22, 03:54:34, GigabitEthernet1.120
                      [110/2] via 10.0.0.13, 03:54:20, GigabitEthernet1.112
O        10.0.0.32/30 [110/2] via 10.0.0.26, 03:55:04, GigabitEthernet1.124
                      [110/2] via 10.0.0.22, 03:54:34, GigabitEthernet1.120
O        10.0.0.36/30 [110/2] via 10.0.0.30, 03:55:14, GigabitEthernet1.128
                      [110/2] via 10.0.0.22, 03:54:34, GigabitEthernet1.120
O        10.0.0.40/30 [110/2] via 10.0.0.30, 03:55:14, GigabitEthernet1.128
                      [110/2] via 10.0.0.26, 03:55:04, GigabitEthernet1.124
      100.0.0.0/32 is subnetted, 6 subnets
O        100.1.1.1 [110/2] via 10.0.0.1, 03:53:56, GigabitEthernet1.100
O        100.1.1.2 [110/2] via 10.0.0.13, 03:54:20, GigabitEthernet1.112
O        100.1.1.4 [110/2] via 10.0.0.22, 03:54:34, GigabitEthernet1.120
O        100.1.1.5 [110/2] via 10.0.0.26, 03:55:04, GigabitEthernet1.124
O        100.1.1.6 [110/2] via 10.0.0.30, 03:55:14, GigabitEthernet1.128
R3(tcl)#foreach address {
+>(tcl)#100.1.1.1
+>(tcl)#100.1.1.2
+>(tcl)#100.1.1.4
+>(tcl)#100.1.1.5
+>(tcl)#100.1.1.6
+>(tcl)#} { ping $address siz 36
+>(tcl)#}
Type escape sequence to abort.
Sending 5, 36-byte ICMP Echos to 100.1.1.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Type escape sequence to abort.
Sending 5, 36-byte ICMP Echos to 100.1.1.2, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Type escape sequence to abort.
Sending 5, 36-byte ICMP Echos to 100.1.1.4, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Type escape sequence to abort.
Sending 5, 36-byte ICMP Echos to 100.1.1.5, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
Type escape sequence to abort.
Sending 5, 36-byte ICMP Echos to 100.1.1.6, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms
R3(tcl)#

All of our basic tests are successful.  Let’s move on to configuring our label distribution protocol and enabling Multiprotocol BGP (MP-BGP) between our PE routers.

LDP and MP-BGP Configuration

The configuration of our label distribution protocol is simple.  We can choose between LDP and the Tag Distribution Protocol (TDP), but the latter is a legacy Cisco-proprietary protocol that has fallen out of favor in comparison to the standards-based LDP.  The configuration consists of specifying which protocol to use and defining the LDP router-id.  On each PE and P router, we are going to enable the following:

mpls label protocol ldp
mpls ldp router-id Loopback0

Once you have configured this on each router, a simple show mpls ldp neighbor should show that the neighbors see each other and have started to exchange labels:

R3#sh mpls ldp neighbor
    Peer LDP Ident: 100.1.1.1:0; Local LDP Ident 100.1.1.3:0
        TCP connection: 100.1.1.1.646 - 100.1.1.3.13089
        State: Oper; Msgs sent/rcvd: 293/295; Downstream
        Up time: 03:58:35
        LDP discovery sources:
          GigabitEthernet1.100, Src IP addr: 10.0.0.1
        Addresses bound to peer LDP Ident:
          10.0.0.1        10.0.0.5        10.0.0.9        10.207.123.101
          100.1.1.1
    Peer LDP Ident: 100.1.1.2:0; Local LDP Ident 100.1.1.3:0
        TCP connection: 100.1.1.2.646 - 100.1.1.3.52830
        State: Oper; Msgs sent/rcvd: 294/295; Downstream
        Up time: 03:58:35
        LDP discovery sources:
          GigabitEthernet1.112, Src IP addr: 10.0.0.13
        Addresses bound to peer LDP Ident:
          10.0.0.10       10.0.0.13       10.0.0.17       10.207.123.102
          100.1.1.2
<output omitted for brevity>

Although R3 (P) has more neighbors, I have included just a few in the output above to show that everything is working as expected.  We can also use the show mpls interfaces command to confirm that the MPLS-enabled interfaces are using LDP as their label distribution protocol:

R3#sh mpls interfaces
Interface              IP            Tunnel   BGP Static Operational
GigabitEthernet1.100   Yes (ldp)     No       No  No     Yes
GigabitEthernet1.112   Yes (ldp)     No       No  No     Yes
GigabitEthernet1.120   Yes (ldp)     No       No  No     Yes
GigabitEthernet1.124   Yes (ldp)     No       No  No     Yes
GigabitEthernet1.128   Yes (ldp)     No       No  No     Yes

Now that LDP is working, let’s configure iBGP peering between all the PE routers.  Just like in the Juniper scenario, we need to enable iBGP and tell each neighbor that we can signal VRFs.  Below is the configuration from R1 (PE).  Note that I am using BGP peer groups for this configuration; if you wish to read more on them, please refer to the Cisco BGP documentation:

router bgp 100
  bgp log-neighbor-changes
  no bgp default ipv4-unicast
  neighbor IBGP peer-group
  neighbor IBGP update-source Loopback0
  neighbor 100.1.1.2 remote-as 100
  neighbor 100.1.1.2 peer-group IBGP
  neighbor 100.1.1.5 remote-as 100
  neighbor 100.1.1.5 peer-group IBGP
  neighbor 100.1.1.6 remote-as 100
  neighbor 100.1.1.6 peer-group IBGP
  address-family ipv4
    neighbor 100.1.1.2 activate
    neighbor 100.1.1.5 activate
    neighbor 100.1.1.6 activate
  exit-address-family
  address-family vpnv4
    neighbor IBGP send-community extended
    neighbor 100.1.1.2 activate
    neighbor 100.1.1.5 activate
    neighbor 100.1.1.6 activate
  exit-address-family
!

This will look strange if you are not used to configuring BGP address families on Cisco devices.  Address families are simply the way a BGP peer signals to its neighbors which capabilities it supports.  In the configuration above, two address families are configured: IPv4, which is normal BGP unicast, and VPNv4, which carries the VRF prefixes.  After defining each neighbor in the global BGP configuration, you must also activate that neighbor under each address family you want to support.  So, to each of the other PE routers, we are advertising that we support basic IPv4 unicast along with VPNv4.  To further demonstrate this, take a look at the output of the show ip bgp neighbors command:

BGP neighbor is 100.1.1.2,  remote AS 100, internal link
 Member of peer-group IBGP for session parameters
  BGP version 4, remote router ID 100.1.1.2
  BGP state = Established, up for 03:30:28
  Last read 00:00:48, last write 00:00:20, hold time is 180, keepalive interval is 60 seconds
  Neighbor sessions:
    1 active, is not multisession capable (disabled)
  Neighbor capabilities:
    Route refresh: advertised and received(new)
    Four-octets ASN Capability: advertised and received
    Address family IPv4 Unicast: advertised and received
    Address family VPNv4 Unicast: advertised
    Enhanced Refresh Capability: advertised and received
    Multisession Capability:
    Stateful switchover support enabled: NO for session 1

This shows that R1 has peered with R2 and has both advertised and received the IPv4 unicast capability, but has only advertised VPNv4.  This is because only R1 and R5 will be configured with a VRF at this point.  We can also take a look at show ip bgp summary on R1 and see that we are now peering with R2, R5, and R6:

R1#sh ip bgp sum
BGP router identifier 100.1.1.1, local AS number 100
BGP table version is 1, main routing table version 1

Neighbor        V           AS MsgRcvd MsgSent   TblVer  InQ OutQ Up/Down  State/PfxRcd
100.1.1.2       4          100     237     238        1    0    0 03:33:05        0
100.1.1.5       4          100     238     234        1    0    0 03:29:06        0
100.1.1.6       4          100     237     233        1    0    0 03:33:05        0
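
For reference, the BGP configuration on the remaining PE routers mirrors R1’s, with only the neighbor loopback addresses changing.  As a minimal sketch (only R1’s configuration is shown in this post, so treat the following as an illustration of what R5 would look like rather than its verified configuration):

router bgp 100
  bgp log-neighbor-changes
  no bgp default ipv4-unicast
  neighbor IBGP peer-group
  neighbor IBGP update-source Loopback0
  neighbor 100.1.1.1 remote-as 100
  neighbor 100.1.1.1 peer-group IBGP
  neighbor 100.1.1.2 remote-as 100
  neighbor 100.1.1.2 peer-group IBGP
  neighbor 100.1.1.6 remote-as 100
  neighbor 100.1.1.6 peer-group IBGP
  address-family ipv4
    neighbor 100.1.1.1 activate
    neighbor 100.1.1.2 activate
    neighbor 100.1.1.6 activate
  exit-address-family
  address-family vpnv4
    neighbor IBGP send-community extended
    neighbor 100.1.1.1 activate
    neighbor 100.1.1.2 activate
    neighbor 100.1.1.6 activate
  exit-address-family
!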
      

Conclusion

We are at the stopping point for this post.  We have configured everything we need for a basic Cisco MPLS core: interfaces (transits and loopbacks), routing (OSPF and BGP), and our label distribution protocol (LDP).  In the next post we will configure a customer VRF on R1 and R5 and connect an interface to a customer router to see how interfaces and BGP peering are configured between the provider and the customer.

Troy Perkins
