Simple Juniper MPLS Core w/ L3VPN (Part 2)

Introduction

If you haven’t already, please read the first post, Simple Juniper MPLS Core w/ L3VPN (Part 1), as it sets up the topology, routing, and label distribution that we need before configuring our customer edge devices.  In this post, we will route traffic from our customer edge devices (sourced from their loopbacks) over the core to their other site.  To accomplish this, we need to perform the following configurations:

  • Establish iBGP sessions between each provider edge device and enable inet-vpn signaling for MP-BGP.
  • Create a routing instance for each customer connected to a provider edge router and configure it as an L3VPN.
  • Establish eBGP peering between the provider edge routers and each customer edge device.

Now that we have an idea of what needs to be accomplished, let’s start off by configuring iBGP in the core.

Configure iBGP Peering in the Core

If you are familiar with configuring iBGP in Junos, this will be easy.  For those who are not: although we are configuring this in our MPLS core and enabling inet-vpn signaling, it is not any different from a regular iBGP deployment.  First, make sure you have a full-mesh topology (or use route reflectors between the provider edge routers) and create the peerings.  If you are coming from the Cisco world, one difference is that in Junos you create a group that contains the settings for the iBGP peers.  Take a look at the configurations for each of the four provider edge routers:

PE-R1

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 100.1.1.1
set protocols bgp group ibgp family inet-vpn any
set protocols bgp group ibgp neighbor 100.1.1.2
set protocols bgp group ibgp neighbor 100.1.1.5
set protocols bgp group ibgp neighbor 100.1.1.6
set routing-options autonomous-system 100

PE-R2

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 100.1.1.2
set protocols bgp group ibgp family inet-vpn any
set protocols bgp group ibgp neighbor 100.1.1.1
set protocols bgp group ibgp neighbor 100.1.1.5
set protocols bgp group ibgp neighbor 100.1.1.6
set routing-options autonomous-system 100

PE-R3

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 100.1.1.5
set protocols bgp group ibgp family inet-vpn any
set protocols bgp group ibgp neighbor 100.1.1.1
set protocols bgp group ibgp neighbor 100.1.1.2
set protocols bgp group ibgp neighbor 100.1.1.6
set routing-options autonomous-system 100

PE-R4

set protocols bgp group ibgp type internal
set protocols bgp group ibgp local-address 100.1.1.6
set protocols bgp group ibgp family inet-vpn any
set protocols bgp group ibgp neighbor 100.1.1.1
set protocols bgp group ibgp neighbor 100.1.1.2
set protocols bgp group ibgp neighbor 100.1.1.5
set routing-options autonomous-system 100

If you break down the configuration, it is straightforward.  First, we configure the group with type internal.  After that, we define the local address used for the peering (in this case the loopback0 address of 100.1.1.x/32) and enable L3VPN signaling on the provider edge routers with the family inet-vpn any command.  Next, we define the peers for the iBGP group, along with the local autonomous system number used in the core (ASN 100 for this topology).  Once this is all committed, we can verify the iBGP peerings with the command show bgp summary:

root@PE-R1> show bgp summary
Groups: 1 Peers: 3 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
bgp.l3vpn.0 0 0 0 0 0 0
bgp.l3vpn.2 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
100.1.1.2 100 12 12 0 0 4:31 Establ
 bgp.l3vpn.0: 0/0/0/0
 bgp.l3vpn.2: 0/0/0/0
100.1.1.5 100 3 4 0 0 41 Establ
 bgp.l3vpn.0: 0/0/0/0
 bgp.l3vpn.2: 0/0/0/0
100.1.1.6 100 1 11 0 0 19 Establ
 bgp.l3vpn.0: 0/0/0/0
 bgp.l3vpn.2: 0/0/0/0

Near the bottom, we see each peering session with PE-R1 as established (for example, 100.1.1.2 shows as Establ).  Notice, however, that now that we have enabled L3VPN signaling we have the routing table bgp.l3vpn.0.  This is the table that accepts routes from customer edge devices (via BGP) and is used for route advertisement across the MPLS network, with the appropriate route distinguisher based on the routing instance each route belongs to (more on this later).  The other routing table that was created (bgp.l3vpn.2) deals with multicast routes in the routing instances and is out of scope for this post; later, I will write a post about routing multicast traffic over the MPLS cloud.
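Once customer routes are being exchanged (we configure that later in this post), a quick way to inspect what lands in this table is the following show command; the prefixes you see will depend on your own routing instances:

root@PE-R1> show route table bgp.l3vpn.0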

Now that we have successfully established the peerings, you can perform one more check to verify that the neighbors have negotiated L3VPN signaling by using the command show bgp neighbor <neighbor-address> and looking at the address families they have negotiated:

root@PE-R1> show bgp neighbor 100.1.1.2
Peer: 100.1.1.2+60479 AS 100 Local: 100.1.1.1+179 AS 100
 Type: Internal State: Established Flags: <ImportEval Sync>
<output omitted for brevity>
 Address families configured: inet-vpn-unicast inet-vpn-multicast
 Local Address: 100.1.1.1 Holdtime: 90 Preference: 170
<output omitted for brevity>
 Peer ID: 100.1.1.2 Local ID: 100.1.1.1 Active Holdtime: 90
<output omitted for brevity>
 NLRI for restart configured on peer: inet-vpn-unicast inet-vpn-multicast
 NLRI advertised by peer: inet-vpn-unicast inet-vpn-multicast
 NLRI for this session: inet-vpn-unicast inet-vpn-multicast
<output omitted for brevity>
 NLRI that restart is negotiated for: inet-vpn-unicast inet-vpn-multicast

Configure Customer Routing Instances

Now that we have L3VPN signaling, we can create our L3VPN instances on two of the provider edge routers.  As mentioned at the beginning of the post, when we create a routing instance for the customer, we need to establish eBGP peering with the customer edge router.  However, there is a bit more to it.  On the provider edge router, we need to configure the following items:

  • Configure the routing instance with instance-type vrf
  • Assign the interface that connects to the customer edge router to the routing instance
  • Assign a route distinguisher and configure the vrf-target
  • Configure eBGP peering with the customer edge router

Now that you know what needs to be done, let’s take a look at the configuration on PE-R1:

set interfaces ge-0/0/3 unit 0 family inet address 172.16.1.1/30
set routing-instances Customer-A instance-type vrf
set routing-instances Customer-A interface ge-0/0/3.0
set routing-instances Customer-A route-distinguisher 100.1.1.1:16
set routing-instances Customer-A vrf-target target:100:16
set routing-instances Customer-A protocols bgp group eBGP type external
set routing-instances Customer-A protocols bgp group eBGP peer-as 65161
set routing-instances Customer-A protocols bgp group eBGP neighbor 172.16.1.2

As you can see, we configured an interface with a transit address and then assigned it to the routing instance.  Note that an interface can only belong to one instance (i.e., either a routing instance or the global table).  After that, we configure the route distinguisher (100.1.1.1:16) and the VRF target used for import/export (target:100:16).  Finally, similar to the iBGP configuration, we create an eBGP group to specify the peering options toward CE-A-R1.
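As a quick sanity check (the instance name matches the configuration above), you can confirm the route distinguisher and VRF targets the instance is using, along with its tables, before moving on:

root@PE-R1> show route instance Customer-A detail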

On CE-A-R1, we create an eBGP peering configuration toward PE-R1.  Since this is the customer edge device, which does not know anything about the MPLS core, we configure the peering as a normal eBGP configuration.  Note that we need to create a policy-statement to export connected routes (the transit and loopback interfaces) into the BGP routing process.  If you are not familiar with this process in Junos, or why it is needed, take a look at the Juniper docs on importing/exporting prefixes for each routing protocol.

set policy-options policy-statement export-connected term 1 from protocol direct
set policy-options policy-statement export-connected term 1 then accept
set interfaces ge-0/0/0 unit 0 family inet address 172.16.1.2/30
set routing-options autonomous-system 65161
set protocols bgp group eBGP type external
set protocols bgp group eBGP export export-connected
set protocols bgp group eBGP peer-as 100
set protocols bgp group eBGP neighbor 172.16.1.1
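Once the peering comes up, one way to confirm that the export-connected policy is doing its job is to look at what CE-A-R1 advertises toward PE-R1 (neighbor address as configured above); you should see the connected transit and loopback prefixes listed:

root@CE-A-R1> show route advertising-protocol bgp 172.16.1.1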

There are a few show commands that you can use to verify everything is working.  First, let’s take a look at the BGP peering status:

root@PE-R1> show bgp summary instance Customer-A
Groups: 1 Peers: 1 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
Custo.inet.0 3 2 0 0 0 0
Custom.mdt.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.1.2 65161 131 131 0 0 57:51 Establ
 Customer-A.inet.0: 2/3/3/0

Notice that when we use the command show bgp summary we need to append instance Customer-A.  This is because we are looking at BGP sessions inside an L3VPN, so we need to run all show commands relating to the customer edge router against the Customer-A instance.  Also, notice the line Customer-A.inet.0: 2/3/3/0: this shows that in the Customer-A inet table (the routing-instance-specific table) there are two routes active, three received, three accepted, and none damped.
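The same pattern applies to other verification commands.  For example, to see what PE-R1 is advertising to the CE (the neighbor address scopes the command to the Customer-A instance):

root@PE-R1> show route advertising-protocol bgp 172.16.1.2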

If we take a look at the Customer-A.inet.0 table we will see prefixes received via BGP.

root@PE-R1> show route table Customer-A.inet.0

Customer-A.inet.0: 4 destinations, 5 routes (4 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.207.123.0/24 *[BGP/170] 00:59:47, localpref 100
 AS path: 65161 I
 > to 172.16.1.2 via ge-0/0/3.0
16.16.16.1/32 *[BGP/170] 00:59:47, localpref 100
 AS path: 65161 I
 > to 172.16.1.2 via ge-0/0/3.0
172.16.1.0/30 *[Direct/0] 01:05:37
 > via ge-0/0/3.0
 [BGP/170] 00:59:47, localpref 100
 AS path: 65161 I
 > to 172.16.1.2 via ge-0/0/3.0
172.16.1.1/32 *[Local/0] 01:05:37
 Local via ge-0/0/3.0

If we perform a quick ping from the routing instance to CE-A-R1’s loopback0 address, we will see it complete successfully.

root@PE-R1> ping routing-instance Customer-A 16.16.16.1
PING 16.16.16.1 (16.16.16.1): 56 data bytes
64 bytes from 16.16.16.1: icmp_seq=0 ttl=64 time=9.956 ms
64 bytes from 16.16.16.1: icmp_seq=1 ttl=64 time=10.981 ms
64 bytes from 16.16.16.1: icmp_seq=2 ttl=64 time=10.538 ms

We will now configure PE-R3 and CE-A-R2 in a similar fashion, with their own transit addresses and peering details.  First, let’s take care of PE-R3:

set routing-options autonomous-system 100
set interfaces ge-0/0/3 unit 0 family inet address 172.16.2.1/30
set routing-instances Customer-A instance-type vrf
set routing-instances Customer-A interface ge-0/0/3.0
set routing-instances Customer-A route-distinguisher 100.1.1.5:16
set routing-instances Customer-A vrf-target target:100:16
set routing-instances Customer-A protocols bgp group eBGP type external
set routing-instances Customer-A protocols bgp group eBGP peer-as 65162
set routing-instances Customer-A protocols bgp group eBGP neighbor 172.16.2.2

And CE-A-R2:

set routing-options autonomous-system 65162
set interfaces ge-0/0/0 unit 0 family inet address 172.16.2.2/30
set policy-options policy-statement export-connected term 1 from protocol direct
set policy-options policy-statement export-connected term 1 then accept
set protocols bgp group eBGP type external
set protocols bgp group eBGP export export-connected
set protocols bgp group eBGP peer-as 100
set protocols bgp group eBGP neighbor 172.16.2.1

Running the same commands as before, we see BGP peering established as expected and the appropriate routes:

root@PE-R3> show bgp summary instance Customer-A
Groups: 1 Peers: 1 Down peers: 0
Table Tot Paths Act Paths Suppressed History Damp State Pending
Custo.inet.0 5 3 0 0 0 0
Custom.mdt.0 0 0 0 0 0 0
Peer AS InPkt OutPkt OutQ Flaps Last Up/Dwn State|#Active/Received/Accepted/Damped...
172.16.2.2 65162 10 13 0 0 3:31 Establ
 Customer-A.inet.0: 1/2/2/0

root@PE-R3> show route table Customer-A.inet.0 protocol bgp

Customer-A.inet.0: 5 destinations, 7 routes (5 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

10.207.123.0/24 *[BGP/170] 00:04:16, localpref 100
 AS path: 65162 I
 > to 172.16.2.2 via ge-0/0/3.0
 [BGP/170] 00:04:28, localpref 100, from 100.1.1.1
 AS path: 65161 I
 to 10.0.0.25 via ge-0/0/0.0, Push 299856, Push 299776(top)
 > to 10.0.0.33 via ge-0/0/1.0, Push 299856, Push 299776(top)
16.16.16.1/32 *[BGP/170] 00:04:28, localpref 100, from 100.1.1.1
 AS path: 65161 I
 > to 10.0.0.25 via ge-0/0/0.0, Push 299856, Push 299776(top)
 to 10.0.0.33 via ge-0/0/1.0, Push 299856, Push 299776(top)
172.16.1.0/30 *[BGP/170] 00:04:28, localpref 100, from 100.1.1.1
 AS path: I
 to 10.0.0.25 via ge-0/0/0.0, Push 299856, Push 299776(top)
 > to 10.0.0.33 via ge-0/0/1.0, Push 299856, Push 299776(top)
172.16.2.0/30 [BGP/170] 00:04:16, localpref 100
 AS path: 65162 I
 > to 172.16.2.2 via ge-0/0/3.0
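In the output above, each remote prefix carries two labels: the inner label (299856) is the VPN label PE-R1 advertised via MP-BGP, and the top label (299776) is the transport label used to reach PE-R1’s loopback, distributed by LDP in Part 1.  If you want to verify where the transport label comes from (assuming the LDP setup from Part 1), you can look up PE-R1’s loopback in the inet.3 table:

root@PE-R3> show route table inet.3 100.1.1.1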

Testing and Verifying Connection Between Customer Edge Devices

Now that we have both sides of our customer edge deployment configured and working, all that is left is to verify that we see the routes we expect, that we do not see routes we should not, and to ping from one customer edge loopback to the other.  Before we get to the verification, you should expect the following:

  • Only prefixes that belong to the customer edge routers or are in the customer routing instances on the provider edge routers should be in the inet.0 routing tables on the customer edge devices.
  • None of the prefixes that make up the core (transits and loopbacks) should be seen in the customer edge routers’ inet.0.

With those two points out of the way, let’s take a look at the inet.0 routing table on CE-A-R1 to see whether we see the transit and loopback prefixes of CE-A-R2:

root@CE-A-R1> show route table inet.0 protocol bgp

inet.0: 9 destinations, 9 routes (9 active, 0 holddown, 0 hidden)
+ = Active Route, - = Last Active, * = Both

16.16.16.0/30 *[BGP/170] 00:00:15, localpref 100
 AS path: 100 65162 I
 > to 172.16.1.1 via ge-0/0/0.0
172.16.2.0/30 *[BGP/170] 00:10:04, localpref 100
 AS path: 100 I
 > to 172.16.1.1 via ge-0/0/0.0

Now let’s perform a quick ping and make sure that it is successful:

root@CE-A-R1> ping 16.16.16.2 source 16.16.16.1
PING 16.16.16.2 (16.16.16.2): 56 data bytes
64 bytes from 16.16.16.2: icmp_seq=0 ttl=61 time=24.200 ms
64 bytes from 16.16.16.2: icmp_seq=1 ttl=61 time=25.626 ms
64 bytes from 16.16.16.2: icmp_seq=2 ttl=61 time=25.849 ms
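Finally, a traceroute between the loopbacks (sourced the same way as the ping) is a quick way to inspect the path through the core; depending on how the core is configured (for example, whether icmp-tunneling is enabled under protocols mpls), the MPLS label stack may or may not appear in the per-hop output:

root@CE-A-R1> traceroute 16.16.16.2 source 16.16.16.1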

Conclusion

That is it for our simple MPLS topology.  In these two posts, we created an MPLS core, enabled L3VPN signaling between the provider edge routers, and advertised prefixes from each customer edge device across the MPLS core to allow end-to-end connectivity.  This is just the tip of the iceberg; topologies can get much more complex when you add in things like multicast, multiple L3VPNs, and more.

Later posts will look at things such as multiple L3VPNs for multi-tenant routing, as well as more complex MPLS core configurations such as label-switched paths (LSPs) and path constraints using the Resource Reservation Protocol (RSVP).

Troy Perkins
