Ethernet Provider Backbone Bridge/Transport (PBB and PBT)

Even MPLS is being challenged. As carriers move to a simpler, more cost-effective technology stack, the prospect of carrying IP within Ethernet directly over the OTN looks increasingly attractive. Traditional Ethernet has scaling and management problems, because its forwarding model depends on Ethernet switches flooding frames with unknown destinations out of all other ports, and then learning the correct exit port for each destination by noting the port on which the reply eventually arrives. For this to work there must be a unique path between any two points on the network, which Ethernet's spanning tree protocol guarantees by disabling links between switches until a minimal covering tree remains. This procedure has a number of drawbacks, including inefficient use of network resources and long recovery times in the event of link or node failure (a new tree has to be recomputed and established).
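The flood-and-learn behaviour described above can be sketched in a few lines. This is a minimal illustration for a single switch with hypothetical port numbers; real switches also age out learned entries and rely on spanning tree to keep the flooded topology loop-free.

```python
class LearningSwitch:
    """Toy model of Ethernet flood-and-learn forwarding (illustrative only)."""

    def __init__(self, ports):
        self.ports = ports      # port identifiers on this switch
        self.mac_table = {}     # learned mapping: MAC address -> exit port

    def receive(self, src_mac, dst_mac, in_port):
        # Learn: the source MAC is evidently reachable via the ingress port.
        self.mac_table[src_mac] = in_port
        # Forward: use the learned exit port if known,
        # otherwise flood all ports except the one the frame arrived on.
        if dst_mac in self.mac_table:
            return [self.mac_table[dst_mac]]
        return [p for p in self.ports if p != in_port]


sw = LearningSwitch(ports=[1, 2, 3, 4])
print(sw.receive("aa:aa", "bb:bb", in_port=1))  # unknown destination: flood [2, 3, 4]
print(sw.receive("bb:bb", "aa:aa", in_port=3))  # "aa:aa" already learned: [1]
```

Note that the table is populated purely by observation of traffic, which is exactly what makes plain Ethernet hard to manage at carrier scale.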


However, using Provider Backbone Transport, a subrange of VLAN tags is reserved for carrier forwarding purposes, the chosen tag plus destination MAC address functioning somewhat like an MPLS label. This forwarding information is provisioned into the carrier Ethernet switches by the central network management function, or by GMPLS, to create forwarding virtual circuits (and optionally failover restoration paths) across the carrier network. Unlike the situation with MPLS labels, however, the PBT combination of destination MAC address and VLAN tag is globally unique and identical across network switching elements. This offers significant advantages over MPLS in fault finding and tracing.
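The contrast with the flood-and-learn model is worth making concrete. In the sketch below, forwarding entries keyed on (backbone destination MAC, backbone VLAN ID) are installed by provisioning rather than learned from traffic, and unknown destinations are dropped rather than flooded. The MAC addresses, VLAN IDs, and port numbers are hypothetical.

```python
class PBTSwitch:
    """Toy model of PBT-style provisioned forwarding (illustrative only)."""

    def __init__(self):
        # Forwarding table: (backbone dest MAC, backbone VLAN ID) -> exit port.
        self.forwarding = {}

    def provision(self, b_dst_mac, b_vid, port):
        # Entries are installed by central management or GMPLS, never learned.
        self.forwarding[(b_dst_mac, b_vid)] = port

    def forward(self, b_dst_mac, b_vid):
        # Unknown (MAC, VID) pairs are dropped (None), never flooded.
        return self.forwarding.get((b_dst_mac, b_vid))


sw = PBTSwitch()
# Two provisioned paths to the same destination, distinguished by VLAN tag:
sw.provision("00:1a:2b:3c:4d:5e", b_vid=101, port=7)   # working path
sw.provision("00:1a:2b:3c:4d:5e", b_vid=102, port=9)   # restoration path
print(sw.forward("00:1a:2b:3c:4d:5e", 101))            # exit port 7
```

Because the same (destination MAC, VLAN tag) pair identifies the circuit at every switch along the path, tracing a fault is a matter of checking the same key at each hop.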

Carrier Ethernet is seeing a number of innovations that increase its capabilities, including:
§  MAC-in-MAC—the provision of a separate carrier forwarding address field, prepended to the enterprise customer header (defined in 802.1ah, and also called “Provider Backbone Bridge”).
§  Q-in-Q—used to create a hierarchy of VLAN tags, allowing carriers to distinguish between customers (defined in 802.1ad, and also called “VLAN stacking”).
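The two encapsulations can be pictured as successive layers wrapped around the customer frame. The sketch below uses simplified field names, not the exact 802.1ad/802.1ah bit layouts; all addresses and tag values are hypothetical.

```python
def q_in_q(customer_frame, s_vid):
    # 802.1ad (Q-in-Q): push a service (outer) VLAN tag onto the frame,
    # leaving the customer's own C-VLAN tag intact underneath.
    return {"s_tag": s_vid, **customer_frame}


def mac_in_mac(frame, b_src, b_dst, b_vid, i_sid):
    # 802.1ah (MAC-in-MAC / PBB): prepend a full backbone MAC header plus a
    # service instance identifier; the customer frame rides as opaque payload,
    # so core switches never see (or learn) customer MAC addresses.
    return {"b_dst": b_dst, "b_src": b_src, "b_tag": b_vid,
            "i_sid": i_sid, "payload": frame}


customer = {"dst": "cust-b", "src": "cust-a", "c_tag": 10, "data": "..."}
stacked = q_in_q(customer, s_vid=200)                       # carrier-edge tagging
backbone = mac_in_mac(stacked, b_src="pe1-mac", b_dst="pe2-mac",
                      b_vid=300, i_sid=5000)                # backbone encapsulation
print(backbone["payload"]["s_tag"], backbone["payload"]["c_tag"])
```

The key point is the separation of namespaces: the backbone header and tags belong to the carrier, while the customer's addresses and VLAN tags pass through untouched.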

With these, Ethernet is now beginning to match MPLS accomplishments, both in traffic engineering and in the provision of customer VPNs. The battle to come will prove interesting.

