Application Platforms

At the most general level, a network can be conceptualized as a mechanism, frequently drawn as a cloud, that connects any combination of people, data, and applications. SIP/IMS is optimized for connections that involve people, whether to other people or to data, because the session holding times are typically long and the user-interface properties have to suit people (audio and video adapted to the capabilities of the terminal device).
When applications connect to applications, they exchange formatted byte streams. The sessions can be ultrashort, and protocol support is required to choreograph them, as there is no human common sense to rely upon. This is the world of computer record exchange, remote procedure calls, asynchronous communications, transaction capabilities, and session management protocols. XML has emerged as the syntactic framework of choice for application internetworking (a small illustrative sketch of such an exchange follows the list below), and both the Microsoft and Java communities have developed application platform architectures: Microsoft's .NET and the Java community's Java EE (Java Platform, Enterprise Edition) respectively. They run on computer servers connected to the Internet and provide a preexisting platform onto which e-business applications can be installed. Java EE and .NET systems talk to each other across high-quality IP/MPLS transport networks. Carriers get to play by providing a Java EE or .NET hosting service so that customers can install their applications via standardized interfaces. The carrier may also provide many other useful services:
§  Data backup and restore,
§  Hosted application development environment,
§  Managed applications (web server, application server),
§  Caching and content distribution services,
§  Security monitoring.
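As a purely illustrative sketch of the application-to-application record exchange described above, the following Python fragment builds and parses a small XML record using the standard library; the element names and values are hypothetical.

    import xml.etree.ElementTree as ET

    # Build a small XML business record (element names are purely illustrative).
    order = ET.Element("order", attrib={"id": "12345"})
    ET.SubElement(order, "customer").text = "Example Ltd"
    ET.SubElement(order, "item", attrib={"sku": "A-100", "quantity": "3"})
    payload = ET.tostring(order, encoding="unicode")

    # The receiving application parses the byte stream back into a structure.
    received = ET.fromstring(payload)
    print(received.get("id"), received.findtext("customer"))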
It is fair to say that hosting of application platforms is at the cutting edge of hosting services today. Most carriers are happier providing managed servers, network connectivity, and application monitoring on top of operating systems such as Windows and Unix/Linux.

The Road to IMS



As the early experiments with voice over IP evolved into services with significant usage, it became clear that a multimedia signaling protocol was required, analogous to the signaling used in the existing telephone networks (most notably Common Channel Signaling System 7, often loosely referred to as SS7). Multimedia signaling over IP networks was always going to be more complex. User terminals had to negotiate with each other to determine their media-handling capabilities, and with the network to request the quality of service they needed. There were issues of security, and problems in finding the IP addresses of the other parties to a call.
The carriers, through the ITU, had an existing protocol suite, H.323, which had been developed for LAN-based video-telephony. This was pressed into service in first-generation VoIP networks, but its clumsiness and lack of scalability triggered activity within the IETF to develop a more IP-friendly, extensible, and scalable signaling protocol. Between 1996 and 2002, the IETF developed the Session Initiation Protocol (SIP) as the end-to-end signaling protocol of choice for multimedia sessions over IP. SIP then languished for several years, waiting for other developments to catch up, until, perhaps surprisingly, the initiative was taken by the cellular industry.
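To give a flavor of what SIP signaling looks like on the wire, the following Python fragment assembles a minimal (and deliberately incomplete) INVITE request. The addresses, tags, and branch value are invented for illustration; real user agents add further Via headers, authentication, and an SDP body describing the media.

    # A minimal, incomplete SIP INVITE, shown as the text a user agent would send;
    # all names and identifiers below are invented.
    invite = "\r\n".join([
        "INVITE sip:bob@example.com SIP/2.0",
        "Via: SIP/2.0/UDP client.example.org;branch=z9hG4bK776asdhds",
        "From: Alice <sip:alice@example.org>;tag=1928301774",
        "To: Bob <sip:bob@example.com>",
        "Call-ID: a84b4c76e66710@client.example.org",
        "CSeq: 314159 INVITE",
        "Contact: <sip:alice@client.example.org>",
        "Content-Type: application/sdp",
        "Content-Length: 0",   # a real INVITE carries an SDP media offer here
        "",
        "",
    ])
    print(invite)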

The Third Generation Partnership Project (3GPP) was set up in 1998 to specify a third-generation mobile system evolving from the GSM network architecture. At the same time, the 3GPP2 organization was set up by standards bodies in the US, China, Japan, and South Korea to fast-track a parallel evolution to third-generation mobile, starting from the second-generation CDMA networks prevalent in those countries.
3G mobile architecture had originally used ATM, but by 2000 it was clear that the future was IP.

Arrangements were therefore made to set up formal links between 3GPP/3GPP2 and the IETF to develop IP standards for 3G. The 3G subsystem that handles signaling is the IP Multimedia Subsystem (IMS), and it was based on the IETF's SIP. But in the IMS architecture, SIP had to do a lot more work. For example, users may want to set up preconditions for the call (e.g., QoS or bandwidth) before the called party is alerted, or they may need information on their registration status with the network, and terminals need SIP signaling compression on low-bandwidth radio access links to speed up transmission and reduce contention for bandwidth.

What the 3GPP communities really needed was an architecture that could standardize the interrelationship between the many functions needed to bring 3G multimedia services into commercial reality. Such an architecture would have to integrate many different protocols (signaling, authentication and authorization, security, QoS, policy compliance, application service management, metering and billing, etc.). To deal with the many new developments to SIP (and other protocols) that were needed to make the IMS architecture work, the IETF set up the SIPPING working group, operating in close liaison with 3GPP.

As the 3G mobile architecture evolved, it came to the attention of architects and standards people working on evolution for the fixed network operators. This Next-Generation Network activity, carried out in bodies such as ETSI TISPAN and the ITU-T NGN program, had a similar requirement for an all-IP signaling layer and session management architecture. IMS essentially fitted the bill and was adopted, although further changes and developments remain on the roadmap. So, for example, BT's 21st Century Network (21CN) architecture will eventually have IMS right at its center.

IMS has been described as "mind-numbingly complex." This may be true, but the complexity is there for a reason. IMS provides common services to user terminals, network-based application servers, network routers, policy engines, billing systems, and foreign networks for roaming. It provides for authentication, registration, and security. It supports presence and instant messaging, and new services such as Push to Talk over Cellular (PoC). By doing so much through standard interfaces, the intent of IMS is to remove the need for new services to reinvent these wheels. IMS-powered services should therefore be lighter-weight and more easily introduced. Carriers believe they will take a one-time hit to get IMS into their networks, and will afterwards reap the benefits across subsequent service introductions.

At the lower layers of the network, there was little dispute as to who provided the service. Running optical/SDH transmission networks and running IP networks is pretty much definitional of what carriers do. But as we get higher in the stack, the focus turns to applications running on servers that exploit the IP network for connectivity. You don't have to be a carrier to run servers. In principle, a multimedia telephony company could run IMS in a garage. IMS was not exactly designed with this in mind, because it was conceived by carriers, who arranged for a high degree of potential coupling between the IMS layer and the IP layer. However, this coupling does not have to be turned on, and may not even be necessary for many service concepts, rescuing the garage option. Or perhaps the carriers could be encouraged to expose the necessary IP transport layer interfaces specified in IMS to third parties? And, of course, multimedia telephony companies that do not use IMS today (e.g., Vonage, Skype) do indeed build their businesses on servers (rather few in Skype's case) and then buy in the Internet connectivity they need.

A generalized, powerful, and complex session management platform such as IMS is rather pervasive and there may be a case for it being provided by a specialized ISP, or a facilities-based carrier (a carrier that owns telecommunications equipment, normally fiber, transmission equipment, routers, and switches). However, when it comes to providing a discrete service such as music download, access to streaming audio or video, or any specific application service, there is little reason to believe that facilities-based carriers have some special advantage. Most of us don't do our Internet banking with the company providing our broadband connection. This should be a warning to carriers not to go too far down the "walled garden for content or value-added services" path as the route to future margin success.

IPv6?

Another issue that surfaces on a regular basis is the future of the current version of IP, IP version 4. There has been debate over many years as to when, or whether, the Internet should transition to IPv6. My own position on this is skeptical for the following reasons.
Recall that the Internet is a network of networks. The Internet backbone is composed of networks from tier-1 Internet companies such as Verizon, AT&T, Sprint, British Telecom (BT), Deutsche Telekom, and so on. Smaller tier-2 carriers connect to the tier-1 companies, and in their turn offer connectivity to even smaller tier-3 ISPs. All of these networks are currently running IPv4 for Internet traffic. Since a collective cutover to IPv6 is not on the cards, any protocol migration to IPv6 is fraught with practical difficulties. The early mover will encounter guaranteed IPv4-IPv6 interworking issues, will gain few advantages from the move, and will contemplate wistfully the many advantages that would have been gained from sticking with IPv4 for the duration.
When the IETF designers finalized the IPv6 design in the mid-1990s, they added many attractive features over IPv4. These included support for end-to-end security via IPsec, support for class-of-service marking via a new Differentiated Services field in the IPv6 header, support for host mobility (mobile IP), support for auto-configuration and, of course, the much larger address field. Unfortunately for IPv6, time has whittled away many, if not all, of these advantages. To put it bluntly, almost everything of value in IPv6 was re-engineered back into IPv4, on the understandable basis that the world couldn't wait.
§  The class-of-service marking scheme developed for IPv6 was applied to IPv4 as well: the obsolete "type of service" header field was redefined as the Differentiated Services Code Point (DSCP). A socket-marking sketch follows this list.
§  IPsec was engineered to work with IPv4.
§  Mobile IP was engineered to work with IPv4.
§  DHCP configuration of IPv4 hosts obviated most of the IPv6 auto-configuration facilities.
§  Private addressing and NAT resolved the address space problem in practice.
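As a small illustration of the DiffServ point in the first bullet above, the sketch below marks outgoing IPv4 packets on a UDP socket with the Expedited Forwarding code point. It assumes a Linux host, where the DSCP occupies the top six bits of the old ToS byte; the destination is a documentation address.

    import socket

    EF_DSCP = 46                      # Expedited Forwarding, commonly used for voice
    tos = EF_DSCP << 2                # DSCP sits in the upper six bits of the ToS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, tos)   # Linux socket option
    sock.sendto(b"marked probe", ("192.0.2.1", 4000))        # documentation address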
Despite claims that these engineering hacks would impact on usability, the difficulties have been steadily overcome. Even some of the hardest problems, getting signaling applications for VoIP to work through NAT and firewalls, have now been mostly solved. Skype is a case in point, following the earlier pioneering work in network gaming and peer-to-peer file sharing.
There is a purist motivation for IPv6, which looks to get back to a clean and transparent end-to-end Internet model leveraging the larger address space. NAT is particularly disliked, being seen as breaking end-to-end transparency between hosts. However, with NAT making some contribution to network security and working well in practice, the practical motivation is less strong. Given the lack of a positive business case for the IPv4 to IPv6 transition, together with the effectiveness of the IPv4 "workarounds," it is hard to predict whether the transition to IPv6 will ever happen. One positive but seldom-mentioned feature of IPv4 as against IPv6 is that 128-bit IPv6 addresses such as:
2001:0db8:85a3:08d3:1319:8a2e:0370:7334
are a lot harder for humans to manage than IPv4's 66.249.64.4, even after the many rules for presentationally shortening IPv6 addresses have been taken into account.
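Those shortening rules (dropping leading zeros within groups, collapsing one run of all-zero groups to "::") are mechanical, and Python's standard ipaddress module applies them, as this quick check shows:

    import ipaddress

    addr = ipaddress.ip_address("2001:0db8:85a3:08d3:1319:8a2e:0370:7334")
    print(addr.compressed)   # 2001:db8:85a3:8d3:1319:8a2e:370:7334
    print(addr.exploded)     # the full eight-group, zero-padded form

    # An address with a run of zero groups collapses much further:
    print(ipaddress.ip_address("2001:0db8:0000:0000:0000:0000:0000:0001").compressed)  # 2001:db8::1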

Ethernet Provider Backbone Bridge/Transport (PBB and PBT)

Even MPLS is being challenged. As carriers move to a simplified and more cost-effective technology stack, the prospects of carrying IP within Ethernet directly over the OTN seem increasingly attractive. Traditional Ethernet has scaling and management problems, because its forwarding model depends on Ethernet switches flooding Ethernet frames whose destination port is not known out of all network links, and then learning which exit port to use in future by noting which port the reply eventually arrives on. For this to work, there has to be a unique path between any two points on the network, which is guaranteed by Ethernet's spanning tree protocol. This turns off network links between Ethernet switches until a minimal covering tree remains, but the procedure has a number of problems, including inefficient use of network resources and long recovery times in the event of link or node failure (a new tree has to be recomputed and established).
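The flooding-and-learning behavior just described can be caricatured in a few lines of Python; this is a toy model of a single switch's MAC table, not any real implementation, and the names are invented.

    # Toy model of Ethernet "flood and learn" forwarding at one switch.
    mac_table = {}   # source MAC address -> port it was last seen on

    def handle_frame(src_mac, dst_mac, in_port, all_ports):
        mac_table[src_mac] = in_port          # learn where the sender lives
        if dst_mac in mac_table:
            return [mac_table[dst_mac]]       # forward out the known port
        # Unknown destination: flood out every port except the one it arrived on.
        return [p for p in all_ports if p != in_port]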


However, using Provider Backbone Transport, a subrange of VLAN tags is reserved for carrier forwarding purposes, the chosen tags (plus destination MAC addresses) functioning somewhat like MPLS labels. This forwarding information is provisioned into the carrier Ethernet switches by the central network management function, or by GMPLS, to create forwarding virtual circuits (and optionally failover restoration paths) across the carrier network. Unlike the situation with MPLS labels, however, the PBT combination of destination MAC address and VLAN tag is globally unique and identical across network switching elements. This offers significant advantages over MPLS in fault finding and tracing.
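A rough way to picture PBT forwarding is as a provisioned table keyed by the pair (backbone destination MAC, backbone VLAN tag), which together name a path across the carrier network. The sketch below is illustrative only and all values are invented.

    # Provisioned PBT forwarding state at one carrier Ethernet switch.
    # Key: (destination backbone MAC, backbone VLAN ID) -> egress port.
    # Unlike MPLS labels, the key is global and is not swapped hop by hop.
    pbt_table = {
        ("00:16:3e:aa:bb:01", 4001): "port-7",    # working path to an edge node
        ("00:16:3e:aa:bb:01", 4002): "port-9",    # pre-provisioned protection path
    }

    def forward(dst_bmac, bvid):
        # No flooding: frames whose key is not provisioned are simply dropped.
        return pbt_table.get((dst_bmac, bvid))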

Carrier Ethernet is seeing a number of innovations that increase its capabilities, including:
§  MAC-in-MAC: the provision of a separate carrier forwarding address field, prepended to the enterprise customer's header (defined in IEEE 802.1ah, and also called Provider Backbone Bridge),
§  Q-in-Q: the creation of a hierarchy of VLAN tags, allowing carriers to distinguish between customers (defined in IEEE 802.1ad, and also called VLAN stacking). A sketch of the resulting frame layering follows this list.
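One way to visualize these encapsulations is as nested header layers wrapped around the customer's original frame. The sketch below is a schematic data structure, not the exact 802.1ah/802.1ad bit layout, and the field names are informal.

    from dataclasses import dataclass

    @dataclass
    class CustomerFrame:          # the enterprise's original Ethernet frame
        dst_mac: str
        src_mac: str
        c_vlan: int               # customer VLAN tag (C-TAG)
        payload: bytes

    @dataclass
    class QinQFrame:              # 802.1ad: a service tag stacked on the customer tag
        s_vlan: int               # carrier's service VLAN tag (S-TAG)
        inner: CustomerFrame

    @dataclass
    class MacInMacFrame:          # 802.1ah: a full backbone MAC header prepended
        b_dst_mac: str            # backbone destination MAC (B-MAC)
        b_src_mac: str
        b_vlan: int               # backbone VLAN tag (B-TAG)
        i_sid: int                # service instance identifier (I-SID)
        inner: QinQFrame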

With these, Ethernet is now beginning to match MPLS accomplishments, both in traffic engineering and in the provision of customer VPNs. The battle to come will prove interesting.


GMPLS

Following the success of MPLS in the provision, configuration and management of virtual circuits for IP networks, some thought was given as to whether MPLS might be used to handle other sorts of virtual circuits, not as a transport mechanism, but as a signaling and control plane for:
§  Layer-2 virtual circuits for Frame Relay,
§  TDM virtual paths for SONET and SDH,
§  Wavelengths in an optical transport network (OTN),
§  Fiber segments linked via spatial physical port switches.
Thus was born Generalized MPLS (GMPLS), which applies the MPLS control plane (with extensions) to these other layers; the focus is typically on SONET/SDH and optical (wavelength) networks. Cross-connects and add-drop multiplexers in these networks need to exchange GMPLS protocol messages. This is not necessarily strange: all these devices today run element managers or SNMP agents that communicate via a management IP layer. In the TDM world, MPLS label allocation/de-allocation is identified with time-slot allocation; in the optical world it is equivalent to wavelength allocation.
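The idea that a "label" can name a time slot or a wavelength rather than a packet header value can be caricatured as follows; this is a conceptual sketch, not the actual GMPLS generalized-label encoding, and the names are invented.

    from dataclasses import dataclass
    from enum import Enum

    class SwitchingKind(Enum):
        PACKET = "MPLS shim label"          # ordinary packet label-switching router
        TDM = "SONET/SDH time slot"         # cross-connect allocates a time slot
        LAMBDA = "optical wavelength"       # optical cross-connect allocates a wavelength

    @dataclass
    class GeneralizedLabel:                 # conceptual only; values are invented
        kind: SwitchingKind
        value: int                          # label number, time-slot index, or channel

    # "Allocating a label" on a wavelength-switched hop means tying up, say, channel 32:
    hop = GeneralizedLabel(SwitchingKind.LAMBDA, value=32)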
In fact, GMPLS has had a mixed reception among the world's carriers. Optical and transmission engineers don't necessarily believe that the IP guys know best when it comes to controlling their networks. There is a history of virtual path management in SONET/SDH networks, and of optical channel management in OTNs, being organized through the network management systems. With increasing element intelligence and more powerful management tools, it is widely felt that the management plane is adequate, and that replicating its functions in a new signaling plane is not required. Of course, opinions differ.

IP/MPLS Transport and Routing Layer

This is the classic Internet model. In an all-IP world, hosts, or end systems (computers, servers, or anything that can run an IP stack), communicate over any convenient wired or wireless access transmission link ("wet string") to edge routers. These edge routers look at packet headers and then forward each packet on the correct "next hop": to the next router in the chain, or to the final destination host. In the earliest days of the Internet, routers started as ordinary computers running routing software (this still works!), and then became special-purpose machines with a custom architecture. Initially focused on enterprise applications, a new generation of ultralarge and ultrareliable machines came into service in the Internet boom of 1999–2001. The current state of the art is the terabit router, the tera prefix (10^12) indicating aggregate router throughput of thousands of billions of bits per second.
Only routers at the edge of modern service provider networks actually see IP packets. The Provider Edge routers encapsulate a packet into MPLS by attaching a label to the front of the packet that indicates its final destination (unlike IP addresses, labels are only locally unique and may be altered at each intermediate label-switching router, thereby supporting scalability). Interior or core routers forward the labeled packets, based only on their label information, along label-switched paths; a minimal label-swapping sketch follows the list below. The threading of label-switched paths through network routers is under the explicit control of the operator, and this control is used for a number of purposes:

§  Load-balancing between alternative routes,
§  The creation of virtual private networks (VPNs) for enterprises,
§  Segregation of traffic between different quality of service classes,
§  Network survivability via failover to backup label-switched paths.
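Here is the minimal label-swapping sketch promised above; the forwarding entries and label values are invented, and real label-switching routers also handle label push/pop and per-class queuing.

    # Per-router MPLS forwarding state: incoming label -> (outgoing label, next hop).
    # Labels are only locally significant, so each router rewrites them.
    lfib = {
        17: (42, "router-B"),
        21: (18, "router-C"),
    }

    def switch(packet_label):
        out_label, next_hop = lfib[packet_label]   # lookup on the label, not the IP header
        return out_label, next_hop                 # label is swapped, packet sent onward

    # A packet arriving with label 17 leaves carrying label 42, towards router-B.
    print(switch(17))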
There used to be many concerns about the robustness and service quality of IP networks in general, and the Internet in particular. But as the Internet has become more central to business, significant care and attention, as well as capital resources, have been invested by telecom carriers. The Internet is no longer a byword for flakiness and delay. Many carriers privately believe that the Internet is currently "too good," and as the inexorable rise of Internet traffic fills up the currently rather empty pipes, expect to see a harder-nosed commercial attitude emerging.
Many carriers will focus their NGNs on connectivity services based directly on IP, such as Internet access and MPLS-based VPNs. Services such as leased lines, Frame Relay, and ATM will either be discontinued, or will be supported only on legacy platforms that will eventually be phased out; this may take a while for leased-line services based on SDH, for example. However, some carriers want to phase out and decommission legacy networks early, to get the OPEX advantages of a simpler network infrastructure, but still leave these legacy services in place to avoid disruption to customers.
Surprisingly, there is a way to do this. It involves emulating leased-line, Frame Relay, and ATM services over the new MPLS network, using service adaptation at Multi-Service Access Nodes (MSANs) or Provider Edge routers at the edge of the network. There is obviously a considerable cost in MSAN or edge router device complexity, in service management overhead, and in dealing with the immaturity of the MPLS standards for doing this (using MPLS pseudo-wires). The advantage seems to be in decoupling platform evolution to the NGN from portfolio evolution to "new wave" products and services.

The Current-Generation Network

In carrier networks to date, the major division has been between circuit switching and transmission; transmission itself divides into SONET/SDH (Synchronous Optical Network/Synchronous Digital Hierarchy) and optical networking using DWDM (Dense Wavelength-Division Multiplexing) (Stern, Bala, and Ellinas 1999). Traditionally, both switching and transmission have been voice oriented (Figure 1).


Figure 1: The current-generation network architecture.
The switching/transmission divide is not just technological, but also a structural feature of organizations and even engineering careers. There are still many telecoms engineers around who will proudly state that they are in switching or transmission, and each will have a less-than-detailed view of what the other discipline is all about. Data people, formerly working on X.25, Frame Relay, and ATM, and latterly on IP, were historically the new, and rather exotic, next-door neighbors.

Circuit Switching
The traditional problem of switching is essentially one of connection: how to identify end-points (by assigning phone numbers), how to request a connection between end-points (by dialing and signaling), and how to physically set up and tear down the required voice connection (using telephone switches). Once upon a time this was done with analogue technologies, but that is going back too far. From the 1980s, the state of the art was digital switching, and telecom voice switches became expensive versions of computers.
Once people were digitally connected, more advanced services could be introduced, such as free-phone numbers, premium rate numbers, call blocking, call redirect, and so forth. Initially this was done by increasing the complexity of the call-control software in the digital telephone switches. Unfortunately, such code was proprietary to the switch vendors: the carriers paid handsomely to buy it, and were then locked in for their pains. The solution was for the carriers to get together and design a standardized architecture for value-added voice services called the Intelligent Network (IN). In North America the preferred term was Advanced Intelligent Network (AIN). The IN architecture called for relatively dumb switches (Service Switching Points, SSPs) invoking service-specific applications running on high-specification computers called Service Control Points (SCPs) during the progression of the call. Since the same vendors that sold the original switches also sold the SSPs and SCPs, prices did not go down and the IN was only a partial success at best.

Transmission
Transmission solves a different problem: that of simultaneously carrying the bit-streams corresponding to many different voice calls, data sessions, or signaling messages over long distances on scarce resources, such as copper wire, coaxial cable, radio links, fiber optic strands, or precisely tuned laser wavelengths. A transmission engineer would start with a collection of nodes (the towns and cities where telecoms equipment was going to be placed) and an estimated traffic matrix showing the maximum number of calls to be carried between any two nodes. The next step was to design a hierarchy of collector links that aggregated traffic from smaller nodes to larger hub nodes. These hubs would then be connected by high-capacity backbone links. This sort of hub-and-spoke architecture is common in physical transportation systems as well: roads, rail, and air travel.
Voice traffic never traveled end-to-end across the transmission network, because it had to be routed at intermediate voice switches. The telephone handset connected to a local exchange switch (or a similar device called a concentrator) at a carrier Point-of-Presence (PoP) located within a few miles of the telephone. The local switch or concentrator then connected to transmission devices to send the call to a much bigger switch at the nearest hub. From there, the call bounced via transmission links from switch to switch until it reached the called telephone at the far end.
Switch engineers called the transmission network "wet string," after the child's first telephone: two tin cans connected by wet string (wetting decreases sound attenuation). Transmission engineers, on the other hand, considered voice switches as just one user of their transmission network, and in recent years a less interesting user than the high-speed data clients, which are to voice switches as a fire hose is to a dripping tap. For transmission engineers it is all about speed, and they boast that they don't get out of bed for less than STM-4 (Synchronous Transport Module level 4, running at 622 Mbps).
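For the record, the SDH rates behind that boast follow a simple multiplication: STM-N runs at N times the STM-1 rate of 155.52 Mbps. A quick calculation, purely illustrative:

    STM1_MBPS = 155.52                      # SDH base rate

    for n in (1, 4, 16, 64):
        print(f"STM-{n}: {n * STM1_MBPS:.2f} Mbps")
    # STM-4 comes out at 622.08 Mbps, usually quoted as "622 Mbps".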
Just a note on terminology. The word "signal" is used in two very different ways. In session services such as voice and multimedia calls, signaling is used to set up and tear down the call, as previously noted. Here we are talking about a signaling protocol. In transmission, however, signals are just the physical form of a symbol on the medium. So, for example, we talk about analogue signals when we mean a voltage waveform on the copper wire mirroring the sound waves from the speaker's mouth, and about digital signals when we mean bits emitted from a circuit, suitably encoded onto a communications link (cf. digital signal processing). The two uses of the word "signal" are normally disambiguated by context.

More?