Since I fancy myself somewhat of a science fiction writer, I decided to write about what could happen in the near future. The diagram (above) shows the proposed architecture for network Management and Orchestration (MANO).
Note that although SDN is not specifically mentioned in this story, it is present in the core of the network, routing MPLS-tagged packets.
I have posted on MANO in the past and have talked about virtual network functions, so I will assume that you know about these things. This story pulls the whole thing together in what I hope is an entertaining read.
It’s a cold snowy February day in Ottawa and the year is 2025.
The primary provider in downtown Ottawa has built up its central POP with a collection of physical network functions (PNFs) and virtual network functions (VNFs) that meet the needs of the government on Parliament Hill as well as the people in the surrounding area. The service provider has determined that a 20% investment in extra equipment and licenses will address 80% of the market's needs and that purchasing extra capacity is the best way to do business.
The service provider has contracts in place with rival providers and a new class of tier-2 provider that supplies VNFs and core bandwidth to tier-1 providers when they reach the limits of their core networks. The tier-2 provider used to be a legacy cloud provider but realized that it could repurpose its extra capacity to offer new services to existing tier-1 providers without a major rework of its operation.
Meanwhile, out on the street, the tech sector is planning a trade show at the convention centre, and most booths will require 10Gbps links to facilitate remote video and holographic meetings with potential clients. The conference organizers have had their sales teams planning the event for several months, and sales of booth space and bandwidth have reached the capacity of the primary provider's network.
A salesperson (Al) is talking to a large exhibitor and is about to place an order for a 100Gbps booth link. The sales system pauses for a moment after he hits return, but the sale goes through.
What happened behind the scenes that caused the pause?
The OSS/BSS sales system asked the primary provider's Orchestrator if it could handle the 100Gbps link, and it did not reply right away. The primary Orchestrator realized that it had no capacity, so it contacted its neighbouring Orchestrators to request resources. The neighbouring Orchestrators received the service definition as part of the request and compared it against their VNF catalogs to determine whether they could fill the request. In some cases, the neighbouring Orchestrators had to contact their partner Orchestrators for additional resources. It was determined that resources could be reserved, and the sale was allowed to proceed.
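The reservation cascade that caused Al's pause can be sketched in a few lines of code. This is a minimal illustration, not a real MANO interface: the class and method names (`Orchestrator`, `reserve_gbps`) and the idea of measuring capacity as a single Gbps number are assumptions made for this example, and a real system would also roll back partial partner reservations when a request fails.

```python
class Orchestrator:
    """Toy model of an Orchestrator that can spill requests to partners."""

    def __init__(self, name, capacity_gbps, partners=None):
        self.name = name
        self.capacity_gbps = capacity_gbps   # free local capacity (assumed unit)
        self.partners = partners or []       # neighbouring Orchestrators

    def reserve_gbps(self, amount):
        """Reserve bandwidth locally, spilling over to partners as needed.

        Returns a list of (orchestrator name, gbps) reservations, or None
        if the combined networks cannot satisfy the request.
        """
        reservations = []
        local = min(self.capacity_gbps, amount)
        if local:
            reservations.append((self.name, local))
            amount -= local
        # Ask neighbours for the remainder; they may recurse to their
        # own partners (the tier-2 provider in the story).
        for partner in self.partners:
            if amount == 0:
                break
            sub = partner.reserve_gbps(amount)
            if sub is not None:
                reservations.extend(sub)
                amount -= sum(gbps for _, gbps in sub)
        if amount > 0:
            return None  # a real system would roll back partner reservations
        # Commit the local share only once the whole request fits.
        self.capacity_gbps -= local
        return reservations
```

The pause Al saw is the time spent walking this partner chain before the reply comes back.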
Sales continued in much the same fashion for the rest of the week leading up to the conference.
On the next Monday morning at 10AM the show floor opens, video links go live, virtual executives appear in their booths and the noise on the trade show floor peaks. Excited customers pour into the conference centre to see the latest in communication technology. Some customers appear in person and others appear holographically via portable projectors carried to the show by employees.
What happened behind the scenes?
The OSS/BSS system that tracked the sales leading up to the show sent a list of new services to the primary provider's Orchestrator. That Orchestrator referred to its service definition catalog to determine what resources would be needed to deploy each of the new services. The Orchestrator already had a catalog of local resources ready to draw from because the VNF and PNF managers had been monitoring the network resources and reporting their functions and availability to the Orchestrator. Every time the Orchestrator received a message from a PNF or VNF manager, it updated its VNF catalog to ensure that the available inventory was always up to date.
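The catalog-keeping step above amounts to overwriting an inventory entry each time a manager reports in. Here is a deliberately simple sketch; the report fields (`element_id`, `function`, `state`) are invented for illustration and do not correspond to any real manager protocol.

```python
class ResourceCatalog:
    """In-memory VNF/PNF inventory kept current by manager reports."""

    def __init__(self):
        self.inventory = {}  # element_id -> {"function": ..., "state": ...}

    def on_manager_report(self, report):
        # Each report describes one element's function and availability;
        # overwriting the entry keeps the catalog up to date.
        self.inventory[report["element_id"]] = {
            "function": report["function"],
            "state": report["state"],
        }

    def available(self, function):
        # Elements of a given function currently free to be deployed.
        return [eid for eid, entry in self.inventory.items()
                if entry["function"] == function
                and entry["state"] == "available"]
```

Because the catalog is rebuilt continuously from reports rather than audited on demand, the Orchestrator can answer "do I have capacity?" immediately at 10AM.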
At 10AM, the Orchestrator sent messages to local VNF and PNF managers asking them to turn up, connect and configure the VNFs and PNFs required to deliver the services as described in the local service catalog. While this was happening in the local network, the primary Orchestrator had communicated with its partner Orchestrators to request that they turn up, configure and connect the resources that had been reserved for overflow services.
Sections of the network topology were changed as existing customers were shuffled onto new links for the duration of the day, but everything was done via machine-to-machine interfaces that eliminated human error and executed the changes in seconds with no loss of data.
Later in the day, the CEO of Snow Drift Communications is closing a deal with Penguin Trucking when he experiences a brief fade in the communication link. He notices the blip because that is his business, but his customer doesn't, and the deal closes without complications.
What happened behind the scenes?
A new water supply was being installed on Bank Street, and a backhoe dug through the main fiber-optic feed to the primary provider's downtown POP. The primary provider's OAM&P system detected the outage and informed the primary Orchestrator. The Orchestrator sent a burst of requests to partner Orchestrators with service descriptions for all of the services that were affected. Service and VNF catalogs were checked at each partner location by their Orchestrators, and backup services were established. The primary Orchestrator had to resend requests to additional partners as each partner reached capacity. Thousands of circuits were turned down at the central POP and routed to other POPs scattered throughout the city.
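The resend-and-move-on behaviour during the outage can be sketched as a simple placement loop. This is a hedged illustration only: the function name, the idea of counting partner capacity in "slots", and the first-fit strategy are all assumptions for this example, not how a production Orchestrator would actually schedule restoration.

```python
def restore_services(affected_services, partners):
    """Map each affected service to the first partner with spare slots.

    `partners` is a list of (name, free_slots) pairs; returns a dict of
    service -> partner name, or raises if total capacity is exhausted.
    """
    placements = {}
    partners = [[name, slots] for name, slots in partners]  # mutable copy
    idx = 0
    for service in affected_services:
        # Skip partners that have reached capacity, as in the story,
        # where requests had to be resent to additional partners.
        while idx < len(partners) and partners[idx][1] == 0:
            idx += 1
        if idx == len(partners):
            raise RuntimeError("no partner capacity left for " + service)
        partners[idx][1] -= 1
        placements[service] = partners[idx][0]
    return placements
```

The brief fade the CEO noticed is the window between the OAM&P outage report and the last of these placements going live.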
At 5:55PM a recorded announcement informs all show participants that the show is over for the day. A few minutes later, the lights are dimmed, and holographic sales people and executives start to disappear from their booths as services are disconnected.
What happened behind the scenes?
Behind the scenes, the primary Orchestrator informed all of its partners that it was done for the day and requested that they turn down services. Messages from each Orchestrator were sent to the local VNF and PNF managers. The managers saved configurations, turned down VNFs, remotely powered down PNFs, and turned down fiber-optic links and wireless connections. This action saved power and changed the resource states back to available in the local VNF and PNF catalogs.
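The save-then-release step each manager performs might look something like the sketch below, assuming a simple dict-based element inventory; the state names (`in-service`, `available`) and the `config`/`saved_config` fields are invented for this example.

```python
def turn_down(inventory, element_ids):
    """Save each element's config, release it, and mark it available."""
    released = []
    for eid in element_ids:
        element = inventory[eid]
        if element["state"] != "in-service":
            continue  # already free; nothing to turn down
        element["saved_config"] = element.get("config")  # persist config first
        element["state"] = "available"                   # back in the catalog
        released.append(eid)
    return released
```

Saving the configuration before flipping the state is what lets tomorrow's 10AM turn-up reuse yesterday's settings instead of rebuilding each service from scratch.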
The primary provider restores its network to the previously stable, energy- and cost-effective topology that existed before the show. Not all the circuits are restored to exactly the same place, however, because repairs have not been completed on the fiber-optic line to the central POP.
At 10:30 that same night the fiber-optic link is returned to service and the network topology is returned to its original configuration.
At the end of the month, the primary provider gets a report from the OSS/BSS systems detailing the cost of the extra resources for the show and the outage. The accountants run the numbers and recommend increasing core capacity by 5% in order to maximize profits.
I hope you enjoyed your trip to the future and will consider Nakina Systems as you evolve your networks in that direction…