The concept of software-defined networks (SDNs) has been broadly accepted by experts as the wave of the future. The basic idea is to separate the intelligence that decides where data is headed (the control plane) from the machinery that actually forwards it (not surprisingly, the data plane).
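To make that separation concrete, here is a deliberately simplified Python sketch, not any vendor's API: the controller holds the network-wide map and programs the switches, while a switch forwards purely by table lookup and punts unknown traffic back up to the controller.

```python
# Toy model of the control/data plane split. All names here are
# illustrative; real SDN controllers speak a protocol such as OpenFlow.

from dataclasses import dataclass

@dataclass(frozen=True)
class Packet:
    dst: str  # destination address, e.g. "10.0.0.2"

class Switch:
    """Data plane: forwards by table lookup only; no routing logic here."""
    def __init__(self, controller):
        self.flow_table = {}          # dst address -> output port
        self.controller = controller  # where unmatched packets are punted

    def install_rule(self, dst, port):
        self.flow_table[dst] = port   # called by the control plane

    def handle(self, pkt):
        if pkt.dst in self.flow_table:
            return self.flow_table[pkt.dst]          # fast path: table hit
        return self.controller.packet_in(self, pkt)  # miss: ask control plane

class Controller:
    """Control plane: holds the network-wide map and programs switches."""
    def __init__(self, topology):
        self.topology = topology      # dst address -> port, known globally

    def packet_in(self, switch, pkt):
        port = self.topology.get(pkt.dst)
        if port is not None:
            switch.install_rule(pkt.dst, port)  # push rule to the data plane
        return port

controller = Controller({"10.0.0.2": 2})
switch = Switch(controller)
print(switch.handle(Packet("10.0.0.2")))  # miss -> controller installs rule -> 2
print(switch.handle(Packet("10.0.0.2")))  # hit in local flow table -> 2
```

The two `handle()` calls at the bottom show the pattern: the first packet misses the flow table and triggers a query to the control plane, which installs a rule; the second is forwarded by the data plane alone.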
The pure concept is crowded with complexities, though. A big idea implies big changes, and in the real world, making big changes requires two things of the IT organization: general agreement on what the new infrastructure will look like and a way to make the change without stranding the significant investment already made in the older infrastructure.
Both of these are knotty challenges. Actually defining SDNs is as much a food fight among the various vendors and associated organizations as it is a task of finding the best technical approach; billions of dollars are on the table. And getting from here to there, that is, helping organizations actually deploy SDNs in a way that protects those previous investments, is a science unto itself.
Dan Pitt, the executive director of the Open Networking Foundation, used a post at InformationWeek to explain three use cases that provide real-world information on the transition to SDNs.
The first use case involves Google’s use of OpenFlow, the approach to SDNs championed by the ONF, to ferry data among its data centers. The second features NTT’s use of OpenFlow to make its Border Gateway Protocol (BGP) processing run more efficiently. The final use case looks at Stanford University’s use of OpenFlow to support wireless and wired communications on parts of the campus.
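All three deployments rest on the same primitive: a controller application pushing flow rules down to switches over the OpenFlow protocol. As a rough illustration only, here is what installing a single rule looks like with the open-source Ryu controller framework; Ryu is an assumption for the example, not code from Google, NTT, or Stanford.

```python
# Minimal Ryu application: when a switch connects, install one OpenFlow 1.3
# rule sending anything that arrives on port 1 out of port 2. Ports and
# priority are hypothetical values chosen for illustration.

from ryu.base import app_manager
from ryu.controller import ofp_event
from ryu.controller.handler import CONFIG_DISPATCHER, set_ev_cls
from ryu.ofproto import ofproto_v1_3

class PortForwarder(app_manager.RyuApp):
    OFP_VERSIONS = [ofproto_v1_3.OFP_VERSION]

    @set_ev_cls(ofp_event.EventOFPSwitchFeatures, CONFIG_DISPATCHER)
    def on_switch_connect(self, ev):
        dp = ev.msg.datapath
        parser = dp.ofproto_parser
        match = parser.OFPMatch(in_port=1)            # match traffic on port 1
        actions = [parser.OFPActionOutput(2)]         # forward it out port 2
        inst = [parser.OFPInstructionActions(
            dp.ofproto.OFPIT_APPLY_ACTIONS, actions)]
        dp.send_msg(parser.OFPFlowMod(
            datapath=dp, priority=10, match=match, instructions=inst))
```

Run with `ryu-manager` against an OpenFlow 1.3 switch, the application pushes the rule the moment the switch connects; everything after that is handled by the switch's data plane.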
The transition from today’s topologies to SDNs is also discussed by Greg Ferro for Network Computing. His premise is simply that the nature of current applications makes a flash cut to a pure SDN network impossible:
An enterprise data center has any number of servers that will remain attached to physical network ports because legacy applications are resistant to virtualization. In some cases, organizations are reluctant to replace hardware appliances such as firewalls or proxy servers with virtual equivalents. This means that a virtual network overlay must be able to connect to physical networks. Here is where hardware-defined networking helps, by connecting legacy enterprise systems to SDN infrastructure.
Ferro describes a concept called hardware-defined networking, as implemented by Midokura and Cumulus Networks. The idea is to link together a number of technologies to create “low-cost networks that connect physical networks directly to overlay networks.”
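The plumbing behind that connection is worth a quick look. The sketch below, which assumes hypothetical interface names and VXLAN settings rather than anything specific to Midokura or Cumulus, shows the common gateway pattern: a Linux bridge stitching a VXLAN overlay segment to the physical port where a legacy server lives.

```python
# Hedged sketch of an overlay-to-physical gateway built with standard
# iproute2 commands. VNI 100, eth0 (uplink), and eth1 (legacy server port)
# are hypothetical; a production gateway would be programmed by a controller.

import subprocess

def sh(cmd):
    """Run an iproute2 command, echoing it for clarity."""
    print("+", cmd)
    subprocess.run(cmd.split(), check=True)

# VXLAN tunnel endpoint carrying overlay segment VNI 100 over eth0.
sh("ip link add vxlan100 type vxlan id 100 dev eth0 "
   "group 239.1.1.100 dstport 4789")

# A Linux bridge stitches the overlay to the physical access port (eth1)
# where the non-virtualized legacy server is plugged in.
sh("ip link add br100 type bridge")
sh("ip link set vxlan100 master br100")
sh("ip link set eth1 master br100")

for link in ("vxlan100", "eth1", "br100"):
    sh(f"ip link set {link} up")
```

The commands require root privileges; the point is simply that the overlay and the physical segment become one broadcast domain, which is what lets legacy gear participate in an SDN fabric it knows nothing about.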
Another dimension of the transition to SDNs is which vendors will come out ahead. THE Journal’s Leila Meyer examines an OpenDaylight Project report suggesting that open source software is the clear favorite of companies interested in evolving to SDN and its cousin, Network Functions Virtualization (NFV). The research, conducted on behalf of the consortium by Gigaom Research, surveyed 300 enterprises and 300 service providers and found that three quarters of respondents wanted the open source products to come from commercial vendors. This suggests that perhaps the fun is just beginning.
The transition to SDN and NFV will be a long-term process, tricky both because of deeply entrenched legacy equipment and because of the money vendors have riding on the outcome. Networking, by nature, doesn’t lend itself to great drama. But the next few years may prove to be an exception.