The Road to SDN: An Intellectual History of Programmable Networks

Nick Feamster (Georgia Tech, feamster@cc.gatech.edu)
Jennifer Rexford (Princeton University, jrex@cs.princeton.edu)
Ellen Zegura (Georgia Tech, ewz@cc.gatech.edu)

ABSTRACT

Software Defined Networking (SDN) is an exciting technology that enables innovation in how we design and manage networks. Although this technology seems to have appeared suddenly, SDN is part of a long history of efforts to make computer networks more programmable. In this paper, we trace the intellectual history of programmable networks, including active networks, early efforts to separate the control and data plane, and more recent work on OpenFlow and network operating systems. We highlight key concepts, as well as the technology pushes and application pulls that spurred each innovation. Along the way, we debunk common myths and misconceptions about the technologies and clarify the relationship between SDN and related technologies such as network virtualization.

1.   Introduction

Computer networks are complex and difficult to manage. These networks have many kinds of equipment, from routers and switches to middleboxes such as firewalls, network address translators, server load balancers, and intrusion detection systems. Routers and switches run complex, distributed control software that is typically closed and proprietary. The software implements network protocols that undergo years of standardization and interoperability testing. Network administrators typically configure individual network devices using configuration interfaces that vary across vendors—and even across different products from the same vendor. Although some network-management tools offer a central vantage point for configuring the network, these systems still operate at the level of individual protocols, mechanisms, and configuration interfaces. This mode of operation has slowed innovation, increased complexity, and inflated both the capital and operational costs of running a network.

Software Defined Networking (SDN) is changing the way we design and manage networks. SDN has two defining characteristics. First, an SDN separates the control plane (which decides how to handle the traffic) from the data plane (which forwards traffic according to decisions that the control plane makes). Second, an SDN consolidates the control plane, so that a single software control program controls multiple data-plane elements. The SDN control plane exercises direct control over the state in the network’s data-plane elements (i.e., routers, switches, and other middleboxes) via a well-defined Application Programming Interface (API). OpenFlow [51] is a prominent example of such an API. An OpenFlow switch has one or more tables of packet-handling rules. Each rule matches a subset of traffic and performs certain actions on the traffic that matches a rule; actions include dropping, forwarding, or flooding. Depending on the rules installed by a controller application, an OpenFlow switch can behave like a router, switch, firewall, network address translator, or something in between.

Over the past few years, SDN has gained significant traction in industry. Many commercial switches support the OpenFlow API. Initial vendors that supported OpenFlow included HP, NEC, and Pronto; this list has since expanded dramatically. Many different controller platforms have emerged [23, 30, 37, 46, 56, 64, 82]. Programmers have used these platforms to create many applications, such as dynamic access control [16, 52], server load balancing [39, 83], network virtualization [53, 68], energy-efficient networking [42], and seamless virtual-machine migration and user mobility [24]. Early commercial successes, such as Google’s wide-area traffic-management system [44] and Nicira’s Network Virtualization Platform [53], have garnered significant industry attention. Many of the world’s largest information-technology companies (e.g., cloud providers, carriers, equipment vendors, and financial-services firms) have joined SDN industry consortia like the Open Networking Foundation [55] and the OpenDaylight initiative [58].

Although the excitement about SDN has become more palpable during the past few years, many of the ideas underlying SDN have evolved over the past twenty years (or more!). In some ways, SDN revisits ideas from early telephony networks, which used a clear separation of control and data planes to simplify network management and the deployment of new services. Yet, open interfaces like OpenFlow enable more innovation in controller platforms and applications than was possible on closed networks designed for a narrow range of telephony services. In other ways, SDN resembles past research on active networking, which articulated a vision for programmable networks, albeit with an emphasis on programmable data planes. SDN also relates to previous work on separating the control and data planes in computer networks.
In this article, we present an intellectual history of programmable networks culminating in present-day SDN. We capture the evolution of key ideas, the application “pulls” and technology “pushes” of the day, and lessons that can help guide the next set of SDN innovations. Along the way, we debunk myths and misconceptions about each of the technologies and clarify the relationship between SDN and related technologies, such as network virtualization. Our history begins twenty years ago, just as the Internet takes off, at a time when the Internet’s amazing success exacerbated the challenges of managing and evolving the network infrastructure. We focus on innovations in the networking community (whether by researchers, standards bodies, or companies), although we recognize that these innovations were in some cases catalyzed by progress in other areas, including distributed systems, operating systems, and programming languages. The efforts to create a programmable network infrastructure also clearly relate to the long thread of work on supporting programmable packet processing at high speeds [5, 21, 38, 45, 49, 72, 74].

Before we begin our story, we caution the reader that any history is incomplete and more nuanced than a single storyline might suggest. In particular, much of the work that we describe in this article predates the usage of the term “SDN”, coined in an article [36] about the OpenFlow project at Stanford. The etymology of the term “SDN” is itself complex, and, although the term was initially used to describe Stanford’s OpenFlow project, the definition has since expanded to include a much wider array of technologies. (The term has even been sometimes co-opted by industry marketing departments to describe unrelated ideas that predated Stanford’s SDN project.) Thus, instead of attempting to attribute direct influence between projects, we highlight the evolution of and relationships between the ideas that represent the defining characteristics of SDN, regardless of whether or not they directly influenced specific subsequent research. Some of these early ideas may not have directly influenced later ones, but we believe that the connections between the concepts that we outline are noteworthy, and that these projects of the past may yet offer new lessons for SDN in the future.

2.   The Road to SDN

Making computer networks more programmable enables innovation in network management and lowers the barrier to deploying new services. In this section, we review early work on programmable networks. We divide the history into three stages, as shown in Figure 1. Each stage has its own contributions to the history: (1) active networks (from the mid-1990s to the early 2000s), which introduced programmable functions in the network to enable greater innovation; (2) control and data plane separation (from around 2001 to 2007), which developed open interfaces between the control and data planes; and (3) the OpenFlow API and network operating systems (from 2007 to around 2010), which represented the first instance of widespread adoption of an open interface and developed ways to make control-data plane separation scalable and practical.

Network virtualization played an important role throughout the historical evolution of SDN, substantially predating SDN yet taking root as one of the first significant use cases for SDN. We discuss network virtualization and its relationship to SDN in Section 3.

[Figure 1 (timeline, 1995–2015; rows: Active Networks, Section 2.1; Control-Data Separation, Section 2.2; OpenFlow and Network OS, Section 2.3; Network Virtualization, Section 3): Selected developments in programmable networking over the past 20 years, and their chronological relationship to advances in network virtualization (one of the first successful SDN use cases).]

2.1    Active Networking

The early- to mid-1990s saw the Internet take off, with applications and appeal that far outpaced the early applications of file transfer and email for scientists. More diverse applications and greater use by the general public drew researchers who were eager to test and deploy new ideas for improving network services. To do so, researchers designed and tested new network protocols in small lab settings and simulated behavior on larger networks. Then, if motivation and funding persisted, they took ideas to the Internet Engineering Task Force (IETF) to standardize these protocols. The standardization process was slow and ultimately frustrated many researchers.

In response, some networking researchers pursued an alternative approach of opening up network control, roughly based on the analogy of the relative ease of re-programming a stand-alone PC. Specifically, conventional networks are not “programmable” in any meaningful sense of the word. Active networking represented a radical approach to network control by envisioning a programming interface (or network API) that exposed resources (e.g., processing, storage, and packet queues) on individual network nodes, and supported the construction of custom functionality to apply to a subset of packets passing through the node. This approach was anathema to many in the Internet community who advocated that simplicity in the network core was critical to Internet success.

The active networks research program explored radical alternatives to the services provided by the traditional Internet stack via IP or by Asynchronous Transfer Mode (ATM), the other dominant networking approach of the early 1990s. In this sense, active networking was the first in a series of clean-slate approaches to network architecture [14] subsequently pursued in programs such as GENI (Global Environment for Network Innovations) [33] and NSF FIND (Future Internet Design) [28] in the United States, and EU FIRE (Future Internet Research and Experimentation Initiative) [29] in the European Union.

The active networking community pursued two programming models:

• the capsule model, where the code to execute at the nodes was carried in-band in data packets [84]; and

• the programmable router/switch model, where the code to execute at the nodes was established by out-of-band mechanisms (e.g., [8, 69]).

The capsule model came to be most closely associated with active networking. In terms of intellectual connections to subsequent efforts, though, both models have some lasting legacy. Capsules envisioned installation of new data-plane functionality across a network, carrying code in data packets (as in earlier work on packet radio [90]) and using caching to improve the efficiency of code distribution. Programmable routers placed decisions about extensibility directly in the hands of the network operator.
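To make the capsule model concrete, here is a toy sketch of our own devising (real capsule systems such as ANTS carried Java bytecode rather than Python source, and sandboxed its execution): the packet itself carries the code that each active node runs, and nodes cache code so that repeated capsules need not carry it again.

# Toy sketch (ours, not from any capsule system): a capsule carries its own
# forwarding program, which each active node executes; nodes cache code by
# identifier to avoid re-shipping it. Real systems sandboxed this step;
# bare exec() is used here only for illustration.

code_cache = {}   # code_id -> compiled program (one cache per node)

def run_capsule(node, capsule):
    """Execute the program carried (or referenced) by a capsule at this node."""
    code_id = capsule["code_id"]
    if code_id not in code_cache:                 # cache miss: load the in-band code
        code_cache[code_id] = compile(capsule["code"], code_id, "exec")
    env = {"node": node, "payload": capsule["payload"], "next_hop": None}
    exec(code_cache[code_id], env)                # the capsule's code picks the next hop
    return env["next_hop"]

capsule = {
    "code_id": "shortest-queue-v1",               # hypothetical program name
    "code": "next_hop = min(node['queues'], key=node['queues'].get)",
    "payload": b"hello",
}
node = {"queues": {"eth0": 7, "eth1": 2}}
print(run_capsule(node, capsule))                 # eth1: the least-loaded outgoing link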
Technology push and use pull. The “technology pushes” that encouraged active networking included reduction in the cost of computing, making it conceivable to put more processing in the network; advances in programming languages such as Java that offered platform portability and some code execution safety; and virtual machine technology that protected the host machine (in this case the active node) and other processes from misbehaving programs [71]. Some active networking research projects also capitalized on advances in rapid code compilation and formal methods.

An important catalyst in the active networking ecosystem was funding agency interest, in particular the Active Networks program created and supported by the U.S. Defense Advanced Research Projects Agency (DARPA) from the mid-1990s into the early 2000s. Although not all research work in active networks was funded by DARPA, the funding program supported a collection of projects and, perhaps more importantly, encouraged convergence on a terminology and set of active network components so that projects could contribute to a whole meant to be greater than the sum of the parts [14]. The Active Networks program placed an emphasis on demonstrations and project inter-operability, with a concomitant level of development effort. The bold and concerted push from a funding agency in the absence of near-term use cases may have also contributed to a degree of community skepticism about active networking that was often healthy but could border on hostility and may have obscured some of the intellectual connections between that work and later efforts to provide network programmability.

The “use pulls” for active networking described in the literature of the time [15, 75] are remarkably similar to the examples used to motivate SDN today. The issues of the day included network service providers’ frustration with the timescales necessary to develop and deploy new network services (so-called network ossification); third-party interest in value-added, fine-grained control to dynamically meet the needs of particular applications or network conditions; and researchers’ desire for a platform that would support experimentation at scale. Additionally, many early papers on active networking cited the proliferation of middleboxes, including firewalls, proxies, and transcoders, each of which had to be deployed separately and entailed a distinct (often vendor-specific) programming model. Active networking offered a vision of unified control over these middleboxes that could ultimately replace the ad hoc, one-off approaches to managing and controlling these boxes [75].
Interestingly, the early literature foreshadows the current trends in network functions virtualization (NFV) [19], which also aims to provide a unifying control framework for networks that have complex middlebox functions deployed throughout.

Intellectual contributions. Active networks offered intellectual contributions that relate to SDN. We note three in particular:

• Programmable functions in the network to lower the barrier to innovation. Research in active networks pioneered the notion of programmable networks as a way to lower the barrier to network innovation. The notion that it is difficult to innovate in a production network, and the accompanying pleas for increased programmability, were commonly cited in the initial motivation for SDN. Much of the early vision for SDN focused on control-plane programmability, whereas active networks focused more on data-plane programmability. That said, data-plane programmability has continued to develop in parallel with control-plane efforts [5, 21], and data-plane programmability is again coming to the forefront in the emerging NFV initiative. Recent work on SDN is exploring the evolution of SDN protocols such as OpenFlow to support a wider range of data-plane functions [11]. Additionally, the concepts of isolation of experimental traffic from normal traffic—which have their roots in active networking—also appear front and center in design documents for OpenFlow [51] and other SDN technologies (e.g., FlowVisor [31]).

• Network virtualization, and the ability to demultiplex to software programs based on packet headers. The need to support experimentation with multiple programming models led to work on network virtualization. Active networking produced an architectural framework that describes the components of such a platform [13]. The key components of this platform are a shared Node Operating System (NodeOS) that manages shared resources; a set of Execution Environments (EEs), each of which defines a virtual machine for packet operations; and a set of Active Applications (AAs) that work within a given EE to provide an end-to-end service. Directing packets to a particular EE depends on fast pattern matching on header fields and demultiplexing to the appropriate EE (see the sketch after this list). Interestingly, this model was carried forward in the PlanetLab [61] architecture, whereby different experiments run in virtual execution environments, and packets are demultiplexed into the appropriate execution environment based on their packet headers. Demultiplexing packets into different virtual execution environments has also been applied to the design of virtualized programmable hardware data planes [5].

• The vision of a unified architecture for middlebox orchestration. Although the vision was never fully realized in the active networking research program, early design documents cited the need for unifying the wide range of middlebox functions with a common, safe programming framework. Although this vision may not have directly influenced the more recent work on NFV, various lessons from active networking research may prove useful as we move forward with the application of SDN-based control and orchestration of middleboxes.
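As promised in the second bullet above, the following sketch (our own; the execution-environment names, header fields, and values are invented for illustration rather than drawn from any NodeOS specification) shows how a node might demultiplex each arriving packet to an execution environment by fast pattern matching on header fields.

def make_classifier(ee_table):
    """ee_table: list of (pattern, ee_name) pairs; the first matching pattern wins."""
    def classify(packet):
        for pattern, ee_name in ee_table:
            if all(packet.get(field) == value for field, value in pattern.items()):
                return ee_name
        return "default-ee"                        # fall back to conventional forwarding
    return classify

# Hypothetical table: one capsule-style EE and one management EE.
classify = make_classifier([
    ({"proto": "active", "ee_id": 1}, "capsule-ee"),
    ({"proto": "udp", "dst_port": 3322}, "management-ee"),
])

print(classify({"proto": "active", "ee_id": 1}))   # capsule-ee
print(classify({"proto": "tcp", "dst_port": 80}))  # default-ee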
Myths and misconceptions. Active networking included the notion that a network API would be available to end users who originate and receive packets, though most in the research community fully recognized that end-user network programmers would be rare [15]. The misconception that packets would necessarily carry Java code written by end users made it possible to dismiss active network research as too far removed from real networks and inherently unsafe. Active networking was also criticized at the time for not being able to offer practical performance and security. While performance was not a first-order consideration of the active networking research community (which focused on architecture, programming models, and platforms), some efforts aimed to build high-performance active routers [86]. Similarly, while security was under-addressed in many of the early projects, the Secure Active Network Environment Architecture project [2] was a notable exception.

In search of pragmatism. Although active networks articulated a vision of programmable networks, the technologies did not see widespread deployment. Many factors drive the adoption of a technology (or the lack thereof). Perhaps one of the biggest stumbling blocks that active networks faced was the lack of an immediately compelling problem or a clear path to deployment. A significant lesson from the active networks research effort was that “killer” applications for the data plane are hard to conceive. The community proffered various applications that could benefit from in-network processing, including information fusion, caching and content distribution, network management, and application-specific quality of service [15, 75]. Unfortunately, although performance benefits could be quantified in the lab, none of these application areas demonstrated a sufficiently compelling solution to a pressing need.

Subsequent efforts, which we describe in the next subsection, were more modest in terms of the scope of problems they addressed, focusing narrowly on routing and configuration management. In addition to a narrower scope, the next phase of research developed technologies that drew a clear distinction and separation between the functions of the control and data planes. This separation ultimately made it possible to focus on innovations in the control plane, which not only needed a significant overhaul but also (because it is commonly implemented in software) presented a lower barrier to innovation than the data plane.
2.2    Separating Control and Data Planes

In the early 2000s, increasing traffic volumes and a greater emphasis on network reliability, predictability, and performance led network operators to seek better approaches to certain network-management functions, such as control over the paths used to deliver traffic (a practice commonly known as traffic engineering). The means for performing traffic engineering using conventional routing protocols were primitive at best. Operators’ frustration with these approaches was recognized by a small, well-situated community of researchers who either worked for or interacted regularly with backbone network operators. These researchers explored pragmatic, near-term approaches that were either standards-driven or imminently deployable using existing protocols.

Specifically, conventional routers and switches embody a tight integration between the control and data planes. This coupling made various network-management tasks, such as debugging configuration problems and predicting or controlling routing behavior, exceedingly challenging. To address these challenges, various efforts to separate the data and control planes began to emerge.

Technology push and use pull. As the Internet flourished in the 1990s, the link speeds in backbone networks grew rapidly, leading equipment vendors to implement packet-forwarding logic directly in hardware, separate from the control-plane software. In addition, Internet Service Providers (ISPs) were struggling to manage the increasing size and scope of their networks, and the demands for greater reliability and new services (such as virtual private networks). In parallel with these two trends, the rapid advances in commodity computing platforms meant that servers often had substantially more memory and processing resources than the control-plane processor of a router deployed just one or two years earlier. These trends catalyzed two innovations:

• an open interface between the control and data planes, such as the ForCES (Forwarding and Control Element Separation) [88] interface standardized by the Internet Engineering Task Force (IETF) and the Netlink interface to the kernel-level packet-forwarding functionality in Linux [66]; and

• logically centralized control of the network, as seen in the Routing Control Platform (RCP) [12, 26] and SoftRouter [47] architectures, as well as the Path Computation Element (PCE) [25] protocol at the IETF.

These innovations were driven by industry’s demands for technologies to manage routing within an ISP network. Some early proposals for separating the data and control planes also came from academic circles, in both ATM networks [10, 32, 80] and active networks [70].

Compared to earlier research on active networking, these projects focused on pressing problems in network management, with an emphasis on innovation by and for network administrators (rather than end users and researchers); programmability in the control plane (rather than the data plane); and network-wide visibility and control (rather than device-level configuration).

Network-management applications included selecting better network paths based on the current traffic load, minimizing transient disruptions during planned routing changes, giving customer networks more control over the flow of traffic, and redirecting or dropping suspected attack traffic. Several control applications ran in operational ISP networks using legacy routers, including the Intelligent Route Service Control Point (IRSCP), deployed to offer value-added services for virtual-private-network customers in AT&T’s tier-1 backbone network [78]. Although much of the work during this time focused on managing routing within a single ISP, some work [25, 26] also proposed ways to enable flexible route control across multiple administrative domains.

Moving control functionality off of network equipment and into separate servers made sense because network management is, by definition, a network-wide activity. Logically centralized routing controllers [12, 47, 78] were enabled by the emergence of open-source routing software [9, 40, 65] that lowered the barrier to creating prototype implementations. The advances in server technology meant that a single commodity server could store all of the routing state and compute all of the routing decisions for a large ISP network [12, 81]. This, in turn, enabled simple primary-backup replication strategies, where backup servers store the same state and perform the same computation as the primary server, to ensure controller reliability.
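The primary-backup pattern works here because route computation is deterministic. The sketch below is our own minimal illustration (the class and method names are invented, not taken from RCP or any other cited system): every replica learns the same link-state inputs and runs the same deterministic computation, so a backup can take over without any explicit state transfer.

import heapq

class RouteController:
    """A controller replica that recomputes shortest paths from learned link state."""

    def __init__(self):
        self.links = {}                              # (u, v) -> weight, learned from the network

    def on_link_update(self, u, v, weight):
        self.links[(u, v)] = weight
        self.links[(v, u)] = weight

    def compute_routes(self, source):
        """Deterministic Dijkstra: identical inputs yield identical routes."""
        graph = {}
        for (u, v), w in self.links.items():
            graph.setdefault(u, []).append((v, w))
        dist, heap = {source: 0}, [(0, source)]
        while heap:
            d, u = heapq.heappop(heap)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in sorted(graph.get(u, [])):    # sorted for determinism
                if d + w < dist.get(v, float("inf")):
                    dist[v] = d + w
                    heapq.heappush(heap, (d + w, v))
        return dist

# The primary and the backup see the same link-state updates...
primary, backup = RouteController(), RouteController()
for replica in (primary, backup):
    replica.on_link_update("A", "B", 1)
    replica.on_link_update("B", "C", 2)

# ...so if the primary fails, the backup already holds identical routing state.
assert primary.compute_routes("A") == backup.compute_routes("A")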
Intellectual contributions. The initial attempts to separate the control and data planes were relatively pragmatic, but they represented a significant conceptual departure from the Internet’s conventionally tight coupling of path computation and packet forwarding. The efforts to separate the network’s control and data planes resulted in several concepts that have been carried forward in subsequent SDN designs:

• Logically centralized control using an open interface to the data plane. The ForCES working group at the IETF proposed a standard, open interface to the data plane to enable innovation in control-plane software. The SoftRouter [47] used the ForCES API to allow a separate controller to install forwarding-table entries in the data plane, enabling the complete removal of control functionality from the routers. Unfortunately, ForCES was not adopted by the major router vendors, which hampered incremental deployment. Rather than waiting for new, open APIs to emerge, the RCP [12, 26] used an existing standard control-plane protocol (the Border Gateway Protocol) to install forwarding-table entries in legacy routers, enabling immediate deployment. OpenFlow also faced similar backwards-compatibility challenges and constraints: in particular, the initial OpenFlow specification relied on backwards compatibility with the hardware capabilities of commodity switches.
• Distributed state management. Logically centralized route controllers faced challenges involving distributed state management. A logically centralized controller must be replicated to cope with controller failure, but replication introduces the potential for inconsistent state across replicas. Researchers explored the likely failure scenarios and consistency requirements. At least in the case of routing control, the controller replicas did not need a general state management protocol, since each replica would eventually compute the same routes (after learning the same topology and routing information), and transient disruptions during routing-protocol convergence were acceptable even with legacy protocols [12]. For better scalability, each controller instance could be responsible for a separate portion of the topology. These controller instances could then exchange routing information with each other to ensure consistent decisions [81]. The challenges of building distributed controllers would arise again several years later in the context of distributed SDN controllers [46, 56]. Distributed SDN controllers face the far more general problem of supporting arbitrary controller applications, requiring more sophisticated solutions for distributed state management.

Myths and misconceptions. When these new architectures were proposed, critics viewed them with healthy skepticism, often vehemently arguing that logically centralized route control would violate “fate sharing”, since the controller could fail independently from the devices responsible for forwarding traffic. Many network operators and researchers viewed separating the control and data planes as an inherently bad idea, as initially there was no clear articulation of how these networks would continue to operate correctly if a controller failed. Skeptics also worried that logically centralized control moved away from the conceptually simple model of the routers achieving distributed consensus, where they all (eventually) have a common view of network state (e.g., through flooding). In logically centralized control, each router has only a purely local view of the outcome of the route-selection process.

In fact, by the time these projects took root, even the traditional distributed routing solutions already violated these principles. Moving packet-forwarding logic into hardware meant that a router’s control-plane software could fail independently from the data plane. Similarly, distributed routing protocols adopted scaling techniques, such as OSPF areas and BGP route reflectors, where routers in one region of a network had limited visibility into the routing information in other regions. As we discuss in the next section, the separation of the control and data planes somewhat paradoxically enabled researchers to think more clearly about distributed state management: the decoupling of the control and data planes catalyzed the emergence of a state management layer that maintains a consistent view of network state.

In search of generality. Dominant equipment vendors had little incentive to adopt standard data-plane APIs like ForCES, since open APIs could enable new entrants into the marketplace. The resulting need to rely on existing routing protocols to control the data plane imposed significant limitations on the range of applications that programmable controllers could support. Conventional IP routing protocols compute routes for destination IP address blocks, rather than providing a wider range of functionality (e.g., dropping, flooding, or modifying packets) based on a wider range of header fields (e.g., MAC and IP addresses, TCP and UDP port numbers), as OpenFlow does. In the end, although the industry prototypes and standardization efforts made some progress, widespread adoption remained elusive.

To broaden the vision of control and data plane separation, researchers started exploring clean-slate architectures for logically centralized control. The 4D project [35] advocated four main layers—the data plane (for processing packets based on configurable rules), the discovery plane (for collecting topology and traffic measurements), the dissemination plane (for installing packet-processing rules), and a decision plane (consisting of logically centralized controllers that convert network-level objectives into packet-handling state). Several groups proceeded to design and build systems that applied this high-level approach to new application areas [16, 87], beyond route control. In particular, the Ethane project [16] (and its direct predecessor, SANE [17]) created a logically centralized, flow-level solution for access control in enterprise networks. Ethane reduces the switches to flow tables that are populated by the controller based on high-level security policies. The Ethane project, and its operational deployment in the Stanford computer science department, set the stage for the creation of OpenFlow. In particular, the simple switch design in Ethane became the basis of the original OpenFlow API.

2.3    OpenFlow and Network OSes

In the mid-2000s, researchers and funding agencies gained interest in the idea of network experimentation at scale, encouraged by the success of experimental
infrastructures (e.g., PlanetLab [6] and Emulab [85]), and the availability of separate government funding for large-scale “instrumentation” previously reserved for other disciplines to build expensive, shared infrastructure such as colliders and telescopes [54]. An outgrowth of this enthusiasm was the creation of the Global Environment for Network Innovations (GENI) [33], with an NSF-funded GENI Project Office, and the EU FIRE program [29]. Critics of these infrastructure-focused efforts pointed out that this large investment in infrastructure was not matched by well-conceived ideas to use it. In the midst of this, a group of researchers at Stanford created the Clean Slate Program and focused on experimentation at a more local and tractable scale: campus networks [51].

Before the emergence of OpenFlow, the ideas underlying SDN faced a tension between the vision of fully programmable networks and pragmatism that would enable real-world deployment. OpenFlow struck a balance between these two goals by enabling more functions than earlier route controllers and building on existing switch hardware, through the increasing use of merchant-silicon chipsets in commodity switches. Although relying on existing switch hardware did somewhat limit flexibility, OpenFlow was almost immediately deployable, allowing the SDN movement to be both pragmatic and bold. The creation of the OpenFlow API [51] was followed quickly by the design of controller platforms like NOX [37] that enabled the creation of many new control applications.

An OpenFlow switch has a table of packet-handling rules, where each rule has a pattern (that matches on bits in the packet header), a list of actions (e.g., drop, flood, forward out a particular interface, modify a header field, or send the packet to the controller), a set of counters (to track the number of bytes and packets), and a priority (to disambiguate between rules with overlapping patterns). Upon receiving a packet, an OpenFlow switch identifies the highest-priority matching rule, performs the associated actions, and increments the counters.
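The sketch below is a minimal, illustrative rendering of this match-action pipeline (our own simplification, not the OpenFlow specification; the field names and action strings are invented, and real OpenFlow also supports bit-level masks that exact-match dictionaries elide): rules carry a wildcard pattern, a priority, actions, and counters, and the switch applies the highest-priority matching rule, sending unmatched packets to the controller.

WILDCARD = None   # field value that matches anything

class Rule:
    def __init__(self, priority, pattern, actions):
        self.priority = priority       # disambiguates overlapping patterns
        self.pattern = pattern         # e.g., {"dst_ip": "10.0.0.1", "src_ip": WILDCARD}
        self.actions = actions         # e.g., ["forward:2"] or ["drop"]
        self.packets = 0               # counters, updated on every match
        self.bytes = 0

    def matches(self, packet):
        return all(v is WILDCARD or packet.get(k) == v
                   for k, v in self.pattern.items())

def process(table, packet, length):
    """Apply the highest-priority matching rule and update its counters."""
    for rule in sorted(table, key=lambda r: -r.priority):
        if rule.matches(packet):
            rule.packets += 1
            rule.bytes += length
            return rule.actions
    return ["send_to_controller"]      # table miss

# Example: a specific forwarding rule shadowing a coarse drop-all rule.
table = [
    Rule(10, {"dst_ip": "10.0.0.1"}, ["forward:2"]),
    Rule(1,  {"dst_ip": WILDCARD},   ["drop"]),
]
print(process(table, {"dst_ip": "10.0.0.1"}, 1500))   # ['forward:2']
print(process(table, {"dst_ip": "10.9.9.9"}, 60))     # ['drop']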
Technology push and use pull. Perhaps the defining feature of OpenFlow is its adoption in industry, especially as compared with its intellectual predecessors. This success can be attributed to a perfect storm of conditions among equipment vendors, chipset designers, network operators, and networking researchers. Before OpenFlow’s genesis, switch chipset vendors like Broadcom had already begun to open their APIs to allow programmers to control certain forwarding behaviors. The decision to open the chipset provided the necessary impetus to an industry that was already clamoring for more control over network devices. The availability of these chipsets also enabled a much wider range of companies to build switches, without incurring the substantial cost of designing and fabricating their own data-plane hardware.

The initial OpenFlow protocol standardized a data-plane model and a control-plane API by building on technology that switches already supported. Specifically, because network switches already supported fine-grained access control and flow monitoring, enabling OpenFlow’s initial set of capabilities on a switch was as easy as performing a firmware upgrade—vendors did not need to upgrade the hardware to make their switches OpenFlow-capable.

OpenFlow’s initial target deployment scenario was campus networks, meeting the needs of a networking research community actively looking for ways to conduct experimental work on “clean-slate” network architectures within a research-friendly operational setting. In the late 2000s, the OpenFlow group at Stanford led an effort to deploy OpenFlow testbeds across many campuses and demonstrate the capabilities of the protocol both on a single campus network and over a wide-area backbone network spanning multiple campuses [34].

As real SDN use cases materialized on these campuses, OpenFlow began to take hold in other realms, such as data-center networks, where there was a distinct need to manage network traffic at large scales. In data centers, hiring engineers to write sophisticated control programs to run over large numbers of commodity switches proved to be more cost-effective than continuing to purchase closed, proprietary switches that could not support new features without substantial engagement with the equipment vendors. As vendors began to compete to sell both servers and switches for data centers, many smaller players in the network-equipment marketplace embraced the opportunity to compete with the established router and switch vendors by supporting new capabilities like OpenFlow.

Intellectual contributions. Although OpenFlow embodied many of the principles from earlier work on the separation of control and data planes, the rise of OpenFlow offered several additional intellectual contributions:

• Generalizing network devices and functions. Previous work on route control focused primarily on matching traffic by destination IP prefix. In contrast, OpenFlow rules could define forwarding behavior on traffic flows based on any set of 13 different packet headers. As such, OpenFlow conceptually unified many different types of network devices that differ only in terms of which header fields they match and which actions they perform. A router matches on destination IP prefix and forwards out a link, whereas a switch matches on source MAC address (to perform MAC learning) and destination MAC address (to forward), and either floods or forwards out a single link. Network address translators and firewalls match on the five-tuple (of source and destination IP addresses and port numbers, and the transport protocol) and either rewrite address and port fields or drop unwanted traffic.
OpenFlow also generalized the rule-installation techniques, allowing anything from proactive installation of coarse-grained rules (i.e., with “wildcards” for many header fields) to reactive installation of fine-grained rules, depending on the application. Still, OpenFlow does not offer data-plane support for deep packet inspection or connection reassembly; as such, OpenFlow alone cannot efficiently enable sophisticated middlebox functionality.

• The vision of a network operating system. In contrast to earlier research on active networks that proposed a node operating system, the work on OpenFlow led to the notion of a network operating system [37]. A network operating system is software that abstracts the installation of state in network switches from the logic and applications that control the behavior of the network. More generally, the emergence of a network operating system offered a conceptual decomposition of network operation into three layers [46]: (1) a data plane with an open interface; (2) a state management layer that is responsible for maintaining a consistent view of network state; and (3) control logic that performs various operations depending on its view of network state.

• Distributed state management techniques. Separating the control and data planes introduces new challenges concerning state management. Running multiple controllers is crucial for scalability, reliability, and performance, yet these replicas should work together to act like a single, logically centralized controller. Previous work on distributed route controllers [12, 81] addressed these problems only in the narrow context of route computation. To support arbitrary controller applications, the work on the Onix [46] controller introduced the idea of a network information base—a representation of the network topology and other control state shared by all controller replicas. Onix also incorporated past work in distributed systems to satisfy the state consistency and durability requirements. For example, Onix has a transactional persistent database backed by a replicated state machine for slowly-changing network state, as well as an in-memory distributed hash table for rapidly-changing state with weaker consistency requirements. More recently, the ONOS [56] system offers an open-source controller with similar functionality, using existing open-source software for maintaining consistency across distributed state and providing a network topology database to controller applications.
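To illustrate the network-information-base idea just described, here is a rough sketch of our own (the class and method names are invented, and the two in-process stores merely stand in for the replicated transactional database and distributed hash table attributed to Onix above): slowly-changing topology state goes to the durable store, while rapidly-changing statistics go to the weaker, faster store.

class NetworkInformationBase:
    """Network state shared by controller replicas, split by update rate."""

    def __init__(self, durable_store, fast_store):
        self.durable = durable_store   # stand-in for a replicated transactional database
        self.fast = fast_store         # stand-in for an in-memory distributed hash table

    def add_switch(self, switch_id, attrs):
        # Topology changes are rare and must survive failures: durable path.
        self.durable.put(("switch", switch_id), attrs)

    def update_link_stats(self, link_id, stats):
        # Per-link counters change constantly; stale reads are tolerable.
        self.fast.put(("link_stats", link_id), stats)

    def topology(self):
        return self.durable.scan("switch")

class DictStore(dict):
    """In-process stand-in for either backing store, just to make this runnable."""
    def put(self, key, value):
        self[key] = value
    def scan(self, kind):
        return {k: v for k, v in self.items() if k[0] == kind}

nib = NetworkInformationBase(DictStore(), DictStore())
nib.add_switch("s1", {"ports": 48})
nib.update_link_stats("s1-s2", {"bytes": 10**9})
print(nib.topology())   # {('switch', 's1'): {'ports': 48}}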
Myths and misconceptions. One myth concerning SDN is that the first packet of every traffic flow must go to the controller for handling. Indeed, some early systems like Ethane [16] worked this way, since they were designed to support fine-grained policies in small networks. In fact, SDN in general, and OpenFlow in particular, do not impose any assumptions about the granularity of rules or whether the controller handles any data traffic. Some SDN applications respond only to topology changes and coarse-grained traffic statistics, and update rules infrequently in response to link failures or network congestion. Other applications may send the first packet of some larger traffic aggregate to the controller, but not a packet from every TCP or UDP connection.
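The contrast between the proactive and reactive styles mentioned in this discussion can be made concrete with a small sketch (ours alone; the install_rule/packet-in interface below is hypothetical pseudocode, not the API of NOX or any other controller platform): a proactive application preinstalls coarse wildcard rules and never sees data packets, while a reactive one handles the first packet of a traffic aggregate and installs a finer-grained rule covering the rest.

WILDCARD = None

class Switch:
    """Toy datapath model: accepts rule installations from a controller app."""
    def __init__(self):
        self.rules = []
    def install_rule(self, priority, pattern, actions):
        self.rules.append((priority, pattern, actions))
    def send(self, packet, port):
        pass   # placeholder for forwarding the packet out the given port

class ProactiveApp:
    """Installs coarse-grained rules up front; no data packets reach the controller."""
    def start(self, switch):
        switch.install_rule(priority=1,
                            pattern={"dst_prefix": "10.0.0.0/8", "src_ip": WILDCARD},
                            actions=["forward:1"])

class ReactiveApp:
    """Sees one packet per destination aggregate, then installs a covering rule."""
    def on_packet_in(self, switch, packet):
        switch.install_rule(priority=5,
                            pattern={"dst_ip": packet["dst_ip"], "src_ip": WILDCARD},
                            actions=["forward:2"])
        switch.send(packet, port=2)   # also forward the packet that triggered the miss

switch = Switch()
ProactiveApp().start(switch)                                  # zero packet-ins
ReactiveApp().on_packet_in(switch, {"dst_ip": "10.1.2.3"})    # one packet-in per aggregate
print(len(switch.rules))   # 2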

Researchers and network operators now have a platform at their disposal to help address longstanding problems in managing their networks and deploying new services. Ultimately, the success and adoption of SDN depends on whether it can be used to solve pressing problems in networking that were difficult or impossible to solve with earlier protocols. SDN has already proved useful for solving problems related to network virtualization, as we describe in the next section.

3. Network Virtualization

In this section, we discuss network virtualization, a prominent early “use case” for SDN. Network virtualization presents the abstraction of a network that is decoupled from the underlying physical equipment. Network virtualization allows multiple virtual networks to run over a shared infrastructure, and each virtual network can have a much simpler (more abstract) topology than the underlying physical network. For example, a Virtual Local Area Network (VLAN) provides the illusion of a single LAN spanning multiple physical subnets, and multiple VLANs can run over the same collection of switches and routers. Although network virtualization is conceptually independent of SDN, the relationship between these two technologies has become much closer in recent years.
We preface our discussion of network virtualization with three caveats. First, a complete history of network virtualization would require a separate survey; we focus on developments in network virtualization that relate directly to innovations in programmable networking. Second, although network virtualization has gained prominence as a use case for SDN, the concept predates modern-day SDN and has evolved in parallel with programmable networking. The two technologies are in fact tightly coupled: Programmable networks often presumed mechanisms for sharing the infrastructure (across multiple tenants in a data center, administrative groups in a campus, or experiments in an experimental facility) and for supporting logical network topologies that differ from the physical network, both of which are central tenets of network virtualization. Finally, we caution that a precise definition of “network virtualization” is elusive, and experts naturally disagree as to whether some of the mechanisms we discuss (e.g., slicing) represent forms of network virtualization. In this article, we define the scope of network virtualization to include any technology that facilitates hosting a virtual network on an underlying physical network infrastructure.

Network Virtualization before SDN. For many years, network equipment has supported the creation of virtual networks, in the form of VLANs and virtual private networks. However, only network administrators could create these virtual networks, and the virtual networks were limited to running the existing network protocols. As such, incrementally deploying new technologies proved difficult. Instead, researchers and practitioners resorted to running overlay networks, where a small set of upgraded nodes use tunnels to form their own topology on top of a legacy network. In an overlay network, the upgraded nodes run their own control-plane protocol, and direct data traffic (and control-plane messages) to each other by encapsulating packets, sending them through the legacy network, and decapsulating them at the other end. The Mbone (for multicast) [50], the 6bone (for IPv6) [43], and the X-Bone [77] were prominent early examples.
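To illustrate the tunneling mechanics just described, here is a small sketch (ours; the header format is invented purely for illustration): the overlay wraps its own packets inside packets that the legacy network can forward natively.

```python
# A minimal sketch of overlay tunneling: an overlay payload (say, a
# multicast or IPv6 packet) is wrapped in an outer header that the
# legacy network understands, then unwrapped at the far end. The
# header format here is invented for illustration.

def encapsulate(payload: bytes, overlay_src: str, overlay_dst: str) -> bytes:
    outer_header = f"{overlay_src}->{overlay_dst}|".encode()
    return outer_header + payload

def decapsulate(packet: bytes) -> bytes:
    _, payload = packet.split(b"|", 1)
    return payload

# One overlay node tunnels a packet to a peer across the legacy network.
wire = encapsulate(b"<overlay packet>", "nodeA", "nodeB")
assert decapsulate(wire) == b"<overlay packet>"
```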
These early overlay networks consisted of dedicated nodes that ran the special protocols, in the hope of spurring adoption of proposed enhancements to the network infrastructure. The notion of overlay networks soon expanded to include any end-host computer that installs and runs a special application, spurred by the success of early peer-to-peer file-sharing applications (e.g., Napster and Gnutella). In addition to significant research on peer-to-peer protocols, the networking research community reignited research on using overlay networks as a way to improve the network infrastructure, such as the work on Resilient Overlay Networks [4], where a small collection of communicating hosts forms an overlay that reacts quickly to network failures and performance problems.

In contrast to active networks, overlay networks did not require any special support from network equipment or cooperation from the Internet Service Providers, making them much easier to deploy. To lower the barrier for experimenting with overlay networks, researchers began building virtualized experimental infrastructures like PlanetLab [61] that allowed multiple researchers to run their own overlay networks over a shared and distributed collection of hosts. Interestingly, PlanetLab itself was a form of “programmable router/switch” active networking, but one using a collection of servers rather than the network nodes, and offering programmers a conventional operating system (i.e., Linux). These design decisions spurred adoption by the distributed-systems research community, leading to a significant increase in the role of experimentation with prototype systems in this community.

Based on the success of shared experimental platforms in fostering experimental systems research, researchers started advocating shared experimental platforms that push support for virtual topologies, running custom protocols, into the underlying network itself [7, 62], so that realistic experiments can run side-by-side with operational traffic. In this model, the network equipment itself “hosts” the virtual topology, harkening back to the early Tempest architecture [79], where multiple virtual ATM networks could co-exist on the same set of physical switches; the Tempest architecture even allowed switch-forwarding behavior to be defined using software controllers, foreshadowing the work on control and data-plane separation.

The GENI [33, 60] initiative took the idea of a virtualized and programmable network infrastructure to a much larger scale, building a national experimental infrastructure for research in networking and distributed systems. Moving beyond experimental infrastructure, some researchers argued that network virtualization could form the basis of a future Internet that enables multiple network architectures to coexist at the same time (each optimized for different applications or requirements, or run by different business entities) and evolve over time to meet changing needs [27, 62, 73, 89].

Relationship of Network Virtualization to SDN. Network virtualization (an abstraction of the physical network in terms of a logical network) clearly does not require SDN. Similarly, SDN (the separation of a logically centralized control plane from the underlying data plane) does not imply network virtualization. Interestingly, however, a symbiosis between network virtualization and SDN has emerged, which has begun to catalyze several new research areas. SDN and network virtualization relate in three main ways:
• SDN as an enabling technology for network virtualization. Cloud computing brought network virtualization to prominence, because cloud providers need a way to allow multiple customers (or “tenants”) to share the same network infrastructure. Nicira’s Network Virtualization Platform (NVP) [53] offers this abstraction without requiring any support from the underlying networking hardware. The solution is to use overlay networking to provide each tenant with the abstraction of a single switch connecting all of its virtual machines. Yet, in contrast to previous work on overlay networks, each overlay node is actually an extension of the physical network—a software switch (like Open vSwitch [57, 63]) that encapsulates traffic destined to virtual machines running on other servers. A logically centralized controller installs the rules in these virtual switches to control how packets are encapsulated, and updates these rules when virtual machines move to new locations. (The first sketch after this list illustrates the rule computation.)
• Network virtualization for evaluating and testing SDNs. The ability to decouple an SDN control application from the underlying data plane makes it possible to test and evaluate SDN control applications in a virtual environment before the application is deployed on an operational network. Mininet [41, 48] uses process-based virtualization to run multiple virtual OpenFlow switches, end hosts, and SDN controllers—each as a single process on the same physical (or virtual) machine. The use of process-based virtualization allows Mininet to emulate a network with hundreds of hosts and switches on a single machine. In such an environment, a researcher or network operator can develop control logic and easily test it on a full-scale emulation of the production data plane; once the control plane has been evaluated, tested, and debugged, it can then be deployed on the real production network. (The second sketch after this list shows a minimal Mininet session.)
• Virtualizing (“slicing”) an SDN. In conventional networks, virtualizing a router or switch is complicated, because each virtual component needs to run its own instance of control-plane software. In contrast, virtualizing a “dumb” SDN switch is much simpler. The FlowVisor [68] system enables a campus to support a testbed for networking research on top of the same physical equipment that carries the production traffic. The main idea is to divide traffic flow space into “slices” (a concept introduced in earlier work on PlanetLab [61]), where each slice has a share of network resources and is managed by a different SDN controller. FlowVisor runs as a hypervisor, speaking OpenFlow to each of the SDN controllers and to the underlying switches. Recent work has proposed slicing control of home networks, to allow different third-party service providers (e.g., smart grid operators) to deploy services on the network without having to install their own infrastructure [89]. More recent work proposes ways to present each “slice” of a software-defined network with its own logical topology [1, 22] and address space [1]. (The third sketch after this list illustrates flow-space slicing.)
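Returning to the first relationship above, the following sketch (ours; the data structures and names are hypothetical, not the NVP or Open vSwitch API) shows how a logically centralized controller might derive encapsulation rules from a map of virtual-machine placements and recompute them when a virtual machine migrates.

```python
# A minimal sketch of overlay rule computation in the style of NVP;
# all names here are hypothetical, not an actual controller API.

# Placement map: which physical server currently hosts each tenant VM.
vm_location = {"vm1": "host-a", "vm2": "host-b", "vm3": "host-b"}

def encap_rules(local_host):
    """Rules for the software switch on local_host: traffic to a VM on
    another server is encapsulated toward that server; traffic to a
    co-located VM is delivered directly."""
    rules = {}
    for vm, host in vm_location.items():
        if host == local_host:
            rules[vm] = ("deliver_locally",)
        else:
            rules[vm] = ("encapsulate_to", host)
    return rules

print(encap_rules("host-a"))  # vm1 local; vm2 and vm3 tunneled to host-b

# When a VM migrates, the controller updates the placement map and
# simply recomputes and re-installs the affected rules.
vm_location["vm2"] = "host-a"
print(encap_rules("host-a"))
```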
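For the second relationship, a minimal Mininet session looks roughly like the following; this uses Mininet's published Python API, though details may vary across versions.

```python
# Emulate three hosts attached to one OpenFlow switch, then verify
# connectivity; each host and switch runs as a lightweight process
# on a single machine.
from mininet.net import Mininet
from mininet.topo import SingleSwitchTopo

net = Mininet(topo=SingleSwitchTopo(k=3))  # one switch, three hosts
net.start()
net.pingAll()  # all-pairs reachability test on the emulated network
net.stop()
```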
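For the third relationship, this sketch (ours, not FlowVisor's implementation) captures the core idea of flow-space slicing: a hypervisor maps each packet's header fields to the slice whose flow space contains them, and consults only that slice's controller.

```python
# A minimal sketch of flow-space slicing in the style of FlowVisor;
# the data structures are illustrative, not FlowVisor's own.

# Each slice owns a disjoint region of flow space (here, TCP port
# ranges) and is managed by its own SDN controller.
slices = [
    {"name": "production", "ports": range(0, 1024),     "controller": "ctrl-prod"},
    {"name": "research",   "ports": range(1024, 65536), "controller": "ctrl-research"},
]

def controller_for(packet):
    """Return the controller responsible for this packet's slice."""
    for s in slices:
        if packet["tcp_dst"] in s["ports"]:
            return s["controller"]
    return None  # outside every slice: apply a default policy

# The hypervisor relays control decisions only to the owning
# controller (and would also rewrite rules so that one slice cannot
# affect another).
print(controller_for({"tcp_dst": 80}))    # -> ctrl-prod
print(controller_for({"tcp_dst": 6633}))  # -> ctrl-research
```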
Myths and misconceptions. People often refer to supposed “benefits of SDN”—such as amortizing the cost of physical resources or dynamically reconfiguring networks in multi-tenant environments—that actually come from network virtualization. Although SDN facilitates network virtualization and may thus make some of these functions easier to realize, it is important to recognize that the capabilities that SDN offers (i.e., the separation of data and control plane, abstractions for distributed network state) do not directly provide these benefits.

Exploring a broader range of use cases. Although SDN has enjoyed some early practical successes and certainly offers much-needed technologies in support of the specific use case of network virtualization, more work is needed both to improve the existing infrastructure and to explore SDN’s potential to solve problems for a much broader set of use cases. Although early SDN deployments focused on university campuses [34], data centers [53], and private backbones [44], recent work explores applications and extensions of SDN to a broader range of network settings, including home networks, enterprise networks, Internet exchange points, cellular core networks, cellular and WiFi radio access networks, and joint management of end-host applications and the network. Each of these settings introduces many new opportunities and challenges that the community will explore in the years ahead.

4. Conclusion

This paper has offered an intellectual history of programmable networks. The idea of a programmable network initially took shape as active networking, which espoused many of the same visions as SDN, but lacked both a clear use case and an incremental deployment path. After the era of active networking research projects, the pendulum swung from vision to pragmatism, in the form of separating the data and control plane to make the network easier to manage. This work focused primarily on better ways to route network traffic—a much narrower vision than previous work on active networking.

Ultimately, the work on OpenFlow and network operating systems struck the right balance between vision and pragmatism. This work advocated network-wide control for a wide range of applications, yet relied only on the existing capabilities of switch chipsets. Backwards compatibility with existing switch hardware appealed to many equipment vendors clamoring to compete in the growing market in data-center networks. The balance of a broad, clear vision with a pragmatic strategy for widespread adoption gained traction when SDN found a compelling use case in network virtualization.

As SDN continues to develop, we believe that history has important lessons to tell. First, SDN technologies will live or die based on “use pulls”. Although SDN is often heralded as the solution to all networking problems, it is worth remembering that SDN is just a tool for solving network-management problems more easily.
SDN merely places the power in our hands to develop new applications and solutions to longstanding problems. In this respect, our work is just beginning. If the past is any indication, the development of these new technologies will require innovation on multiple timescales, from long-term bold visions (such as active networking) to near-term creative problem solving (such as the operationally focused work on separating the control and data planes).

Second, we caution that the balance between vision and pragmatism remains tenuous. The bold vision of SDN advocates a wide variety of control applications; yet, OpenFlow’s control over the data plane is confined to primitive match-action operations on packet-header fields. We should remember that the initial design of OpenFlow was driven by the desire for rapid adoption, not first principles. Supporting a wide range of network services would require much more sophisticated ways to analyze and manipulate traffic (e.g., deep-packet inspection, and compression, encryption, and transcoding of packets), using commodity servers (e.g., x86 machines) or programmable hardware (e.g., FPGAs, network processors, and GPUs), or both. Interestingly, the renewed interest in more sophisticated data-plane functionality, such as Network Functions Virtualization, harkens back to the earlier work on active networking, bringing our story full circle.

Maintaining SDN’s bold vision requires us to continue thinking “out of the box” about the best ways to program the network, without being constrained by the limitations of current technologies. Rather than simply designing SDN applications with the current OpenFlow protocols in mind, we should think about what kind of control we want to have over the data plane, and balance that vision with a pragmatic strategy for deployment.

Acknowledgments

We thank Mostafa Ammar, Ken Calvert, Martin Casado, Russ Clark, Jon Crowcroft, Ian Leslie, Larry Peterson, Nick McKeown, Vyas Sekar, Jonathan Smith, Kobus van der Merwe, and David Wetherall for detailed comments, feedback, insights, and perspectives on this article.

REFERENCES
 [1] A. Al-Shabibi. Programmable virtual networks: From network slicing to network virtualization, July 2013. http://www.slideshare.net/nvirters/virt-july2013meetup.
 [2] D. Alexander, W. Arbaugh, A. Keromytis, and J. Smith. Secure active network environment architecture: Realization in SwitchWare. IEEE Network Magazine, pages 37–45, May 1998.
 [3] D. S. Alexander, W. A. Arbaugh, M. W. Hicks, P. Kakkar, A. D. Keromytis, J. T. Moore, C. A. Gunter, S. M. Nettles, and J. M. Smith. The SwitchWare active network architecture. IEEE Network, 12(3):29–36, 1998.
 [4] D. G. Andersen, H. Balakrishnan, M. F. Kaashoek, and R. Morris. Resilient Overlay Networks. In Proc. 18th ACM Symposium on Operating Systems Principles (SOSP), pages 131–145, Banff, Canada, Oct. 2001.
 [5] B. Anwer, M. Motiwala, M. bin Tariq, and N. Feamster. SwitchBlade: A Platform for Rapid Deployment of Network Protocols on Programmable Hardware. In Proc. ACM SIGCOMM, New Delhi, India, Aug. 2010.
 [6] A. Bavier, M. Bowman, D. Culler, B. Chun, S. Karlin, S. Muir, L. Peterson, T. Roscoe, T. Spalink, and M. Wawrzoniak. Operating System Support for Planetary-Scale Network Services. In Proc. First Symposium on Networked Systems Design and Implementation (NSDI), San Francisco, CA, Mar. 2004.
 [7] A. Bavier, N. Feamster, M. Huang, L. Peterson, and J. Rexford. In VINI Veritas: Realistic and Controlled Network Experimentation. In Proc. ACM SIGCOMM, Pisa, Italy, Aug. 2006.
 [8] S. Bhattacharjee, K. Calvert, and E. Zegura. An architecture for active networks. In High Performance Networking, 1997.
 [9] BIRD Internet routing daemon. http://bird.network.cz/.
[10] J. Biswas, A. A. Lazar, J.-F. Huard, K. Lim, S. Mahjoub, L.-F. Pau, M. Suzuki, S. Torstensson, W. Wang, and S. Weinstein. The IEEE P1520 standards initiative for programmable network interfaces. IEEE Communications Magazine, 36(10):64–70, 1998.
[11] P. Bosshart, G. Gibb, H. Kim, G. Varghese, N. McKeown, M. Izzard, F. Mujica, and M. Horowitz. Forwarding metamorphosis: Fast programmable match-action processing in hardware for SDN. In Proc. ACM SIGCOMM, Aug. 2013.
[12] M. Caesar, N. Feamster, J. Rexford, A. Shaikh, and J. van der Merwe. Design and implementation of a routing control platform. In Proc. 2nd USENIX NSDI, Boston, MA, May 2005.
[13] K. Calvert. An architectural framework for active networks (v1.0). http://protocols.netlab.uky.edu/~calvert/arch-latest.ps.
[14] K. Calvert. Reflections on network architecture: An active networking perspective. ACM SIGCOMM Computer Communication Review, 36(2):27–30, 2006.
[15] K. Calvert, S. Bhattacharjee, E. Zegura, and J. Sterbenz. Directions in active networks. IEEE Communications Magazine, pages 72–78, Oct. 1998.
[16] M. Casado, M. J. Freedman, J. Pettit, J. Luo, N. McKeown, and S. Shenker. Ethane: Taking control of the enterprise. In Proc. ACM SIGCOMM, 2007.
[17] M. Casado, T. Garfinkel, M. Freedman, A. Akella, D. Boneh, N. McKeown, and S. Shenker. SANE: A protection architecture for enterprise networks. In Proc. 15th USENIX Security Symposium, Vancouver, BC, Canada, Aug. 2006.
[18] B. Chun, D. Culler, T. Roscoe, A. Bavier, L. Peterson, M. Wawrzoniak, and M. Bowman. PlanetLab: An overlay testbed for broad-coverage services. ACM SIGCOMM Computer Communication Review, 33(3):3–12, 2003.
[19] M. Ciosi et al. Network functions virtualization. Technical report, ETSI, Darmstadt, Germany, Oct. 2012. http://portal.etsi.org/NFV/NFV_White_Paper.pdf.
[20] S. da Silva, Y. Yemini, and D. Florissi. The NetScript active network system. IEEE Journal on Selected Areas in Communications, 19(3):538–551, 2001.
[21] M. Dobrescu, N. Egi, K. Argyraki, B.-G. Chun, K. Fall, G. Iannaccone, A. Knies, M. Manesh, and S. Ratnasamy. RouteBricks: Exploiting parallelism to scale software routers. In Proc. 22nd ACM Symposium on Operating Systems Principles (SOSP), Big Sky, MT, Oct. 2009.
[22] D. Drutskoy, E. Keller, and J. Rexford. Scalable network virtualization in software-defined networks. IEEE Internet Computing, March/April 2013.
[23] D. Erickson. The Beacon OpenFlow controller. In Proc. HotSDN, Aug. 2013.
[24] D. Erickson et al. A demonstration of virtual machine mobility in an OpenFlow network, Aug. 2008. Demo at ACM SIGCOMM.
[25] A. Farrel, J. Vasseur, and J. Ash. A Path Computation Element (PCE)-Based Architecture. Internet Engineering Task Force, Aug. 2006. RFC 4655.
[26] N. Feamster, H. Balakrishnan, J. Rexford, A. Shaikh, and K. van der Merwe. The case for separating routing from routers. In Proc. ACM SIGCOMM Workshop on Future Directions in Network Architecture, Portland, OR, Sept. 2004.
