Introduction to Networks

                               Dwight L. Linn
                           CTO – FAE Telecom
Contents
Introduction
Basics of Networks
   Fundamental Principle
Network Evolution
   The Building Block of All Networks – Bandwidth
Simple Network Management Protocol
Virtual Network Environment (VNE)
Total Cost of Ownership (TCO)

The All Purpose Transport Network - Whitepaper
Introduction
Information technology has become the key means for organizational progress and productivity growth.
It is the means of service delivery for all enterprises, on a global scale. IT infrastructure includes the
computers, storage, and software that reside at an enterprise's various facilities, and the networks that
interconnect that information.

Networks are a critical part of the contemporary IT landscape. A solid grounding in how to build and
exploit networks is a critical part of business success in both existing and new enterprises. Network use
and design have been evolving practices, driven by fast-paced technology change over the last 35 years.
Software and hardware have reversed in importance over that time.

In the early years, hardware advances were the critical items to track. In the 1980s, technology generations
evolved over ever-shorter timelines, and this was the pace of change for network hardware throughout
the 1980s and 90s. Product life cycles in hardware shortened to about 18 months from one product
generation to the next, from idea to full implementation. During previous decades an effective product
life cycle could range from 5 to 10 years. A good example was the IBM System/370 series mainframe;
many of these lasted 15 or more years through service upgrades.

Once hardware achieved a standardized feature set, replacement every two to three years was no longer
a given, provided care was taken with the selection criteria in the beginning.

Software, on the other hand, has become more complex and interdependent as the power of computers
has increased. With the e-commerce model and the World Wide Web, we have reached the point where
few businesses are able to fully exploit the power of the standard software they purchase.

This happened for many reasons, but a key one is simply that the cost of training employees to use the
entire available feature set is large. This is compounded by new versions and releases, in some cases on
a yearly basis.

Software selection and change management are now the most important management priorities in IT
decisions.

The gains due to software and incremental computing power have made many white-collar productivity
increases possible over the past decade. This occurred in spite of capacity problems, changes in the
workplace, downsizing, and a market meltdown that affected many of the original leaders in the fields
of computing and telecommunications.

Globally, IT today represents a multi-trillion-dollar marketplace for goods and services. IT is so large and
complex in its practice that we need more effective, if not better, tools to support decision making in
each of the critical areas of business applications.

There are now a number of decision tools developed for complex risk management and critical process
identification. This paper is about design, validation, and training support focusing on the network
transport infrastructure.

A network can be either simple or complex. In its simplest form, two machines are connected to each
other for the purpose of sharing resources and information. The layers of complexity increase with the
number of machines, how they are used, and how many people have access to them with various levels
of filtering and security. Many of the complex issues are a function of network evolution: networks that
grow organically, shaped by their relationships with their users rather than by planned design. This is
where perspective and strategic planning can be invaluable. In knowing the origins of a network, many
otherwise hidden attributes can be made explicit.

Basics of Networks

Network topology has morphed many times as technology advances have allowed new ways to
interconnect: from the very basic point-to-point analog modem-based network, through point-to-multipoint
and meshed topologies, to the now ubiquitous "cloud" (which is just a hidden mesh).

Fundamental Principle
All networks are about managing, accessing, and paying for the amount of bandwidth used in moving
information from one location to another. The lowest cost per bit moved is the most desirable solution.
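
As a minimal illustration of this principle, the sketch below (plain Python, with invented circuit prices and
an assumed 40% average utilization rather than real tariffs) compares two hypothetical transport options
by the cost per gigabit actually moved in a month.

    # Minimal sketch of the "lowest cost per bit moved" comparison.
    # The circuit figures below are illustrative placeholders, not quoted prices.

    def cost_per_gigabit(monthly_cost_usd: float, mbps: float, utilization: float) -> float:
        """Monthly circuit cost divided by gigabits actually moved in a 30-day month."""
        seconds_per_month = 30 * 24 * 3600
        gigabits_moved = mbps * utilization * seconds_per_month / 1000.0
        return monthly_cost_usd / gigabits_moved

    # Compare two hypothetical transport options at 40% average utilization.
    ds3 = cost_per_gigabit(monthly_cost_usd=3000.0, mbps=45.0, utilization=0.4)
    metro_ethernet = cost_per_gigabit(monthly_cost_usd=1200.0, mbps=100.0, utilization=0.4)
    print(f"DS3:            ${ds3:.4f} per Gb moved")
    print(f"Metro Ethernet: ${metro_ethernet:.4f} per Gb moved")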

Network Evolution

In spite of the many changes in the underlying transport technology and its protocols, bandwidth
provisioning is still the prime requirement for moving information quickly and reliably. Today we see
convergence on two key technologies for transport, namely Synchronous Optical Networks (SONET)
teamed with Ethernet access, with TCP/IP as the end-to-end delivery protocol. It is helpful to understand
that these elements have been with us for many years, but in largely separate worlds, with considerable
complexity layered on top to make them interoperate.

SONET's origins lie in the field of telephony, where the carriers wanted to interconnect large
switching centers with total reliability and capability. Here bandwidth was defined, for the most part, by
the standard boundaries of voice telephone networks (i.e., the 56/64 kbps DS0, DS1, and DS3), boundaries
that have been with us since the early time-division and frequency-division multiplexing days of the 1950s.
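
For concreteness, those boundaries form a simple hierarchy. The short sketch below (plain Python, rates
taken from the North American T-carrier standard) shows how the DS1 and DS3 rates are built up from
64 kbps DS0 voice channels.

    # Worked example of the standard voice-network bandwidth boundaries mentioned above.
    # Rates are from the North American digital (T-carrier) hierarchy.

    DS0_KBPS = 64                      # one digitized voice channel
    DS1_KBPS = 24 * DS0_KBPS + 8       # 24 DS0s plus 8 kbps framing = 1,544 kbps (T1)
    DS3_KBPS = 44_736                  # 28 DS1s plus multiplexing overhead (T3)

    print(f"DS1: {DS1_KBPS} kbps carrying {24} voice channels")
    print(f"DS3: {DS3_KBPS} kbps carrying {28 * 24} voice channels")
    print(f"DS3 multiplexing overhead: {DS3_KBPS - 28 * DS1_KBPS} kbps")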

These became the world-class long-haul networks built by the carrier industry from 1950 until the
breakup of the Bell System in the early 1980s. Much of our understanding of traffic engineering
principles and bandwidth management stems from the engineering work done at these organizations,
along with experts at Bell Labs.

The Building Block of All Networks – Bandwidth

Beginning with the shift to network design as a key skill in the 1970s, basic knowledge of networks and
their architecture came from pioneers in the field: James Martin and Dr. Dixon Doll in network
architecture; Drs. Jeff Case, Marshall Rose, and Dave Perkins in network management, through the
development of SNMP; and IBM research fellow Dr. John Tyler in operating systems for parallel computing.
They all aided in building networks, and by studying their work, others were able to contribute to the
theory and practice of networking. Until the late 1980s this was a fairly stable, slow-growing body of knowledge.

It was the invention of Ethernet by Bob Metcalfe in the early 1970s at the Xerox Palo Alto Research
Center that planted the seeds for an explosion in bandwidth demand. Ethernet was intended for
interconnecting local machines, located within a structure, over wiring inside that building. It was a
wild dream to think that a direct connection to the Wide Area Network would someday be possible.

Early attempts to connect Local Area Networks (LANs) with Wide Area Networks (WANs) were expensive
and difficult at best. Metcalfe went on to found 3Com, which helped create the environment for the
e-commerce world of today.

Another key piece of technology, growing in parallel to Ethernet, was Transmission Control Protocol/
Internet Protocol (TCP/IP), an outgrowth of military work to ensure reliable communications between
military units and their command structure, alongside commercial X.25 packet-switched networks.
Development of hardware for TCP/IP was largely driven by funding from the Defense Advanced Research
Projects Agency (DARPA).

These three large trends in technology converged in the late 1990s to bring on a movement to simplify
the underlying network architecture of the global information grid. The use of the World Wide Web as a
tool for business and personal use has made the interconnection of machines and networks a common
utility service for the entire world. Education, business, and government rely on readily available access
to networks to support their daily functioning.

The adoption and standardization of one last protocol was needed before a tool could be implemented
in software that would support network simulation and behavior modeling of complex networks. This
missing piece of the puzzle was SNMP.

Simple Network Management Protocol
Convergence on a single network management standard allowed networks to be managed and operated
at lower cost.

That protocol is the Simple Network Management Protocol (SNMP), through its iterations of versions 1, 2,
and finally v3. Once SNMPv3 was adopted as a finished standard, it accelerated the drive among vendors
to make their products remotely manageable, because it answered the last concerns over SNMP security.

With SNMP and its associated Management Information Bases (MIBs), hardware and software are now
managed on a global level from a series of management control stations. No matter the size, from the
smallest network to the largest and most complex, SNMP, through the use of a functional Network
Management System, can be used to control and effectively manage any IT system.
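
As a concrete illustration, the sketch below issues an SNMPv2c GET for the sysDescr object (a standard
MIB-II variable) that a typical management station would poll. It is only a minimal sketch: it assumes the
classic synchronous high-level API of the open-source pysnmp library (recent releases use an asyncio
variant), and the device address and community string are placeholders.

    # Minimal SNMPv2c GET sketch using the classic pysnmp high-level API (pysnmp 4.x).
    # The device address and community string are placeholders.
    from pysnmp.hlapi import (
        getCmd, SnmpEngine, CommunityData, UdpTransportTarget,
        ContextData, ObjectType, ObjectIdentity,
    )

    error_indication, error_status, error_index, var_binds = next(
        getCmd(
            SnmpEngine(),
            CommunityData("public", mpModel=1),           # SNMPv2c read-only community
            UdpTransportTarget(("192.0.2.10", 161)),      # managed device, standard SNMP/UDP port
            ContextData(),
            ObjectType(ObjectIdentity("SNMPv2-MIB", "sysDescr", 0)),  # MIB-II system description
        )
    )

    if error_indication:                                  # transport problem or timeout
        print(f"SNMP error: {error_indication}")
    elif error_status:                                    # agent returned an SNMP error
        print(f"SNMP error: {error_status.prettyPrint()} at index {error_index}")
    else:
        for var_bind in var_binds:
            print(" = ".join(x.prettyPrint() for x in var_bind))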

Virtual Network Environment (VNE)
One interesting aspect of SNMP and its associated MIBs, from a network engineering point of view, is that
an empirical testing platform representing a network can now be built. This is done via a simulated
network environment that responds and behaves the same way the real network would if it were
physically cabled and running live data in production. Since MIBs define all classes and aspects of a
managed object in software, a VNE can simulate the interaction of hardware, software, and the
underlying TCP/IP transport.

These tools require considerable expertise to build and use for a specific network. Today it is possible to
build in software the most complex networks, as large as can be conceived and managed. But it is still
a struggle to use these tools when it comes time for a designer to ask "what if" questions during the
design phase, because so few software engineers have mastered the body of knowledge needed to build
simulations. (For a complete overview of VNEs, see the FAE Telecom whitepaper – Virtual Network
Environment.)

Figure: an entire network, containing all its elements, simulated using a VNE for network manager
training and other testing purposes.

Questions that can be asked include:

     •    What happens if a critical link breaks, and who will be affected? (A minimal sketch of this kind of analysis follows this list.)
     •    What happens if traffic grows faster than predicted, and how will the network behave?
     •    What if it does not behave as expected?
     •    What happens if some applications grow and some applications do not?
     •    How do we optimize the use of the network for all kinds of unexpected changes?
     •    How can we allocate and justify costs for a planned design?
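
The first question, for example, reduces to a reachability analysis on a model of the topology. The
following is only a minimal sketch, using the open-source Python networkx library and an invented
five-node topology rather than an actual VNE; a real VNE also models device MIBs, traffic, and timing.

    # Minimal sketch: which sites lose reachability to the data center if a link fails?
    # The topology and node names below are invented for illustration.
    import networkx as nx

    topology = nx.Graph()
    topology.add_edges_from([
        ("data_center", "core_a"),
        ("data_center", "core_b"),
        ("core_a", "branch_1"),
        ("core_a", "branch_2"),
        ("core_b", "branch_3"),
    ])

    def affected_by_link_failure(graph, link, root="data_center"):
        """Return the nodes that can no longer reach the root if `link` breaks."""
        degraded = graph.copy()
        degraded.remove_edge(*link)
        return {n for n in degraded.nodes
                if n != root and not nx.has_path(degraded, root, n)}

    print(affected_by_link_failure(topology, ("core_a", "branch_1")))     # {'branch_1'}
    print(affected_by_link_failure(topology, ("data_center", "core_b")))  # {'core_b', 'branch_3'}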

This is an example of the kinds of networks that have been built and can now be simulated. Some such
networks are still being built without the aid of design and validation simulation tools, which increases
their risk of faults or even outright failure in implementation.

One industry where there is a strong case for the use of VNEs is the power utility industry. With the
development of the Smart Grid, network designers are struggling with the complexity of supporting
multiple applications with very different requirements on the same network.

Total Cost of Ownership (TCO)
In today's cost-constrained world, practicing network architects evaluate the various design approaches
available for each network using the cost factors listed below. The goal is to find the mix of hardware
and software that lowers the Total Cost of Ownership for the user. Designers use tools such as VNEs,
together with experience across all aspects of a network, to price each part and make objective
recommendations. Every design is a unique product: no two networks are ever alike, each differing in
location and in the organization's knowledge of how to use and operate it.

The cost factors in designing and then running a network are:

     •    Network planning and detailed engineering costs
     •    Equipment purchase
     •    Installation and turn-up
     •    Provisioning costs
     •    Reliability and associated maintenance costs
     •    Training of staff on new technologies

Network management systems can drastically impact OPEX, allowing costs to be effectively controlled.
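
To make the evaluation concrete, the sketch below (plain Python, with invented placeholder figures
rather than real vendor pricing) sums the one-time and recurring factors above into a five-year Total
Cost of Ownership for two hypothetical designs.

    # Minimal sketch of a Total Cost of Ownership comparison over a planning horizon.
    # All dollar figures are invented placeholders, not vendor pricing.

    def total_cost_of_ownership(one_time_costs, annual_costs, years):
        """TCO = one-time (CAPEX-like) costs plus recurring (OPEX-like) costs over `years`."""
        return sum(one_time_costs.values()) + years * sum(annual_costs.values())

    design_a = total_cost_of_ownership(
        one_time_costs={"planning_and_engineering": 40_000,
                        "equipment_purchase": 250_000,
                        "installation_and_turn_up": 30_000},
        annual_costs={"provisioning": 12_000, "maintenance": 25_000, "staff_training": 8_000},
        years=5,
    )
    design_b = total_cost_of_ownership(
        one_time_costs={"planning_and_engineering": 60_000,
                        "equipment_purchase": 200_000,
                        "installation_and_turn_up": 35_000},
        annual_costs={"provisioning": 10_000, "maintenance": 18_000, "staff_training": 10_000},
        years=5,
    )

    print(f"Design A 5-year TCO: ${design_a:,}")
    print(f"Design B 5-year TCO: ${design_b:,}")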

                                                      About the Author:
                                                        Dwight Linn

  A systems engineer and network architect, he has worked with broadband networks for 37 years. He worked for GTE,
  Hitachi, Motorola, Codex, and Racal Milgo, as well as several successful leading-edge startups, and helped launch new
  products in the telecommunications field in both hardware and software. He was a C4I systems engineer in the USAF. He
  holds an AS degree in Electronics (USAF) and a BBA/BS degree from Angelo State University in Texas, and did
  graduate-level work in Computer Science and Telecommunications at LSU in Baton Rouge.

                                                                                                           PO Box 2842
                                                                                                       Acton, MA 01720
                                                                                                  info@FAETelecom.com
                                                                                                        855.GO-FAETEL
