Network Ace Team...


Monday 19 November 2012

Hubs vs. Switches vs. Routers


Hubs vs. Switches vs. Routers -
Layered Communication
Network communication models are generally organized into layers. The
OSI model specifically consists of seven layers, with each layer
representing a specific networking function. These functions are controlled
by protocols, which govern end-to-end communication between devices.
As data is passed from the user application down the virtual layers of the
OSI model, each of the lower layers adds a header (and sometimes a
trailer) containing protocol information specific to that layer. These headers
are called Protocol Data Units (PDUs), and the process of adding these
headers is referred to as encapsulation.
The PDU of each lower layer is identified with a unique term:
• Transport - Segments
• Network - Packets
• Data-Link - Frames
• Physical - Bits

Commonly, network devices are identified by the OSI layer they operate at
(or, more specifically, what header or PDU the device processes).
For example, switches are generally identified as Layer-2 devices, as
switches process information stored in the Data-Link header of a frame
(such as MAC addresses in Ethernet). Similarly, routers are identified as
Layer-3 devices, as routers process logical addressing information in the
Network header of a packet (such as IP addresses).
However, the strict definitions of the terms switch and router have blurred
over time, which can result in confusion. For example, the term switch can
now refer to devices that operate at layers higher than Layer-2. This will be
explained in greater detail in this guide.

Icons for Network Devices


Layer-1 Hubs

Hubs are Layer-1 devices that physically connect network devices together
for communication. Hubs can also be referred to as repeaters.
Hubs provide no intelligent forwarding whatsoever. Hubs are incapable of
processing either Layer-2 or Layer-3 information, and thus cannot make
decisions based on hardware or logical addressing.
Thus, hubs will always forward every frame out every port, excluding the
port originating the frame. Hubs do not differentiate between frame types,
and thus will always forward unicasts, multicasts, and broadcasts out every
port but the originating port.
Ethernet hubs operate at half-duplex, which allows a device to either
transmit or receive data, but not simultaneously. Ethernet utilizes Carrier
Sense Multiple Access with Collision Detect (CSMA/CD) to control
media access. Host devices monitor the physical link, and will only transmit
a frame if the link is idle.
However, if two devices transmit a frame simultaneously, a collision will
occur. The transmitting devices detect the collision, discard their frames,
and send out a jam signal. Both devices will then wait a random amount of
time before resending their respective frames.
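As a rough illustration, the random back-off described above can be sketched in Python. This assumes the truncated binary exponential back-off used by classic CSMA/CD and the 51.2-microsecond slot time of 10 Mbps Ethernet; the function name is hypothetical:

```python
import random

def backoff_delay(attempt, slot_time_us=51.2):
    """Return a random CSMA/CD back-off delay in microseconds.

    After the n-th consecutive collision, the station picks a random
    number of slot times in the range [0, 2^min(n, 10) - 1].
    """
    k = min(attempt, 10)                 # back-off window caps at 2^10 slots
    slots = random.randint(0, 2 ** k - 1)
    return slots * slot_time_us

# After a first collision, a station waits either 0 or 1 slot times:
assert backoff_delay(1) in (0.0, 51.2)
```

Note how the waiting window doubles with each successive collision, which is why a heavily loaded collision domain degrades so sharply.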
Remember, if any two devices connected to a hub send a frame
simultaneously, a collision will occur. Thus, all ports on a hub belong to the
same collision domain. A collision domain is simply defined as any
physical segment where a collision can occur.
Multiple hubs that are uplinked together still all belong to one collision
domain. Increasing the number of host devices in a single collision domain
will increase the number of collisions, which can significantly degrade
performance.
Hubs also belong to only one broadcast domain – a hub will forward both
broadcasts and multicasts out every port but the originating port. A broadcast
domain is a logical segmentation of a network, dictating how far a broadcast
(or multicast) frame can propagate.
Only a Layer-3 device, such as a router, can separate broadcast domains.

Layer-2 Switching


Layer-2 devices build hardware address tables, which will contain the
following at a minimum:
• Hardware addresses for host devices
• The port each hardware address is associated with

Using this information, Layer-2 devices will make intelligent forwarding
decisions based on frame (Data-Link) headers. A frame can then be
forwarded out only the appropriate destination port, instead of all ports.
Layer-2 forwarding was originally referred to as bridging. Bridging is a
largely deprecated term (mostly for marketing purposes), and Layer-2
forwarding is now commonly referred to as switching.
There are some subtle technological differences between bridging and
switching. Switches usually have a higher port-density, and can perform
forwarding decisions at wire speed, due to specialized hardware circuits
called ASICs (Application-Specific Integrated Circuits). Otherwise,
bridges and switches are nearly identical in function.
Ethernet switches build MAC-address tables through a dynamic learning
process. A switch behaves much like a hub when first powered on. The
switch will flood every frame, including unicasts, out every port but the
originating port.
The switch will then build the MAC-address table by examining the source
MAC address of each frame. Consider the following diagram:


When ComputerA sends a frame to ComputerB, the switch will add ComputerA’s
MAC address to its table, associating it with port fa0/10. However, the switch will not
learn ComputerB’s MAC address until ComputerB sends a frame to ComputerA, or
to another device connected to the switch. Switches always learn from the source
MAC address.
A switch is in a perpetual state of learning. However, as the MAC-address
table becomes populated, the flooding of frames will decrease, allowing the
switch to perform more efficient forwarding decisions.
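The learning process described above can be sketched in Python. This is a minimal illustration only - the class and port names are hypothetical, and a real switch also ages entries out of the table:

```python
class Layer2Switch:
    """Minimal sketch of dynamic MAC-address learning."""

    def __init__(self, ports):
        self.ports = ports
        self.mac_table = {}              # MAC address -> port

    def receive(self, in_port, src_mac, dst_mac):
        self.mac_table[src_mac] = in_port        # always learn from the source
        if dst_mac in self.mac_table:            # known: forward out one port
            return [self.mac_table[dst_mac]]
        # Unknown unicast: flood out every port but the originating port
        return [p for p in self.ports if p != in_port]

sw = Layer2Switch(["fa0/10", "fa0/11", "fa0/12"])
print(sw.receive("fa0/10", "AAAA.AAAA.AAAA", "BBBB.BBBB.BBBB"))  # flooded
print(sw.receive("fa0/11", "BBBB.BBBB.BBBB", "AAAA.AAAA.AAAA"))  # one port
```

The second frame is forwarded out only fa0/10, because the switch learned ComputerA's address from the first frame's source field.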



While hubs were limited to half-duplex communication, switches can
operate in full duplex. Each individual port on a switch belongs to its own
collision domain. Thus, switches create more collision domains, which
results in fewer collisions.
Like hubs though, switches belong to only one broadcast domain. A Layer-
2 switch will forward both broadcasts and multicasts out every port but the
originating port. Only Layer-3 devices separate broadcast domains.
Because of this, Layer-2 switches are poorly suited for large, scalable
networks. The Layer-2 header provides no mechanism to differentiate one
network from another, only one host from another.
This poses significant difficulties. If only hardware addressing existed, all
devices would technically be on the same network. Modern internetworks
like the Internet could not exist, as it would be impossible to separate my
network from your network.
Imagine if the entire Internet existed purely as a Layer-2 switched
environment. Switches, as a rule, will forward a broadcast out every port.
Even with a conservative estimate of a billion devices on the Internet, the
resulting broadcast storms would be devastating. The Internet would simply
collapse.
Both hubs and switches are susceptible to switching loops, which result in
destructive broadcast storms. Switches utilize the Spanning Tree Protocol
(STP) to maintain a loop-free environment. STP is covered in great detail in
another guide.
Remember, there are three things that switches do that hubs do not:
• Hardware address learning
• Intelligent forwarding of frames
• Loop avoidance
Hubs are almost entirely deprecated – there is no advantage to using a hub
over a switch. At one time, switches were more expensive and introduced
more latency (due to processing overhead) than hubs, but this is no longer
the case.


Layer-2 Forwarding Methods

Switches support three methods of forwarding frames. Each method copies
all or part of the frame into memory, providing different levels of latency
and reliability. Latency is delay - less latency results in quicker forwarding.
The Store-and-Forward method copies the entire frame into memory, and
performs a Cyclic Redundancy Check (CRC) to completely ensure the
integrity of the frame. However, this level of error-checking introduces the
highest latency of any of the switching methods.
The Cut-Through (Real Time) method copies only enough of a frame’s
header to determine its destination address. This is generally the first 6 bytes
following the preamble. This method allows frames to be transferred at wire
speed, and has the least latency of any of the three methods. No error
checking is attempted when using the cut-through method.
The Fragment-Free (Modified Cut-Through) method copies only the first
64 bytes of a frame for error-checking purposes. Most collisions or
corruption occur in the first 64 bytes of a frame. Fragment-Free represents a
compromise between reliability (store-and-forward) and speed (cut-through).
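The trade-off between the three methods can be summarized in a short sketch. The values are illustrative: 6 bytes corresponds to the destination MAC address, and 64 bytes to the minimum Ethernet frame size:

```python
# Bytes of the frame each method must copy before a forwarding decision
METHODS = {
    "store-and-forward": None,   # the entire frame, plus a CRC check
    "fragment-free": 64,         # first 64 bytes, where most damage occurs
    "cut-through": 6,            # destination MAC only (first 6 bytes)
}

def bytes_before_forwarding(method, frame_len):
    """Return how much of a frame is buffered before forwarding begins."""
    needed = METHODS[method]
    return frame_len if needed is None else min(needed, frame_len)

assert bytes_before_forwarding("cut-through", 1518) == 6
assert bytes_before_forwarding("store-and-forward", 1518) == 1518
```

Fewer buffered bytes means lower latency, but also less opportunity to detect a damaged frame before propagating it.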


Layer-3 Routing

Layer-3 routing is the process of forwarding a packet from one network to
another network, based on the Network-layer header. Routers build routing
tables to perform forwarding decisions, which contain the following:
• The destination network and subnet mask
• The next hop router to get to the destination network
• Routing metrics and Administrative Distance
Note that Layer-3 forwarding is based on the destination network, and not
the destination host. It is possible to have host routes, but this is less
common.
The routing table is concerned with two types of Layer-3 protocols:
• Routed protocols - assign logical addressing to devices, and route
packets between networks. Examples include IP and IPX.
• Routing protocols - dynamically build the information in routing
tables. Examples include RIP, EIGRP, and OSPF.
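A route-table lookup of this kind - matching the destination against known networks and preferring the most specific prefix - can be sketched with Python's ipaddress module. The networks and next-hop addresses below are hypothetical:

```python
import ipaddress

# Hypothetical routing table: destination network -> next-hop router
routes = {
    "10.0.0.0/8":  "192.168.1.1",
    "10.1.0.0/16": "192.168.1.2",
    "0.0.0.0/0":   "192.168.1.254",   # default route
}

def lookup(dst_ip):
    """Forward based on the longest (most specific) matching prefix."""
    dst = ipaddress.ip_address(dst_ip)
    matches = [ipaddress.ip_network(n) for n in routes
               if dst in ipaddress.ip_network(n)]
    best = max(matches, key=lambda n: n.prefixlen)
    return routes[str(best)]

assert lookup("10.1.2.3") == "192.168.1.2"    # the /16 beats the /8
assert lookup("8.8.8.8") == "192.168.1.254"   # falls through to the default
```

Note that the lookup is keyed on the destination network, not the destination host, exactly as described above.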
Each individual interface on a router belongs to its own collision domain.
Thus, like switches, routers create more collision domains, which results in
fewer collisions.
Unlike Layer-2 switches, Layer-3 routers also separate broadcast domains.
As a rule, a router will never forward broadcasts from one network to
another network (unless, of course, you explicitly configure it to).
Routers will not forward multicasts either, unless configured to participate in
a multicast tree. Multicast is covered in great detail in another guide.
Traditionally, a router was required to copy each individual packet to its
buffers, and perform a route-table lookup. Each packet consumed CPU
cycles as it was forwarded by the router, resulting in latency. Thus, routing
was generally considered slower than switching.
It is now possible for routers to cache network-layer flows in hardware,
greatly reducing latency. This has blurred the line between routing and
switching, from both a technological and marketing standpoint. Caching
network flows is covered in greater detail shortly.

Collision vs. Broadcast Domain Example


Consider the above diagram. Remember that:
• Routers separate broadcast and collision domains.
• Switches separate collision domains.
• Hubs belong to only one collision domain.
• Switches and hubs both only belong to one broadcast domain.
In the above example, there are THREE broadcast domains, and EIGHT
collision domains:



VLANs – A Layer-2 or Layer-3 Function?
By default, a switch will forward both broadcasts and multicasts out every
port but the originating port.
However, a switch can be logically segmented into multiple broadcast
domains, using Virtual LANs (or VLANs). VLANs are covered in
extensive detail in another guide.
Each VLAN represents a unique broadcast domain:
• Traffic between devices within the same VLAN is switched
(forwarded at Layer-2).
• Traffic between devices in different VLANs requires a Layer-3
device to communicate.
Broadcasts from one VLAN will not be forwarded to another VLAN. This
separation provided by VLANs is not a Layer-3 function. VLAN tags are
inserted into the Layer-2 header.
Thus, a switch that supports VLANs is not necessarily a Layer-3 switch.
However, a purely Layer-2 switch cannot route between VLANs.
Remember, though VLANs provide separation for Layer-3 broadcast
domains, and are often associated with IP subnets, they are still a Layer-2
function.
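The point that VLAN tags live in the Layer-2 header can be illustrated by building an 802.1Q-tagged frame by hand. The sketch below uses the standard 0x8100 Tag Protocol Identifier but leaves the priority bits at zero; it is illustrative, not a complete frame builder (no FCS, no padding):

```python
import struct

def tag_frame(dst_mac, src_mac, vlan_id, ethertype, payload):
    """Insert an 802.1Q VLAN tag into an Ethernet header (sketch only)."""
    tpid = 0x8100                        # Tag Protocol Identifier for 802.1Q
    tci = vlan_id & 0x0FFF               # priority/DEI bits left at zero
    return (dst_mac + src_mac
            + struct.pack("!HH", tpid, tci)    # 4-byte tag after the MACs
            + struct.pack("!H", ethertype)
            + payload)

frame = tag_frame(b"\xff" * 6, b"\xaa" * 6,
                  vlan_id=10, ethertype=0x0800, payload=b"data")
assert frame[12:14] == b"\x81\x00"       # the tag sits in the Layer-2 header
assert len(frame) == 6 + 6 + 4 + 2 + 4
```

The tag is inserted directly after the source MAC address - squarely inside the Data-Link header, which is why VLAN separation is a Layer-2 function.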







Ethernet Standards


- Ethernet Standards -
What is Ethernet?
Ethernet has become the standard technology used in LAN networking. Over
time, the Ethernet standard has evolved to satisfy bandwidth requirements,
resulting in various IEEE “categories” of Ethernet:

• 802.3 - Ethernet (10 Mbps)
• 802.3u - Fast Ethernet (100 Mbps)
• 802.3z or 802.3ab - Gigabit Ethernet (1000 Mbps)

Various subsets of these Ethernet categories exist, operating at various speeds,
distances, and cable types:


Half-Duplex vs. Full-Duplex

Ethernet devices can operate either at half-duplex, or full-duplex. At half
duplex, devices can either transmit or receive data, but not simultaneously.
Full-duplex allows devices to both transmit and receive at the same time.
Devices connected to a hub can only operate at half-duplex, whereas devices
connected to a switch can operate at full-duplex.
Half-duplex Ethernet uses Carrier Sense Multiple Access with Collision
Detect (CSMA/CD) to control media access. Devices monitor the physical
link, and will only transmit a frame if the link is idle. If two devices send a
packet simultaneously, a collision will occur. When a collision is detected, both
NICs will wait a random amount of time before resending their respective
packets. Full-duplex Ethernet does not use CSMA/CD.
Port speed and duplex can be either manually configured or auto-negotiated
with a hub or switch. However, a duplex mismatch will occur if one side is
configured manually, and the other configured for auto-negotiation.

Ethernet (10 Mbps)

The first incarnation of Ethernet operated at 10 Mbps, over thinnet
(10base2), thicknet (10base5), or twisted pair (10baseT) mediums.
Ethernet’s specifications were outlined in the IEEE 802.3 standard.
Even though the term “Ethernet” is widely used to describe any form of
Ethernet technology, technically the term refers to the 10 Mbps category.
The most common implementation of Ethernet is over Category 5 twisted-pair
cable, with a maximum distance of 100 meters.
Full Duplex Ethernet allows devices to both send and receive
simultaneously, doubling the bandwidth to 20 Mbps per port. Only devices
connected to a switch can operate at Full Duplex.

Fast Ethernet

Fast Ethernet, or IEEE 802.3u, operates at 100 Mbps, utilizing Category 5
twisted-pair (100base-TX) or fiber cabling (100base-FX).
Full Duplex Fast Ethernet allows devices connected to a switch to both send
and receive simultaneously, doubling the bandwidth to 200 Mbps per port.
Many switches (and hubs) support both Ethernet and Fast Ethernet, and are
commonly referred to as 10/100 switches. These switches will auto-negotiate
both port speed and duplex.
As mentioned earlier, it is also possible to statically configure this
information. Both the device and switch must be configured for
auto-negotiation (or both configured with the same static settings);
otherwise, a duplex mismatch error will occur.

Gigabit Ethernet

Gigabit Ethernet operates at 1000 Mbps, and can be utilized over Category
5e twisted-pair (1000baseT) or fiber cabling (1000baseSX or 1000baseLX).
Gigabit Ethernet over copper is defined in the IEEE 802.3ab standard.
Full Duplex Gigabit Ethernet allows devices connected to a switch to both
send and receive simultaneously, doubling the bandwidth to 2000 Mbps.
Newer switches can support Ethernet, Fast Ethernet, and Gigabit Ethernet
simultaneously, and are often referred to as 10/100/1000 switches. Again,
switches and devices can auto-negotiate both speed and duplex.
10 Gigabit Ethernet has also been developed, defined in the IEEE 802.3ae
standard, and currently can operate only over fiber cabling.

Twisted-Pair Cabling

Twisted-pair cable usually contains 2 or 4 pairs of wire, which are twisted
around each other to reduce crosstalk. Crosstalk is a form of
electromagnetic interference (EMI) or “noise” that reduces the strength and
quality of a signal. It is caused when the signal from one wire “bleeds” or
interferes with another wire’s signal.
Twisted-pair cabling can be either shielded or unshielded. Shielded
twisted-pair is more resistant to external EMI. Fluorescent light ballasts,
microwaves, and radio transmitters can all create EMI.
There are various categories of twisted-pair cable, identified by the number
of “twists per inch.”
• Category 3 (three twists per inch)
• Category 5 (five twists per inch)
• Category 5e (five twists per inch, pairs are twisted around each
other)
Category 5 (and 5e) twisted-pair cabling usually contains four pairs of wire
(eight wires total), and each wire is assigned a color:

• White Orange
• Orange
• White Green
• Green
• White Blue
• Blue
• White Brown
• Brown

Types of Twisted-Pair Cables

Various types of twisted-pair cables can be used. A straight-through cable
is used in the following circumstances:
• From a host to a hub (or switch)
• From a router to a hub (or switch)
The pins (wires) on each end of a straight-through cable must be identical.

OSI Reference Model


- OSI Reference Model -

Network Reference Models

As computer network communication grew more prevalent, the need for a
consistent standard for vendor hardware and software became apparent.
Thus, the first development of a network reference model began in the
1970s, spearheaded by an international standards organization.
A network reference model serves as a blueprint, dictating how network
communication should occur. Programmers and engineers design products
that adhere to these models, allowing products from multiple manufacturers
to interoperate.
Network models are organized into several layers, with each layer assigned
a specific networking function. These functions are controlled by protocols,
which govern end-to-end communication between devices.
Without the framework that network models provide, all network hardware
and software would have been proprietary. Organizations would have been
locked into a single vendor’s equipment, and global networks like the
Internet would have been impractical or even impossible.

The two most widely recognized network reference models are:
• The Open Systems Interconnection (OSI) model
• The Department of Defense (DoD) model

The OSI model was the first true network model, and consisted of seven
layers. However, the OSI model has become deprecated over time, replaced
with more practical models like the TCP/IP (or DoD) reference model.
Network models are not physical entities. For example, there is no OSI
device. Devices and protocols operate at a specific layer of a model,
depending on the function. Not every protocol fits perfectly within a specific
layer, and some protocols spread across several layers.

The Open Systems Interconnection (OSI) model was developed in the
1970s and formalized in 1983 by the International Organization for
Standardization (ISO). It was the first networking model, and provided the
framework governing how information is sent across a network.
The OSI Model (ISO standard 7498) consists of seven layers, each
corresponding to a particular network function:

7 Application
6 Presentation
5 Session
4 Transport
3 Network
2 Data-link
1 Physical

Various mnemonics have been devised to help people remember the order of
the OSI model’s layers:

7 Application   All        Away
6 Presentation  People     Pizza
5 Session       Seem       Sausage
4 Transport     To         Throw
3 Network       Need       Not
2 Data-link     Data       Do
1 Physical      Processing Please

The ISO further developed an entire protocol suite based on the OSI model;
however, this OSI protocol suite was never widely implemented. More
common protocol suites can be difficult to fit within the OSI model’s layers,
and thus the model has been mostly deprecated.
A more practical model was developed by the Department of Defense
(DoD), and became the basis for the TCP/IP protocol suite (and
subsequently, the Internet). The DoD model is explained in detail later in
this guide.
The OSI model is still used predominantly for educational purposes, as
many protocols and devices are described by what layer they operate at.

The Upper Layers

The top three layers of the OSI model are often referred to as the upper
layers. Thus, protocols that operate at these layers are usually called
upper-layer protocols, and are generally implemented in software.
The function of the upper layers of the OSI model can be difficult to
visualize. Upper-layer protocols do not fit perfectly within each layer,
and several protocols function at multiple layers.

The Application layer

The Application layer (Layer 7) provides the actual interface between the
user application and the network. The user directly interacts with this layer.
Examples of application layer protocols include:

• FTP (via an FTP client)
• HTTP (via a web-browser)
• SMTP (via an email client)
• Telnet

The Presentation layer

The Presentation layer (Layer 6) controls the formatting of user data,
whether it is text, video, sound, or an image. The presentation layer ensures
that data from the sending device can be understood by the receiving device.
Additionally, the presentation layer is concerned with the encryption and
compression of data.

Examples of presentation layer formats include:
• Text (RTF, ASCII, EBCDIC)
• Music (MIDI, MP3, WAV)
• Images (GIF, JPG, TIF, PICT)
• Movies (MPEG, AVI, MOV)

The Session layer 

The Session layer (Layer 5) establishes, maintains, and ultimately
terminates connections between devices. Sessions can be full-duplex (send
and receive simultaneously), or half-duplex (send or receive, but not
simultaneously).
The four layers below the upper layers are often referred to as the lower
layers, and demonstrate the true benefit of learning the OSI model.

The Transport Layer

The Transport layer (Layer 4) is concerned with the reliable transfer of
data, end-to-end. This layer ensures (or in some cases, does not ensure) that
data arrives at its destination without corruption or data loss.
There are two types of transport layer communication:
• Connection-oriented - parameters must be agreed upon by both
parties before a connection is established.
• Connectionless – no parameters are established before data is sent.
Parameters that are negotiated by connection-oriented protocols include:
• Flow Control (Windowing) – dictating how much data can be sent
between acknowledgements
• Congestion Control
• Error-Checking
The transport layer does not actually send data. Instead, it segments data
into smaller pieces for transport. Each segment is assigned a sequence
number, so that the receiving device can reassemble the data on arrival.
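Segmentation and sequence-numbered reassembly can be sketched as follows. The function names and the fixed segment size are hypothetical; real TCP negotiates its maximum segment size:

```python
def segment(data, mss):
    """Split data into numbered segments, as the transport layer does."""
    return [(seq, data[i:i + mss])
            for seq, i in enumerate(range(0, len(data), mss))]

def reassemble(segments):
    """Reorder by sequence number and rebuild the original data."""
    return b"".join(chunk for _, chunk in sorted(segments))

segs = segment(b"hello, transport layer", mss=5)
segs.reverse()                         # simulate out-of-order arrival
assert reassemble(segs) == b"hello, transport layer"
```

Because every segment carries its sequence number, the receiver can rebuild the data even when segments arrive out of order.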
Examples of transport layer protocols include Transmission Control
Protocol (TCP) and User Datagram Protocol (UDP). Both protocols are
covered extensively in another guide.
Sequenced Packet Exchange (SPX) is the transport layer protocol in the
IPX protocol suite.

The Network Layer

The Network layer (Layer 3) has two key responsibilities. First, this layer
controls the logical addressing of devices. Logical addresses are organized
as a hierarchy, and are not hard-coded on devices. Second, the network layer
determines the best path to a particular destination network, and routes the
data appropriately.
Examples of network layer protocols include Internet Protocol (IP) and
Internetwork Packet Exchange (IPX). IP version 4 (IPv4) and IP version 6
(IPv6) are covered in nauseating detail in separate guides.

The Data-Link Layer

The Data-Link layer (Layer 2) actually consists of two sub-layers:
• Logical Link Control (LLC) sub-layer
• Media Access Control (MAC) sub-layer
The LLC sub-layer serves as the intermediary between the physical link and
all higher layer protocols. It ensures that protocols like IP can function
regardless of what type of physical link is being used.
Additionally, the LLC sub-layer can use flow-control and error-checking,
either in conjunction with a transport layer protocol (such as TCP), or
instead of a transport layer protocol (such as UDP).
The MAC sub-layer controls access to the physical medium, serving as
mediator if multiple devices are competing for the same physical link.
Specific technologies have various methods of accomplishing this (for
example: Ethernet uses CSMA/CD, Token Ring utilizes a token).
The data-link layer packages the higher-layer data into frames, so that the
data can be put onto the physical wire. This packaging process is referred to
as framing or encapsulation. The encapsulation type used is dependent on
the underlying data-link/physical technology (such as Ethernet, Token Ring,
FDDI, Frame-Relay, etc.)
Included in this frame is a source and destination hardware (or physical)
address. Hardware addresses usually contain no hierarchy, and are often
hard-coded on a device. Each device must have a unique hardware address
on the network.

The Physical Layer

The Physical layer (Layer 1) controls the transferring of bits onto the
physical wire. Devices such as network cards, hubs, and cabling are all
considered physical layer equipment.
Physical-layer devices are covered extensively in other guides.

Explanation of Encapsulation

As data is passed from the user application down the virtual layers of the
OSI model, each of the lower layers adds a header (and sometimes a
trailer) containing protocol information specific to that layer. These headers
are called Protocol Data Units (PDUs), and the process of adding these
headers is called encapsulation.
For example, the Transport layer adds a header containing flow control and
sequencing information (when using TCP). The Network layer header adds
logical addressing information, and the Data-Link header contains physical
addressing and other hardware specific information.

The PDU of each layer is identified with a different term:

Layer         PDU Name
Application   -
Presentation  -
Session       -
Transport     Segments
Network       Packets
Data-Link     Frames
Physical      Bits
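The encapsulation and decapsulation process can be mimicked with placeholder headers. The bracketed labels below are stand-ins for real binary headers, not actual protocol formats:

```python
def encapsulate(data):
    """Wrap application data in per-layer headers on the way down."""
    segment = b"[TCP]" + data                  # Transport: segment
    packet = b"[IP]" + segment                 # Network: packet
    frame = b"[ETH]" + packet + b"[FCS]"       # Data-Link: frame + trailer
    return frame

def decapsulate(frame):
    """Each receiving layer strips its own header before passing data up."""
    packet = frame[len(b"[ETH]"):-len(b"[FCS]")]
    segment = packet[len(b"[IP]"):]
    return segment[len(b"[TCP]"):]

assert decapsulate(encapsulate(b"GET /")) == b"GET /"
```

Each layer on the receiver processes and removes only its own header, which is exactly the peer-to-peer layer communication described above.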

Each layer communicates with the corresponding layer on the receiving
device. For example, on the sending device, hardware addressing is placed
in a Data-Link layer header. On the receiving device, that Data-Link layer
header is processed and stripped away before it is sent up to the Network
and other higher layers.
Specific devices are often identified by the OSI layer the device operates at;
or, more specifically, what header or PDU the device processes. For
example, switches are usually identified as Layer-2 devices, as switches
process hardware (usually MAC) address information stored in the Data-
Link header of a frame.
Similarly, routers are identified as Layer-3 devices, as routers look for
logical (usually IP) addressing information in the Network header of a
packet.

OSI Reference Model Example

The following illustrates the OSI model in more practical terms, using a web
browser as an example:
• At the Application layer, a web browser serves as the user interface for
accessing websites. Specifically, HTTP interfaces between the web
browser and the web server.
• The format of the data being accessed is a Presentation layer function.
Common data formats on the Internet include HTML, XML, PHP, GIF,
JPG, etc. Additionally, any encryption or compression mechanisms used
on a webpage are a function of this layer.
• The Session layer establishes the connection between the requesting
computer and the web server. It determines whether the communication
is half-duplex or full-duplex.
• The TCP protocol ensures the reliable delivery of data from the web
server to the client. These are functions of the Transport layer.
• The logical (in this case, IP) addresses configured on the client and web
server are a Network Layer function. Additionally, the routers that
determine the best path from the client to the web server operate at this
layer.
• IP addresses are translated to hardware addresses at the Data-Link
layer.
• The actual cabling, network cards, hubs, and other devices that provide
the physical connection between the client and the web server operate at
the Physical layer.

IP and the DoD Model

The Internet Protocol (IP) was developed by the Department of Defense
(DoD) during the late 1970s. It was included in a group of protocols that
became known as the TCP/IP protocol suite.
The DoD developed their own networking model to organize and define the
TCP/IP protocol suite. This became known as the DoD Model, and consists
of four layers:

OSI Model          DoD Model

7 Application
6 Presentation     4 Application
5 Session

4 Transport        3 Host-to-Host

3 Network          2 Internet

2 Data-link
1 Physical         1 Network Access

The DoD model’s streamlined approach proved more practical, as several
protocols spread across multiple layers of the OSI Model.
The following chart diagrams where protocols fit in the DoD model:
Layer Example Protocols
Application FTP, HTTP, SMTP
Host-to-Host TCP, UDP
Internet IP
Network Access Ethernet

Basic Networking


- Introduction to Networking -

What is a Network?

A network is defined as devices connected together to share information
and services. The types of data and services that can be shared on a network
are endless - documents, music, email, websites, databases, printers, faxes,
telephony, videoconferencing, etc.
Protocols are “rules” that govern the method by which devices share data
and services. Protocols are covered in great detail in subsequent sections.

Basic Network Types

Networks are generally broken down into two types:
LANs (Local Area Networks) - a high-speed network that covers a
relatively small geographic area, usually contained within a single building
or campus. A LAN is usually under the administrative control of a single
entity/organization.
WANs (Wide Area Networks) – The book definition of a WAN is a
network that spans large geographical locations, usually to interconnect
multiple LANs.
A more practical definition describes a WAN as a network that traverses a
public network or commercial carrier, using one of several WAN
technologies. Thus, a WAN can be under the administrative control of
several entities or organizations, and does not need to “span large
geographical distances.”

Network “Architectures”

A host refers to any device that is connected to your network. Some define a
host as any device that has been assigned a network address.

A host can serve one or more functions:
• A host can request data (often referred to as a client)
• A host can provide data (often referred to as a server)
• A host can both request and provide data (often referred to as a peer)

Because of these varying functions, multiple network “architectures” have
been developed, including:

• Peer-to-Peer networks
• Client/Server networks
• Mainframe/Terminal networks

When using a peer-to-peer architecture, all hosts on the network can both
request and provide data and services. For example, configuring two
Windows XP workstations to share files would be considered a peer-to-peer
network.
Though peer-to-peer networks are simple to configure, there are several key
disadvantages to this type of architecture. First, data is spread across
multiple devices, making it difficult to manage and back-up that data.
Second, security becomes problematic, as you must configure individual
permissions and user accounts on each host.
When using a client/server architecture, hosts are assigned specific roles.
Clients request data and services stored on Servers. Connecting Windows
XP workstations to a Windows 2003 domain controller would be considered
a client/server network.
While client/server environments tend to be more complex than peer-to-peer
networks, there are several advantages. With data now centrally located on
a server or servers, there is only one place to manage, back-up, and secure
that data. This simplified management allows client/server networks to scale
much larger than peer-to-peer. The key disadvantage of client/server
architecture is that it introduces a single point of failure.
When using a mainframe/terminal architecture, often referred to as a
thin-client environment, a single device (the mainframe) stores all data and
services for the network. This provides the same advantage as a client/server
environment – centralized management and security of data.
Additionally, the mainframe performs all processing functions for the dumb
terminals (or thin-clients) that connect to the mainframe. The thin clients
perform no processing whatsoever, but serve only as input and output
devices into the mainframe. Put more simply, the mainframe handles all the
“thinking” for the thin-clients.
A typical hardware thin-client consists of a keyboard/mouse, a display, and
an interface card into the network. Software thin-clients are also prevalent,
and run on top of a client operating system (such as Windows XP or Linux).
Windows XP’s remote desktop is an example of a thin-client application.
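The client/server roles described above can be sketched in a few lines of Python: one process provides data (the server), another requests it (the client). This is a minimal illustration, not part of the original text; the host, port, and message contents are arbitrary choices for the demo.

```python
# Minimal client/server sketch: the server provides data on request,
# the client requests it. Host/port values are arbitrary for this demo.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007

def run_server():
    # Server role: listen, receive one request, provide the data.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen(1)
    conn, _ = srv.accept()
    request = conn.recv(1024)
    conn.sendall(b"data for " + request)  # serve the requested data
    conn.close()
    srv.close()

def run_client():
    # Client role: connect, send a request, receive the data.
    cli = None
    for _ in range(50):  # retry briefly until the server is listening
        try:
            cli = socket.create_connection((HOST, PORT))
            break
        except ConnectionRefusedError:
            time.sleep(0.1)
    cli.sendall(b"client-1")
    reply = cli.recv(1024)
    cli.close()
    return reply

server = threading.Thread(target=run_server)
server.start()
reply = run_client()
server.join()
print(reply.decode())  # data for client-1
```

In a peer-to-peer arrangement, each host would run both roles at once.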

Saturday 17 November 2012

IPV6


- IPv6 Addressing
IPv6 Basics

The most widespread implementation of IP currently is IPv4, which utilizes 
a 32-bit address. Mathematically, a 32-bit address can provide roughly 4 
billion unique IP addresses (2^32 = 4,294,967,296). Practically, the number of 
usable IPv4 addresses is much lower, as many addresses are reserved for 
diagnostic, experimental, or multicast purposes. 
The explosive growth of the Internet and corporate networks quickly led to 
an IPv4 address shortage. Various solutions were developed to alleviate this 
shortage, including CIDR, NAT, and Private Addressing. However, these 
solutions could only serve as temporary fixes. 
In response to the address shortage, IPv6 was developed. IPv6 increases the 
address size to 128 bits, providing a nearly unlimited supply of addresses 
(340,282,366,920,938,463,463,374,607,431,768,211,456 to be exact). This 
provides roughly 50 octillion addresses per person alive on Earth today, or 
roughly 3.7 × 10^21 addresses per square inch of the Earth’s surface.
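The address-space arithmetic above is easy to verify directly; the population figure of roughly 7 billion is an assumption for the per-person estimate.

```python
# Verify the IPv4 and IPv6 address-space sizes quoted above.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(ipv4_total)   # 4294967296 (roughly 4 billion)
print(ipv6_total)   # 340282366920938463463374607431768211456

# Rough per-person figure, assuming ~7 billion people on Earth:
per_person = ipv6_total // 7_000_000_000
print(f"{per_person:.2e}")  # on the order of 5e28, i.e. ~50 octillion
```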

IPv6 offers the following features: 

• Increased Address Space and Scalability – providing the absurd
number of possible addresses stated previously.
• Simplified Configuration – allows hosts to auto-configure their IPv6
addresses, based on network prefixes advertised by routers.
• Integrated Security – provides built-in authentication and encryption
in the IPv6 network header.
• Compatibility with IPv4 – simplifies address migration through
transition mechanisms such as dual-stack and tunneling.


The IPv6 Address

The IPv6 address is 128 bits, as opposed to the 32-bit IPv4 address. Also
unlike IPv4, the IPv6 address is represented in hexadecimal notation,
separated by colons.
An example of an IPv6 address would be:
1254:1532:26B1:CC14:0123:1111:2222:3333

Each “grouping” (from here on called fields) of hexadecimal digits is 16
bits, with a total of eight fields. The hexadecimal values of an IPv6 address
are not case-sensitive.
We can drop any leading zeros in each field of an IPv6 address. For
example, consider the following address:
1423:0021:0C13:CC1E:3142:0001:2222:3333

We can condense that address to: 1423:21:C13:CC1E:3142:1:2222:3333
Only leading zeros can be condensed. If an entire field is composed of
zeros, we can compact the address even further. Consider the following address:
F12F:0000:0000:CC1E:2412:1111:2222:3333

The condensed address would be: F12F::CC1E:2412:1111:2222:3333
Notice the double colons (::). We can only condense one set of contiguous
zero fields. Thus, if we had the following address:
F12F:0000:0000:CC1E:2412:0000:0000:3333

We could not condense that to: F12F::CC1E:2412::3333
The address would now be ambiguous, as we wouldn’t know how many zero
fields each double colon represents.
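Python's standard `ipaddress` module applies these same compression rules, so it can be used to check the examples above. (Note that it also normalizes the hexadecimal digits to lowercase, which is fine since IPv6 addresses are not case-sensitive.)

```python
# Verify the IPv6 compression rules with the standard ipaddress module.
import ipaddress

# Leading zeros in each field may be dropped:
a = ipaddress.IPv6Address("1423:0021:0C13:CC1E:3142:0001:2222:3333")
print(a)  # 1423:21:c13:cc1e:3142:1:2222:3333

# One contiguous run of all-zero fields collapses to "::":
b = ipaddress.IPv6Address("F12F:0000:0000:CC1E:2412:1111:2222:3333")
print(b)  # f12f::cc1e:2412:1111:2222:3333

# With two separate zero runs, only one may be compressed; here the
# first run is compressed and the second is written out field by field:
c = ipaddress.IPv6Address("F12F:0000:0000:CC1E:2412:0000:0000:3333")
print(c)  # f12f::cc1e:2412:0:0:3333
```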


The IPv6 Address Hierarchy

IPv4 separated its address space into specific classes. The class of an IPv4
address was identified by the high-order bits of the first octet:
• Class A - (00000001 – 01111111, or 1 - 127)
• Class B - (10000000 – 10111111, or 128 - 191)
• Class C - (11000000 – 11011111, or 192 - 223)
• Class D - (11100000 – 11101111, or 224 - 239)
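The classful test above reads the high-order bits of the first octet; a short sketch (the function name is my own, and the trailing Class E range is the remainder not listed above):

```python
# Determine the IPv4 class from the high-order bits of the first octet.
def ipv4_class(first_octet: int) -> str:
    if first_octet >> 7 == 0b0:       # 0xxxxxxx -> 1-127
        return "A"
    if first_octet >> 6 == 0b10:      # 10xxxxxx -> 128-191
        return "B"
    if first_octet >> 5 == 0b110:     # 110xxxxx -> 192-223
        return "C"
    if first_octet >> 4 == 0b1110:    # 1110xxxx -> 224-239
        return "D"
    return "E"                        # 1111xxxx -> remaining range

print(ipv4_class(10))   # A
print(ipv4_class(172))  # B
print(ipv4_class(192))  # C
print(ipv4_class(224))  # D
```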
IPv6’s addressing structure is far more scalable. Less than 20% of the IPv6
address space has been designated for use, currently. The potential for
growth is enormous.
The address space that has been allocated is organized into several types,
determined by the high-order bits of the first field:
• Special Addresses – addresses begin 00xx:
• Link Local – addresses begin FE8x:
• Site Local – addresses begin FECx:
• Aggregate Global – addresses begin 2xxx: or 3xxx:
• Multicasts – addresses begin FFxx:
• Anycasts – share the format of unicast addresses, and thus have no
unique prefix of their own
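The prefix test described above can be sketched as a simple lookup on the high-order bits of the first field. This is an illustration of the list, not an exhaustive classifier, and anycasts cannot be identified this way since they share the unicast format.

```python
# Map the high-order bits of the first 16-bit field to the IPv6
# address types listed above.
def ipv6_type(address: str) -> str:
    first = int(address.split(":")[0], 16)  # first field as an integer
    if first >> 8 == 0x00:
        return "Special"            # 00xx:
    if first >> 4 == 0xFE8:
        return "Link Local"         # FE8x:
    if first >> 4 == 0xFEC:
        return "Site Local"         # FECx:
    if first >> 12 in (0x2, 0x3):
        return "Aggregate Global"   # 2xxx: or 3xxx:
    if first >> 8 == 0xFF:
        return "Multicast"          # FFxx:
    return "Unassigned"

print(ipv6_type("FE80:0000:0000:0000:0123:1111:2222:3333"))  # Link Local
print(ipv6_type("2001:0DB8:0000:0000:0000:0000:0000:0001"))  # Aggregate Global
print(ipv6_type("FF02:0000:0000:0000:0000:0000:0000:0001"))  # Multicast
```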



Friday 9 November 2012


Lab Infrastructure of Network Ace





Thursday 8 November 2012



Cisco Certification Path

Best Institute for CCNA, CCNP and CCIE Cisco Certification Training in Indore

 Network Ace is the first of its kind and the best institute providing training on Cisco certified courses (CCNA, CCNP, CCSP and CCIE), with a 24x7 operating lab facility.
Network Ace is the first choice for those who aspire to choose networking as a career and progress with their ambitions in life.
The curriculum is designed by our experienced CCIE trainers to provide coaching aligned with Cisco examinations, along with network solutions.
We offer Cisco certification courses like CCNA, CCNP, CCSP and CCIE on real Cisco labs, which include Cisco routers, switches, ASA firewalls and IPS. We also offer Network Security and Managed Network services in India.
Policy: Network Ace is committed to producing qualified networking professionals by providing educational opportunities and high-class training in the networking framework.

Monday 5 November 2012

RIP CONFIGURATION



RIP has two versions
1. Version 1
2. Version 2



RIPv1-

R1(config)#router rip  *
R1(config-router)#network 10.0.0.0
R1(config-router)#network 20.0.0.0

R2(config)#router rip
R2(config-router)#network 20.0.0.0
R2(config-router)#network 30.0.0.0

RIPv2 – described in another section.

R1(config)#router rip
R1(config-router)#version 2
R1(config-router)#network 10.0.0.0
R1(config-router)#network 20.0.0.0
R1(config-router)#no auto-summary **

R2(config)#router rip
R2(config-router)#version 2
R2(config-router)#network 20.0.0.0
R2(config-router)#network 30.0.0.0
R2(config-router)#no auto-summary **

*In RIP, we specify only those networks that are directly connected to the router. RIP sends routing table updates to its neighbors every 30 seconds. RIP uses hop count as its metric. The administrative distance of RIP is 120.
**The command “no auto-summary” tells RIP not to summarize networks to their classful boundaries.
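The distance-vector behavior behind RIP can be sketched outside of Cisco IOS: each router advertises its routing table to its neighbors, and a router installs a route when the advertised hop count plus one beats what it already has. This is a rough illustration in Python, not actual RIP code; the table format and function name are my own.

```python
# Rough sketch of RIP's distance-vector update logic (hop-count metric).
INFINITY = 16  # RIP treats 16 hops as unreachable

def process_update(table, neighbor, advertised):
    """Merge a neighbor's advertised routes into our routing table.

    table:      {network: (next_hop, hop_count)} -- our current routes
    advertised: {network: hop_count} -- as seen by the neighbor
    """
    for network, hops in advertised.items():
        new_hops = min(hops + 1, INFINITY)   # one more hop via the neighbor
        current = table.get(network, (None, INFINITY))
        if new_hops < current[1]:            # better metric wins
            table[network] = (neighbor, new_hops)
    return table

# R1 is directly connected to 10.0.0.0 and 20.0.0.0 (0 hops);
# R2 advertises 30.0.0.0 at 0 hops, so R1 installs it at 1 hop.
r1 = {"10.0.0.0": (None, 0), "20.0.0.0": (None, 0)}
process_update(r1, "R2", {"20.0.0.0": 0, "30.0.0.0": 0})
print(r1["30.0.0.0"])  # ('R2', 1)
```

Note that the directly connected route to 20.0.0.0 is kept, since the advertised path via R2 would cost one hop more.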
WAN NETWORK USING RIP, EIGRP AND OSPF


Sunday 4 November 2012



Best Offer...

Take admission in CCNP Training and Get CCNA Training Free.

Network Ace

406,4th Floor, Shagun Building-1,
Near Vijay Nagar Square,
AB Road.
Indore.
Contact : 08889935005
Mail : info@networkace.in