Thursday, February 21, 2013

CCDA Notes: WAN Technology

WAN Technology

When designing a WAN solution, the requirements typically stem from two goals:
  • Service Level Agreement (SLA): This agreement defines the availability of the network, based on what level of availability, downtime and impact are acceptable to the organization.
  • Cost and Usage: Consider the budget, expected utilization and usage requirements
Three objectives of effective WAN solution design:
  1. WAN must support policies and goals of the organization
  2. WAN technology selected must meet application requirements as well as future growth
  3. The proposed design must be within the budget allocated
The WAN interfaces with the Enterprise Edge module. There can be multiple connections, commonly used connectivity modules include Internet, DMZ, and site-to-site circuits. ISPs offer many options for Internet and DMZ connectivity as well as inter-site connectivity such as MPLS VPN/WAN. Alternative connection options include DSL/cable with IPSEC VPN.

WAN technology can be point-to-point or point-to-multipoint, such as MPLS or Frame Relay. Public WAN connections over the Internet such as cable/DSL are available as well. Usually Internet connections have a much lower SLA than MPLS/Frame Relay connections.

WAN Transport Technology

When choosing which WAN technology to implement, consideration must be taken for whether public Internet transport or private WAN connections are required. Geography also plays a role in what WAN technologies are available in a given area. Major cities have many options, while rural areas typically have few. Here are some WAN technologies compared/contrasted in terms of bandwidth, reliability, latency and cost:

ISDN: Low bandwidth, medium reliability, medium latency, low cost
DSL: Low/medium bandwidth, low reliability, medium latency, low cost
Cable: Low/medium bandwidth, low reliability, medium latency, low cost
Wireless: Low/medium bandwidth, low reliability, medium latency, medium cost
Frame Relay: Low/medium bandwidth, medium reliability, low latency, medium cost
TDM: Medium bandwidth, high reliability, low latency, medium cost
Metro Ethernet: Medium/high bandwidth, high reliability, low latency, medium cost
SONET/SDH: High bandwidth, high reliability, low latency, high cost
MPLS: High bandwidth, high reliability, low latency, high cost
Dark Fiber: High bandwidth, high reliability, low latency, high cost
DWDM: High bandwidth, high reliability, low latency, high cost
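The comparison above can be expressed as a simple lookup table. A minimal sketch in Python, using exactly the attribute values listed above (the `candidates` helper is purely illustrative):

```python
# WAN technology comparison from the list above, expressed as a lookup table.
WAN_TECH = {
    "ISDN":           {"bandwidth": "low",         "reliability": "medium", "latency": "medium", "cost": "low"},
    "DSL":            {"bandwidth": "low/medium",  "reliability": "low",    "latency": "medium", "cost": "low"},
    "Cable":          {"bandwidth": "low/medium",  "reliability": "low",    "latency": "medium", "cost": "low"},
    "Wireless":       {"bandwidth": "low/medium",  "reliability": "low",    "latency": "medium", "cost": "medium"},
    "Frame Relay":    {"bandwidth": "low/medium",  "reliability": "medium", "latency": "low",    "cost": "medium"},
    "TDM":            {"bandwidth": "medium",      "reliability": "high",   "latency": "low",    "cost": "medium"},
    "Metro Ethernet": {"bandwidth": "medium/high", "reliability": "high",   "latency": "low",    "cost": "medium"},
    "SONET/SDH":      {"bandwidth": "high",        "reliability": "high",   "latency": "low",    "cost": "high"},
    "MPLS":           {"bandwidth": "high",        "reliability": "high",   "latency": "low",    "cost": "high"},
    "Dark Fiber":     {"bandwidth": "high",        "reliability": "high",   "latency": "low",    "cost": "high"},
    "DWDM":           {"bandwidth": "high",        "reliability": "high",   "latency": "low",    "cost": "high"},
}

def candidates(latency, reliability):
    """Return technologies matching the requested latency and reliability."""
    return [t for t, a in WAN_TECH.items()
            if a["latency"] == latency and a["reliability"] == reliability]

print(candidates("low", "high"))
```

For a latency-sensitive, high-availability design this shortlists TDM, Metro Ethernet, SONET/SDH, MPLS, dark fiber, and DWDM; cost and geography then narrow the choice.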

The technologies above are explained in the sections that follow.

Integrated Services Digital Network (ISDN)

Integrated Services Digital Network was standardized in the early 1980s. It is an all-digital phone line that carries voice and data. It comes in two flavors: Basic Rate Interface (BRI) and Primary Rate Interface (PRI).


BRI consists of two B channels and one D channel. Both B channels operate at 64 kbps and carry data. The D channel handles signaling/control information and operates at 16 kbps. An additional 48 kbps is used for framing and synchronization, for a total data rate of 192 kbps.


PRI consists of 23 B channels and 1 D channel in North America and Japan. Each channel operates at 64 kbps; with 8 kbps of framing overhead, the total is 1.544 Mbps. In Europe and Australia the service has 30 B channels and one 64 kbps D channel (2.048 Mbps total).
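The channel arithmetic above can be checked directly. A quick sketch (the 8 kbps T1 framing figure and the E1 framing channel are standard values, spelled out here for the totals to work):

```python
# ISDN channel arithmetic from the notes above.
B, D_BRI, SYNC = 64_000, 16_000, 48_000          # bits per second

bri_total = 2 * B + D_BRI + SYNC                 # 2B + D + framing/sync
pri_na    = 23 * B + 64_000 + 8_000              # 23B + 64k D + 8 kbps T1 framing
pri_eu    = 30 * B + 64_000 + 64_000             # 30B + 64k D + 64 kbps E1 framing slot

print(bri_total)   # 192000  -> 192 kbps
print(pri_na)      # 1544000 -> 1.544 Mbps
print(pri_eu)      # 2048000 -> 2.048 Mbps
```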

Digital Subscriber Line (DSL)

DSL provides high speed Internet over plain old copper telephone cable using frequencies not utilized in normal voice calls.

ADSL is the most popular and most widely available flavor of DSL. The service is asymmetric: upstream is usually much slower than downstream. ADSL's main drawback is that it must be deployed geographically close to a digital subscriber line access multiplexer (DSLAM), typically less than 2 km. With DSL, the customer premises equipment (CPE) is generally a DSL modem and a PC. An ADSL circuit consists of a twisted-pair telephone line carrying three information channels:
  • Medium-speed downstream channel
  • Low-speed upstream channel
  • Basic telephone service channel
DSL splitters separate the voice and data traffic. Because DSL traffic crosses the public Internet, DSL should be used in conjunction with a VPN to connect to the corporate network.



Sunday, February 17, 2013

CCDA Notes: Wireless LAN Design (Mobility and WLAN Design Best Practice)

WLAN Mobility

AP Controller Equipment Scaling

Cisco provides different solutions for supporting differing numbers of APs within an enterprise: standalone WLCs, modules for Integrated Services Routers (ISRs), and modules for Catalyst 6500 switches. Below are the WLC types, each followed by the number of APs that can be associated:
  • 2100 series WLC: 25
  • WLC module for ISR: 25
  • Catalyst 3750 Integrated WLC: 50
  • 4400 series WLC: 100
  • Catalyst 6500 WiSM: 300
  • 5500 series WLC: 500

To scale beyond the 48 APs supported on a single Cisco WLC port:
  1. Use multiple AP interfaces: This option only works on 4400 series WLCs
  2. Use link aggregation (LAG): This option works on 5500 and 4400 series WLC, and is the default operation on Catalyst 3750 Integrated WLCs and Catalyst 6500 WiSM

The largest limitation of LAG is that only one LAG may exist per WLC, so if a LAG is configured, all physical ports are members. This means the WLC can be connected to only one neighboring device.

Roaming and Mobility Groups

Roaming occurs when a user moves from one AP association to another as the user physically moves around. Roaming must be seamless to the end user and can be intracontroller or intercontroller.

Intracontroller Roaming

This occurs when a user moves between APs that are both associated with the same WLC. The WLC updates its client database with the new AP association and does not change the client's IP address. If required, a client is reauthenticated when changing AP associations and a new security association is created.

Layer 2 Intercontroller Roaming

This occurs when a user moves between two APs that are associated to different WLCs, but both WLCs are part of the same subnet. When this sort of roaming occurs, the WLC passes its client database to the other WLC, and no IP address change happens for the client. If required the client is reauthenticated and a new security association is created.

Layer 3 Intercontroller Roaming

This occurs when a client moves between APs associated to WLCs that are on different subnets. When the client moves its association, the new WLC and the previous WLC exchange mobility messages. The client database is not moved to the new WLC, instead the first WLC marks the client as an 'anchor' entry and the new WLC marks the client as a 'foreign' entry. The wireless client's IP address is preserved and, if required, the client reauthenticates and gets a new security association. From then on, traffic is routed asymmetrically. Traffic from the client is forwarded to the wired network by the new WLC, but traffic that is destined for the client is forwarded from the wired network to the original WLC.  The original WLC then forwards that traffic to the new WLC via Ether-in-IP tunneling, which is then sent from the new WLC to the client.
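The anchor/foreign bookkeeping described above can be sketched as a small model. This is purely illustrative (not Cisco code), and all names are hypothetical:

```python
# Illustrative sketch of the anchor/foreign entries created during
# Layer 3 intercontroller roaming, as described above.
def layer3_roam(wlc_db, client_mac, old_wlc, new_wlc):
    """Mark the client 'anchor' on the original WLC and 'foreign' on the
    new one. The client keeps its IP address; only the entry roles change."""
    entry = wlc_db[old_wlc][client_mac]              # original client entry
    wlc_db[old_wlc][client_mac] = {**entry, "role": "anchor"}
    wlc_db[new_wlc][client_mac] = {**entry, "role": "foreign"}
    return wlc_db[new_wlc][client_mac]["ip"]         # IP is preserved

db = {"WLC1": {"aa:bb": {"ip": "10.1.1.5", "role": "local"}}, "WLC2": {}}
ip = layer3_roam(db, "aa:bb", "WLC1", "WLC2")
print(ip, db["WLC1"]["aa:bb"]["role"], db["WLC2"]["aa:bb"]["role"])
# 10.1.1.5 anchor foreign
```

The asymmetric return path follows from this: inbound traffic still arrives at the anchor WLC, which tunnels it to the foreign WLC.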

Mobility Groups

Mobility groups allow WLCs to peer with each other to allow roaming across the controller's boundaries, AP load balancing and redundancy. When WLCs are placed into the same mobility group, they will exchange mobility messages and the EtherIP tunneling is possible when roaming occurs. For this reason WLCs that are meant to be redundant and allow roaming should be placed into the same mobility groups.

Up to 24 WLCs can be placed into a mobility group, and the devices in the group determine how many APs can be supported. WLCs can also be configured with mobility lists, which list the WLCs belonging to each mobility group. With mobility lists configured on the WLCs, clients can roam between mobility groups. Mobility lists can include up to 48 WLCs with Release 5.0, or 72 with Release 5.1 or later.

WLCs use UDP port 16666 for unencrypted mobility messages and UDP port 16667 for encrypted messages. APs learn the IP addresses of the other mobility group members when joining via CAPWAP.

Cisco best practice is to minimize intercontroller roaming; when it is needed, Layer 2 intercontroller roaming is preferred because it is far more efficient. Total round-trip travel time between controllers should be under 10 ms. Proactive key caching (PKC) or Cisco Centralized Key Management (CCKM, part of Cisco Compatible Extensions Version 4) is recommended to speed up and secure roaming.

WLAN Design Best Practice

Controller Redundancy: Dynamic or Deterministic

Deterministic redundancy is best practice and requires APs to be configured with a primary/secondary/tertiary controller preference. This requires more front-end work, but allows for deterministic failover and predictability. Deterministic advantages include:
  • Predictability
  • Network scalability
  • Flexible/powerful redundancy options
  • Faster failover
  • Deterministic fallback

Dynamic redundancy uses CAPWAP to load balance APs across WLCs, by populating each AP with a backup WLC. This solution works best when all WLCs are located centrally since it is dynamic. Dynamic advantages include:
  • Easier configuration
  • Dynamic AP load balancing

Unpredictable operation and longer failover occurs with dynamic redundancy, as well as a lack of other options for failover.

N+1 WLC Redundancy

With this redundancy option, a single WLC is configured as a backup for multiple WLCs. This could cause the backup to become oversubscribed.

N+N WLC Redundancy

With this redundancy option, an equal number of backup WLCs are configured. A pair of WLCs on one floor may be configured as backup WLCs for another floor, and vice versa. There needs to be enough capacity to allow for failover if needed (no more than 50% capacity used).
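The 50% capacity rule above reduces to a one-line check. A hedged sketch with hypothetical numbers (a pair of 100-AP controllers is assumed for the example):

```python
# Sketch of the N+N failover capacity check: each WLC must run at no more
# than 50% of its AP capacity so the survivor can absorb its partner's APs.
def can_failover(aps_a, aps_b, capacity):
    """True if one WLC can carry both AP loads after the other fails."""
    return aps_a + aps_b <= capacity

# Two WLCs rated for 100 APs each, running at 50 APs apiece:
print(can_failover(50, 50, 100))   # True
print(can_failover(80, 50, 100))   # False: 130 APs exceed one WLC's capacity
```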

N+N+1 WLC Redundancy

With this redundancy option, an equal number of controllers are configured as backups for each other (as above), and a tertiary backup WLC is configured as well. This tertiary controller backs up the secondary controllers and is usually placed in the data center or NOC.

Radio Management/Radio Groups

Because of the limited ISM frequencies available to 802.11b/g/n, only three non-overlapping channels (1, 6, 11) can be used. Best practice is to limit each AP to 20 data devices, 7 concurrent Voice over WLAN (VoWLAN) calls using the G.711 codec, or 8 concurrent VoWLAN calls using G.729.
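The per-AP limits above translate into a simple sizing calculation. A sketch assuming the 20-device and 7/8-call figures from the text (the function name is illustrative):

```python
import math

# Sketch of the AP sizing guideline above: at most 20 data devices per AP,
# 7 concurrent G.711 VoWLAN calls, or 8 concurrent G.729 calls.
CALLS_PER_AP = {"G.711": 7, "G.729": 8}

def aps_needed(data_devices, calls=0, codec="G.711"):
    """Minimum number of APs to stay within the per-AP limits."""
    for_data = math.ceil(data_devices / 20)
    for_voice = math.ceil(calls / CALLS_PER_AP[codec])
    return max(for_data, for_voice, 1)

print(aps_needed(90))             # 5 APs for 90 data devices
print(aps_needed(40, calls=21))   # 3 APs: voice dominates (21 calls / 7 per AP)
```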

As the WLAN user population grows, additional APs should be added to maintain these ratios. Cisco Radio Resource Management (RRM) manages AP RF channel and power configuration to minimize interference. WLCs use the RRM algorithm to automatically optimize and self-heal the radio frequencies using these functions:
  • Radio Resource Monitor: LWAPs monitor all radio channels and monitor for rogue APs, clients and interfering APs
  • Dynamic Channel Assignment: WLCs automatically manage channels for APs to avoid interference
  • Interference Detection/Avoidance: Interference above a predefined threshold (10% by default) is detected and avoided
  • Dynamic Transmit Power Control: WLCs automatically adjust broadcast power of APs
  • Coverage Hole Detection/Correction: WLCs can adjust AP power output if clients report low signals
  • Client/Network Load Balancing: Clients can be influenced to connect to certain APs to load balance

WLCs can use RRM to raise power levels and channels of APs to compensate for lost/downed APs.

RF Groups

RF groups are clusters of WLCs that coordinate their RRM calculations. When WLCs join the group, the RRM calculation expands to include them. APs send neighbor messages to each other, and if a message is received at -80 dBm or stronger, the controllers form an RF group. WLCs elect a leader that analyzes the RF data and makes RRM decisions. The leader exchanges messages among RF group members on UDP port 12114 for 802.11b/g/n and UDP port 12115 for 802.11a.

How RF groups form:
  1. APs send out neighbor messages looking for other APs, which includes an encrypted shared secret key that is preconfigured on trusted WLCs
  2. Messages with the same secret key are validated and trusted. These messages must be transmitted above -80dBm to form the group.
  3. Members of the formed RF group elect a leader to analyze and push a master power/channel scheme for the group. The leader receives realtime data about the WLAN to make this calculation
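The membership check in steps 1 and 2 can be sketched as a predicate: a neighbor message counts only if the shared secret matches and it arrives at -80 dBm or stronger. Names here are hypothetical:

```python
# Illustrative sketch of the RF group membership check described above.
def accept_neighbor(msg, shared_secret, threshold_dbm=-80):
    """A neighbor message is trusted only if its secret matches and it
    was received at or above the RSSI threshold (-80 dBm)."""
    return msg["secret"] == shared_secret and msg["rssi_dbm"] >= threshold_dbm

print(accept_neighbor({"secret": "k1", "rssi_dbm": -72}, "k1"))   # True
print(accept_neighbor({"secret": "k1", "rssi_dbm": -85}, "k1"))   # False: too weak
print(accept_neighbor({"secret": "k2", "rssi_dbm": -60}, "k1"))   # False: wrong key
```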

RF Site Survey

Site surveys are done similarly to surveys for wired network design. The RF site survey identifies customer requirements, determines the coverage needed, and checks for interference. The site survey should consist of the following steps:
  1. Define customer requirements, what applications are needed (such as VOIP) and what types of devices need to be supported as well as where these wireless devices will be located
  2. Obtain a facility diagram to identify RF interference/dead zones
  3. Visually inspect the facility to identify barriers to wireless signal like elevator shafts and stairwells
  4. Identify areas intensively used as well as areas that are not used often
  5. Determine preliminary AP locations, power placement, wired network access, channel selection, mounting locations, antennas
  6. Use an AP to survey locations and the received RF strength based on targeted AP placement
  7. Document findings by recording locations, signal readings, data rates at the outer areas of coverage. The report includes:
  • Detailed customer requirements, diagram AP coverage
  • Parts list including antennas, accessories, network components
  • Tools/methods used for site survey

Ethernet over IP Tunnels for Guest Services

Basic guest access entails separating the guest SSID/vlan from the corporate network, broadcasting the guest SSID but not the corporate one. Another solution uses EoIP to tunnel guest traffic from the AP to an anchor WLC. When guests associate with the guest APs, their connections are automatically tunneled to the specified anchor WLC for guest access. This keeps guest traffic logically separated from the corporate network without the need to run extra vlans.

Wireless Mesh in Outdoor Wireless

Wireless Mesh Components:
  • Wireless Control System (WCS): Wireless mesh SNMP management system allows network-wide configuration/management
  • WLAN Controller (WLC): Links the meshed APs to the wired network, manages security, mitigates radio interference, etc
  • Rooftop AP(RAP): Connects the mesh to the wired network, serves as root. Communicates with MAPs, typically located on rooftops/towers
  • Mesh Access Point(MAP): AP that provides access to wireless clients, communicating with RAPs for wired network connection. Usually located on a lamppost or other pole.

Mesh Design Recommendations

  • Less than 10ms latency per hop, 2-3ms preferred
  • Four or fewer hops are recommended for outdoor deployment though eight are supported
  • For indoor deployment one hop is supported
  • Best performance occurs when no more than 20 MAPs are used per RAP, though 32 are supported
  • Throughput: One hop = 14Mbps, two hops = 7 Mbps, three hops = 3 Mbps, four hops = 1 Mbps
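The throughput figures above (roughly halving per hop) can be turned into a quick planning helper. A sketch using exactly the numbers listed; the helper function is illustrative:

```python
# Mesh throughput per hop count, from the figures above (Mbps).
MESH_THROUGHPUT_MBPS = {1: 14, 2: 7, 3: 3, 4: 1}

def hops_for_throughput(required_mbps):
    """Deepest hop count that still meets the required throughput,
    or None if even a single hop falls short."""
    ok = [h for h, t in MESH_THROUGHPUT_MBPS.items() if t >= required_mbps]
    return max(ok) if ok else None

print(hops_for_throughput(5))    # 2: two hops still deliver 7 Mbps
print(hops_for_throughput(20))   # None: even one hop gives only 14 Mbps
```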

Campus Design Considerations

  • Number of APs: Should be enough APs to provide full coverage for wireless clients for the expected access locations. 20 data devices per AP, and 7 G.711 concurrent or 8 G.729 concurrent VoWLAN calls.
  • AP Placement: APs should be placed in a centralized location of the expected access area, and placed in conference rooms to accommodate peak requirements
  • AP Power: Traditional wall power can be used, or Power over Ethernet (PoE)
  • Number of WLCs: The number of WLCs depends on the redundancy strategy and number of required APs
  • WLC Placement: WLCs are placed in secured wiring closets or the data center. Intercontroller roaming should be minimized, and deterministic redundancy is recommended

Branch Design Considerations

Branch offices may not need a WLC installed depending on how many APs are needed. If a WLC is not installed at the branch office, the round-trip time between APs and the WLC should not exceed 300ms. REAP or Hybrid REAP (H-REAP) should be used.

Local MAC: CAPWAP supports local media access control for branch deployments. In this deployment, the AP provides MAC management support for associations, terminating traffic at the AP instead of a WLC. This allows local access without requiring traffic to travel all the way to a central office WLC, and to continue functioning if the connection to the central office is lost.

REAP: REAP supports branch offices by extending LWAPP control timers. Control traffic is still encapsulated in an LWAPP tunnel over the WAN to a WLC, but local traffic is bridged. In this way, clients still have access to local resources if the WAN fails. REAP devices support only Layer 2 security policies, do not support NAT, and need a routable IP address.

Hybrid REAP: H-REAP enhances REAP by providing additional capabilities like NAT and the ability to control three APs remotely. APs connect to WLC over WAN and use two security modes:
  1. Standalone: H-REAP authenticates clients when the WLC can't be reached. WPA-PSK and WPA2-PSK are supported.
  2. Connected: The AP uses the WLC for client authentication. H-REAP supports WPA-PSK, WPA2-PSK, VPN, L2TP, EAP and web authentication

H-REAP round-trip time must not exceed 300ms and CAPWAP must be prioritized traffic.

Branch Office Controllers

  1. Cisco 2100 series
  2. Cisco 4402-12/4402-24
  3. WLC Module in Integrated Services Router
  4. 3750 with WLAN controller

WLAN Design Summary

  • RF site survey will determine RF characteristics and AP placement
  • Guest services are supported using EoIP in the Cisco Unified Wireless Network
  • Outdoor wireless is supported using outdoor APs and mesh networking APs
  • Campus WLAN design provides wireless coverage using LWAPs managed by WLCs
  • Branch WLAN design deals with wireless access management at remote sites using REAP or H-REAP
  • Each AP should be limited to 20 data devices
  • Separate SSIDs should be used for voice, and APs should not have more than 7 concurrent calls using G.711 codec, or 8 using G.729 codec

UDP Ports Used by Wireless

LWAPP Control: 12223
LWAPP Data: 12222
WLC Exchange Messages (unencrypted): 16666
WLC Exchange Messages (encrypted): 16667
RF 802.11b/g/n: 12114
RF 802.11a: 12115
CAPWAP Control: 5246
CAPWAP Data: 5247

Saturday, February 16, 2013

CCDA Notes: Wireless LAN Design (WLAN Standards and WLCs)

Wireless LAN Design

WLAN Standards

The first standard for WLANs, 802.11, was established by the IEEE and ratified in 1997. It was originally implemented at speeds of 1-2 Mbps using direct-sequence spread spectrum (DSSS) and frequency-hopping spread spectrum (FHSS) at the physical layer of the OSI model. DSSS separates data into sections that are transmitted over different frequencies at the same time, while FHSS uses frequency hopping to send data in bursts, transmitting part of the data on channel 1, then hopping to channel 2 for the next part, then back to channel 1.

802.11b was announced in 1999, providing an 11 Mbps data rate using 11 channels of the Industrial, Scientific and Medical (ISM) frequencies. 802.11b uses DSSS and is backward compatible with other 802.11 systems that use DSSS.

802.11a was approved as a second standard in 1999, providing a 54 Mbps data rate. 802.11a uses 13 channels of the Unlicensed National Information Infrastructure (UNII) frequencies and is incompatible with 802.11b/g.

802.11g was approved in 2003; it uses the ISM frequencies and provides a 54 Mbps data rate. 802.11g is backward compatible with 802.11b.

The 802.11n standard was ratified in 2009. It uses multiple-input, multiple-output (MIMO) antennas, with an expected maximum data rate of 600 Mbps using four spatial streams, each 40 MHz wide. It uses DSSS and orthogonal frequency-division multiplexing (OFDM) as the digital carrier modulation methods, and operates in both the 2.4-GHz and 5-GHz bands.

ISM and UNII Frequencies

802.11b/g uses the 2.4-GHz range of frequencies as set in the ISM band, with overlapping channels that are 22 MHz wide. Channels 1, 6 and 11 are commonly used together because they do not overlap and thus prevent interference.
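Why 1, 6 and 11 form the non-overlapping set follows from the channel plan: channel centers sit 5 MHz apart, but each channel is 22 MHz wide, so centers must be at least 22 MHz apart. A quick check (the 2407 + 5n MHz center formula is the standard 2.4-GHz channel plan, stated here as an assumption since the text above doesn't spell it out):

```python
# 2.4-GHz channel n is centered at 2407 + 5n MHz; channels are 22 MHz wide.
def center_mhz(channel):
    return 2407 + 5 * channel

def overlap(ch_a, ch_b, width_mhz=22):
    """Two channels overlap if their centers are closer than one channel width."""
    return abs(center_mhz(ch_a) - center_mhz(ch_b)) < width_mhz

print(overlap(1, 6))    # False: centers 25 MHz apart, no overlap
print(overlap(1, 3))    # True: centers only 10 MHz apart
```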

UNII has three ranges:
  1. 5.15 GHz - 5.25 GHz, and 5.25 GHz - 5.35 GHz
  2. 5.47 GHz - 5.725 GHz. Used by High Performance Radio LAN in Europe
  3. 5.725 GHz - 5.875 GHz. This range overlaps ISM
802.11a has 12 non-overlapping channels.

Service Set Identifier

WLANs use an SSID to identify the WLAN network name. SSIDs can be 2 to 32 characters long, and all devices in a WLAN must use the same SSID to communicate. An SSID acts much like a vlan in a wired network. The main difficulty in large networks is configuring the SSID, frequency and power settings for remotely located access points; Cisco uses the Wireless Control System (WCS) to manage this centrally.

WLAN Layer 2 Access

802.11 medium access control uses Carrier Sense Multiple Access with Collision Avoidance (CSMA/CA) as the access method. Each WLAN station listens for other stations transmitting, and transmits only if no other traffic is detected on the radio frequency. Of course, with a centrally located access point it is entirely possible for stations to be unable to detect each other, whereas on a wired network a collision would be detected by all participants on the segment. If the AP does not receive the transmission, the station backs off a random amount of time before trying again.
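The listen-then-back-off behavior can be sketched as a loop. This is a simplified illustration only; real 802.11 backoff counts slot times inside a contention window rather than sleeping for arbitrary intervals:

```python
import random

# Simplified sketch of CSMA/CA as described above: listen first, transmit
# if the medium is idle, otherwise back off a random interval and retry.
def csma_ca_send(channel_busy, max_attempts=5):
    """channel_busy: callable returning True while the RF medium is in use.
    Returns the attempt number on which transmission succeeded, else None."""
    for attempt in range(max_attempts):
        if not channel_busy():
            return attempt                             # medium idle: transmit now
        random_wait = random.uniform(0, 2 ** attempt)  # exponential random backoff
        # (a real station would wait random_wait slot times here)
    return None                                        # gave up after max_attempts

busy_then_free = iter([True, True, False])
print(csma_ca_send(lambda: next(busy_then_free)))   # 2
```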

WLAN Security

Because wireless signals propagate widely and are easy to eavesdrop on, wireless security has its own set of challenges. Several standards were created to address these concerns. The first was Wired Equivalent Privacy (WEP), used with the 802.11b standard. WEP used a short preshared key to encrypt traffic and was easily cracked. In 2004, the 802.11i standard was created to provide additional security for WLANs. This standard is also known as Wi-Fi Protected Access 2 (WPA2) and Robust Security Network (RSN). 802.11i contains the following:
  • 4-Way Handshake and Group Key Handshake, both using 802.1x for authentication (using Extensible Authentication Protocol and an authentication server)
  • Robust Security Network for establishment and tracking of robust security associations
  • Advanced Encryption Standard (AES) for confidentiality, integrity, and origin authentication

Unauthorized Access

Wireless signals are difficult to control and contain. Because the wireless signal may extend beyond an organization's physical boundaries, attackers may be able to gain access to the network. If the wireless network has no mechanism to compare the MAC addresses of connecting hosts to a database of approved MACs, attackers may achieve unauthorized access. Even such a database is weak protection, because MAC addresses can be spoofed. Because static MAC address lists are neither scalable nor spoof-proof, wireless encryption methods such as WEP/WPA2 must be employed so that attackers cannot gain access without the security keys.

WLAN Security Design Approach

Two assumptions are made concerning the security design approach described:
  • All WLAN devices are connected to a unique IP subnet
  • Most services available to the wired network are also available to users of the WLAN
With those assumptions in mind, there are two basic security approaches:
  • Use EAP Flexible Authentication via Secure Tunneling (EAP-FAST) to secure authentication
  • Use VPN with IP Security (IPSec) to secure traffic from wireless to wired network
WLANs can potentially open new attack vectors for hackers, so security should be enhanced by using VPN with IPSec, the 802.1x protocol, and WPA.

802.1x Port-Based Authentication

802.1x is a port-based authentication protocol that can be used on Ethernet, Fast Ethernet and WLAN networks. Client hosts run 802.1x software utilizing EAP to communicate with the AP. The AP relays the authentication request to an authentication server that will accept or deny the credentials, activating or deactivating the port/wireless connection. Usually a Remote Authentication Dial-In User Service (RADIUS) server handles authentication requests. This request is not encrypted as 802.1x is not an encryption protocol.

Dynamic WEP Keys and LEAP

Cisco offers dynamic, per-session WEP keys that are more secure than statically configured WEP keys. To centralize user-based authentication, Cisco developed LEAP. LEAP uses mutual authentication between client and server and uses 802.1x for wireless authentication messaging. LEAP can use Temporal Key Integrity Protocol (TKIP) rather than WEP to overcome WEP's weaknesses. LEAP uses RADIUS to manage user information.

LEAP combines 802.1x and EAP, combining the ability to authenticate to various servers (such as RADIUS) with the ability to force users to log onto an AP that compares logon info with RADIUS. This solution is far more scalable than trying to keep a database of authorized MAC addresses.

Because the WLAN access depends on receiving an address using DHCP, and authenticating connection attempts via RADIUS, the WLAN needs access to these servers. LEAP does not support one-time passwords (OTP) so good password security practice is essential.

Controlling WLAN Access to Servers

The security posture of servers accessible to the WLAN should be similar to that of a DMZ because it is potentially accessible by attackers. WLAN RADIUS and DHCP servers should be kept on a separate segment (vlan) from other primary servers. Access into this vlan should be filtered, which ensures that attacks on these WLAN-accessible servers are contained within that segment. Network access to these servers should be controlled and restricted, as the WLAN should be considered an unsecured network segment. 

These WLAN-accessible servers also need to be protected from attack, possibly using IDS/IPS or firewalls.

Cisco Unified Wireless Network

Cisco UWN Architecture

The Cisco Unified Wireless Network architecture combines elements of wireless and wired networks to manage, secure, and scale WLANs. The Cisco UWN architecture comprises five elements:
  • Client Devices: Laptops, workstations, IP phones, PDAs and manufacturing devices to access WLAN
  • Access Points: Placed in strategic locations to maximize signal and minimize interference
  • Network Unification: The WLAN should support wireless applications by providing security policy, QoS, intrusion prevention, and radio management. Cisco WLAN Controllers provide this functionality and integrates within all major routing/switching platforms
  • Network Management: Cisco Wireless Control System (WCS) provides central management tool to allow design, control and monitoring of WLAN
  • Mobility Services: Includes guest access, location services, voice services, threat detection/mitigation

The Cisco UWN provides benefits:
  • Reduced Total Cost of Ownership (TCO)
  • Enhanced visibility/control
  • Dynamic radio management
  • WLAN Security
  • Unified wireless/wired network
  • Enterprise mobility
  • Enhanced collaboration/productivity

Lightweight Access Point Protocol

LWAPP is an IETF draft standard for control messaging between APs and WLCs. LWAPP control messages can be transported in Layer 2 or Layer 3 tunnels. Layer 2 LWAPP tunnels came first; APs did not need an IP address, but the WLC had to reside on every subnet on which an AP resided, because only Layer 2 transport was available. Layer 3 LWAPP is now the preferred solution, though lightweight APs support both. Layer 3 LWAPP tunneling uses IP addresses obtained from a mandatory DHCP server. When using Layer 2 tunneling, LWAPP uses a proprietary EtherType code to communicate with access points. Because WLCs reside on the wired network while lightweight APs sit at the edge, not directly connected, tunneling is needed to protect control traffic between WLCs and LWAPs.

LWAPP Layer 2 uses EtherType code 0xBBBB, Layer 3 uses UDP ports 12222/12223.

Control And Provisioning for Wireless Access Points

CAPWAP is an IETF standard for control messaging between APs and WLCs. Beginning with controller software release 5.2, Cisco LWAPs use CAPWAP to communicate with WLCs. CAPWAP differs from LWAPP in the following ways:
  • CAPWAP uses Datagram Transport Layer Security (DTLS) for authentication and encryption to protect traffic between LWAP and WLC. LWAPP uses EAP for the same.
  • CAPWAP has a dynamic MTU discovery mechanism.
  • CAPWAP control messages use UDP port 5246.
  • CAPWAP data messages use UDP port 5247.
CAPWAP uses Layer 3 tunnels between the LWAP and the WLC. The LWAP obtains an IP address from a DHCP server. Control and data messages sent from an LWAP use an ephemeral UDP source port derived from a hash of the AP's MAC address, while the WLC uses UDP ports 5246/5247 for control/data traffic.
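The notes say only that the AP's source port is derived from a hash of its MAC address; the exact hash is not specified here, so the following is purely illustrative of the idea (mapping a MAC deterministically into the ephemeral port range):

```python
import hashlib

# Hypothetical illustration of deriving a UDP source port from an AP MAC
# address. The real CAPWAP hash is not specified in the notes above; this
# just shows the idea of a deterministic MAC -> ephemeral-port mapping.
def ephemeral_port(mac: str) -> int:
    digest = hashlib.sha256(mac.lower().encode()).digest()
    return 49152 + int.from_bytes(digest[:2], "big") % (65536 - 49152)

port = ephemeral_port("00:1a:2b:3c:4d:5e")
print(49152 <= port <= 65535)   # True: always lands in the ephemeral range
```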

Cisco Unified Wireless Split-MAC Architecture

With split-MAC architecture, LWAP control and data messaging is split. LWAPs communicate with WLCs using control messages over the wired network, while LWAPP/CAPWAP data messages are encapsulated and forwarded to/from wireless clients. WLCs provide configuration and firmware updates to APs as needed.

LWAP MAC functions:
  • 802.11: Beacons, probe response
  • 802.11 Control: Packet acknowledgement and transmission
  • 802.11e: Frame queuing and packet prioritization
  • 802.11i: MAC layer data encryption/decryption
Controller MAC Functions:
  • 802.11 MAC Management: Association requests and actions
  • 802.11e Resource Reservation: Reserves resources for specific applications
  • 802.11i: Authentication and key management

Local MAC

Local MAC is supported by CAPWAP, which moves the MAC management from the WLC to the local AP. This allows termination of client traffic at the wired port of the AP. This is useful at small or remote offices where a WLC isn't needed. 

LWAP MAC Functions:
  • 802.11: Beacons, probe response
  • 802.11 Control: Packet acknowledgement/transmission
  • 802.11e: Frame queuing/packet prioritization
  • 802.11i: MAC layer data encryption/decryption
  • 802.11 MAC Management: Association requests/actions

Controller MAC Functions:
  • 802.11: Proxy association requests/actions
  • 802.11e Resource Reservation: Reserves resources for specific applications
  • 802.11i: Authentication and key management

With autonomous APs not associated to a WLC, the AP simply acts as a trunk carrying different vlan traffic. With a WLC connected with CAPWAP, the AP tunnels to the WLC and then the WLC trunks to the switch.

AP Modes

  • Local mode: Default mode of operation. Every 180 seconds, the AP measures the noise floor and interference on unused channels and scans for IDS events; each scan lasts 60 ms
  • Hybrid Remote Edge AP (H-REAP) Mode: Enables LWAP to reside across a WAN from the WLC. It uses local MAC, and is supported on Cisco 1130, 1140, 1240AB, and 1250AG series LWAPs.
  • Monitor mode: Feature to allow specific CAPWAP-enabled APs to opt out of handling data traffic, instead serving as sensors for rogue APs, intrusion detection and location-based services (LBS). These monitors continuously cycle through channels listening to each for 60ms.
  • Rogue Detector mode: LWAPs in this mode monitor for rogue APs. RD APs are attached to a trunk port to enable seeing all traffic since rogue APs can be connected to any vlan. The wired switch sends a list of rogue AP/client MACs to the RD AP and the RD AP forwards the list to the WLC to compare with MACs registered over the WLAN. If there are matches, then the WLC is aware that a rogue AP is plugged into the wired network and what rogue clients are connected.
  • Sniffer mode: LWAP that operates in sniffer mode captures and forwards packets on a particular channel to a remote machine running AiroPeek. This mode only works with AiroPeek, a 3rd party packet sniffer.
  • Bridge mode: This mode is available only on Cisco 1130 and 1240 series APs (typically indoor) and 1500 series APs (typically outdoor mesh) and provides high-bandwidth, cost-effective bridging. Supported configurations include point-to-point bridging, point-to-multipoint bridging, point-to-point wireless access with integrated backhaul, and point-to-multipoint wireless access with integrated backhaul

LWAPP Discovery of WLC

LWAPs placed on the network attempt DHCP discovery to obtain an IP address, followed by a Layer 3 LWAPP discovery attempt. If no WLC responds, the AP reboots and tries again. The Layer 3 LWAPP discovery algorithm follows:
  1. AP sends a Layer 3 LWAPP discovery request
  2. All WLCs that receive this request reply with a unicast LWAPP discovery response message
  3. The requesting AP compiles a list of responding WLCs.
  4. The AP selects its preferred WLC based on certain criteria
  5. The AP validates the selected WLC and sends an LWAPP join request; the WLC answers with a join response. An encryption key is agreed upon and future communications are encrypted.

Layer 3 discovery requests are sent in one or more of the following ways:
  • Local subnet broadcast
  • Unicast LWAPP discovery requests to WLCs advertised by other APs
  • Previously stored WLC addresses
  • IP addresses learned by DHCP option 43
  • IP addresses learned by DNS resolution of CISCO-LWAPP-CONTROLLER.local-domain
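The DHCP option 43 method above can be illustrated with a short decoder. Cisco documents a TLV format for lightweight APs in which sub-option type 0xF1 (241) carries a list of 4-byte WLC management addresses; this Python sketch (function name and sample payload are illustrative) parses that format:

```python
import socket

def parse_option43_wlc_ips(opt43: bytes) -> list:
    """Extract WLC management IPs from a DHCP option 43 payload.

    Cisco lightweight APs expect a TLV whose sub-option type 0xF1 (241)
    carries a list of 4-byte WLC IP addresses; other sub-options are skipped.
    """
    ips = []
    i = 0
    while i + 2 <= len(opt43):
        subtype, length = opt43[i], opt43[i + 1]
        value = opt43[i + 2:i + 2 + length]
        if subtype == 0xF1 and length % 4 == 0:
            for off in range(0, length, 4):
                ips.append(socket.inet_ntoa(value[off:off + 4]))
        i += 2 + length  # advance past this TLV
    return ips

# Two hypothetical controllers, 10.0.0.10 and 10.0.0.11, encoded as option 43:
payload = bytes([0xF1, 8]) + socket.inet_aton("10.0.0.10") + socket.inet_aton("10.0.0.11")
print(parse_option43_wlc_ips(payload))  # ['10.0.0.10', '10.0.0.11']
```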

The WLC is selected based on the following criteria:
  • Previously configured primary/secondary/tertiary WLCs
  • WLC configured as master
  • WLC which has the most capacity for AP associations

If the WLC supports CAPWAP, the AP follows this process:
  1. The CAPWAP AP begins the discovery process to find a WLC using a CAPWAP request, to which the WLC sends a CAPWAP response.
  2. If the AP receives no CAPWAP response within 60 seconds, the AP falls back to LWAPP discovery
  3. If the AP cannot find a WLC using LWAPP within 60 seconds, it tries CAPWAP again.
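The alternation between CAPWAP and LWAPP can be sketched as a simple loop. This is a toy model, not AP firmware; the two callables are hypothetical stand-ins for the real discovery broadcasts:

```python
def discover_wlc(send_capwap_discovery, send_lwapp_discovery, timeout=60):
    """Alternate CAPWAP and LWAPP discovery until a controller answers.

    Each callable (assumed here) takes a timeout in seconds and returns
    the responding WLC, or None if nothing answered in time.
    """
    while True:
        wlc = send_capwap_discovery(timeout)   # try CAPWAP first
        if wlc is not None:
            return ("capwap", wlc)
        wlc = send_lwapp_discovery(timeout)    # 60s with no answer: fall back
        if wlc is not None:
            return ("lwapp", wlc)
        # neither answered within its window; loop back and try CAPWAP again

# e.g. a controller that only speaks LWAPP:
print(discover_wlc(lambda t: None, lambda t: "WLC-1"))  # ('lwapp', 'WLC-1')
```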

CAPWAP is a design decision that is configurable within the WLC. APs select the WLC with which to create a CAPWAP tunnel based on information contained in the WLC responses: the controller sysName, current capacity and load, status of the master WLC, and the AP manager IP address. Based on this information, the AP selects its preferred WLC as follows:
  • Primary/Secondary/Tertiary WLC preconfigured sysName (preconfigured preference)
  • Master WLC
  • WLC with greatest capacity for AP associations
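The three-step preference order above can be expressed as a small selection function. A minimal sketch, assuming each discovery response is a dict with hypothetical keys 'sysName', 'is_master' and 'free_capacity':

```python
def select_wlc(responses, primary=None, secondary=None, tertiary=None):
    """Pick the preferred WLC from a list of discovery responses."""
    by_name = {r["sysName"]: r for r in responses}
    # 1. Preconfigured primary/secondary/tertiary controller sysNames
    for name in (primary, secondary, tertiary):
        if name in by_name:
            return by_name[name]
    # 2. A controller configured as master
    for r in responses:
        if r.get("is_master"):
            return r
    # 3. The controller with the greatest remaining AP capacity
    return max(responses, key=lambda r: r["free_capacity"])

responses = [
    {"sysName": "wlcA", "is_master": False, "free_capacity": 10},
    {"sysName": "wlcB", "is_master": True,  "free_capacity": 5},
    {"sysName": "wlcC", "is_master": False, "free_capacity": 50},
]
print(select_wlc(responses)["sysName"])                  # wlcB (master wins)
print(select_wlc(responses, primary="wlcA")["sysName"])  # wlcA (preconfigured)
```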


WLAN Authentication

When wireless clients try to associate with an AP, they need to authenticate with an authentication server before being granted access to the WLAN. The authentication server resides in the wired LAN, and an EAP/RADIUS tunnel is built from the WLC to the server to handle the request. Cisco's Secure Access Control Server (ACS) supports EAP and can service these requests.

Authentication Options

Different types of EAP have advantages and disadvantages. There are trade-offs in security, types of devices supported, ease of use and infrastructure support.
  • EAP-Transport Layer Security (EAP-TLS): Open IETF standard that is well-supported but rarely deployed. Uses PKI to secure communications to the RADIUS server using TLS and digital certificates.
  • Protected Extensible Authentication Protocol (PEAP): PEAP/MSCHAPv2 is the most common version deployed and is widely available. Similar in design to EAP-TTLS, needing only a server-side PKI cert to create a secure TLS tunnel to protect user authentication. PEAP-GTC allows more generic authentication to other kinds of user databases such as Novell Directory Services.
  • EAP-Tunneled TLS (EAP-TTLS): Widely supported across platforms, offers good security, using PKI certs on the authentication server. 
  • Cisco Lightweight EAP (LEAP): Early proprietary EAP method supported in the Cisco Compatible Extensions (CCX) program. Vulnerable to dictionary attacks.
  • EAP-Flexible Authentication via Secure Tunneling (EAP-FAST): Proposal by Cisco to address the weaknesses of LEAP. EAP-FAST uses a Protected Access Credential with optional server certificates. EAP-FAST has three phases:
  1. Phase 0: Optional phase where PAC can be provisioned manually or dynamically.
  2. Phase 1: Client and AAA server use the PAC to establish a TLS tunnel.
  3. Phase 2: Client sends authentication information over the established tunnel

WLAN Controller Components

Three major components of WLCs:
  • WLANs: Identified by unique SSID network names, each assigned to an interface on the WLC.
  • Interface: A logical connection mapping a wireless network to a vlan on the wired network
  • Port: Physical connection to the wired LAN, usually a trunk. There could be multiple ports on a WLC that are port-channeled into a single interface. Some WLCs may have an out-of-band management port.

WLC Interface Types

WLCs have five different interface types:
  • Management: Mandatory static interface configured at setup, used for in-band management, AAA authentication and Layer 2 discovery/association
  • Service Port: Optional, statically configured at setup, used for out-of-band management
  • AP Manager: Static, configured at setup, mandatory on all but the 5508 model WLC. Used for Layer 3 discovery/association, and serves as the source IP address for WLC-to-AP communication
  • Dynamic: Analogous to vlans, used for client data
  • Virtual: Static, configured at setup, and mandatory, used for Layer 3 security authentication, DHCP relay, and mobility management

Monday, February 4, 2013

CCDA Notes: Data Center Design

Enterprise Data Center Architectures

Data Centers used to use mainframes to centrally process data, with users connecting via terminals to do work on the mainframe (Data Center 1.0).

Data Center 2.0 introduced the concept of client/server connections and distributed computing. Business applications were installed on servers in the data center and accessed by users from their workstations. Application services were distributed because WAN links were costly and performance was slow.

In Data Center 3.0, consolidation and virtualization are the main components. Due to communication equipment becoming cheaper and stronger computing power being available, the current move is toward consolidating services in data centers, which centralizes management and is more cost-effective than distributing services. Newer architecture takes advantage of server virtualization which results in higher utilization of computing/network resources. This raises return on investment (ROI) and lowers total cost of ownership (TCO).

Data Center 3.0 Components

  • Virtual local area networks (vlans), virtual storage-area networks (VSAN), virtual device contexts (VDC) help segment LAN/SAN/network instances
  • Cisco Nexus 1000V virtual switch for VMWare ESX/ESXi helps with policy control and visibility of virtual machines (VM)
  • Flexible network options that support multiple server form factors/vendors including those with integrated Ethernet/Fibre channel switches
Unified Fabric
  • Fibre Channel over Ethernet (FCoE) and Internet Small Computer Systems Interface (iSCSI) are two methods to implement unified fabric in the data center over 10 Gigabit Ethernet networks
  • FCoE is supported on VMWare ESX/ESXi vSphere 4.0 and up
  • Cisco Catalyst/Nexus/MDS families of switches support iSCSI. The Cisco Nexus 5000 supports unified fabric lossless operation, which improves iSCSI performance over 10 Gigabit Ethernet
  • Cisco Nexus switches created to support unified fabric. Nexus 4000/5000 supports data center bridging (DCB) and FCoE, in future Nexus 7000 and Cisco MDS switches will as well
  • Converged network adapters (CNA) run at 10GE speeds and support FCoE. Available from Emulex and QLogic, and certain software stacks for 10GE interfaces are available from Intel
Unified Computing
  • Cisco Unified Computing System (UCS) is a next-generation platform designed to converge computing, storage, network and virtualization into one system
  • Integrates lossless 10GE unified network fabric with x86-based servers
  • Allows Cisco Virtual Interface Cards to virtualize network interfaces on servers
  • Cisco VN-Link virtualization
  • Supports extended memory technology patented by Cisco
  • Uses just-in-time provisioning using service profiles to increase productivity
At the top layer of the architecture, virtual machines are software entities running on hypervisors that emulate hardware. Below them are the unified computing resources, within which service profiles define the identity of a server: hardware settings such as allocated memory and CPU, network card information, boot image and storage. 10GE, FCoE and Fibre Channel technologies provide the unified fabric, supported by the Cisco Nexus 5000. FCoE allows native Fibre Channel frames to function on 10GE networks. VLAN/VSAN technology segments multiple LANs and SANs on the same physical equipment. At the lowest layer is virtualized hardware, where storage devices can be virtualized into storage pools and network devices are virtualized using virtual device contexts.

Challenges in the Data Center

Data center requirements and mechanical specifications help to define the following:
  • Power needed
  • Physical rack space used
  • Limits on scaling
  • Management (resources, firmware)
  • Security
  • Virtualization support
  • Management effort required

Data Center Facility Considerations

  • Space available
  • Floor load capacity
  • Power/cooling capacity
  • Cabling infrastructure
  • Operating temperature and humidity level
  • Access to site, security alarms and fire suppression
  • Space for employees to move/work
  • Compliance with regulations such as Payment Card Industry (PCI), Sarbanes-Oxley (SOX), and Health Insurance Portability and Accountability Act (HIPAA)

Data Center Space

  • Number of employees who will support data center
  • Number of servers and amount of storage/network gear needed
  • Space needed for non-infrastructure areas such as shipping/receiving, server/network staging, storage/break/bathrooms, and office space
Other considerations related to equipment rack/cabinet space:
  • Weight of rack/equipment
  • Heat expelled from equipment
  • Amount and type of power required (UPS/RPS)
  • Loading, which determines what/how many devices can be installed

Data Center Power

Desired power reliability drives requirements, which may include multiple redundant power feeds from the utility, backup generators and redundant power supplies. Power in the data center both powers and cools the devices in it. The power system also needs to protect against power surges, failures and other electrical problems. A good power design will:
  • Define overall power capacity
  • Provide the physical electrical infrastructure and address redundancy

Data Center Cooling

Cooling is used to control humidity and temperature in order to extend the lifespan of devices. High-density rack design should be weighed against heating considerations. Smaller form-factor servers allow more to be placed into a rack, but airflow and cooling must be accounted for. Cabinets and racks should be organized into 'cold' and 'hot' aisles. In cold aisles, the fronts of devices should face each other across the aisle, and in hot aisles the backs of devices should face each other across the aisle. Cold aisles should have perforated floor tiles through which cold air is blown; the cold air is drawn into the fronts of the devices, flushing the hot air out of the back into the hot aisles. Hot aisles should have no perforated tiles, which keeps hot and cold air from mixing and diluting the cooling effect.

If equipment does not exhaust heat to the rear, other cooling techniques can be leveraged:
  • Block unnecessary air escapes to increase airflow
  • Increase height of raised floor
  • Spread equipment to unused racks
  • Use open racks rather than cabinets where security is not a concern
  • Use cabinets with meshed front/back
  • Custom perforated tiles with larger openings to allow more cold airflow

Data Center Heat

Data center design must account for high density servers and heat produced by them. Considerations in design for cooling need to be taken into account for proper sizing of servers and anticipated growth, along with the corresponding heat output.
  • Increase number of HVAC units
  • Increase airflow through devices
  • Increase space between racks/rows
  • Use alternative cooling technologies such as water-cooled racks

Data Center Cabling

Data center cabling is known as passive infrastructure. The cabling plant is what connects everything together, terminating connections between devices and governing how devices communicate. Cabling must be easy to maintain, abundant and capable of supporting different media types and connectors for proper operations.

Considerations for following must be determined during design:
  • Media selection
  • Number of connections
  • Type of cable termination organizers
  • Space for cabling on horizontal/vertical cable trays
Cabling needs to avoid the following:
  • Inadequate cooling due to restricted airflow
  • Outages due to accidental disconnections
  • Unplanned dependencies
  • Difficult troubleshooting options

Enterprise Data Center Infrastructure

Current enterprise data center design follows Cisco multilayer (hierarchical) architecture including access, aggregation and core layers. This model supports blade servers, single rack-unit (RU) servers and mainframes.

Defining Data Center Access Layer

The main purpose of data center access layer is to provide Layer 2/3 physical port density for various servers. The access layer also provides low-latency and high-performance switching that can support oversubscription requirements. Most data centers are built with Layer 2 connectivity but Layer 3 (routed access) options are available. Layer 2 connectivity uses vlan trunk uplinks to allow aggregation services to be shared across the same vlan across multiple switches. Spanning Tree is used in Layer 2 access to avoid loops in network. The recommended STP instance is RPVST+.
Newer routed access designs aim to contain Layer 2 to the access layer and avoid the use of STP. First-hop redundancy must be provided, as the access switch becomes the first-hop router. Access layer benefits are as follows:
  • Port density for server farms
  • Supports single/dual-homed servers
  • High-performance, low-latency Layer 2 switching
  • Supports mix of oversubscription requirements

Defining Data Center Aggregation Layer

The Aggregation (Distribution) layer aggregates Layer 2/3 links from the access layer and connects upstream to the core layer. Layer 3 connectivity, if not implemented at the access layer, is typically used from the aggregation layer toward the core. The aggregation layer is a critical point for data center application and security services, including load balancing, SSL offloading, and firewall/IPS services. Depending on design requirements, the Layer 2/3 border could be in multilayer switches, firewalls, or content switching devices. Multiple aggregation layers can support different environments, such as test and production, each with its own applications and security requirements. First-hop redundancy is typically implemented in the aggregation layer if Layer 3 is not implemented at the access layer. Benefits of the aggregation layer are:
  • Aggregates traffic from data center access layer and connects to data center core
  • Supports advanced security/application services
  • Layer 4 services such as firewalls, IPS, SSL offloading and server load balancing
  • Large STP process load
  • Highly flexible/scalable

Defining Data Center Core Layer

Data Center Core connects the campus core to the data center aggregation layer utilizing high-speed Layer 3 links. The core is a centralized Layer 3 routing layer to which the data center aggregation layers connect. Data center networks are summarized here and shared with the campus core, and default routes are injected into the data center aggregation layer from the data center core. Multicast traffic must also be allowed through the data center core to support a growing list of multicast applications.

Data Center Core Drivers
  • 10 Gigabit Ethernet density: Are there enough links to link multiple aggregation layers together?
  • Administrative domains/policies: Separate cores help isolate campus distribution from data center aggregation for troubleshooting and QoS/ACL policies
  • Future Growth: Future impact/downtime that would be needed to expand later makes it important to provide enough core layers when designing for initial implementation
Characteristics of a Data Center Core
  • Low-latency switching
  • Distributed forwarding architecture
  • 10 Gigabit Ethernet
  • Scalable IP Multicast support

Virtualization Overview

Virtualization technology allows one physical device to emulate several, or several physical devices to emulate a single logical device. The modern data center is changing based on virtualization, and data center design changes with it.

Virtualization Driving Forces
  • Need to reduce rising cost of powering/cooling devices while getting more productivity
  • Data center consolidation of assets performing individual tasks
  • Logical, separate user groups secured from other groups on same network
  • Eliminate underutilized hardware that has poor performance/price ratio

Virtualization Benefits
  • Better use of computing resources, higher server densities, simplified server migration
  • Flexibility and ease of management for adds/reassignments/repurposing of resources
  • Separation of groups utilizing same physical network, enabling traffic isolation
  • Ability to provide per-department security policy
  • Reduction in power/space needed
  • Increased uptime, decreased operational cost

Network Virtualization
  • VLAN
  • VSAN
  • VRF (Virtual Routing/Forwarding)
  • VPN
  • vPC (Virtual Port Channel)

Device Virtualization
  • Server virtualization (VM)
  • Cisco Application Control Engine (ACE) context
  • Virtual Switching System (VSS)
  • Cisco ASA firewall context
  • Virtual device contexts (VDC)

Virtualization Technologies

Virtual Switching System (VSS) is a network virtualization technology that allows two physical Cisco Catalyst 6500 series switches to act as a single logical switch. It is similar to the StackWise technology used on Cisco Catalyst 3750 switches, which chains multiple switches together into a single logical switch, but VSS is limited to two chassis linked together.

Virtual routing and forwarding (VRF) virtualizes Layer 3 routing tables, allowing multiple routing tables to exist on a single device. In a Multiprotocol Label Switching (MPLS) VPN environment, VRF allows multiple networks to exist on the same MPLS network. Routing information is contained in the VRF and is visible only to other routers participating in the same VRF instance. Because of this, duplicate IP addressing schemes can be used.
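The key property of VRF is one routing table per instance, so the same prefix can recur with different next hops. A toy Python model (class and field names are illustrative, not any vendor API) makes the isolation concrete:

```python
import ipaddress

class VrfRouter:
    """Toy VRF model: one routing table per VRF instance, so identical
    prefixes can exist independently in different VRFs."""
    def __init__(self):
        self.tables = {}  # vrf name -> {prefix network: next hop}

    def add_route(self, vrf, prefix, next_hop):
        self.tables.setdefault(vrf, {})[ipaddress.ip_network(prefix)] = next_hop

    def lookup(self, vrf, dest):
        dest = ipaddress.ip_address(dest)
        # longest-prefix match, consulting only this VRF's table
        matches = [(p, nh) for p, nh in self.tables.get(vrf, {}).items() if dest in p]
        if not matches:
            return None
        return max(matches, key=lambda m: m[0].prefixlen)[1]

r = VrfRouter()
# the same 10.1.0.0/16 prefix in two customer VRFs, with different next hops
r.add_route("cust-a", "10.1.0.0/16", "192.0.2.1")
r.add_route("cust-b", "10.1.0.0/16", "198.51.100.1")
print(r.lookup("cust-a", "10.1.5.9"))  # 192.0.2.1
print(r.lookup("cust-b", "10.1.5.9"))  # 198.51.100.1
```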

Virtual Port Channel technology works by virtualizing two Cisco Nexus 7000 or Nexus 5000 series switches as a single logical switch. 10GE links connect the two physical switches which then represent themselves as a single logical switch for purposes of port channeling. Although multiple redundant paths exist, the spanning tree topology appears loop-free. This allows all links to be utilized.

Device Contexts
Device contexts allow a single physical network device to host multiple virtual devices. Each context is its own instance with its own configuration, policies, network interfaces and management. Most features available on single network devices also exist on contexts. These devices support contexts:
  • Cisco Nexus 7000 series switches
  • Cisco ASA Firewall
  • Cisco Catalyst 6500 Firewall Services Module (FWSM)
  • Cisco Application Control Engine Appliance
  • Cisco Catalyst 6500 Application Control Engine Module
  • Cisco IPS
Server Virtualization
Server virtualization is a software technique that abstracts server resources from the hardware to provide flexibility and optimize usage of the underlying hardware. The hypervisor controls the hardware and allocates physical resources to the different server VMs, sharing resources among the VMs without the VMs being aware of their actual physical hardware. Several vendors offer server virtualization products:
  • VMWare ESX Server
  • Citrix XenServer
  • Microsoft Hyper-V

Network Virtualization Design Considerations

Access Control
Access should be controlled to make sure users and devices are identified and authorized to communicate with their assigned network segment.

Path Isolation
Path isolation involves the creation of independent logical paths over the same physical network infrastructure. MPLS VPNs assigned to specific VRFs are an example of this. VLANs and VSANs also logically separate networks.

Services Edge
Services Edge refers to making services available to the intended users, groups and devices, with a centrally managed, enforced policy. An effective way to enforce service access is a firewall or other centralized device that contains policies on what should and should not be accessible.


Sunday, February 3, 2013

CCDA Notes: Enterprise LAN Design (Best Practice)

Campus LAN design factors in following categories:
  1. Network Application Characteristics: Different types of applications
  2. Infrastructure Device Characteristics: Layer 2/3 switching and hierarchy
  3. Environmental Characteristics: Geography, wiring, space, distance, etc

Application Characteristics

Application requirements drive design due to usability constraints. Time and drop-sensitive applications need special consideration as far as allowable latency/packet loss.

Peer-to-Peer: Instant messaging, file sharing, IP/video calls. Requires medium/high throughput, can allow low/high availability depending on application and has low to medium network cost

Client-local servers: Servers are located in same segment as clients or close by, normally on same LAN. With 80/20 workgroup rule, 80% of traffic is local and 20% is routed elsewhere. Requires medium throughput, medium availability and incurs medium network cost.

Client-server farm: Mail, database, etc servers. Access to servers is fast, reliable and controlled. Requires high throughput, high availability and a high network cost.

Client-enterprise edge servers: External servers such as SMTP relay, web, DMZ, and e-commerce servers. Requires medium throughput, high availability and medium network cost.

Hierarchical Layer Best Practice

Access Layer Best Practice

  • Limit vlans to single switch/closet when possible to provide deterministic and highly available network topology
  • Use Rapid Per-Vlan Spanning Tree+ (RPVST+) if STP is needed
  • Set trunks to on/on and nonegotiate
  • Manually prune unused vlans from trunks to avoid unnecessary broadcast traffic propagating between switches
  • Use Vlan Trunking Protocol (VTP) in Transparent mode because common vlan propagation in hierarchical network is not needed
  • Disable dynamic trunking on host ports, enable Portfast
  • Consider routing in access layer to speed up convergence and provide Layer 3 load balancing
  • Use switchport host command on server/host ports to enable Portfast and disable channelling
  • Use Cisco STP toolkit (Portfast, Loop Guard, Root Guard, BPDU Guard) to prevent loops and protect deterministic Spanning Tree topology

Distribution Layer Best Practice

  • Links to core must support aggregated bandwidth of access layer links
  • Redundant links to access/core layers
  • QoS/security/policy enforcement should occur at this layer
  • Use first-hop redundancy protocols such as Hot Standby Router Protocol (HSRP) or Gateway Load Balancing Protocol (GLBP) if layer 2 trunks are used between access and distribution layers
  • Use Layer 3 routing protocols between distribution and core to allow fast convergence and load balancing to occur
  • Only peer with other routers on links intended to be used as transit links
  • Build Layer 3 triangle links, not squares

  • Use distribution switches to connect Layer 2 vlans that span multiple access switches
  • Summarize routes from distribution layer to core to reduce routing overhead
  • Use Virtual Switching System (if possible) to eliminate need for STP and first-hop redundancy
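The summarization bullet above can be illustrated with Python's standard ipaddress module: contiguous access-layer subnets behind one distribution switch collapse into a single advertisement toward the core (the 10.10.x.0/24 addressing is an example, not from the source):

```python
import ipaddress

# Four contiguous access-layer /24s behind one distribution switch:
# 10.10.0.0/24 through 10.10.3.0/24
subnets = [ipaddress.ip_network(f"10.10.{i}.0/24") for i in range(4)]

# Advertise one summary route to the core instead of four specifics
summary = list(ipaddress.collapse_addresses(subnets))
print(summary)  # [IPv4Network('10.10.0.0/22')]
```

The core now carries one route for the whole distribution block, which shrinks routing tables and keeps access-layer flaps from churning the core.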

Core Layer Best Practice

  • Must support fast switching, redundant paths and high availability to distribution points
  • Reduce switch peering by using redundant triangle connections between switches (as above)
  • Use a routed topology to avoid the Layer 2 loops that arise on Layer 2 links utilizing STP
  • Use Layer 3 switches in core which provide intelligent services Layer 2 switches do not support
  • Use equal-cost dual paths to each destination network

Large-Building LANs

  • Tend to be separated by floors or departments
  • Access component serves one or more floors/departments
  • Distribution component aggregates multiple floors/departments
  • Core component connects data center, building distribution components, and enterprise edge distribution component
  • Access layer typically uses Layer 2 switches to save costs
  • Distribution layer typically uses Layer 3 switches for access control, QoS and policy enforcement
  • Core layer utilizes Layer 3 switches for fast switching and fast convergence/load balancing
  • FastEthernet at access layer, GigabitEthernet for distribution/core links

Enterprise Campus LAN

  • Typically connects two or more buildings within local geographic area using high-bandwidth LAN backbone
  • GigabitEthernet backbones connecting campus buildings are new standard
  • Requires hierarchical composite design with network-level addressing to control broadcasts
  • Each building should have network addressing leveraged to facilitate summarization
  • Use Layer 3 switches with fast-switching capabilities in core
  • In smaller campuses, distribution layer can be collapsed and core can connect directly to access layer
  • Can also collapse distribution layer by utilizing Layer 3 switching in access layer to provide access/distribution services

Edge Distribution

  • On large LANs, provides additional security between campus LAN and enterprise edge
  • Can help defend campus LAN against IP spoofing, unauthorized access, network reconnaissance, and packet sniffing

Medium-Size LANs

  • Typically utilizes collapsed core hierarchy
  • 200 - 1000 devices

Small/Remote Site LANs

  • Typically connect to corporate network via small router which filters broadcasts to WAN and forwards packets requiring services from corporate network
  • Local servers tend to be small and provide minimal services for network connectivity such as DHCP and backup domain controller
  • If local servers are not used then router must forward broadcast and other types of traffic to corporate network

Server Farm

  • Most servers connect to access switches via GigEthernet, 10GigEthernet or Etherchannels
  • Server farm switches connect via redundant links to core, larger farms may need distribution layer which utilizes QoS, policies and access control 
  • Servers typically connected to switch by:
  1. Single network interface card (NIC)
  2. Dual NIC with Etherchannel
  3. Dual NIC to separate access switches
  4. Content Switching (advanced content switches that front end user requests and provide redundancy/load balancing)

Enterprise Data Center Architecture

Data centers have different server technologies including standalone servers, blades, mainframes, clustered servers and virtual servers.
  • Data center access layer must provide port density to support server connections, high performance/low latency Layer 2 switching, and support single/dual connected servers
  • Preferred design contains Layer 2 switching to access layer and moves Layer 3 to distribution layer, though some designs can push Layer 3 to access layer
  • Cisco Data Center 3.0 architecture provides next evolution of data center
  • Distribution layer aggregates access links to core
  • Load balancers are implemented at distribution layer
  • SSL offloading devices terminate Secure Socket Layer sessions
  • Firewalls control/filter access
  • Intrusion Detection/Intrusion Prevention devices used to detect/prevent attacks

Campus LAN QoS Consideration

  • Access layer marks frames/packets for QoS policies in distribution layer
  • Classification is done via ISL or 802.1q tagging by setting Class of Service (CoS) bits
  • Traffic should be marked as close as possible to source

Multicast Traffic Consideration

  • Internet Group Management Protocol (IGMP) is the protocol used between hosts and the local router or Layer 3 switch
  • IGMP messages use IP protocol number 2, and messages are limited to the local interface and not routed
  • Hosts report multicast membership to local routers to receive multicast traffic
  • End hosts in campus LAN may be flooded with unwanted multicast traffic if measures are not taken to prune/bound traffic.
  • Cisco Group Management Protocol (CGMP) and IGMP Snooping are solutions to unwanted multicast traffic issue
CGMP is a Cisco-proprietary protocol used to control multicast traffic at Layer 2. Because Layer 2 switches are unaware of Layer 3 IGMP messages, they cannot stop multicast traffic from going to all ports. CGMP allows a Layer 2 switch to learn from the local router the MAC addresses of hosts that subscribe to a multicast group. The router must also be configured to use CGMP to pass this information to the Layer 2 switches.

IGMP Snooping also allows multicast traffic to be controlled at Layer 2, and is now the preferred method. With IGMP snooping, switches listen to IGMP messages between hosts and routers. If a host sends an IGMP membership report toward the router, the switch adds that host's port to the multicast group and permits the port to receive the multicast traffic. If the host sends an IGMP leave message, the traffic is no longer forwarded. To snoop, the switch must listen to all IGMP messages, which may negatively impact CPU usage.
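The snooping behavior just described amounts to a per-group membership table. A minimal sketch (class and method names are invented for illustration; real switches also track queriers, timers and report suppression):

```python
class IgmpSnoopingSwitch:
    """Toy IGMP snooping: track member ports per multicast group and
    forward group traffic only to those ports plus the router port."""
    def __init__(self, router_port):
        self.router_port = router_port
        self.groups = {}  # group address -> set of member ports

    def on_membership_report(self, group, port):
        self.groups.setdefault(group, set()).add(port)   # host joined

    def on_leave(self, group, port):
        self.groups.get(group, set()).discard(port)      # host left

    def forward_ports(self, group):
        # only members and the router port, instead of flooding the vlan
        return self.groups.get(group, set()) | {self.router_port}

sw = IgmpSnoopingSwitch(router_port=24)
sw.on_membership_report("239.1.1.1", 1)
sw.on_membership_report("239.1.1.1", 2)
print(sw.forward_ports("239.1.1.1"))  # {1, 2, 24}
```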


Tuesday, January 29, 2013

CCDA Notes: Enterprise LAN Design (LAN Hardware)

LAN Hardware

LAN devices are categorized based on the layer of the OSI model in which they operate
  • Repeaters
  • Hubs
  • Bridges
  • Routers
  • Layer 2 switches
  • Layer 3 switches


Repeaters are layer 1 devices with no awareness of what traverses them. Their main use is to receive traffic, amplify it and send it out of all ports. The basic rule of Ethernet repeaters is the 5-4-3 rule: the maximum path between any two hosts should be no more than five segments, with no more than four repeaters between them, and with no more than three of the segments populated with other hosts. Repeating generates latency when propagating traffic. When designing Ethernet networks, repeaters must be taken into account when determining the 512-bit time for collision detection.


Hubs are basically repeaters with more ports, which were introduced to be installed in wiring closets for aggregation. Follow other rules for repeaters as above.


Bridges connect two segments of a network, and are different from repeaters because they are intelligent and operate at layer 2. Bridges control collision domains and learn MAC addresses of hosts on segment and on which interface their traffic comes into the bridge. In this way they lower total traffic on segments, because they learn on which segments hosts reside and will transmit only out of that interface to that segment. If a bridge has not learned a MAC it will flood the incoming frame out of all ports except that on which it was received, and when the answer comes in the bridge will learn the MAC/interface. They will also not forward frames to other segments that are destined for hosts on the same segment.

Bridges are store-and-forward devices, which store an entire frame, perform a CRC check to verify its integrity and then forward it on if it passes. Bridges are designed to flood all unknown and broadcast traffic.
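The learning, filtering and flooding behavior described above can be captured in a few lines. A toy model (port numbers and MAC strings are illustrative), ignoring aging timers and CRC checks:

```python
class LearningBridge:
    """Toy learning bridge: learn source MAC -> port, forward known
    unicasts out one port, flood unknowns and broadcasts elsewhere."""
    BROADCAST = "ff:ff:ff:ff:ff:ff"

    def __init__(self, ports):
        self.ports = set(ports)
        self.table = {}  # learned MAC -> port

    def receive(self, in_port, src_mac, dst_mac):
        """Return the set of ports this frame is forwarded out of."""
        self.table[src_mac] = in_port            # learn the sender's segment
        out = self.table.get(dst_mac)
        if out is None or dst_mac == self.BROADCAST:
            return self.ports - {in_port}        # unknown/broadcast: flood
        if out == in_port:
            return set()                         # same segment: filter
        return {out}                             # known unicast: one port

b = LearningBridge({1, 2, 3})
print(b.receive(1, "mac-A", "mac-B"))  # {2, 3}  (B unknown, flood)
print(b.receive(2, "mac-B", "mac-A"))  # {1}     (A was learned on port 1)
```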

Because of this, bridges use the Spanning Tree Protocol (STP) to build a loop-free topology so that broadcast traffic will not circulate endlessly, consuming resources and saturating links. STP elects a root bridge from all bridges participating in spanning tree, and then uses that root bridge's location in the topology to determine which redundant links should be shut down. Root bridge election is based on priority, with the lowest priority being elected as root; if all bridges have equal priority, the lowest MAC address value is used to elect the root bridge. After the root bridge is elected, each other bridge determines its best path to reach the root and shuts down any other links. These links remain available should the primary path fail; they are just blocked. If the link to the root fails, the bridge goes through a convergence period in which it tries to reach the root over other paths, learns MAC addresses if possible and then activates the new best path, blocking any other links that remain. Physical changes to the network force spanning tree to reconverge.
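The election rule reduces to comparing bridge IDs, where lowest priority wins and the MAC address breaks ties. A sketch with made-up bridge names and addresses:

```python
def elect_root(bridges):
    """Elect the STP root bridge: lowest priority wins, ties broken by
    lowest MAC address -- i.e. the minimum (priority, MAC) pair."""
    return min(bridges, key=lambda b: (b["priority"], b["mac"]))

bridges = [
    {"name": "sw1", "priority": 32768, "mac": "00:1a:00:00:00:02"},
    {"name": "sw2", "priority": 32768, "mac": "00:1a:00:00:00:01"},
    {"name": "sw3", "priority": 4096,  "mac": "00:1a:00:00:00:09"},
]
print(elect_root(bridges)["name"])  # sw3 (lowest priority wins outright)
```

With sw3 removed, sw1 and sw2 tie on priority and sw2 wins on the lower MAC, which is why lowering the priority on a chosen distribution switch is the usual way to pin the root deterministically.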

Layer 2 Switches

Switches are an evolution of bridges; they use special application-specific integrated circuits (ASICs) to reduce the latency that regular bridges have. Some switches run in cut-through mode, reading only the destination MAC address and forwarding the frame without checking the CRC. This speeds switching time but increases the likelihood of forwarding bad frames. Most modern switches use the store-and-forward method. Each port on a switch is a separate collision domain, so it has no need for CSMA/CD on the line and can operate at full duplex. Each switch is one broadcast domain, meaning any ports in a VLAN will receive broadcasts sent from that VLAN. Switches also learn MAC addresses and use STP to avoid loops in the network.


Routers

Routers are layer 3 devices that make forwarding decisions based on network addresses (IP addresses). When an Ethernet frame enters a router interface, the layer 2 header is removed; the router inspects the layer 3 address, adds the layer 2 header for its outgoing interface, and forwards the packet. Routers do not forward layer 2 broadcasts out other interfaces. A router defines layer 3 broadcast domains based on the IP address and subnet of its interfaces. Routers are aware of the network protocol and so can forward routed protocols such as IP and IPX. Each interface of a router is its own collision and broadcast domain.

Routers can share network route information using a routing protocol to expand their list of known networks and the best routes to reach them. The following are some well-known routing protocols:
  • OSPF
  • BGP
  • RIP
  • IS-IS
Since routers can translate between layer 2 protocols, they can be used to connect networks of different media types, such as Ethernet to Token Ring or Ethernet to Serial. Because they are protocol-aware, routers can also be configured to filter on ports and IP addresses, and they support hierarchical addressing and multicast routing.

Layer 3 Switches

LAN switches that can run layer 3 network protocols are Layer 3 switches, also called multilayer switches because they both route and switch. Layer 3 switches have LAN interfaces that can switch network layer packets in hardware, which greatly increases the speed of traffic flow. Using ASICs to cache route information allows hardware switching of packets without consulting the routing table for every packet. With routing processor power saved, the switch can perform advanced packet features when needed, such as security filtering and intrusion detection. As with routers, each port is its own collision domain, and ports can be grouped into broadcast domains by subnet. Routing protocols can be implemented on layer 3 switches to exchange routing information.

Monday, January 28, 2013

CCDA Notes: Enterprise LAN Design (LAN Media)

Enterprise LAN Design

LAN Media

Ethernet Design Rules

Scalability Constraints for 802.3:

10BASE5 (Thicknet)
  • Bus Topology
  • 500 meter maximum segment length
  • 100 maximum attachments per segment
  • 2500 meters of five segments and four repeaters, of which only three segments can be populated as maximum collision domain
10BASE2 (Thinnet)
  • Bus Topology
  • 185 meter maximum segment length
  • 30 maximum attachments per segment
  • 2500 meters of five segments and four repeaters, of which only three segments can be populated as maximum collision domain
10BASET (Ethernet)
  • Star Topology
  • 100 meters from hub to station
  • 2 maximum attachments per segment (hub and station or hub - hub)
  • 2500 meters of five segments and four repeaters, of which only three segments can be populated as maximum collision domain
100BASET (Fast Ethernet)
  • Star Topology
  • 100 meters from hub to station
  • 2 maximum attachments per segment (hub and station or hub - hub)
  • Maximum collision domain is dependent on repeater technology but in general can only have two repeaters. Most networks use switches instead of repeaters
The main design rule for Ethernet is that the round-trip propagation delay in a single collision domain must not exceed 512 bit times in order for collision detection to work correctly. The maximum round-trip delay for 10-Mbps Ethernet is 51.2 microseconds; for 100-Mbps Ethernet it is only 5.12 microseconds, because the bit time shrinks from 0.1 to 0.01 microseconds.
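The arithmetic behind those numbers is just bit time = 1 / bit rate:

```python
# The 512-bit-time rule worked out numerically.
# At 10 Mbps one bit takes 0.1 us; at 100 Mbps, 0.01 us.

for name, rate_mbps in [("10BASE-T", 10), ("100BASE-TX", 100)]:
    bit_time_us = 1 / rate_mbps          # microseconds per bit
    max_rtt_us = 512 * bit_time_us       # 512-bit round-trip budget
    print(f"{name}: bit time {bit_time_us:g} us, max round-trip {max_rtt_us:g} us")
# 10BASE-T: bit time 0.1 us, max round-trip 51.2 us
# 100BASE-TX: bit time 0.01 us, max round-trip 5.12 us
```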

100-MBPS Fast Ethernet Design Rules

Uses CSMA/CD (Carrier Sense Multiple Access with Collision Detection) and UTP/fiber cabling. Speed/distance constraints are tighter with Fast Ethernet because delays must be shorter to meet the 512-bit-time rule (5.12 microseconds). Cabling specifications follow:
  • 100BASE-TX
  • 100BASE-T4
  • 100BASE-FX

100BASE-TX Fast Ethernet

100BASE-TX requires no special cabling beyond what many 10-Mbps Ethernet installations already use: Cat5 UTP wiring with RJ-45 connectors. It utilizes only two pairs of the four-pair UTP wiring. Punchdown blocks in the wiring closet, if used, must be Cat5 certified. Uses 4B5B coding.

100BASE-T4 Fast Ethernet

100BASE-T4 is not widely deployed. It supports Cat3, Cat4, and Cat5 UTP. To support older wiring, three of the four wire pairs carry data, with the fourth reserved for collision detection. Since there are no separate transmit/receive pairs, this cabling cannot run at full duplex. Uses 8B6T coding.

100BASE-FX Fast Ethernet

100BASE-FX is a fiber cabling standard. Operates over two strands of multimode or single-mode fiber with media interface connectors (MIC), Stab and Twist (ST), or Stab and Click (SC) fiber connectors. Fiber can transmit over greater distances than copper. Uses 4B5B coding.

100BASE-T Repeaters

Fast Ethernet limited to two repeaters. General rule is that Fast Ethernet has maximum diameter of 205 meters with UTP cabling. Since switches are used instead of repeaters in modern networks, effective length of cabling is 100 meters between host and switch.

Gigabit Ethernet Design Rules

802.3z-1998 specifies Gigabit Ethernet over fiber and shielded copper and introduces the GMII (Gigabit Media-Independent Interface). 802.3ab-1999 specifies operation of Gigabit Ethernet over Cat5 UTP. Both are rolled into the latest revision, 802.3-2002. Gigabit Ethernet still uses the same framing methods, CSMA/CD, and full-duplex communication. Fiber and shielded-copper (1000BASE-CX) Gigabit Ethernet use 8B10B coding; 1000BASE-T over UTP uses a five-level coding scheme instead.

Scalability Constraints/Specifications for Gigabit Ethernet

1000BASE-T
  • 100 meter maximum segment length
  • Cat5, four-pair UTP media
1000BASE-LX Long Wavelength
  • 62.5 micrometer multimode fiber: 440 meter maximum segment length
  • 50 micrometer multimode fiber: 550 meter maximum segment length
  • 9 micrometer single-mode fiber: 5 kilometer maximum segment length
  • Single-mode/multimode fiber media
1000BASE-SX Short Wavelength
  • 62.5 micrometer multimode fiber: 220 meter maximum segment length
  • 50 micrometer multimode fiber: 500 meter maximum segment length
  • Multimode fiber media
1000BASE-CX Gigabit Over Shielded Copper
  • 25 meter maximum segment length
  • Used mainly for server connections
  • Shielded balanced copper media
1000BASE-T Gigabit Over UTP
  • Cat5, 4-pair UTP
  • Maximum length 100 meters
  • Five-level coding scheme
  • 1 byte is sent over 4 pairs at 125 MHz

10 Gigabit Ethernet Design Rules

The 802.3ae supplement to the 802.3 standard defines 10 Gigabit Ethernet, specified for full-duplex operation over fiber, UTP, and copper. It disallows hubs/repeaters, which operate in half-duplex mode. The distances covered are consistent with MAN (Metropolitan Area Network) and WAN designs, and also suit data centers/server farms and corporate backbones.

10GE Media

  • 10GBASE-SR: short-wavelength multimode fiber using 64B/66B encoding; 300 meter maximum distance
  • 10GBASE-SW: short-wavelength multimode fiber using the WAN Interface Sublayer (WIS); 300 meter maximum distance
  • 10GBASE-LR: long-wavelength single-mode fiber using 64B/66B encoding; 10 kilometer maximum distance
  • 10GBASE-LW: long-wavelength single-mode fiber using WIS; 10 kilometer maximum distance
  • 10GBASE-ER: extra-long-wavelength single-mode fiber using 64B/66B encoding; 40 kilometer maximum distance
  • 10GBASE-EW: extra-long-wavelength single-mode fiber using WIS; 40 kilometer maximum distance
  • 10GBASE-LX4: wavelength-division multiplexing to leverage SMF and MMF using 8B/10B encoding; 10 kilometer maximum distance
  • 10GBASE-CX4: four pairs of twinax copper; 15 meter maximum distance
  • 10GBASE-T: Cat6a UTP; 100 meter maximum distance


EtherChannel

Cisco EtherChannel increases bandwidth and provides link redundancy by bundling links of like speed (FastEthernet, Gigabit Ethernet, or 10GE) into a single logical port that load balances across all physical links. A channel can be formed from up to eight compatibly configured ports, which must share the same speed, duplex, and VLAN configuration.
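The per-flow link selection can be sketched as a hash over address fields (a simplified illustration, not Cisco's actual load-balancing algorithm; the interface names are invented):

```python
# EtherChannel-style load balancing sketch: hash a flow's addresses onto
# one member link so a given flow always rides the same physical link,
# while different flows spread across the bundle.

def pick_member_link(src_ip, dst_ip, links):
    # Deterministic within a run: same flow -> same member link.
    return links[hash((src_ip, dst_ip)) % len(links)]

links = ["Gi1/0/1", "Gi1/0/2", "Gi1/0/3", "Gi1/0/4"]
flow = ("10.1.1.5", "10.2.2.9")

# The same flow never reorders across links; verify determinism.
assert pick_member_link(*flow, links) == pick_member_link(*flow, links)
print(pick_member_link(*flow, links) in links)  # True
```

Keeping a flow pinned to one link avoids out-of-order delivery, which is why real channel hashing is per-flow rather than per-packet.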

Comparing Campus Media

Copper/UTP
  • Up to 10GBPS
  • Up to 100 meters
  • Inexpensive
Multimode Fiber
  • Up to 10GBPS
  • Up to 2 kilometers (FastEthernet)
  • Up to 550 meters (GigabitEthernet)
  • Up to 300 meters (10GigabitEthernet)
  • Moderate cost
Single-mode Fiber
  • Up to 10GBPS
  • Up to 100 kilometers (FE)
  • Up to 5 kilometers (GE)
  • Up to 40 kilometers (10GE)
  • Moderate to expensive cost
Wireless LAN
  • Up to 300MBPS
  • Up to 500 meters at 1MBPS
  • Moderate cost


Monday, January 21, 2013

CCDA Notes: Network Structure Models

Network Structure Models

Hierarchical Network Models

Hierarchical models use layers to simplify internetworking tasks, with each layer focusing on specific functionality. This allows choosing the correct features for each layer. This model applies to both LAN and WAN designs.


Benefits of the hierarchical model:
  1. Cost Savings: Not trying to do it all on one routing/switching platform. Reduces the need for advance bandwidth provisioning
  2. Ease of Understanding: Layered model easier to understand, different reporting/management can be distributed to different layers to help control management costs
  3. Modular Network Growth: Modularity allows replication as network grows and only small subsets require upgrade/replacement at a time
  4. Improved Fault Isolation: Transition points in network are easier to troubleshoot because network is segmented
Modern routing protocols were designed with the hierarchical model in mind. Route summarization is facilitated by this model and becomes more difficult without clear hierarchical boundaries.

Hierarchical Network Design

  • Core: Fast transport between distribution devices within enterprise campus network
  • Distribution: Provides policy-based / Layer 3 connectivity
  • Access: Provides workgroup/users access to network
Core Layer

Fast-switching, backbone for network. Requires:
  • Fast transport
  • Redundancy
  • Reliability
  • Manageability
  • No CPU-intensive processes
  • QoS (if implemented)
  • Limited number of hops from edge to edge (workstation to server, etc)
Distribution Layer

Isolation point between access layer and core, implements many features:
  • Policy-based connections (ACLs, traffic policy)
  • Redundancy/load balancing
  • Aggregate access layer devices
  • Aggregate WAN connections (if connected here)
  • QoS
  • Security filters
  • Route summarization
  • Layer 3 interface/Inter-Vlan routing
  • Media translation (if needed between ethernet/token ring, etc)
  • Routing protocol redistribution
  • Demarcation between static/route protocols
Using Cisco IOS software features further policies can be applied:
  • Route filtering, static routing, QoS mechanisms like queueing

Access Layer

User access to local segments of network via switches. Other features of this layer:
  • High availability
  • Port security
  • Broadcast suppression (via vlan segmentation)
  • QoS Marking/Trust boundary classification
  • Rate limiting/policing
  • ARP inspection
  • VACLs (Vlan ACLs)
  • Spanning tree
  • PoE and auxiliary vlans for VOIP
  • Other auxiliary vlans
Hierarchical Model Examples

Traditional Model

The access layer uplinks at Layer 2 to redundant distribution switches, which provide the Layer 3 boundary; spanning tree blocks one of each pair of redundant uplinks.

Routed Hierarchical Design

As above, but layer 3 switching is pushed down to the access layer instead of the distribution layer. Route summarization is configured on interfaces pointed toward the core, while route filtering is configured toward the access layer. Because the links to the distribution layer are routed, traffic can be load balanced across them, versus spanning tree, which disables one link.

If Cisco 6500 switches with VSS (Virtual Switching System) Supervisor 720-10G are available, two redundant distribution switches can be configured as one logical switch. The two distribution switches are connected by a 10Gig link called Virtual Switch Link. Benefits are as follows:
  • Layer 3 switching can be used toward access layer
  • Scales bandwidth to 1.44TBPS
  • Simplifies management of single configuration on VSS
  • Increased bandwidth between access/distribution layer gives better return on investment
  • No new chassis required (assuming you have 2 6500 chassis with these supervisor modules)

Cisco Enterprise Architecture Model

Modular approach to design, divides network into functional areas/modules. These areas/modules are:
  •  Enterprise Campus Module
  •  Enterprise Data Center module
  •  Enterprise Branch module
  •  Enterprise Teleworker module
Enterprise Architecture model maintains concepts of access/distribution components connecting users utilizing high-speed core

Enterprise Campus Module

  • Campus Core
  • Server Farm/Data Center
  • Building Distribution
  • Building Access
Campus core provides high-speed backbone between buildings, server farm towards enterprise edge, has redundant/fast-converging connectivity

Building distribution aggregates access and performs QoS, access control, route redundancy and load balancing

Building access provides user access, vlan control, auxiliary vlans and PoE for VOIP, spanning tree

Server Farm/Data Center provides high speed access and high availability of services

Enterprise Edge Area
  • E-commerce networks/servers
  • Internet/DMZ
  • VPN/Remote access
  • Enterprise WAN
The e-commerce module describes highly available networks for business services, combining the high-availability design of the server farm with the Internet connectivity module. Devices within this submodule include:
  • Web/App servers - Primary user interface for e-commerce
  • Database servers - Application/transaction information
  • Firewall/Firewall routers - Govern communication between users and the e-commerce network
  • IPS - Monitor key network segments for attacks
  • Multilayer switch utilizing IPS module - Traffic transport/integrated security monitoring

Internet/DMZ Module provides public servers, email, DNS. Connectivity to ISP included in this module. Other components include:
  • Firewall/Firewall routers - Protect resources, stateful filtering, VPN termination for remote sites/users
  • Internet edge routers - Provide WAN connectivity, basic filtering
  • FTP/HTTP servers - Provide web applications that interface the enterprise with the Internet
  • SMTP relay servers - Relays mail to/from Internet to/from local email servers
  • DNS servers - Authoritative external DNS server for enterprise, relay internal requests to Internet
Multihoming provides redundancy for Internet connectivity. Options include:
  1. Single router/dual links to one ISP
  2. Single router/dual links to two ISPs
  3. Dual routers/dual links to one ISP
  4. Dual routers/dual links to two ISPs

VPN/Remote access provides RA termination services, including authentication for remote users/sites. Components include:
  • Firewalls - Stateful filtering of traffic, authenticate remote users, provide tunnel connectivity
  • Dial-in access concentrators - Terminate legacy dialup and authenticate those users
  • Cisco ASA - Terminate IPSec tunnels and authenticate individual users, also firewall/IPS services
  • Network IPS - Proactively monitor network for attacks

Enterprise WAN is the edge module that connects to ISPs/WAN. WAN technologies include:
  • MPLS (Multiprotocol Label Switching)
  • Metro Ethernet
  • Leased Lines
  • SONET and SDH
  • PPP/Frame Relay
  • ATM
  • Cable/DSL
  • Wireless
Guidelines for designing Enterprise edge:
  • Determine the connection needed to connect the enterprise to the Internet; assign it to the Internet module
  • Create e-commerce module for customers and partners that require Internet access to business/database applications
  • Design Remote Access/VPN module for VPN access to internal network. Implement security and authentication, authorization parameters
  • Assign edge sections with permanent connections to remote branch offices to WAN/VPN module

Service Provider Edge Module consists of SP edge services such as:

  • Internet service
  • PSTN (Telephone)
  • WAN services

Remote Module consists of:
  •  Enterprise branch
  •  Enterprise Data Center
  •  Enterprise Teleworker
Enterprise Branch module consists of remote offices that rely on the WAN to connect back to main office for services. Commonly uses MPLS/WAN or IPSEC VPN tunneling to connect

Enterprise Data Center module uses network to leverage services, storage, applications. Components of data center include:
  • Network infrastructure - Gigabit/10GE, Infiniband, optical transport, storage switching
  • Interactive services - Computer infrastructure, storage services, application optimization
  • DC management - Cisco Fabric manager, Cisco VFrame for server/service management
Enterprise Teleworker module involves small office or mobile user who needs access to main campus, often utilizing VPN client. Cisco Virtual Office offers solution that is centrally managed using small integrated service routers (ISR). VOIP capability included in Virtual Office for teleworkers

Borderless Network Services

Cisco next-generation network architecture solution which enables connectivity to anyone/anything from anywhere at any time. Connectivity needs to be secure, reliable, seamless.
  • Mobility: Cisco Motion delivers anywhere/anytime access to information for mobile users from any device. Also provides detection, location, classification, and mitigation of sources of wireless interference
  • Security: Cisco TrustSec provides foundation for identity-directed and policy-based access. Uses Cisco ASA, Cisco Virtualization Security, and Cisco AnyConnect for endpoints/users. Cisco SAFE blueprint provides design/implementation guidelines for building secure/reliable architecture
  • Application Performance: Application Velocity optimizes speed/performance of any application by using Wide Area Application Services (WAAS)
  • Voice/Video (IP Communication): Medianet for Enterprise optimizes multimedia through automatic endpoint configuration and optimized network configuration. Reduces video deployment time and provides multicast video

High Availability Network Services

Design redundancy for critical systems/services wherever possible. Consider following types of redundancy:
  • Workstation to router redundancy in building access layer
  • Server redundancy in server farm module
  • Route redundancy within/between network components
  • Link media redundancy in access layer

Workstation to Router Redundancy and LAN High Availability Protocols

  • ARP: Proxy ARP allows a router to respond with its own MAC address to ARP requests for destinations it knows how to reach
  • Explicit Configuration: Configure workstation with IP of default gateway
  • ICMP Router Discovery Protocol (IRDP): RFC 1256 specifies an extension to ICMP that allows a workstation to learn a router's address
  • RIP: IP workstation can run RIP to learn about routers, should be set to passive if used at all
  • HSRP: Workstation can be configured with default gateway IP, two routers can share that virtual IP which provides default gateway that is fault tolerant
  • VRRP: Router redundancy protocol that dynamically assigns responsibility for a virtual router to one of the participating VRRP routers. The elected master forwards traffic sent to the virtual IP, and any other participating VRRP router can take over forwarding if failover is needed
  • GLBP: Provides first-hop redundancy plus load balancing between the redundant routers. It uses a single virtual IP with multiple virtual MAC addresses; as ARP requests for the virtual IP come in, the MAC address of one of the GLBP routers is handed out in turn. GLBP has several benefits:
  1. Load Sharing
  2. Multiple virtual routers
  3. Preemption
  4. Authentication
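The load-sharing idea can be sketched as round-robin ARP replies for a single virtual IP (a simplified model of GLBP's active virtual gateway behavior; the MAC values are invented):

```python
# GLBP-style load sharing sketch: every host ARPs for the same virtual
# gateway IP, but each reply hands out the virtual MAC of a different
# group member in round-robin order, spreading traffic across routers.

from itertools import cycle

virtual_ip = "192.168.1.1"
virtual_macs = ["0007.b400.0101", "0007.b400.0102", "0007.b400.0103"]
responder = cycle(virtual_macs)

def arp_reply(requested_ip):
    """Reply to an ARP for the virtual IP with the next forwarder's MAC."""
    assert requested_ip == virtual_ip
    return next(responder)

# Three hosts ask for the same gateway IP but receive different MACs.
print([arp_reply(virtual_ip) for _ in range(3)])
# ['0007.b400.0101', '0007.b400.0102', '0007.b400.0103']
```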

Server Redundancy

Servers may be mirrored for redundancy and replicate data between them. Can also deploy Cisco Unified Communications Manager servers for redundancy. These servers should be on different networks and utilize redundant power supplies. Options for server implementation in the server farm include:
  • Single attachment - Not recommended as it requires alternate mechanisms (HSRP, VRRP, GLBP) to find alternate router
  • Dual attachment - Solution increases availability by utilizing redundant NICs
  • Fast Etherchannel and Gigabit Etherchannel  port bundles

Route Redundancy

Redundant routes have two purposes: Load balancing and increasing availability

Load Balancing

Most routing protocols load balance across parallel links of equal cost; some can be configured to balance across unequal-cost links, or more links can be added to balance. To support load balancing, keep bandwidth consistent within a layer of the hierarchical model.

A hop-based routing protocol will load balance across unequal-bandwidth links as long as the hop count is equal. Once the slower link saturates it begins dropping packets, and the router will not automatically shift traffic to use only the higher-speed link. This is called pinhole congestion; it can be avoided by provisioning equal-bandwidth links or by using a routing protocol that takes bandwidth into account.

How a Cisco router load balances IP traffic depends on whether it is process switching or using fast/NetFlow switching. Process switching makes a forwarding decision for each packet, whereas fast/NetFlow switching balances per destination because the forwarding decision is cached.
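The difference can be sketched as per-packet versus per-destination path selection (path names and the round-robin tie-break are illustrative):

```python
# Contrast per-packet (process-switching-style) and per-destination
# (fast/NetFlow-style) load balancing over two equal-cost paths.

from itertools import cycle

paths = ["Serial0", "Serial1"]

# Per-packet: alternate paths for every packet, regardless of destination.
rr = cycle(paths)
def per_packet(dst):
    return next(rr)

# Per-destination: the first packet picks a path, the choice is cached,
# and all later packets to that destination reuse the cached path.
cache = {}
picker = cycle(paths)
def per_destination(dst):
    if dst not in cache:
        cache[dst] = next(picker)
    return cache[dst]

pkts = ["10.1.1.1", "10.1.1.1", "10.2.2.2", "10.1.1.1"]
print([per_packet(d) for d in pkts])
# ['Serial0', 'Serial1', 'Serial0', 'Serial1']  -- alternates per packet
print([per_destination(d) for d in pkts])
# ['Serial0', 'Serial0', 'Serial1', 'Serial0']  -- sticks per destination
```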

Increasing Availability

Bandwidth should be kept consistent to ease load balancing, but redundant routes also increase availability because more paths to a destination exist. Routing protocols converge faster on equal-cost links. Mesh network designs are fault-tolerant because multiple links connect network devices. If a single link fails connectivity is minimally (or not at all) impacted.

Number of links in a full mesh is n(n-1)/2 where n is the number of devices
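Worked out for a few network sizes (plain arithmetic, no assumptions):

```python
# Full-mesh link count n(n-1)/2, showing how quickly circuit cost grows.

def full_mesh_links(n):
    return n * (n - 1) // 2

for n in (4, 10, 20, 50):
    print(n, "devices ->", full_mesh_links(n), "links")
# 4 devices -> 6 links
# 10 devices -> 45 links
# 20 devices -> 190 links
# 50 devices -> 1225 links
```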

A full mesh is very expensive to implement in WANs because of the cost of circuit links. With more mesh links, the CPU and bandwidth overhead for routing protocols and broadcast traffic also increases. Since broadcast traffic should consume no more than 20 percent of a link, the number of routers exchanging routing information should be limited; 80 percent of link bandwidth should be reserved for data, voice, and video traffic. Redundancy planning should follow the hierarchical design with a partial mesh: mesh access to distribution and distribution to core.

Link Media Redundancy

In mission-critical applications it may be necessary to provide redundant media. Switches can be connected to each other, but need spanning tree to bound broadcast traffic. WAN links can be made redundant with redundant links to WAN providers or to backup WAN providers. May provision backup route as a floating static route (static route with very high administrative distance that will only be installed into routing table if primary link fails).
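Floating-static behavior can be sketched as choosing the lowest administrative distance among usable routes (the AD values follow common Cisco defaults; the route entries are invented):

```python
# Floating static route sketch: among candidate routes to the same
# prefix, install the one with the lowest administrative distance.
# The high-AD static backup wins only when the primary is down.

routes = [
    {"prefix": "10.0.0.0/8", "source": "OSPF (primary)", "ad": 110, "up": True},
    {"prefix": "10.0.0.0/8", "source": "floating static", "ad": 250, "up": True},
]

def installed_route(routes):
    candidates = [r for r in routes if r["up"]]
    return min(candidates, key=lambda r: r["ad"])

print(installed_route(routes)["source"])   # OSPF (primary)
routes[0]["up"] = False                    # primary link fails
print(installed_route(routes)["source"])   # floating static
```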

Cisco also supports Multilink Point-to-Point Protocol (MPPP) which aggregates multiple WAN links into single logical channel. This increases bandwidth and provides link redundancy.