
Monday, February 4, 2013

CCDA Notes: Data Center Design

Enterprise Data Center Architectures


Data centers originally used mainframes to centrally process data, with users connecting via terminals to do their work on the mainframe (Data Center 1.0).

Data Center 2.0 introduced the concept of client/server connections and distributed computing. Business applications were installed on servers in the data center and accessed by users from their workstations. Application services were distributed because of the cost of WAN links and slow performance.

In Data Center 3.0, consolidation and virtualization are the main components. Because communication equipment has become cheaper and more computing power is available, the current move is toward consolidating services in data centers, which centralizes management and is more cost-effective than distributing services. The newer architecture takes advantage of server virtualization, which results in higher utilization of computing/network resources. This raises return on investment (ROI) and lowers total cost of ownership (TCO).

Data Center 3.0 Components

Virtualization
  • Virtual local area networks (VLANs), virtual storage area networks (VSANs), and virtual device contexts (VDCs) help segment LAN, SAN, and device instances
  • Cisco Nexus 1000V virtual switch for VMware ESX/ESXi helps with policy control and visibility of virtual machines (VMs)
  • Flexible network options that support multiple server form factors/vendors including those with integrated Ethernet/Fibre channel switches
Unified Fabric
  • Fibre Channel over Ethernet (FCoE) and Internet Small Computer Systems Interface (iSCSI) are two methods to implement unified fabric in the data center over 10 Gigabit Ethernet networks
  • FCoE is supported on VMware ESX/ESXi vSphere 4.0 and up
  • Cisco Catalyst/Nexus/MDS families of switches support iSCSI. The Cisco Nexus 5000 supports unified fabric lossless operation, which improves iSCSI performance over 10 Gigabit Ethernet
  • Cisco Nexus switches were created to support unified fabric. The Nexus 4000/5000 support data center bridging (DCB) and FCoE; the Nexus 7000 and Cisco MDS switches will as well in the future
  • Converged network adapters (CNAs) run at 10GE speeds and support FCoE. They are available from Emulex and QLogic, and software stacks for certain 10GE interfaces are available from Intel
Unified Computing
  • Cisco Unified Computing System (UCS) is a next-generation platform designed to converge computing, storage, network, and virtualization into one system
  • Integrates lossless 10GE unified network fabric with x86-based servers
  • Allows Cisco Virtual Interface Cards to virtualize network interfaces on servers
  • Cisco VN-Link virtualization
  • Supports extended memory technology patented by Cisco
  • Uses just-in-time provisioning with service profiles to increase productivity
At the top layer of the architecture, the virtual machines are software entities that run on top of hypervisors, which emulate the underlying hardware. Then there are the unified computing resources, within which service profiles define the identity of the server. The identity includes hardware settings such as allocated memory and CPU, network card information, boot image, and storage. 10GE, FCoE, and Fibre Channel technologies provide the unified fabric, supported by the Cisco Nexus 5000. FCoE allows native Fibre Channel frames to travel over 10GE networks. VLAN/VSAN technology segments multiple LANs and SANs on the same physical equipment. At the lowest layer is the virtualized hardware, where storage devices can be virtualized into storage pools and network devices are virtualized using virtual device contexts.
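
As a rough illustration of how a VSAN is mapped onto a VLAN for FCoE on a Nexus 5000, the following NX-OS sketch shows a converged server port carrying both LAN and storage traffic (the commands are standard FCoE configuration, but the VLAN/VSAN numbers and interface IDs are assumed for illustration):

    feature fcoe
    vsan database
      vsan 200
    vlan 200
      fcoe vsan 200                    ! carry VSAN 200 inside VLAN 200 for FCoE
    interface Ethernet1/10
      switchport mode trunk            ! converged 10GE port toward the CNA in the server
      switchport trunk allowed vlan 100,200
    interface vfc10
      bind interface Ethernet1/10      ! virtual Fibre Channel interface rides the 10GE port
      no shutdown
    vsan database
      vsan 200 interface vfc10         ! place the virtual FC interface in the VSAN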

Challenges in the Data Center

Data center requirements and mechanical specifications help to define the following:
  • Power needed
  • Physical rack space used
  • Limits on scaling
  • Management (resources, firmware)
  • Security
  • Virtualization support
  • Management effort required

Data Center Facility Considerations


  • Space available
  • Floor load capacity
  • Power/cooling capacity
  • Cabling infrastructure
  • Operating temperature and humidity level
  • Access to site, security alarms and fire suppression
  • Space for employees to move/work
  • Compliance with regulations such as Payment Card Industry (PCI), Sarbanes-Oxley (SOX), and Health Insurance Portability and Accountability Act (HIPAA)

Data Center Space

  • Number of employees who will support the data center
  • Number of servers and amount of storage/network gear needed
  • Space needed for non-infrastructure areas such as shipping/receiving, server/network staging, storage/break/bathrooms, and office space
Other considerations related to equipment rack/cabinet space:
  • Weight of rack/equipment
  • Heat expelled from equipment
  • Amount and type of power required (UPS/RPS)
  • Loading, which determines what/how many devices can be installed

Data Center Power

Desired power reliability drives requirements, which may include multiple redundant power feeds from the utility, backup generators, and redundant power supplies. Power in the data center is used to power and cool the devices in the data center. The power system also needs to protect against power surges, failures, and other electrical problems. Key points of a power design:
  • Define overall power capacity
  • Provide the physical electrical infrastructure and address redundancy

Data Center Cooling

Cooling is used to control humidity and temperature in order to extend the lifespan of devices. High-density rack design should be weighed against heating considerations. Smaller form-factor servers allow more to be placed into a rack, but airflow and cooling must be accounted for. Cabinets and racks should be organized into 'cold' and 'hot' aisles. In cold aisles, the fronts of devices face each other across the aisle; in hot aisles, the backs of devices face each other. Cold aisles should have perforated floor tiles through which cold air is blown; the cold air is drawn into the fronts of the devices, flushing hot air out the back into the hot aisles. Hot aisles should have no perforated tiles, which keeps hot and cold air from mixing and diluting the cooling effect.

If equipment does not exhaust heat to the rear, other cooling techniques can be leveraged:
  • Block unnecessary air escapes to increase airflow
  • Increase height of raised floor
  • Spread equipment to unused racks
  • Use open racks rather than cabinets in places where security is not a concern
  • Use cabinets with meshed front/back
  • Use custom perforated tiles with larger openings to allow more cold airflow

Data Center Heat

Data center design must account for high-density servers and the heat they produce. Cooling design must take into account the proper sizing of servers and anticipated growth, along with the corresponding heat output. Options for addressing heat include:
  • Increase number of HVAC units
  • Increase airflow through devices
  • Increase space between racks/rows
  • Use alternative cooling technologies such as water-cooled racks

Data Center Cabling

Data center cabling is known as the passive infrastructure. The cabling plant connects everything together, terminating connections between devices and determining how devices communicate. Cabling must be easy to maintain, abundant, and capable of supporting different media types and connectors for proper operations.

The following considerations must be determined during design:
  • Media selection
  • Number of connections
  • Type of cable termination organizers
  • Space for cabling on horizontal/vertical cable trays
Cabling needs to avoid the following:
  • Inadequate cooling due to restricted airflow
  • Outages due to accidental disconnections
  • Unplanned dependencies
  • Difficult troubleshooting

Enterprise Data Center Infrastructure

Current enterprise data center design follows the Cisco multilayer (hierarchical) architecture, including access, aggregation, and core layers. This model supports blade servers, single rack-unit (RU) servers, and mainframes.

Defining Data Center Access Layer

The main purpose of the data center access layer is to provide Layer 2/3 physical port density for the various servers. The access layer also provides low-latency, high-performance switching that can support oversubscription requirements. Most data centers are built with Layer 2 connectivity, but Layer 3 (routed access) options are available. Layer 2 connectivity uses VLAN trunk uplinks so that aggregation services can be shared across the same VLAN on multiple access switches. Spanning Tree is used in Layer 2 access to avoid loops in the network; the recommended STP version is RPVST+.
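
A minimal sketch of the traditional Layer 2 access configuration (NX-OS syntax; VLAN and interface numbers are assumed for illustration):

    spanning-tree mode rapid-pvst            ! RPVST+ as the STP version
    vlan 10
      name WEB-SERVERS
    interface Ethernet1/1
      switchport mode access
      switchport access vlan 10              ! server-facing port
      spanning-tree port type edge           ! host port, equivalent to PortFast
    interface Ethernet1/49
      switchport mode trunk                  ! VLAN trunk uplink toward the aggregation layer
      switchport trunk allowed vlan 10,20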
The newer routed access design aims to contain Layer 2 at the access layer and avoid the use of STP. First-hop redundancy must be provided as the access switch becomes the first-hop router. Access layer benefits are as follows:
  • Port density for server farms
  • Supports single/dual-homed servers
  • High-performance, low-latency Layer 2 switching
  • Supports mix of oversubscription requirements

Defining Data Center Aggregation Layer

The aggregation (distribution) layer aggregates Layer 2/3 links from the access layer and connects upstream to the core layer. Layer 3 connectivity, if not implemented at the access layer, is typically used from the aggregation layer toward the core. The aggregation layer is a critical point for data center application and security services, including load balancing, SSL offloading, and firewall/IPS services. Depending on design requirements, the Layer 2/3 border could be in multilayer switches, firewalls, or content switching devices. Multiple aggregation layers can support different environments such as test and production, each with its own application and security requirements. First-hop redundancy is typically implemented in the aggregation layer if Layer 3 is not implemented at the access layer (a sample configuration follows the list below). Benefits of the aggregation layer are:
  • Aggregates traffic from data center access layer and connects to data center core
  • Supports advanced security/application services
  • Layer 4 services such as firewalls, IPS, SSL offloading and server load balancing
  • Large STP process load
  • Highly flexible/scalable
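
A minimal first-hop redundancy sketch using HSRP on an aggregation-layer SVI (NX-OS syntax; VLAN, addressing, and priority values are assumed for illustration):

    feature hsrp
    feature interface-vlan
    interface Vlan10
      no shutdown
      ip address 10.1.10.2/24
      hsrp 1
        priority 110                  ! make this aggregation switch the active gateway
        preempt
        ip 10.1.10.1                  ! virtual gateway address used by the servers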

Defining Data Center Core Layer

Data Center Core connects the campus core to the data center aggregation layer utilizing high-speed Layer 3 links. The core is a centralized Layer 3 routing layer to which the data center aggregation layers connect. Data center networks are summarized here and shared with the campus core, and default routes are injected into the data center aggregation layer from the data center core. Multicast traffic must also be allowed through the data center core to support a growing list of multicast applications.
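
One way to realize the summarization and default-route injection with OSPF (IOS-style syntax; the area number and prefix are assumed for illustration) is to make the aggregation block a totally stubby area on the data center core:

    router ospf 1
     area 10 stub no-summary               ! core (ABR) injects a default route toward aggregation
     area 10 range 10.10.0.0 255.255.0.0   ! summarize data center prefixes toward the campus core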

Data Center Core Drivers
  • 10 Gigabit Ethernet density: Are there enough 10 Gigabit Ethernet links to connect multiple aggregation layers together?
  • Administrative domains/policies: Separate cores help isolate campus distribution from data center aggregation for troubleshooting and QoS/ACL policies
  • Future growth: The impact and downtime involved in adding a data center core later make it important to provision the core adequately during the initial implementation
Characteristics of a Data Center Core
  • Low-latency switching
  • Distributed forwarding architecture
  • 10 Gigabit Ethernet
  • Scalable IP Multicast support

Virtualization Overview

Virtualization technology allows one physical device to emulate several, or several physical devices to emulate a single logical device. The modern data center is changing because of virtualization, and data center design is changing with it.

Virtualization Driving Forces
  • Need to reduce rising cost of powering/cooling devices while getting more productivity
  • Data center consolidation of assets performing individual tasks
  • Logical, separate user groups secured from other groups on same network
  • Eliminate underutilized hardware that has poor performance/price ratio

Virtualization Benefits
  • Better use of computing resources, higher server densities, simplified server migration
  • Flexibility and ease of management for adds/reassignments/repurposing of resources
  • Separation of groups utilizing same physical network, enabling traffic isolation
  • Ability to provide per-department security policy
  • Reduction in power/space needed
  • Increased uptime, decreased operational cost

Network Virtualization
  • VLAN
  • VSAN
  • VRF (Virtual Routing/Forwarding)
  • VPN
  • vPC (Virtual Port Channel)

Device Virtualization
  • Server virtualization (VM)
  • Cisco Application Control Engine (ACE) context
  • Virtual Switching System (VSS)
  • Cisco ASA firewall context
  • Virtual device contexts (VDC)

Virtualization Technologies


VSS
Virtual Switching System (VSS) is a network virtualization technology that allows two physical Cisco Catalyst 6500 series switches to act as a single logical switch. It is similar to the StackWise technology used on Cisco Catalyst 3750 switches, which allows multiple switches to be chained together into a single logical switch, but VSS is limited to two chassis.
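
A minimal sketch of the VSS-specific configuration on one of the two Catalyst 6500 chassis (domain, switch, and port-channel numbers are assumed for illustration):

    switch virtual domain 100
     switch 1                          ! the peer chassis is configured as switch 2
    interface Port-channel1
     switch virtual link 1             ! virtual switch link (VSL) to the peer chassis
    interface TenGigabitEthernet5/4
     channel-group 1 mode on           ! physical 10GE member of the VSL
    ! after mirroring the configuration on the peer, "switch convert mode virtual" merges the two chassis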

VRF
Virtual routing and forwarding virtualizes Layer 3 routing tables, allowing multiple routing tables to exist on a single device. In a Multiprotocol Label Switching (MPLS) VPN environment, VRFs allow multiple networks to exist on the same MPLS network. Routing information is contained in the VRF and is only visible to other routers participating in the same VRF instance. Because of this, duplicate IP addressing schemes can be used.
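
A minimal VRF sketch (IOS syntax; the VRF name, route distinguisher, and addressing are assumed for illustration):

    ip vrf CUSTOMER-A
     rd 65000:10                          ! route distinguisher keeps overlapping prefixes unique
    interface GigabitEthernet0/1
     ip vrf forwarding CUSTOMER-A         ! this interface's routes live in the CUSTOMER-A table
     ip address 10.1.1.1 255.255.255.0
    ip route vrf CUSTOMER-A 0.0.0.0 0.0.0.0 10.1.1.254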

vPC
Virtual Port Channel technology works by virtualizing two Cisco Nexus 7000 or Nexus 5000 series switches as a single logical switch. 10GE links connect the two physical switches, which then present themselves as a single logical switch for the purposes of port channeling. Although multiple redundant paths exist, the spanning-tree topology appears loop-free, which allows all links to be utilized.
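
A minimal NX-OS vPC sketch (domain ID, keepalive address, and port-channel numbers are assumed for illustration); the same vPC number is configured on both Nexus peers so the downstream device sees a single port channel:

    feature vpc
    feature lacp
    vpc domain 10
      peer-keepalive destination 192.168.1.2   ! heartbeat to the peer switch over the mgmt network
    interface port-channel1
      switchport mode trunk
      vpc peer-link                            ! 10GE links connecting the two Nexus peers
    interface port-channel20
      switchport mode trunk
      vpc 20                                   ! member port channel toward a downstream switch or server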

Device Contexts
Device contexts allow a single physical network device to host multiple virtual devices. Each context is its own instance with its own configuration, policies, network interfaces, and management. Most features available on a standalone device are also available within a context (a brief configuration sketch follows this list). These devices support contexts:
  • Cisco Nexus 7000 series switches
  • Cisco ASA Firewall
  • Cisco Catalyst 6500 Firewall Services Module (FWSM)
  • Cisco Application Control Engine Appliance
  • Cisco Catalyst 6500 Application Control Engine Module
  • Cisco IPS
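
A minimal virtual device context sketch on a Nexus 7000 (the VDC name and interface range are assumed for illustration):

    vdc Production
      allocate interface Ethernet2/1-8   ! physical ports dedicated to this virtual device context
    ! from the default VDC, "switchto vdc Production" opens the new context for configuration
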
Server Virtualization
Server virtualization is a software technique that abstracts server resources from the hardware to provide flexibility and to optimize the usage of the underlying hardware. The hypervisor controls the hardware and allocates physical resources to the different server VMs. Resources are shared among the VMs without the VMs being aware of the actual physical hardware. Several vendors offer server virtualization products:
  • VMware ESX Server
  • Citrix XenServer
  • Microsoft Hyper-V

Network Virtualization Design Considerations


Access Control
Access should be controlled to make sure users and devices are identified and authorized to communicate with their assigned network segment.

Path Isolation
Path isolation involves the creation of independent logical paths over the same physical network infrastructure. MPLS VPNs assigned to specific VRFs are an example of this. VLANs and VSANs also logically separate networks.

Services Edge
The services edge refers to making services available to the intended users, groups, and devices under a centrally managed and enforced policy. An effective way to enforce service access is a firewall or other centralized device that contains policies defining what should and should not be accessible.

 



