Channel: SAN Admin-Detailed about SAN,NAS,VMAX,VNX,Netapp Storage,HP Storage

NaviCLI Download


NaviCLI is used to access VNX storage through a command-line interface.

Without this tool, the VNX cannot be managed from the command line. The setup file is available in 32-bit and 64-bit versions.

To download NaviCLI, just click the link below and it will take you to the download page.

NaviCLI Setup 

The procedure for downloading the NaviCLI is as follows:


1. Download the setup file and click the Run option to start the installation.

2. The installation consists of five simple steps; for most of them you only have to click the Next button.

NaviCLI Download
3. In the second step, the installer asks for the path where the software should be installed. The default path is used unless you choose your own.

NaviCLI Download page
4. The installation then starts.

Once it completes, open a command prompt and change to the path where the software was installed.

Specified path with command to see the LUN Details
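
For example (a minimal sketch; the directory below is the typical default install path and the SP address and credentials are placeholders, so adjust them to your environment):

>> cd "C:\Program Files (x86)\EMC\Navisphere CLI"

>> naviseccli -h <SP_IPaddress> -user <username> -password <password> -scope 0 getlun

The getlun command with no further arguments lists the properties of all LUNs on the array.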

For example:

From Naviseccli:

For VNX1 Series, use the following commands to copy data from online disk 0_1_5 to any hotspare available:

>> naviseccli -h <SP_IPaddress> copytohotspare 0_1_5 -initiate

*** If the command above does not work, try the following command ***

>> naviseccli -h <SP_IPaddress> -user <username> -password <password> copytohotspare 0_1_5 -initiate

Note: The default username and password are sysadmin/sysadmin. If for some reason these do not work, request the username and password from the customer.


** To verify that the disk is actually copying to the hot spare, use the command:

>> naviseccli getdisk <disk location>

*** If the above naviseccli command does not work, use naviseccli.exe instead of naviseccli.
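
For example, to check the disk from the copytohotspare command above (the SP address is a placeholder):

>> naviseccli -h <SP_IPaddress> getdisk 0_1_5

The output includes the disk state and the rebuild percentage, which show whether the copy to the hot spare is still in progress.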

---------------------------------------------------------------------------------------------------------------------


To gather the SP collect logs through NaviCLI, visit the page below.








How to create a hot spare in VNX Unisphere



Creating a hot spare in VNX Unisphere

Issue: An available drive must be configured as a hot spare.

Cause: A hot spare is a single disk that serves as a temporary replacement for a failed disk in a RAID 6, 5, 3, 1, or 1/0 RAID group. Data from the failed disk is reconstructed automatically onto the hot spare from the parity or mirrored data on the working disks in the LUN, so that the data on the LUN is always accessible.

Resolution: To bind an available drive as a hot spare, perform the following steps in Unisphere:

1. In the drop-down list on the menu bar, select a system.

2. Select System > Hardware > Hot Spares.

3. Click Create. The RAID Group Storage Pool Type is assigned by default.

4. Assign a storage pool ID. The default value is the smallest available ID for the currently selected storage system. You can assign a different ID by using the drop-down list box.

5. The Storage Pool Name and the Hot Spare RAID Type are assigned by default. The Number of Disks option is set to Invalid disk selection. These values cannot be changed.

6. Under Disks, click Select to open the Disk Selection dialog box.

7. In Available Disks, select the disk you want to make a hot spare and then click the right arrow. The disk moves from Available Disks into Selected Disks.

8. Click OK to close the Disk Selection dialog box.

9. If necessary, click the Advanced tab to set advanced properties:

a) Choose whether you want to automatically delete the RAID group after the last LUN is deleted.

b) Choose whether to set the priority for expanding and defragmenting the RAID group to Low, Medium, or High. The default value is Medium.

c) Select the Allow Power Savings checkbox if you want to enable power savings for the RAID group.

10. Click Apply to create the hot spare and then click Cancel to close the dialog box.

Notes: Only RAID group LUNs can be hot spares (that is, a Pool LUN cannot be a hot spare). Vault drives (the first four drives in a storage system) cannot be hot spares.
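
The same task can also be done from naviseccli. The lines below are only a rough sketch: the RAID group ID, hot spare LUN number, and disk location are hypothetical, and the exact bind syntax varies between FLARE/VNX OE releases, so verify it against the naviseccli help output before use.

naviseccli -h <SP_IPaddress> createrg 200 0_1_14

naviseccli -h <SP_IPaddress> bind hs 200 -rg 200

The first command creates RAID group 200 on disk 0_1_14, and the second binds a hot spare LUN on that RAID group.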

---------------------------------------------------------------------------------------------------------------------

Introduction to EMC Clariion/VNX
EMC Clariion/VNX (Block Level) Architecture

Cisco Networking IT job vacancies (tech jobs) in a Multinational Company


JOB POSTING

There are tech job openings at Hexaware Technologies Ltd., Bangalore.

Please find the venue details below:



Skill Set: Cisco Networking

Role: Sr. Network Engineer / Network Lead

Work Location: Bangalore

Grade: G4 / G5

Qualification: B.E / B.Tech / MSc / MCA / ME / M.Tech

Relevant Experience: 3-7 Years

Job Description
Should have 3-7 years of experience in a network and firewall support environment. Experience in remote support is preferred.
• CCNP Routing and Switching knowledge is a must; certification is desirable.
• Experience in firewall support - Checkpoint is preferred.
• Good knowledge of wireless technologies and experience with Cisco wireless APs and controllers.
• Sound knowledge and experience in Routing protocols EIGRP, BGP and OSPF.
• Good knowledge and experience in Technologies like MPLS, Frame relay, ATM, Metro Ethernet, VPLS, Tunneling (IPSEC and GRE).
• Experience in Juniper SSL VPN and end-user VPN support (preferred)
• Must have knowledge of ITIL (certification is desirable)
• Will be involved in providing L2 network support.
• Should be ready to work in a 24/7 environment with rotational shifts
• Monitoring incident queue for assigned tickets and investigating to diagnose and resolve issues
• Liaising with telecom vendors, equipment vendors, and peer support teams as required for quick resolution
• Identifying problems, investigating and diagnosing issues in depth if required with the help of vendors.
• Closely work with L3/engineering team for critical issues/changes.
• Monitoring change request queues for assigned Network infrastructure changes – take ownership, prepare solution/plan while adhering to the defined network and security policies and standards for approval and executing them after approval
• Creating and maintaining applicable documentation (Knowledge Base articles, process documentation, quality deliverables, etc)
• Support service improvement initiatives that pertain to Network infrastructure support.
• Communicate issues, solutions, process updates that may impact the team.
Preferences:
Level 2 Network Support.
Remote Network Management.
Cisco.
CCNA.
CCNP Routing and Switching.
Routing Protocols – BGP,EIGRP & OSPF.
LAN/WAN.
Check Point Firewalls.
F5 Load balancers.
Juniper SSL VPN.
Vendor Management.
ITIL.

Venue  & Contact Person
Hexaware Technologies
Prestige Pegasus, Level 1 & 2, #19 & 14, Bellandur Gate, Near Total Mall, Sarjapura Main Road Bengaluru-560 035
Contact Person: Davindr Shah
Mandatory: Please ensure you carry a printed copy of this email in order to attend the interview in person.

Date & Timings
Saturday, 6th February 2016
Time – 9.30 AM onwards

Send resumes to


Documents to carry

1 passport size photograph, latest revision/offer letter, last 3 months salary slips, highest education final year mark-sheet & degree certificate


About SAN | Data Centre Solutions | Information Lifecycle Management


Information Storage & Management

Preface

Information is increasingly important in our daily lives. We have become information dependents of the twenty-first century, living in an on-command, on-demand world that means we need information when and where it is required. We access the Internet every day to perform searches, participate in social networking, send and receive e-mails, share pictures and videos, and scores of other applications.

Here we cover the basics of information, the evolution of storage technology and architecture, and its core elements.

Data

Data is a collection of raw facts from which conclusions may be drawn. Handwritten letters, a printed book, a family photograph, a movie on video tape, printed and duly signed copies of mortgage papers, a bank’s ledgers, and an account holder’s passbooks are all examples of data.


Today, the same data can be converted into more convenient forms such as an e‑mail message, an e-book, a bitmapped image, or a digital movie. This data can be generated using a computer and stored in strings of 0s and 1s.

Digital Data 

Types of data

Data is of two types.

a) Structured data

Structured data is organized in rows and columns in a rigidly defined format.

b) Un-structured data

Data is unstructured if its elements cannot be stored in rows and columns, which makes it difficult to query and retrieve with business applications. For example, customer contacts may be stored in various forms such as sticky notes, e-mail messages, business cards, or even digital format files such as .doc, .txt, and .pdf. Due to its unstructured nature, such data is difficult to retrieve using a customer relationship management application.

Information

Data, whether structured or unstructured, does not fulfill any purpose for individuals or businesses unless it is presented in a meaningful form. Information is the intelligence and knowledge derived from data.

Types of data



Data Center Infrastructure

Organizations maintain data centers to provide centralized data processing capabilities across the enterprise. Data centers store and manage large amounts of mission-critical data. The data center infrastructure includes computers, storage systems, network devices, dedicated power backups, and environmental controls (such as air conditioning and fire suppression).

Core Elements

Five core elements are essential for the basic functionality of a data center:

Application: An application is a computer program that provides the logic for computing operations. Applications, such as an order processing system, can be layered on a database, which in turn uses operating system services to perform read/write operations to storage devices.

Database: More commonly, a database management system (DBMS) provides a structured way to store data in logically organized tables that are interrelated. A DBMS optimizes the storage and retrieval of data.

Server and operating system: A computing platform that runs applications and databases.

Network: A data path that facilitates communication between clients and servers or between servers and storage.

Storage array: A device that stores data persistently for subsequent use.

Order processing system



Key Requirements for Data Center Elements

Availability: All data center elements should be designed to ensure accessibility. The inability of users to access data can have a significant negative impact on a business.

Security: Policies, procedures, and proper integration of the data center core elements must be established to prevent unauthorized access to information. In addition to the security measures for client access, specific mechanisms must enable servers to access only their allocated resources on storage arrays.

Scalability: Data center operations should be able to allocate additional processing capabilities or storage on demand, without interrupting business operations. Business growth often requires deploying more servers, new applications, and additional databases. The storage solution should be able to grow with the business.

Performance: All the core elements of the data center should be able to provide optimal performance and service all processing requests at high speed. The infrastructure should be able to support performance requirements.

Data integrity: Data integrity refers to mechanisms such as error correction codes or parity bits which ensure that data is written to disk exactly as it was received. Any variation in data during its retrieval implies corruption, which may affect the operations of the organization.

Capacity: Data center operations require adequate resources to store and process large amounts of data efficiently. When capacity requirements increase, the data center must be able to provide additional capacity without interrupting availability, or, at the very least, with minimal disruption. Capacity may be managed by reallocation of existing resources, rather than by adding new resources.

Manageability: A data center should perform all operations and activities in the most efficient manner. Manageability can be achieved through automation and the reduction of human (manual) intervention in common tasks.

Data centre elements



Information Life cycle 

The information lifecycle is the “change in the value of information” over time. When data is first created, it often has the highest value and is used frequently. As data ages, it is accessed less frequently and is of less value to the organization. Understanding the information lifecycle helps to deploy the appropriate storage infrastructure, according to the changing value of information.


Information Lifecycle Management


Storage Raid | Raid 0 | Raid 1 | Raid 5 | Raid 1 0 | Raid 0 1 | storage iops calculator


Storage RAID

History of RAID:

In the late 1980s, the rapid growth of new applications and databases created a high demand for storage capacity. At that time, data was stored on a single large, expensive disk drive called a Single Large Expensive Drive (SLED).

In 1987, Patterson, Gibson, and Katz at the University of California, Berkeley, published a paper titled “A Case for Redundant Arrays of Inexpensive Disks (RAID).” This paper described the use of small-capacity, inexpensive disk drives as an alternative to large-capacity drives common on mainframe computers. The term RAID has been redefined to refer to independent disks, to reflect advances in the storage technology.

Types of RAID:

There are two types of RAID implementation, hardware and software. Both have their merits and demerits and are discussed in this section.

Software RAID

Software RAID uses host-based software to provide RAID functions. It is implemented at the operating-system level and does not use a dedicated hardware controller to manage the RAID array.

Hardware RAID

A specialized hardware controller is implemented either on the host or on the array. These implementations vary in the way the storage array interacts with the host.

RAID Array Components

A RAID array is an enclosure that contains a number of HDDs and the supporting hardware and software to implement RAID. HDDs inside a RAID array are usually contained in smaller sub-enclosures. These sub-enclosures, or physical arrays, hold a fixed number of HDDs, and may also include other supporting hardware, such as power supplies. A subset of disks within a RAID array can be grouped to form logical associations called logical arrays, also known as a RAID set or a RAID group.

Logical arrays are comprised of logical volumes (LV). The operating system recognizes the LVs as if they are physical HDDs managed by the RAID controller. The number of HDDs in a logical array depends on the RAID level used.


Components of a Raid Array


Raid Levels

RAID levels are defined on the basis of striping, mirroring, and parity techniques. These techniques determine the data availability and performance characteristics of an array.

RAID 0: RAID 0 is also known as disk striping. All the data is spread out in chunks across all the disks in the RAID set. RAID 0 is only good for better performance, and not for high availability, since parity is not generated for RAID 0 disks. RAID 0 requires at least two physical disks.

Raid 0 Striping


Raid 1:  RAID 1 is also known as disk mirroring. All the data is written to at least two separate physical disks. The disks are essentially mirror images of each other. If one of the disks fails, the other can be used to retrieve data. Disk mirroring is good for very fast read operations. It's slower when writing to the disks, since the data needs to be written twice. RAID 1 requires at least two physical disks.

Raid 1 Mirroring

RAID 5: RAID 5 uses disk striping with parity. The data is striped across all the disks in the RAID set; it achieves a good balance between performance and availability. RAID 5 requires at least three physical disks.

Raid 5 Striping with single parity

RAID 6: RAID 6 increases reliability by utilizing two parity stripes, which allows for two disk failures within the RAID set before data is lost. RAID 6 is seen in SATA environments, and solutions that require long data retention periods, such as data archiving or disk-based backup.

Raid 6 Striping with double parity

RAID 1+0: RAID 1+0, which is also called RAID 10, uses a combination of disk mirroring and disk striping. The data is normally mirrored first and then striped. RAID 1+0 requires a minimum of four physical disks.

Raid 1+0 Stripe + Mirror

RAID 0+1: RAID 0+1, also called RAID 01, is a RAID level using a mirror of stripes: the data is striped first and the stripes are then mirrored, achieving both replication and sharing of data between disks.

Raid 0 1 Mirror + stripe

RAID Comparison

Here we compare the RAID levels on aspects such as read and write performance, the minimum number of disks required to build each level, and so on.

Raid comparison chart 1


Raid comparison chart 2

Disk Details in Raid types

Minimum/maximum drives in a Raid level

Capacity utilization in Raid Level


Application IOPS and RAID Configurations

When deciding the number of disks required for an application, it is important to consider the impact of RAID based on the IOPS generated by the application. The total disk load should be computed by considering the type of RAID configuration and the ratio of reads to writes from the host.
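
In general (using the standard write-penalty values, which are assumed here rather than stated in the example below): disk load = (read fraction × host IOPS) + (write penalty × write fraction × host IOPS), where the write penalty is 1 for RAID 0, 2 for RAID 1 and RAID 1+0, 4 for RAID 5, and 6 for RAID 6.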

The following example illustrates the method of computing the disk load in different types of RAID.

Consider an application that generates 5,200 IOPS, with 60 percent of them being reads.

The disk load in RAID 5 is calculated as follows:

RAID 5 disk load = 0.6 × 5,200 + 4 × (0.4 × 5,200) [because the write penalty for RAID 5 is 4]
= 3,120 + 4 × 2,080
= 3,120 + 8,320
= 11,440 IOPS

The disk load in RAID 1 is calculated as follows:

RAID 1 disk load = 0.6 × 5,200 + 2 × (0.4 × 5,200) [because every write manifests as two writes to the disks]

= 3,120 + 2 × 2,080
= 3,120 + 4,160
= 7,280 IOPS

The computed disk load determines the number of disks required for the application. If, in this example, an HDD rated at a maximum of 180 IOPS is used for the application, the number of disks required to meet the workload for each RAID configuration would be as follows:

RAID 5: 11,440 / 180 = 64 disks

RAID 1: 7,280 / 180 = 42 disks (approximated to the nearest even number)

Hot Spares

A hot spare refers to a spare HDD in a RAID array that temporarily replaces a failed HDD of a RAID set. A hot spare takes the identity of the failed HDD in the array.

Hot spares are of two types: permanent and temporary.

Permanent Hot Spare: The hot spare permanently replaces the failed HDD. This means that it is no longer a hot spare, and a new hot spare must be configured on the array.

Temporary Hot Spare: When a new HDD is added to the system, data from the hot spare is copied to it. The hot spare returns to its idle state, ready to replace the next failed drive.



MC-Data Switch zoning


M-Series

McDATA switches can be managed through both a web interface and a CLI; the table below lists some, but not all, of the CLI commands.

commadelim   toggle comma-delimited display mode
config   configure settings
login   log in to the CLI with different access rights
maint   maintenance settings
perf   performance statistics
reserved   reserved for future development
show   display attributes
features   configure feature settings
ip   configure IP settings
logout   log out of the CLI
port   configure port data
security   configure security settings
snmp   configure SNMP
switch   configure switch data
system   configure system data
zoning   configure zoning settings

There are several commands that allow you to navigate through the CLI: "..", "ctrl-U", and "root".

McDATA zoning commands are detailed in the table below.


showactive   show the actively running zone set
clearzone   clear the WWNs in a zone
deletezone   remove a zone from the running config
activezoneset   activate the pending changes
addzone   add a new zone to the working area
addwwnmem   add a WWN to a zone
showpending   show pending zones
renamezone   rename a zone
deletewwn   delete a WWN from a zone
renamezoneset   rename a zone set


Zoning limits
  • 64 zone sets (max)
  • 2000 zones (max)
  • 1024 zones per zone set (max)
  • the default zone should always be disabled (it causes all ports to see each other, creating ghost entries in an FA's login table)

---------------------------------------------------------------------------------------------------------------


To know about Cisco MDS zoning, click the URL below.







To know about Brocade zoning, click the URL below.






zoning commands in Brocade fabric switch | Process for zoning request


Brocade

About


Brocade Communications Systems, Inc. is an American technology company specializing in data and storage networking products. Originally known for its leadership in Fibre Channel storage networks, the company has expanded its focus to include a wide range of products for New IP and Third platform technologies.

Brocade was founded in August 1995 by Seth Neiman (a venture capitalist, a former executive from Sun Microsystems, and a professional auto racer) and Kumar Malavalli (a co-author of the Fibre Channel specification).

The company's first product, SilkWorm, which was a Fibre Channel Switch, was released in early 1997. A second generation of switches was announced in 1999.

On January 14, 2013, Brocade named Lloyd Carney as its new chief executive officer.

Brocade FC switches come in many models with varying port counts; the details are below.

List of Brocade FC switches 


Work flow for zoning activity


The platform team informs you that they are going to provision a new server in the environment and requests the free port details of the switches in the data center.

Once you share the information with the platform team, they coordinate with the data center team to lay the cables between the server and the switch. (The storage ports or tape library are already connected to the switch.)


After the cables are laid, the platform team requests you to check the connectivity and shares the server HBA WWPNs so you can verify them against what is logged in on the switch.


Physical cabling between Server and storage through Switch with Single path



Physical cabling between Server and storage through Switch with Multipath

Zoning can be done in seven simple steps; the pictorial diagram is as follows.


Steps to perform zoning

Zoning steps:-

1. Identify the WWPN of the server HBA and storage HBA.

2. Create aliases for the server and storage HBAs by using the command

     alicreate

3. Create zones for the server and storage by using the command

     zonecreate

4. Check whether an active configuration is present by using the command

      cfgactvshow

5. If an active configuration already exists, just add the zone to it by using the command

      cfgadd

6. If there is no active configuration, create a new one by using the command

      cfgcreate

7. Save the configuration and enable it (cfgsave and cfgenable).

Please find an example of zoning below:

alicreate "server_hba","11:11:11:11:11:11:11:11"

alicreate "storage_hba","22:22:22:22:22:22:22:22"

zonecreate "server_hba-storage_hba","server_hba; storage_hba"

cfgcreate "cfg_switch1","server_hba-storage_hba"

cfgenable "cfg_switch1"

cfgsave
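
To confirm the change took effect, the effective configuration can be checked (the zone and configuration names follow the example above):

zoneshow "server_hba-storage_hba"

cfgactvshow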

Brocade switches can be managed through both a web interface and a CLI; the table below lists some, but not all, of the CLI commands.

help   prints available commands
switchdisable   disable the switch
switchenable   enable the switch
licensehelp   license commands
diaghelp   diagnostic commands
configure   change switch parameters (BB credits, etc.)
diagshow   POST results since last boot
routehelp   routing commands
switchshow   display switch status (normally the first command to run to obtain the switch configuration)
supportshow   full detailed switch info
portshow   display port info
nsshow   name server contents
nsallshow   NS for the full fabric
fabricshow   fabric information
version   firmware code revision
reboot   full reboot with POST
fastboot   reboot without POST


B-Series (Brocade) zoning commands are detailed in the table below.


zonecreate (zone)   create a zone
zoneshow   shows defined and effective zones and configurations
zoneadd   adds a member to a zone
zoneremove   removes a member from a zone
zonedelete   delete a zone
cfgcreate (zoneset)   create a zoneset configuration
cfgadd   adds a zone to a zone configuration
cfgshow   display the zoning information
cfgenable   enable a zone set
cfgsave   saves the defined config to all switches in the fabric across reboots
cfgremove   removes a zone from a zone configuration
cfgdelete   deletes a zone configuration
cfgclear   clears all zoning information (you must disable the effective config first)
cfgdisable   disables the effective zone set

B-series creating a zone commands

Creating zone by WWN
zonecreate "zone1", "20:00:00:e0:69:40:07:08 ; 50:06:04:82:b8:90:c1:8d"
Create a zone configuration
cfgcreate "test_cfg", "zone1 ; zone2"
saving the zone configuration
cfgsave (this will save across reboots)
enable the zone configuration
cfgenable "test_cfg"
saving the zone configuration
cfgsave
view zoning information
zoneshow or cfgshow


aliAdd   Add a member to a zone alias
aliCopy   Copy a zone alias
aliCreate  Create a zone alias
aliDelete  Delete a zone alias
aliRemove  Remove a member from a zone alias
aliRename  Rename a zone alias
aliShow   Print zone alias information

cfgAdd   Add a member to a configuration
cfgCopy   Copy a zone configuration
cfgCreate  Create a zone configuration
cfgDelete  Delete a zone configuration
cfgRemove  Remove a member from a configuration
cfgRename  Rename a zone configuration
cfgShow   Print zone configuration information

zoneAdd   Add a member to a zone
zoneCopy  Copy a zone
zoneCreate  Create a zone
zoneDelete  Delete a zone
zoneRemove  Remove a member from a zone
zoneRename  Rename a zone
zoneShow  Print zone information

cfgClear  Clear all zone configurations
cfgDisable  Disable a zone configuration
cfgEnable  Enable a zone configuration
cfgSave   Save zone configurations in flash
cfgSize   Print size details of zone database
cfgActvShow  Print effective zone configuration

cfgTransAbort  Abort zone configuration transaction






VNX Architecture | EMC Clariion Architecture | VNX 1 and VNX 2 Spec


VNX/Clariion Block Architecture


VNX/Clariion Architecture

Note:

EMC introduced the VNX Series, designed to replace both the Clariion and Celerra products.

Architecture


CLARiiON and VNX block-level storage share the same architecture, and the VNX unified storage architecture includes both the SAN and NAS parts.

The main differences are in specifications such as capacity, ports, drives, and connectivity.

CLARiiON/VNX is a mid-range storage array with an active-passive architecture.

It has different modules, 

SPE – Storage Processor Enclosure 

DPE – Disk Processor Enclosure 

DAE – Disk Array Enclosure 

SPS – Stand-by Power Supply 

Each SPE has two storage processors, named SP A and SP B, which are connected through the CLARiiON Messaging Interface (CMI).

Each SP has front-end ports, back-end ports, and cache memory.

Front-end ports serve host I/O requests; back-end ports communicate with the disks.
Cache is of two types: write cache, which is mirrored, and read cache, which is not mirrored.

The first DAE connected is known as the DAE OS.

In this DAE, the first five drives are known as vault drives or code drives. They are used to save critical data in case of a power failure and also hold data such as the SP A and SP B boot information, which is mirrored.

All the drives are connected through the Link Control Cards (LCCs).

FLARE is triple mirrored, and the Persistent Storage Manager (PSM) is also triple mirrored.

Each DAE has primary and expansion ports, which are used to connect other DAEs.

Basically, VNX has three main storage configurations:

Block Level Storage

File Level Storage and 

Unified Storage Array

For more details about the VNX series and models, please refer to the link below.

https://www.emc.com/en-us/storage/vnx.htm

VNX 1 Series

The VNX 1 series includes five models that are available in block, file, and unified configurations:

VNX 5100, VNX 5300, VNX 5500, VNX 5700, and VNX 7500.

The block configuration for the VNX 5500 & VNX 7500 is shown below:

Block configuration for VNX 5500 & VNX 7500

The file configuration for the VNX 5300 & VNX 5700 is shown below:

File configuration for VNX 5300 & VNX 5700
The unified configuration for the VNX 5300 & VNX 5700 is shown below:

Unified configuration for VNX 5300 & VNX 5700

The VNX series uses updated components that make it significantly denser than earlier models.

Example of a Block dense configuration

Example of a Unified dense configuration


A 25-drive 2.5” SAS-drive DAE

Front view of 25 SAS-drive DAE

Back view of 25 SAS-drive DAE

A close-up of the back view with the port naming

A close-up of the back view

A 15-drive DAE

Front view of a 15-drive DAE

Back view of a 15-drive DAE

A close-up of the back view with the port naming

Close-up view of a 15-drive DAE

A picture of Link Control Card (LCC) connectivity

Link Control Cards

A picture of a cooling module

Cooling module

VNX 2 Series

The VNX 2 series includes six models that are available in block, file, and unified configurations:

VNX 5200, VNX 5400, VNX 5600, VNX 5800, VNX 7600, and VNX 8000.

There are two existing Gateway models:

VNX VG2 and VNX VG8

There are two VMAX® Gateway models:

VNX VG10 and VNX VG50 

A model comparison chart for VNX 2 series.

A model comparison chart

The block configuration for the VNX 5600 & VNX 8000 is shown below:

Block configuration for VNX 5600 & VNX 8000

The file configuration for the VNX 5600 & VNX 8000 is shown below:

File configuration for VNX 5600 & VNX 8000

The unified configuration for the VNX 5600 & VNX 8000 is shown below:

Unified configuration for VNX 5600 & VNX 8000
As noted earlier, the VNX 2 series uses updated components that make it significantly denser than earlier models.

Example of a Block dense configuration

Example of a Unified dense configuration

The below picture shows the back view of the DPE with SP A (on the right) and SP B (on the left).

Back view of DPE

The picture below shows a close-up of the back of the DPE-based storage processor.

Close-up view of the back of the DPE-based storage processor

Power, fault, activity, link and status LEDs

A picture of the storage processor management and base module ports

Storage processor management and base module ports







VNX Installation | VNX Architecture | Rules to deploying the VNX Array


VNX Installation 

Introduction

The VNX series is designed for a wide range of environments, from mid-tier through enterprise. VNX offers file-only, block-only, and unified (block and file) implementations. The VNX series is managed through a simple, easy-to-use interface called Unisphere. The VNX software environment offers significant advancements in efficiency, simplicity, and performance.

Architecture


VNX Architecture


Basic Hardware


Basic Hardware


Installation procedure


Installing Rails


Installing the Standby power supply (SPS) rails.

  • The Standby power supply (SPS) is a 1U component and uses a 1 U adjustable kit.

  • Insert the adjustable rail slide and seat the rail extensions into the rear channel of your cabinet.

  • Extend the rail and align the front of the rails as shown below. Ensure that they are level front to back and with the companion rail, left to right.

  • Insert two retention screws in the front and two retention screws in the back of each rail as shown below.
Installing the Standby power supply (SPS) rails

Installing the Disk Processor Enclosure (DPE) rail

  • The Disk processor enclosure (DPE) rails should be installed immediately above the SPS rails.

  • Insert the adjustable 3 U rail slide and seat both alignment pins into the rear channel of your cabinet.

  • Extend the rail and align the front of the rails as shown below.

  • Insert two retention screws in the middle two holes in the front.

  • Insert two retention screws in the back of each rail.

Installing the Disk Processor Enclosure (DPE) rail


Installing the components


Installing the standby power supply

  • Slide the SPS enclosure into the cabinet rails at the bottom of the rack. Ensure that the enclosure is fully in the cabinet.

  • When the SPS is in place, insert the screws and bezel brackets to hold the enclosure in place in the cabinet. Do not tighten the screws completely until all of the components are in place.


Installing the standby power supply

Installing the Disk Processor Enclosure (DPE)

  • There are two types of disk processor enclosures (DPEs). Your disk processor enclosure can be either a 3U, 25 x 2.5” drive DPE or a 3U, 15 x 3.5” drive DPE.

Disk Processor Enclosure

  • Locate the product ID/SN from the product serial number tag (PSNT) located at the back side of the DPE.

  • Record this number to use when you register the product during system setup steps.

  • Slide the DPE into the 3U DPE rails in the cabinet. Ensure that the enclosure is fully in the cabinet.

  • When the DPE is in place, insert and tighten all of the screws.
Inserting/Racking the DPE


  • Cable the standby power supply to the SP serial port.

  • The cable has an RJ45 connector on one end and a 9-pin mini-connector on the other end, as shown below.


Cabling 

  • Connect SPS A to SP A.

  • Connect SPS B to SP B.

Cabling of Disk Processor Enclosure to Standby power supply

Attaching Storage Processor to the network


  • The storage processor and the Windows host from which you initialize the storage system must share the same subnet on your public LAN.


  • Locate your Ethernet cables.

Cabling from Storage to LAN

  • Connect your public LAN using an Ethernet cable to the RJ45 port on SP A and SP B.

Public LAN Cabling


Power up


Before you power up


  • Ensure that the switches for SPS A and B are turned off.

  • Ensure that all cabinet circuit breakers are in the on position, all necessary PDU switches are switched on, and power is connected. 

  • Connecting or verifying power cables. 

  • Connect the SPS power A to power distribution unit (PDU) A.

  • Connect SPS A to SP A.

  • Connect SPS power B to PDU B.

  • Connect SPS B to SP B.

  • Lock each power cable in place.

  • Turn the SPS switches ON. Wait 15 minutes for the system to power up completely.

  • Monitor the system as it powers up.

SPS to PDU connectivity


Setup

  • After you have completed all of the installation steps, continue to set up your system by performing the post-installation tasks.
  • Connect a management station.

  • You must connect a management station to your system directly or remotely over a subnetwork.

  • This computer will be used to continue setting up your system and must be on the same subnet as the storage system to complete the initialization.

  • Downloading the Unisphere Storage System Initialization Wizard. 

  • Go to https://support.emc.com and select VNX series > Install and Configure.

  • From VNX installation tools, download the Unisphere Storage System Initialization Wizard.

  • Double click the downloaded executable and follow the steps in the wizard to install the utility.

  • On the install complete screen, make sure that the Launch Unisphere Storage System Initialization Wizard check box is selected, click done.

  • The initialization utility opens. Follow the online instructions to discover and assign IP address to your storage system.

  • Check system health.

  • Login to Unisphere to check the health of your system, including alerts, events and statistics.

  • Open a browser and enter the IP address of SP.

  • Use the sysadmin credentials to log in to Unisphere. You may be prompted by certificate-related warnings. Accept all the certificates as “Always trust”.


  • Select your storage system and select System > Monitoring and alerts. 


Set the storage system cache values

  • You must allocate cache on the system. Allocate 10% of the available cache (with a minimum of 100 MB and a maximum of 1024 MB) to read cache and the rest to write cache.

  • Open a browser and enter the IP address of SP.

  • Use the sysadmin credentials to log in to Unisphere. You may be prompted by certificate-related warnings. Accept all the certificates as “Always trust”.

  • Select your storage system and select System > Hardware > Storage Hardware.

  • From the task list, under system management, select manage cache.


Install ESRS and configure ConnectHome

  • You can ensure that your system communicates with your service provider by installing the VNX ESRS IP Client. 


  • Under VNX tasks, select initialize and register VNX for block and configure ESRS.

  • Select the appropriate options for your configuration.

  • Select install and configure ESRS to generate a customized version of EMC Secure Remote Support IP Client.

CONFIGURE SERVERS FOR VNX SYSTEM

  • Installing HBAs in the server.

  • For the server to communicate with the system Fibre Channel data ports, it must have one or more supported HBAs installed.

  • If the server is powered up:

 a. Shut down the server's operating system.

 b. Power down the server.

 c. Unplug the server's power cord from the power outlet.

Put on an ESD wristband, and clip its lead to bare metal on the server's chassis.

For each HBA that you are installing:

 a. Locate an empty PCI bus slot or a slot in the server that is preferred for PCI cards.

b. Install the HBA following the instructions provided by the HBA vendor.

c. If you installed a replacement HBA, reconnect the cables that you removed in the exact same way as they were connected to the original HBA.

Plug the server's power cord into the power outlet, and power up the server.


Installing or updating the HBA driver

The server must run a supported operating system and a supported HBA driver. EMC recommends that you install the latest supported version of the driver.

If you have an Emulex driver, download the latest supported version and instructions.

For installing the driver from the vendor’s website.

http://www.emulex.com/products/fibre-channel-hbas.html

If you have a QLogic driver, download the latest supported version and instructions.

For installing the driver from the vendor’s website.

http://support.qlogic.com/support/oem_emc.asp


Installing the HBA driver

Install any updates, such as hot fixes or service packs, to the server’s operating system  that are required for the HBA driver version you are installing.

If the hot fix or patch requires it, reboot the server.

Install the driver following the instructions on the HBA vendor’s website.

Reboot the server when the installation program prompts you to do so. If the installation program did not prompt you to reboot, then reboot the server when the driver installation is complete.

Installing the Unisphere Host Agent on a windows server

Log in as the administrator or a user who has administrative privileges.

If your server is behind a firewall, open TCP/IP port 6389. This port is used by the host agent. If this port is not opened, the host agent will not function properly.
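
For example, on a Windows server that uses the built-in Windows Firewall, the port could be opened with a rule like the one below (a sketch only; the rule name is arbitrary and your firewall product or policy may differ):

netsh advfirewall firewall add rule name="Unisphere Host Agent" dir=in action=allow protocol=TCP localport=6389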

If you are running a version prior to 6.26 of the host agent, you must remove it before continuing with the installation.

Download the software

From the EMC Online Support website, select the appropriate VNX Series Support by Product page and select Downloads.

Select the Unisphere Host Agent, and then select the option to save the software to your server.

Double-click the executable file listed below to start the installation wizard.

Unisphere Host Agent-Win-32-x86-en_US-version-build.exe.

Follow the instructions on the installation screens to install the Unisphere Host Agent.

The Unisphere Host Agent software is installed on the Windows server. If you selected the default destination folder, the software is installed in C:\Program Files\EMC\HostAgent.

Once the Unisphere Host Agent installation is complete, the Initialize Privileged User List dialog box is displayed.

In the Initialize Privileged User List dialog box, perform one of the following:

If the Config File field contains a file entry, then a host agent configuration file already exists on the server from a previous agent installation. Select Use Existing File to use this configuration file, or select Browse to use a different file.

The host agent configuration file contains a list of login names for this server. Only users whose usernames are listed in the Privileged User List can send CLI commands to the system.

To add a user to the list:

Click Add to open the Add Privileged User dialog box.
In the Add Privileged User dialog box, under User Name, enter the person’s account username, for example, Administrator.

Under System Name, enter the name of the host running Unisphere (for example, Host4) and click OK.

To remove a privileged user from the list:

a. Select the privileged username, and click Remove.

Click OK to save the new privileged user list and/or the new configuration file.

The program saves the host agent configuration file with the new privileged user entries and starts the host agent.

Click Finish.

A command line window opens indicating that the host agent service is starting.

If the system prompts you to reboot the server, click Yes.

Connecting the VNX to the server in a Fibre Channel switch configuration

Use optical cables to connect the VNX Fibre Channel host ports to Fibre Channel switch ports, and to connect the switch ports to the server HBA ports.

For the highest availability in a multiple-HBA server, connect one or more HBA ports to ports on the switch and connect the same number of HBA ports to ports on the same switch or on another switch, if two switches are available.


High Availability (HA) of servers


Zoning the switches

We recommend single-initiator zoning as a best practice. In single-initiator zoning each HBA port has a separate zone that contains it and the SP ports with which it communicates.
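
As an illustration of single-initiator zoning using the Brocade commands covered earlier (the alias names and the WWPN here are hypothetical placeholders for one HBA port and the two SP port aliases it should reach):

alicreate "host1_hba0","10:00:00:00:c9:11:22:33"

zonecreate "host1_hba0_z","host1_hba0; vnx_spa_0; vnx_spb_0"

Each additional HBA port gets its own zone of the same form.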

For more details, please refer to the link below.


VNX - LUN Migration | How to perform the LUN migration in VNX


LUN Migration

Hi All,

Today I am going to discuss LUN migration in VNX and EMC CLARiiON.

Introduction about the LUN Migration


LUN migration permits data to be moved from one LUN to another, regardless of the RAID type, disk type, LUN type, speed, or number of disks in the RAID group or storage pool, with some restrictions.

LUN migration is an internal migration within the storage array, whether VNX or CLARiiON, from one location to another. It is a two-step process. First, a block-by-block copy of the “source LUN” is made to its new location, the “destination LUN”. Once the copy is complete, the destination LUN takes over the source LUN's place.


Procedure 

Login to Unisphere.

Go to the Storage tab and select the LUN you want to migrate.

Right-click the source LUN (e.g., LUN 145) and choose the Migrate option.

Select the target destination LUN (e.g., LUN 149) from the Available Destination LUN field.

Set the Migration Rate from the drop-down menu (Low in this example).

NOTE: The Migration Rate drop-down menu offers four levels of service: ASAP, High, Medium, and Low.

Source LUN > 150 GB: Migration Rate Medium

Source LUN < 150 GB: Migration Rate High

Click OK, then Yes.

Done.

The destination LUN assumes the identity of the source LUN.

The source LUN is unbound when the migration process is complete.

Note: During migration, the destination LUN is shown under ‘Private LUNs’.

IMP: Only one LUN migration per array at any time. The size of the target LUN should be equal to or greater than that of the source LUN.
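
The same migration can also be started and monitored from naviseccli. The lines below are a brief sketch using the LUN numbers from the example above (the SP address is a placeholder, and the rate can be asap, high, medium, or low):

naviseccli -h <SP_IPaddress> migrate -start -source 145 -dest 149 -rate low

naviseccli -h <SP_IPaddress> migrate -list

The -list option shows the state and percentage complete of any migration in progress.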

As discussed above, this is how we perform the LUN migration activity on VNX or EMC CLARiiON storage arrays.










Detailed introduction to EMC Symmetrix | History about Symmetrix DMX


EMC SYMMETRIX


Hello Guys,

Today I will give you a detailed introduction to the Symmetrix series and its journey.

History of Symmetrix


The Symmetrix series was initiated at EMC by a team led by the visionary Moshe Yanai - known as Mr. Symmetrix - who joined EMC in 1987, together with his team of engineers. The first product, the Symmetrix 4200, introduced in 1990, began to beat IBM in mainframe storage. It was based on three main ideas: the use of new, lower-priced 5.25-inch HDDs (rather than the much bigger IBM 3390 drives), a big DRAM cache (at that time 4 MB) named the Integrated Cache Disk Array, and RAID-1 (mirroring).

The Generations of Symmetrix

First Generation (4200): 1990
Second Generation: 1992
Symmetrix 3.0: 1994
Symmetrix 4.0: 1996
Symmetrix 4.8: 1998
Symmetrix 5.0: 2000
Symmetrix 5.5: 2001
Symmetrix DMX (Generation 6.0): 2003
Symmetrix DMX-2 (Generation 6.5): 2004
Symmetrix DMX-3 (Generation 7.0): 2005
Symmetrix DMX-4 (Generation 7.5): 2007
Symmetrix VMAX (Generation 8.0): 2009
Symmetrix VMAXe (Generation 9.0): 2011

DMX 4 

Yanai left - or was fired by - EMC in 2001 because he disagreed with the pricey $1.1 billion acquisition of Data General (the origin of CLARiiON, and later VNX) in 1999; according to several sources, he wanted the company to design its own open storage system with his engineers. He is now probably one of the wealthiest men in the storage industry. He left EMC with a huge amount of money, and some people told us that he personally got a percentage on each Symmetrix sold. The same Yanai was also part owner of the Israeli de-dupe start-up Diligent, sold to IBM in 2008 for a reported $200 million. He then joined IBM after selling another company he co-founded in Israel, XIV, for $300 million in 2008. Before leaving Big Blue, he was at the origin of another Israeli start-up, Axxana, where he is now one of the board's directors. And the main - and apparently only - partner of this firm, which works in innovative zero-loss disaster recovery, is ... EMC.

Symmetrix was at the origin of the fast growing revenues for EMC and continues to be one of its flagship hardware products. In the company's most recent fiscal 1Q11, revenues of Symmetrix increased 25% compared with the year-ago quarter, putting a lot of pressure on IBM and HDS, its two competitors in storage for mainframes or high-end open systems.

DMX Models

DMX Models

DMX Features


TimeFinder, TimeFinder/Clone -- local replication
Symmetrix Remote Data Facility (SRDF) -- remote replication
Symmetrix Optimizer -- dynamically swaps disks based on workload
Symmetrix command line interface (SYMCLI)
SymmWin, Enginuity -- Symmetrix GUI console (since the Symm3 and Symm4 models)
AnatMain -- Symmetrix pseudo-GUI console (before the Symm3 and Symm4 models)
Symmetrix remote console (SymmRemote)
FAST -- Fully Automated Storage Tiering
FTS -- Federated Tiered Storage
ECC -- EMC Control Center

The road map for the EMC Symmetrix models 


Orion 1 (Kick-Off Symmetrix)

·       1988
·       Single-bay, half-height chassis
·       2 directors with dual Block Mux Channel switches
·       2 SCSI disk drives (384 MB)
·       Max system capacity: 512 MB


Orion 1 

Symmetrix 1 (Product did not release)

·       Single-bay, half-height chassis
·       2 directors
·       64 MB or 256 MB memory board
·       Max number of drives: 4 (625 MB)
·       Max system capacity up to 2 GB



Symmetrix 1

Symmetrix 2 (4200/4400)

·       1988
·       World's first integrated cached disk array
·       20 slot, single-bay chassis
·       Up to 8 directors
·       Up to 12 memory boards (Max. Capacity: 3 GB)
·       Max number of drives: 24 (1 GB, 2 GB)
·       Max system capacity up to 48 GB


Symmetrix 2


Symmetrix 3 Family

Symmetrix 5500 (Elephant)
·       1990
·       World's first terabyte disk array
·       20-slot, 3-bay chassis
·       Up to 16 directors
·       Up to 6 memory boards (Max Capacity: 24 GB)
·       Max number of drives: 128 (3 GB, 9 GB, 12 GB)
·       Max system capacity up to 1 TB
Symmetrix 5100 (Roadrunner)
·       1992
·       8-slot, single-bay chassis
·       Up to 6 directors (4 host, 2 disk)
·       Up to 2 memory boards (Max Capacity: 8 GB)
·       Max number of drives: 16 (3 GB, 9 GB)
·       Max system capacity up to 144 GB
Symmetrix 5200 (Jaguar)
·       1992
·       12-slot, single-bay chassis
·       Up to 8 directors 
·       Up to 8 memory boards (Max Capacity: 24 GB)
·       Max number of drives: 32 (3 GB, 9 GB)
·       Max system capacity up to 288 GB



Symmetrix 3

Symmetrix 4 Family - 3000 Series - Open Systems
Symmetrix 3700/5700 (Ibis)
1994                 
·       "Open Symm" stored data from all major server types
·       20-slot, 3-bay chassis
·       Up to 16 directors (8 host, 8 disk)
·       Up to 4 memory boards (Max Capacity: 16 GB)
·       Max number of drives: 128 5.25" (47 GB)
·       Max system capacity up to 13 TB
Symmetrix 3330/5330 (Bobcat)
·       1997
·       8-slot, single-bay chassis
·       Up to 6 directors (4 host, 2 disk)
·       Up to 2 memory boards (Max Capacity: 8 GB)
·       Max number of drives: 32 3.25" (36 GB)
·       Max system capacity up to 1 TB
Symmetrix 3430/5430 (Coyote)
·       1997
·       12-slot, single-bay chassis
·       Up to 10 directors (6 host, 4 disk)
·       Up to 4 memory boards (Max Capacity: 16 GB)
·       Max number of drives: 96 3.5" (36 GB)
·       Max system capacity up to 3 TB


EMC_Symmetrix_4
Symmetrix 4

Symmetrix 5 Family 
Symmetrix 8430 (Greywolf)
·       2000
·       Quad Bus design
·       CacheStorm memory directors
·       12-slot, single-bay chassis
·       Up to 10 directors (Fibre Channel and ESCON)
·       Up to 2 memory boards (Max Capacity: 16 GB)
·       Max number of drives: 96 (50 GB)
·       Max system capacity up to 4 TB
Symmetrix 8730 (Bison)
·       2000
·       Quad Bus design
·       20-slot, 3-bay chassis
·       Up to 16 directors (Fibre Channel and ESCON)
·       Up to 4 memory boards (Max Capacity: 32 GB)
·       Max number of drives: 384 (50 GB)
·       Max system capacity up to 19 TB


EMC_Symmetrix_5
Symmetrix 5


Symmetrix 6 Family - Direct Matrix Architecture
Symmetrix DMX800
·       2003
·       "Rack Mount Symmetrix"
·       8-slot, single-bay chassis
·       Up to 4 Fibre directors, 2 FEBE
·       Up to 2 memory boards (Max Capacity: 32 GB)
·       Max number of drives: 120 (73 GB, 146 GB)
·       Max system capacity up to 17 TB
Symmetrix DMX1000 (Leopard)
·       2003
·       12-slot, single-bay chassis
·       Up to 8 directors
·       P-model option available
·       Up to 4 memory boards (Max Capacity: 64 GB)
·       Max number of drives: 144 (73 GB, 146 GB)
·       Max system capacity up to 20 TB
Symmetrix DMX2000 (Panther)
·       2003
·       24-slot, 2-bay chassis
·       Up to 16 directors
·       P-model option available
·       Up to 8 memory boards (Max Capacity: 128 GB)
·       Max number of drives: 288 (73 GB, 146 GB)
·       Max system capacity up to 41 TB
Symmetrix DMX3000 (Rhino)
·       2003
·       24-slot, 3-bay chassis
·       Up to 16 directors
·       Up to 8 memory boards (Max Capacity: 288 GB)
·       Max number of drives: 576 (73 GB, 146 GB)
·       Max system capacity up to 82 TB


EMC_Symmetrix_6_DMX
Symmetrix 6 DMX
Symmetrix 7 Family - DMX-3 and DMX-4
Symmetrix DMX-3
·       2005
·       World's first petabyte disk array
·       24-slot, scalable (2 to 9 bays)
·       Up to 16 directors
·       Up to 8 memory boards (Max Capacity: 512 GB)
·       Max number of drives: 2,400 (73 GB, 146 GB, 300 GB, 500 GB)
·       Max system capacity up to 1 PB
Symmetrix DMX-4
·       2007
·       World's first enterprise class flash drive array
·       24-slot, scalable (2 to 9 bays)
·       Up to 16 directors
·       Up to 8 memory boards (Max Capacity: 512 GB)
·       Max number of drives: 2,400 
·       73 GB, 146 GB, 200 GB, 400 GB EFD
·       73 GB, 146 GB, 300 GB, 450 GB FC
·       500 GB, 1 TB SATA
·       Max system capacity up to 2 PB


EMC_Symmetrix_DMX3_DMX4
Symmetrix DMX 3 & 4

Symmetrix VMAX Virtual Matrix Architecture Family 
Symmetrix VMAX
·       2009
·       World's first high-end array purpose built for virtual environments
·       Virtual Matrix sRIO interface
·       1 system bay, up to 10 storage bays
·       Up to 8 VMAX Engines, running Intel multi-core CPUs
·       Up to 16 Symmetrix directors
·       Up to 1 TB of global memory
·       Max number of drives: 2,400 
·       200 GB, 400 GB EFD
·       146 GB, 300 GB, 450 GB, 600 GB FC
·       1 TB, 2 TB SATA
·       Max system capacity up to 3 PB

EMC_Symmetrix_VMAX
Symmetrix VMAX




Cisco Switch | Cisco Switch Configuration | Steps to implement the switch configuration


Initial Switch Configuration

Procedure

Once the Customer Engineer (CE) has racked the switch and completed all the cabling, the next task is the switch configuration.

Configuration Prerequisites


Before you configure a switch in the Cisco MDS 9000 Family for the first time, make sure you have the following information:

Administrator password.

Switch name - This name is also used as your switch prompt.

IP address for the switch’s management interface.

Subnet mask for the switch's management interface.

IP address of the default gateway.

The network team will assist you in obtaining the IP address and subnet mask details.
Procedure to configure switch

·         Verify the physical connections for the new Cisco MDS 9000 Family switch (console port, management port, and power)

·         Power on the switch. The switch boots automatically

Note: If the switch boots to the loader> or switch (boot) prompt, contact your storage vendor support organization for technical assistance.

 After powering on the switch, you see the following output:

General Software Firmbase[r] SMM Kernel 1.1.1002 Aug 6 2003 22:19:14 Copyright (C) 2002 General Software, Inc.

Firmbase initialized.

00000589K Low Memory Passed
01042304K Ext Memory Passed
Wait.....

General Software Pentium III Embedded BIOS 2000 (tm) Revision 1.1.(0)
(C) 2002 General Software, Inc.
Pentium III-1.1-6E69-AA6E

+------------------------------------------------------------------------------+
| System BIOS Configuration, (C) 2002 General Software, Inc. |
+---------------------------------------+--------------------------------------+
| System CPU : Pentium III | Low Memory : 630KB |
| Coprocessor : Enabled | Extended Memory : 1018MB |
| Embedded BIOS Date : 10/24/03 | ROM Shadowing : Enabled |
+---------------------------------------+--------------------------------------+
Loader Loading stage1.5.

Loader loading, please wait...
Auto booting bootflash:/m9500-sf1ek9-kickstart-mz.2.1.1a.bin bootflash:/m9500-s f1ek9-mz.2.1.1a.bin...
Booting kickstart image: bootflash:/m9500-sf1ek9-kickstart-mz.2.1.1a.bin...................Image verification OK

Starting kernel...
INIT: version 2.78 booting
Checking all filesystems..... done.
Loading system software
Uncompressing system image: bootflash:/m9500-sf1ek9-mz.2.1.1a.bin
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC

INIT: Entering runlevel: 3
·         Make sure you enter the password you wish to assign for the admin username.

Tip: If you create a password that is short and easy to decipher, then your password is rejected. Be sure to configure a strong password. Passwords are case-sensitive.

·         Enter yes to enter setup mode.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.

*Note: Setup is mainly used to configure the system initially, when no configuration is present; it therefore always assumes system defaults rather than the current system configuration values.

Press Enter at any time to skip a dialog. Use Ctrl-C at any time to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

The switch setup utility guides you through the basic configuration process. Press Ctrl-C at any prompt to end the configuration process.

·         Enter no (no is the default) to not create any additional accounts.

Create another login account (yes/no) [n]: no

·         Enter no (no is the default) to not configure any read-only SNMP community strings.

Configure read-only SNMP community string (yes/no) [n]: no

·         Enter no (no is the default) to not configure any read-write SNMP community strings.

Configure read-write SNMP community string (yes/no) [n]: no

·         Enter a name for the switch.

Note: The switch name is limited to 32 alphanumeric characters. The default is switch.

Enter the switch name: switch_name

·         Enter yes (yes is the default) to configure the out-of-band management configuration.

Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: yes

a.         Enter the IP address for the mgmt0 interface.

Mgmt0 IP address: mgmt_IP_address

b.         Enter the netmask for the mgmt0 interface in the xxx.xxx.xxx.xxx format.

Mgmt0 IP netmask : xxx.xxx.xxx.xxx

·         Enter yes (yes is the default) to configure the default gateway (recommended).

Configure the default-gateway: (yes/no) [y]: yes

·         Enter the default gateway IP address.

IP address of the default-gateway: default_gateway

·         Enter no (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: no

·         Enter yes (yes is the default) to enable Telnet service.

Enable the telnet service? (yes/no) [y]: yes

·         Enter no (no is the default) to not enable the SSH service.

Enable the ssh service? (yes/no) [n]: no

·         Enter no (no is the default) to not configure the NTP server.

Configure the ntp server? (yes/no) [n]: no

·         Enter noshut (shut is the default) to configure the default switch port interface to the noshut state.

Configure default switchport interface state (shut/noshut) [shut]: noshut

·         Enter on (on is the default) to configure the switch port trunk mode.

Configure default switchport trunk mode (on/off/auto) [on]: on
  
·         Enter deny (deny is the default) to configure a default zone policy configuration.

Configure default zone policy (permit/deny) [deny]: deny

This step denies traffic flow for all members of the default zone.

·         Enter yes (no is the default) to enable a full zone set distribution (refer to the Cisco MDS 9000 Family Configuration Guide).

Enable full zoneset distribution (yes/no) [n]: yes

You see the new configuration. Review and edit the configuration that you have just entered.

·         Enter no (no is the default) if you are satisfied with the configuration.

The following configuration will be applied:
switchname switch_name
interface mgmt0
ip address mgmt_IP_address
subnetmask mgmt0_ip_netmask
no shutdown
ip default-gateway default_gateway
telnet server enable
no ssh server enable
no system default switchport shutdown
system default switchport trunk mode on
no zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
Would you like to edit the configuration? (Yes/no) [n]: no

·         Enter yes (yes is the default) to use and save this configuration.


Use this configuration and save it? (Yes/no) [y]: yes
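
After the setup dialog completes, it is good practice to verify the configuration and copy it to the startup configuration so that it survives a reload. A minimal sketch using standard NX-OS commands:

Switch # show running-config

Switch # show interface mgmt0

Switch # copy running-config startup-config

The second command confirms that the management interface is up with the IP address you assigned.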








Troubleshooting zones and zone sets - Cisco MDS switches | Configure the zones


Troubleshooting zones and zone sets

Troubleshooting_commands
Troubleshooting commands

Procedure

To issue commands with the internal keyword, you must have a network-admin group account.

Example for Full Zoning Analysis

Switch # show zone analysis vsan 1
Zoning database analysis vsan 1
Full zoning database
Last updated at: 15:57:10 IST Feb 20 2006
Last updated by: Local [ CLI ]
Num zonesets: 1
Num zones: 1
Num aliases: 0
Num attribute groups: 0
Formattted size: 36 bytes / 2048 Kb
Unassigned Zones: 1
zone name z1 vsan 1

Example for Active Zoning Database Analysis

Switch # show zone analysis active vsan 1
Zoning database analysis vsan 1
Active zoneset: zs1 [*]
Activated at: 08:03:35 UTC Nov 17 2005
Activated by: Local [ GS ]
Default zone policy: Deny
Number of devices zoned in vsan: 0/2 (Unzoned: 2)
Number of zone members resolved: 0/2 (Unresolved: 2)
Num zones: 1
Number of IVR zones: 0
Number of IPS zones: 0
Formattted size: 38 bytes / 2048 Kb

Example for Zone Set Analysis

Switch # show zone analysis zoneset zs1 vsan 1
Zoning database analysis vsan 1
Zoneset analysis: zs1
Num zonesets: 1
Num zones: 0
Num aliases: 0
Num attribute groups: 0
Formattted size: 20 bytes / 2048 Kb

Different Methods

Resolving Host Not Communicating with Storage Using the CLI

To resolve a host that is not communicating with storage, follow these steps using the CLI:

·         Verify that the host and storage device are in the same VSAN (see the verification sketch after this list).

·         Check whether zoning needs to be configured by using the show zone status vsan-id command to determine if the default zone policy is set to deny.

Switch # show zone status vsan 1

VSAN: 1 default-zone: deny distribute: active only Interop: default
mode: basic merge-control: allow session: none
hard-zoning: enabled
Default zone:
qos: low broadcast: disabled ronly: disabled
Full Zoning Database :
Zonesets:0 Zones:0 Aliases: 0
Active Zoning Database:
Name: Database Not Available
Status:

The default zone policy of permit means all nodes can see all other nodes. Deny means all nodes are isolated when not explicitly placed in a zone.

·         Use the show zone member command for host and storage device to verify that they are both in the same zone.

·         Use the show zoneset active command to determine whether the zone from the previous step, along with the host and disk, appears in the active zone set.

Switch # show zoneset active vsan 2

zoneset name ZoneSet3 vsan 2
zone name Zone5 vsan 2
pwwn 10:00:00:00:77:99:7a:1b
pwwn 21:21:21:21:21:21:21:21

·          If there is no active zone set, use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate ZoneSet1 vsan 2


·         Verify that the host and storage can now communicate
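
For the VSAN check in the first step of this list, a minimal verification sketch (VSAN 2 and the device pWWNs are placeholders) uses the login and name-server tables:

Switch # show flogi database vsan 2

Switch # show fcns database vsan 2

Both the host HBA pWWN and the storage port pWWN should appear in the same VSAN. If either is missing, check the cabling and the port VSAN assignment before revisiting the zoning.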

Resolving Host and Storage Not in the Same Zone Using the CLI

To move the host and storage device into the same zone using the CLI, follow these steps:

·         Use the zone name zonename vsan-id command to create a zone in the VSAN if necessary, and add the host or storage into this zone.

Switch (config) # zone name NewZoneName vsan 2
Switch (config-zone) # member pwwn 22:35:00:0c:85:e9:d2:c2
Switch (config-zone) # member pwwn 10:00:00:00:c9:32:8b:a8

Note:   The pWWNs for zone members can be obtained from the device or by issuing the show flogi database vsan-id command.

·         Use the show zone command to verify that host and storage are now in the same zone.

Switch # show zone

zone name NewZoneName vsan 2
pwwn 22:35:00:0c:85:e9:d2:c2
pwwn 10:00:00:00:c9:32:8b:a8

zone name Zone2 vsan 4
pwwn 10:00:00:e0:02:21:df:ef
pwwn 20:00:00:e0:69:a1:b9:fc

zone name zone-cc vsan 5
pwwn 50:06:0e:80:03:50:5c:01
pwwn 20:00:00:e0:69:41:a0:12
pwwn 20:00:00:e0:69:41:98:93

·          Use the show zoneset active command to verify that you have an active zone set. If you do not have an active zone set, use the zoneset activate command to activate the zone set.

·          Use the show zoneset active command to verify that the zone in Step 2 is in the active zone set. If it is not, use the zoneset name command to enter the zone set configuration submode, and use the member command to add the zone to the active zone set.

Switch (config) # zoneset name zoneset1 vsan 2
Switch (config-zoneset) # member NewZoneName

·         Use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate ZoneSet1 vsan 2


·         Verify that the host and storage can now communicate.

Resolving Zone is Not in Active Zone Set Using the CLI

To add a zone to the active zone set using the CLI, follow these steps:

·         Use the show zoneset active command to verify that you have an active zone set. If you do not have an active zone set, use the zoneset activate command to activate the zone set.

·         Use the show zoneset active command to verify that the zone in Step 1 is not in the active zone set.

·         Use the zoneset name command to enter the zone set configuration submode, and use the member command to add the zone to the active zone set.

Switch (config) # zoneset name zoneset1 vsan 2
Switch (config-zoneset) # member NewZoneName

·         Use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate ZoneSet1 vsan 2


·         Verify that the host and storage can now communicate.



Cisco Trunking and Port Channeling Implementation | Cisco Load Balancing | Cisco Aggregation Switch


Port Channeling 

Introduction

Port Channels refer to the aggregation of multiple physical interfaces into one logical interface to provide higher aggregated bandwidth, load balancing, and link redundancy. Port Channels can connect interfaces across switching modules, so the failure of a switching module does not bring down the Port Channel link.

Cisco_PortChannel
Port Channel

About Port Channeling and Trunking

 Trunking is a commonly used storage industry term. However, the Cisco NX-OS software and switches in the Cisco MDS 9000 Family implement trunking and Port Channeling as follows:

• Port Channeling enables several physical links to be combined into one aggregated logical link.

Trunking
Trunking


Portchanneling_and_Trunking
Port channeling and Trunking


• Trunking enables a link transmitting frames in the EISL format to carry (trunk) traffic for multiple VSANs. For example, when trunking is operational on an E port, that E port becomes a TE port.

• Port Channeling—Interfaces can be channeled between the following sets of ports:

– E ports and TE ports

– F ports and NP ports

– TF ports and TNP ports

• Trunking—Trunking permits carrying traffic on multiple VSANs between switches. See the Cisco MDS 9000 Family NX-OS Fabric Configuration Guide.

• Both Port Channeling and trunking can be used between TE ports over EISLs.
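
As a minimal sketch of how trunking is enabled on an inter-switch link (the interface number and VSAN range below are placeholders):

Switch # config t

Switch (config) # interface fc1/4

Switch (config-if) # switchport trunk mode on

Switch (config-if) # switchport trunk allowed vsan 10-20

After the link comes up, show interface fc1/4 displays the operational trunk mode and the VSANs that are up on the trunk.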

About Port Channel Modes

There are two types of Port channel modes.

1.         ON (Default) 

2.         Active

Differences_between_ON_and_Active_modes
Differences between ON and Active modes

Restrictions

Cisco MDS 9000 Family switches support the following number of Port Channels per switch: 
• Switches with only Generation 1 switching modules do not support F and TF Port Channels. 
• Switches with Generation 1 switching modules, or a combination of Generation 1 and Generation 2 switching modules, support a maximum of 128 Port Channels. Only Generation 2 ports can be included in the Port Channels. 
• Switches with only Generation 2 switching modules or Generation 2 and Generation 3 modules support a maximum of 256 Port Channels with 16 interfaces per Port Channel. 
• A Port Channel number refers to the unique identifier for each channel group. This number ranges from 1 to 256.

Port Channel Configuration 

Port Channels are created with default values. You can change the default configuration just like any other physical interface.

PortChannel_Configurations
Port Channel Configurations

About Port Channel Configuration
 Before configuring a Port Channel, consider the following guidelines:

• Configure the Port Channel across switching modules to implement redundancy on switching module reboots or upgrades.

• Ensure that one Port Channel is not connected to different sets of switches. Port Channels require point-to-point connections between the same set of switches.

If you misconfigure Port Channels, you may receive a misconfiguration message. If you receive this message, the Port Channel’s physical links are disabled because an error has been detected.

A Port Channel error is detected if the following requirements are not met:

• Each switch on either side of a Port Channel must be connected to the same number of interfaces.

• Each interface must be connected to a corresponding interface on the other side.

• Links in a Port Channel cannot be changed after the Port Channel is configured. If you change the links after the Port Channel is configured, be sure to reconnect the links to interfaces within the Port Channel and re-enable the links.

If any of these requirements is not met, the faulty link is disabled.


Enter the show interface command for that interface to verify that the Port Channel is functioning as required.

Creating a Port Channel

1.         Enters configuration mode.

Switch # config t

2.         Configures the specified Port Channel (1) using the default ON mode.

Switch (config) # interface port-channel 1

3.   Configures the ACTIVE mode.

Switch (config-if) # channel mode active

4.   Reverts to the default ON mode.

Switch (config-if) # no channel mode active

5.   Deletes the specified Port Channel (1), its associated interface mappings, and the hardware associations for this Port Channel.

Switch (config) # no interface port-channel 1

Port-channel 1 deleted and all its members disabled please do the same operation on the switch at the other end of the port-channel

You can add or remove a physical interface (or a range of interfaces) to or from an existing Port Channel. The compatible parameters of the configuration are mapped to the Port Channel.

Adding an interface to a Port Channel increases the channel size and bandwidth of the Port Channel; removing an interface decreases them.

 A port can be configured as a member of a static Port Channel only if the following configurations are the same in the port and the Port Channel:

Speed

Mode

Rate mode

Port VSAN

Trunking mode

Allowed VSAN list or VF-ID list 

Compatibility Check

A compatibility check ensures that the same parameter settings are used in all physical ports in the channel. Otherwise, they cannot become part of a Port Channel. The compatibility check is performed before a port is added to the Port Channel.

The check ensures that the following parameters and settings match at both ends of a Port Channel:

• Capability parameters (type of interface, Gigabit Ethernet at both ends, or Fibre Channel at both ends).

• Administrative compatibility parameters (speed, mode, rate mode, port VSAN, allowed VSAN list, and port security).

To add an interface to a Port Channel, follow these steps:

1.         Configures the specified port interface (fc1/15). 
Switch (config) # interface fc1/15

2.   Adds physical Fibre Channel port 1/15 to channel group 15. If channel group 15 does not exist, it is created. The port is shut down.

Switch (config-if) # channel-group 15


To add a range of ports to a Port Channel, follow these steps:

1.         Configures the specified range of interfaces. In this example, interfaces from 1/1 to 1/5 are configured.

Switch (config) # interface fc1/1 – 5

2.         Adds physical interfaces 1/1, 1/2, 1/3, 1/4, and 1/5 to channel group 2. If channel group 2 does not exist, it is created.

If the compatibility check is successful, the interfaces are operational and the corresponding states apply to these interfaces.

Switch (config-if) # channel-group 2

3.   Deletes the physical Fibre Channel interfaces in channel group 2.

Switch (config-if) # no channel-group 2

Enabling and Configuring Auto creation

1.         Enters the configuration mode for the selected interface(s).

Switch (config) # interface fc8/13

2.   Automatically creates the channel group for the selected interface(s).

Switch (config- if) # channel-group auto

3.   Disables the auto creation of channel groups for this interface, even if the system default configuration may have auto creation enabled.

Switch (config- if) # no channel-group auto

Some useful commands

 Displays the Port Channel Summary

Switch # show port-channel summary

------------------------------------------------------------------
Interface           Total Ports    Oper Ports    First Oper Port
------------------------------------------------------------------
port-channel 77     2              0             --
port-channel 78     2              0             --
port-channel 79     2              2             fcip200


Displays the Port Channel Configured in the Default ON Mode

Switch # show port-channel database

port-channel 77
Administrative channel mode is on
Operational channel mode is on
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fcip1 [down]
  fcip2 [down]

port-channel 78
Administrative channel mode is on
Operational channel mode is on
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fc2/1 [down]
  fc2/5 [down]

port-channel 79
Administrative channel mode is on
Operational channel mode is on
Last membership update succeeded
First operational port is fcip200
2 ports in total, 2 ports up
Ports: fcip101 [up]
  fcip200 [up] *


Displays the Port Channel Configured in the ACTIVE Mode

port-channel 77
Administrative channel mode is active
Operational channel mode is active
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fcip1 [down]
  fcip2 [down]

port-channel 78
Administrative channel mode is active
Operational channel mode is active
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fc2/1 [down]
  fc2/5 [down]

port-channel 79
Administrative channel mode is active
Operational channel mode is active
Last membership update succeeded
First operational port is fcip200
2 ports in total, 2 ports up
Ports: fcip101 [up]
  fcip200 [up] *

Displays the Consistency Status without Details

Switch # show port-channel consistency
Database is consistent

Displays the Consistency Status with Details

Switch # show port-channel consistency detail

 Displays the Port Channel Usage

Switch # show port-channel usage

 Displays the Port Channel Compatibility

Switch # show port-channel compatibility-parameters

 Displays Auto created Port Channels

Switch # show interface fc1/1

Displays the Specified Port Channel Interface

Switch # show port-channel database interface port-channel 128







Installing the Iscsi Switch | Procedure to Install Fiber Channel Switch


Procedure to Install Fiber Channel Switch 

Brief Introduction


The client contacts the FC SAN switch vendor to procure a new switch for the IT environment with the required specifications.

Once the switch is delivered to the data center, we have to do the BOM (Bill of Materials) verification.

After that, the switch vendor will assign an engineer to install it; if not, we can install it ourselves as follows.

Let us assume we are going to install a CISCO MDS 9148 Switch.

The Cisco MDS 9148 Multilayer Fabric Switch has 48 Fibre Channel ports with speeds of 8, 4, 2, and 1 Gbps. 

The Cisco MDS 9148 Switch is a top-of-rack (TOR) Fibre Channel switch based on System-on-a-Chip (SOC) technology, which is a Cisco innovation. 

The Cisco MDS 9148 Multilayer Fabric Switch has these features:

·         16, 32, or 48 default licensed ports and an 8-port on-demand license.

·         8, 4, 2, 1 Gbps full line rates.

·         128 buffers available as a shared pool to each port group: 32 buffers per Fibre Channel (FC) port. A maximum of 125 buffers per port in a port group.

·         Fair bandwidth arbiters.

·         Device Manager Quick Config Wizard for the Cisco MDS 9148 Switch. 

·         Redundant power supplies and fans.


Enterprise class features such as In-Service Software Upgrades (ISSU), Virtual SANs (VSANs), security features, and quality of service (QoS).


Front_view_of_a_switch
Front view
Back_view_of_a_switch
Back view

Verifying Your Shipping Contents


Verify that you have received all items, including the following:

•         Rack-mount kit

•         ESD wrist strap

•         Cables and connectors


•        Any optional items ordered

Installing the Switch

Install the switch in one of the following enclosures:

·         An open EIA rack

·         A perforated or solid-walled EIA cabinet

·         A two-post Telco rack




Racking_the_Switch_in_the_cabinet
Racking the Switch in the cabinet


Installing the SFPs (small form-factor pluggable transceivers)

Install one of the following SFPs in each empty port:

·         A Fibre Channel Shortwave 1, 2, 4 or 8 Gbps SFP transceiver, part number DS-SFP-FC8G-SW

·         A Fibre Channel Long wavelength 1, 2, 4 or 8 Gbps SFP transceiver, part number DS-SFP-FC8G-LW

·         A Fibre Channel Short wavelength 1, 2, 4 Gbps SFP transceiver, part number DS-SFP-FC4G-SW



Installing_SFP
Installing SFP


Powering Up the Switch

To power up the switch, follow these steps:


·         Ground the switch

Switch_Ground
Switch Ground
·         Connect the power cable to the AC power receptacle.

·         The Cisco MDS 9148 Switch supports only AC power supplies. The power supply status is indicated by the front panel LED.

·         The Cisco MDS 9148 Switch includes a front panel reset button that resets the switch without cycling the power.


·         Power up the switch.

Power_On/Off_and_Power_receptacle
Power On/Off and Power receptacle

Setting up a Network

To set up a network, follow these steps:

·         Ensure that the Mgmt0 port is connected to the management network.


·         Ensure that the console port is connected to the PC serial port (or to a terminal server). For example, on a Windows PC used as a terminal emulator, you can use HyperTerminal. The default baud rate on the console port is 9600.

Connection_to_the_terminal
Connection to the terminal

·         Use the switch setup utility that appears on the console connection.

·         Use the switch setup utility to do the following:

a)   Set the admin password for the switch

b)   Assign an IP address and a net mask to the switch

IP Address Step in the Setup Utility

Continue with Out-of-band (mgmt0) management configuration? [yes/no]: yes

 Mgmt0 IPV4 address: 209.165.200.225

Mgmt0 IPV4 netmask: 255.255.255.224

c)   Set up the default gateway.


Note: The switch is now ready to be managed via the Mgmt0 port using Telnet, Device Manager, or Fabric Manager.
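
As a quick sanity check once the switch is reachable (a sketch only; the output varies by model and NX-OS release), you can log in over the management port and run:

Switch # show version

Switch # show module

Switch # show interface brief

These display the NX-OS/SAN-OS version, the status of the supervisor and switching modules, and the state of each Fibre Channel port, respectively.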









Introduction to VNX / Clariion | Specifications of VNX & Clariion Models


VNX/Clariion

Preface


Clariion is a SAN disk array that was manufactured and sold by EMC Corporation; it occupied the entry-level and mid-range of EMC's SAN disk array products.

Upon launch in 2008, the latest-generation CX4 series storage arrays supported Fibre Channel and iSCSI front-end bus connectivity. The Fibre Channel back-end bus offered 4 Gbit/s of bandwidth with FC-SCSI or SATA II disks.

The EMC Celerra NAS device is based on the same X-Blade architecture as the CLARiiON storage processor.

In 2011, EMC introduced the new VNX series of unified storage disk arrays, intended to replace both the CLARiiON and Celerra products.

In early 2012, both CLARiiON and Celerra were discontinued.


EMC_Clariion
EMC Clariion

Specifications


CX series 

CX_series_specification_sheet
CX series specification sheet
 CX3 series

CX3_series_specification_sheet
CX3 series specifications sheet
CX4 UltraFlex series

CX4_series_specification_sheet
CX4 series specifications sheet
VNX 
EMC_VNX
EMC VNX
The EMC VNX family has three series.


VNX_series
VNX series
Each VNX series has different models covering entry-level, mid-range, and high-end storage boxes.

For more details about the VNX family, please refer to the link below.


http://www.ais-cur.com/vnx-family%20presentation.pdf


VNX 1 Series

VNX_1_models
VNX 1 models
For more information about the VNX 1 models, please refer to the link below.


VNX 2 Series

VNX_2_models
VNX 2 models
For more information about the VNX 2 models, please refer to the link below.

https://www.emc.com/collateral/white-papers/h12145-intro-new-vnx-series-wp.pdf

VNXe Series
VNXe_Models
VNXe 3100                      VNXe 3300
For more information about the VNXe models, please refer to the link below.

http://www.emc.com/collateral/hardware/white-papers/h8178-vnxe-storage-systems-wp.pdf

Specifications of the VNX series are below.

VNX 1 Series


VNX_1_Physical_Specifications
VNX 1 Physical Specifications 1

VNX_1_Physical_Specifications
VNX 1 Physical Specifications 2

VNX_1_Physical_Specifications
VNX 1 Physical Specifications 3
For more details about the VNX 1 series models, please refer to the link below.


VNX 2 Series


VNX_2_Physical_Specifications
VNX 2 Physical Specifications 1


VNX_2_Physical_Specifications
VNX 2 Physical Specifications 2

VNX_2_Physical_Specifications
VNX 2 Physical Specifications 3

VNX_2_Physical_Specifications
VNX 2 Physical Specifications 4
For more details about the VNX 2 series models, please refer to the link below.


VNXe Series

VNXe_specifications
VNXe Physical Specifications 1

VNXe_specifications
VNXe Physical Specifications 2

VNXe_specifications
VNXe Physical Specifications 3

For more details about the VNXe series models, please refer to the link below.





VNX Architecture | EMC Clariion Architecture | VNX 1 and VNX 2 Spec


VNX/Clariion Block Architecture


Clariion_VNX_Architecture
VNX/Clariion Architecture

Note:

EMC introduced the VNX Series, designed to replace both the Clariion and Celerra products.

Architecture


Clariion and VNX block-level storage share the same architecture, and the VNX unified storage architecture includes both the SAN and NAS parts.

The main differences are in the specifications, such as capacity, ports, drives, and connectivity.

Clariion/VNX is a mid-range storage array with an active/passive architecture.

It has the following modules:

SPE – Storage Processor Enclosure 

DPE – Disk Processor Enclosure 

DAE – Disk Array Enclosure 

SPS – Stand-by Power Supply 

Each SPE has two storage processor named as SP A & SP B, they are connected with Clariion Messaging Interface (CMI).  

And each SP has front end ports and back end ports and cache memory. 

Front-end ports serve host I/O requests, while back-end ports communicate with the disks.
Cache is of two types: write cache, which is mirrored, and read cache, which is not mirrored.

The first DAE that is connected is known as the DAE OS.

Its first five drives are known as vault drives or code drives. They are used to save critical data in case of power failure and also hold data such as the SP A and SP B boot information, which is mirrored.

All the drives are connected through the Link Control Cards (LCCs).

The FLARE operating environment is triple mirrored, and the Persistent Storage Manager (PSM) is also triple mirrored.

Each DAE has primary and expansion ports, which are used to connect additional DAEs.

Basically, VNX offers three main types of storage:

Block-level storage

File-level storage

Unified storage array

For more details about the VNX series and models, please refer to the link below.

https://www.emc.com/en-us/storage/vnx.htm

VNX 1 Series

The VNX 1 series includes five models that are available in block, file, and unified configurations:

VNX 5100, VNX 5300, VNX 5500, VNX 5700, and VNX 7500.

The Block configuration for VNX 5500 & VNX 5700 shows below:

Block_configuration_for_VNX_5500_&_VNX_7500
Block configuration for VNX 5500 & VNX 7500

The File configuration for VNX 5300 & VNX 5700 shows below:

File_configuration_for_VNX_5300_&_VNX_ 5700
File configuration for VNX 5300 & VNX 5700
The Unified configuration for VNX 5300 & VNX 5700 shows below:

Unified_configuration_for_VNX_5300_&_VNX_5700
Unified configuration for VNX 5300 & VNX 5700

The VNX series uses updated components that make it significantly denser than earlier arrays.


Block_dense_configuration
                                Example of a Block dense configuration 

Unified_dense_configuration
                            Example of a Unified dense configuration


A 25 drive 2.5” SAS-drive DAE

Front_view_of_25_SAS_drive_DAE
Front view of 25 SAS-drive DAE

Back_view_of_25_SAS_drive_DAE
 Back view of 25 SAS-drive DAE

A close up of the Back view with the ports naming

A_close_up_of_the_back_view
A close up of the back view

A 15 drives DAE

Front_view_of_a_15_drives_DAE
Front view of a 15 drives DAE

Back_view_of_15_drives_DAE
Back view of 15 drives DAE

 A close up of the Back view with the ports naming

Close_up_view_of_a_15_drives_DAE
Close up view of a 15 drives DAE

A picture of Link Control Cards LCC cards connectivity

Link_Control_Cards
Link Control Cards
A picture of a cooling module

Cooling_module
Cooling module

VNX 2 Series

The VNX 2 series includes six models that are available in block, file, and unified configurations:

VNX 5200, VNX 5400, VNX 5600 , VNX 5800, VNX 7600 and VNX 8000.

There are two existing Gateway models:

VNX VG2 and VNX VG8

There are two VMAX® Gateway models:

VNX VG10 and VNX VG50 

A model comparison chart for VNX 2 series.

comparison_chart
A model comparison chart

The Block configuration for VNX 5600 & VNX 8000 shows below:

Block_configuration_for_VNX_5600_&_VNX_8000
Block configuration for VNX 5600 & VNX 8000

The File configuration for VNX 5600 & VNX 8000 shows below:

File_configuration_VNX_5600_&_VNX_8000
File configuration VNX 5600 & VNX 8000

The Unified configuration for VNX 5600 & VNX 8000 shows below:

Unified_configuration_VNX_5600_&_VNX_8000
Unified configuration VNX 5600 & VNX 8000
As mentioned earlier, the VNX 2 series uses updated components that make it significantly denser than earlier arrays.

VNX2_Block_dense_configuration
          Example of Block dense configuration

VNX2_Unified_dense_configuration
         Example of Unified dense configuration

The below picture shows the back view of the DPE with SP A (on the right) and SP B (on the left).

Back_view_of_DPE
Back view of DPE 
Picture shows a close-up of the back of the DPE-based storage processor.
Close_up_view_of_the_back_view_of_the_DPE_based_Storage_Processor
Close up view of the back view of the DPE based Storage Processor

Power_fault_activity_link_and_status_LED
Power, fault, activity, link and status LED

A pic of Storage processor management and base module ports

Storage_processor_management_and_base_module_ports
Storage processor management and base module ports






VNX Installation | VNX Architecture | Rules to deploying the VNX Array


VNX Installation 

Introduction

The VNX series is designed for a wide range of environments, from mid-tier through enterprise. VNX offers file-only, block-only, and unified (block and file) implementations. The VNX series is managed through a simple and intuitive user interface called Unisphere. The VNX software environment offers significant advancements in efficiency, simplicity, and performance.

Architecture


VNX-Architecture
VNX Architecture


Basic Hardware


VNX-Hardware
Basic Hardware


Installation procedure


Installing Rails


Installing the Standby power supply (SPS) rails.

  • The Standby power supply (SPS) is a 1U component and uses a 1 U adjustable kit.

  • Insert the adjustable rail slide and seat the rail extensions into the rear channel of your cabinet.

  • Extend the rail and align the front of the rails as shown below. Ensure that they are level front to back and with the companion rail, left to right.

  • Insert two retention screws in the front and two retention screws in the back of each rail as shown below.
Stand-by-power-supply-rail
Installing the Standby power supply (SPS) rails

Installing the Disk Processor Enclosure (DPE) rail

  • The Disk processor enclosure (DPE) rails should be installed immediately above the SPS rails.

  • Insert the adjustable 3 U rail slide and seat both alignment pins into the rear channel of your cabinet.

  • Extend the rail and align the front of the rails as shown below.

  • Insert two retention screws in the middle two holes in the front.

  • Insert two retention screws in the back of each rail.
Disk-processor-enclosure-rail

Installing the Disk Processor Enclosure (DPE) rail


Installing the components


Installing the standby power supply

  • Slide the SPS enclosure into the cabinet rails at the bottom of the rack. Ensure that the enclosure is fully in the cabinet.

  • When the SPS is in place, insert the screws and bezel brackets to hold the enclosure in place in the cabinet. Do not tighten the screws completely until all of the components are in place.


Installing-the-standby-power-supply
Installing the standby power supply

Installing the Disk Processor Enclosure (DPE)

  • There are two types of disk processor enclosures (DPEs). Your disk processor enclosure can be either a 3U, 25 x 2.5" drive DPE or a 3U, 15 x 3.5" drive DPE.

Disk-processor-enclosure
Disk Processor Enclosure

  • Locate the product ID/SN from the product serial number tag (PSNT) located at the back side of the DPE.

  • Record this number to use when you register the product during system setup steps.

  • Slide the DPE into the 3U DPE rails in the cabinet. Ensure that the enclosure is fully in the cabinet.

  • When the DPE is in place, insert and tighten all of the screws.
Inserting-the-DPE
Inserting/Racking the DPE


  • Cable the standby power supply to the SP serial port.

  • The cable has an RJ45 connection on one end and a 9-pin mini-connector on the other end, as shown below.


Cabling 

  • Connect SPS A to SP A.

  • Connect SPS B to SP B.

Cabling-of-DPE-to-SPS
Cabling of Disk Processor Enclosure to Standby power supply

Attaching Storage Processor to the network


  • The storage processor and the windows host from which you initialize the storage system must share the same subnet on your public LAN.


  • Locate your Ethernet cables.

Cabling-from-Storage-to-LAN
Cabling from Storage to LAN

  • Connect your public LAN using an Ethernet cable to the RJ45 port on SP A and SP B.

Public-LAN-cabling
Public LAN Cabling


Power up


Before you power up


  • Ensure that the switches for SPS A and B are turned off.

  • Ensure that all cabinet circuit breakers are in the on position, all necessary PDU switches are switched on, and power is connected. 

  • Connecting or verifying power cables. 

  • Connect the SPS power A to power distribution unit (PDU) A.

  • Connect SPS A to SP A.

  • Connect SPS power B to PDU B.

  • Connect SPS B to SP B.

  • Lock each power cable in place.

  • Turn the SPS switches ON. Wait 15 minutes for the system to power up completely.

  • Monitor the system as it powers up.

SPS-to-PDU-connectivity
SPS to PDU connectivity


Setup

  • After you have completed all of the installation steps, continue to set up your system by performing the post-installation tasks.
  • Connect a management station.

  • You must connect a management station to your system directly or remotely over a subnetwork.

  • This computer will be used to continue setting up your system and must be on the same subnet as the storage system to complete the initialization.

  • Downloading the Unisphere Storage System Initialization Wizard. 

  • Go to https://support.emc.com and select VNX series > Install and Configure.

  • From VNX installation tools, download the Unisphere Storage System Initialization Wizard.

  • Double click the downloaded executable and follow the steps in the wizard to install the utility.

  • On the install complete screen, make sure that the Launch Unisphere Storage System Initialization Wizard check box is selected, click done.

  • The initialization utility opens. Follow the online instructions to discover and assign IP address to your storage system.

  • Check system health.

  • Login to Unisphere to check the health of your system, including alerts, events and statistics.

  • Open a browser and enter the IP address of SP.

  • Use the sysadmin credentials to log in to Unisphere. You may be prompted by certificate-related warnings. Accept all the certificates as “Always trust”.


  • Select your storage system and select System > Monitoring and alerts. 


Set the storage system cache values

  • You must allocate cache on the system. Allocate 10% of the available cache (with a minimum of 100 MB and a maximum of 1024 MB) to read cache and the rest to write cache.

  • Open a browser and enter the IP address of SP.

  • Use the sysadmin credentials to log in to Unisphere. You may be prompted by certificate-related warnings. Accept all the certificates as “Always trust”.

  • Select your storage system and select System > Hardware > Storage Hardware.

  • From the task list, under system management, select manage cache.


Install ESRS and configure ConnectHome

  • You can ensure that your system communicates with your service provider by installing the VNX ESRS IP Client. 


  • Under VNX tasks, select initialize and register VNX for block and configure ESRS.

  • Select the appropriate options for your configuration.

  • Select install and configure ESRS to generate a customized version of EMC Secure Remote Support IP Client.

Configure Servers for the VNX System

  • Installing HBAs in the server.

  • For the server to communicate with the system Fibre Channel data ports, it must have one or more supported HBAs installed.

  • If the server is powered up:

 a. Shut down the server's operating system.

 b. Power down the server.

 c. Unplug the server's power cord from the power outlet.

Put on an ESD wristband, and clip its lead to bare metal on the server's chassis.

For each HBA that you are installing:

 a. Locate an empty PCI bus slot or a slot in the server that is preferred for PCI cards.

b. Install the HBA following the instructions provided by the HBA vendor.

c. If you installed a replacement HBA, reconnect the cables that you removed in the exact same way as they were connected to the original HBA.

Plug the server's power cord into the power outlet, and power up the server.


Installing or updating the HBA driver

The server must run a supported operating system and a supported HBA driver. EMC recommends that you install the latest supported version of the driver.

If you have an Emulex driver, download the latest supported version and instructions.

For installing the driver from the vendor’s website.

http://www.emulex.com/products/fibre-channel-hbas.html

If you have a QLogic driver, download the latest supported version and instructions.

For installing the driver from the vendor’s website.

http://support.qlogic.com/support/oem_emc.asp


Installing the HBA driver

Install any updates, such as hot fixes or service packs, to the server’s operating system  that are required for the HBA driver version you are installing.

If the hot fix or patch requires it, reboot the server.

Install the driver following the instructions on the HBA vendor’s website.

Reboot the server when the installation program prompts you to do so. If the installation program did not prompt you to reboot, then reboot the server when the driver installation is complete.

Installing the Unisphere Host Agent on a windows server

Log in as the administrator or a user who has administrative privileges.

If your server is behind a firewall, open TCP/IP port 6389. This port is used by the host agent. If this port is not opened, the host agent will not function properly.
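
As a minimal sketch for opening that port on a Windows server (assuming the built-in Windows Firewall; the rule name below is arbitrary), run the following from an elevated command prompt:

netsh advfirewall firewall add rule name="Unisphere Host Agent" dir=in action=allow protocol=TCP localport=6389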

If you are running a version prior to 6.26 of the host agent, you must remove it before continuing with the installation.

Download the software

From the EMC Online Support website, select the appropriate VNX Series Support by Product page and select Downloads.

Select the Unisphere Host Agent, and then select the option to save the software to your server.

Double-click the executable file listed below to start the installation wizard.

Unisphere Host Agent-Win-32-x86-en_US-version-build.exe.

Follow the instructions on the installation screens to install the Unisphere Host Agent.

The Unisphere Host Agent software is installed on the Windows server. If you selected the default destination folder, the software is installed in the C:\ProgramFiles\EMC\HostAgent.

Once the Unisphere Host Agent installation is complete, the Initialize Privileged User List dialog box is displayed.

In the Initialize Privileged User List dialog box, perform one of the following:

If the Config File field contains a file entry, then a host agent configuration file already exists on the server from a previous agent installation. Select Use Existing file to use this configuration file, or select Browse to use a different file.

The host agent configuration file contains a list of login names for this server. Only users whose usernames are listed in the Privileged User List can send CLI commands to the system.

To add a user to the list:

Click Add to open the Add Privileged User dialog box.
In the Add Privileged User dialog box, under User Name, enter the person’s account username, for example, Administrator.

Under System Name, enter the name of the host running Unisphere (for example, Host4) and click OK.

To remove a privileged user from the list:

a. Select the privileged username, and click Remove.

Click OK to save the new privileged user list and/or the new configuration file.

The program saves the host agent configuration file with the new privileged user entries and starts the host agent.

Click Finish.

A command line window opens indicating that the host agent service is starting.

If the system prompts you to reboot the server, click yes.

Connecting the VNX to the server in a Fibre Channel switch configuration

Use optical cables to connect the VNX Fibre Channel front-end ports to Fibre Channel switch ports, and to connect the switch ports to the server HBA ports.

For the highest availability in a server with multiple HBAs, connect one or more HBA ports to ports on one switch and connect the same number of HBA ports to ports on the same switch or, if two switches are available, on another switch.


High-Availability-(HA)-of-servers
High Availability (HA) of servers


Zoning the switches

We recommend single-initiator zoning as a best practice. In single-initiator zoning, each HBA port has a separate zone that contains it and the SP ports with which it communicates.
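
As a sketch of a single-initiator zone on a Cisco MDS switch, consistent with the commands used earlier in this guide (the zone name, zone set name, VSAN number, and pWWNs below are placeholders):

Switch # config t

Switch (config) # zone name Host1_HBA0 vsan 2

Switch (config-zone) # member pwwn 10:00:00:00:c9:32:8b:a8

Switch (config-zone) # member pwwn 50:06:01:60:3b:20:43:11

Switch (config-zone) # exit

Switch (config) # zoneset name FabricA_ZS vsan 2

Switch (config-zoneset) # member Host1_HBA0

Switch (config-zoneset) # exit

Switch (config) # zoneset activate name FabricA_ZS vsan 2

The first pWWN represents the host HBA port and the second represents a VNX SP front-end port; repeat with a separate zone for each HBA port.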

For more details, please refer the link below.







Welcome to SAN Storage and it's manufacturers | Storage Terminology


SAN STORAGE

Hi All,

Today you will get an overview of SAN and NAS, and also of the storage market leaders in the world. To learn about Information Storage and Management, refer to the link below.

http://www.sanadmin.net/2016/02/Information-Storage-Management.html


Introduction about SAN Storage

A storage area network (SAN) is a network which provides access to consolidated, block-level data storage. SANs are primarily used to make storage devices, such as disk arrays, tape libraries, and optical jukeboxes, accessible to servers so that the devices appear to the operating system as locally attached devices. A SAN typically has its own network of storage devices that are generally not accessible through the local area network (LAN) by other devices.

SAN compared to NAS


SAN_vs_NAS
SAN compared to NAS


Here are some of the leading manufacturers in the IT infrastructure space.

SAN storage manufacturers: EMC, NetApp, HP, Hitachi, IBM, Dell, Tintri, Oracle, etc.

Fibre switch manufacturers: Brocade, Cisco, and McData

HBA manufacturers: QLogic and Emulex.

Server manufacturers: Oracle, Dell, IBM, HP, and Hitachi.

Types of drives: SATA, SAS, NL-SAS, FC, and EFD/SSD/flash drives.

Storage Terminology

There are some common terms you will encounter on the storage platform.

LUN: LUN stands for Logical Unit Number. It is a slice of capacity carved from the array's drives and presented to a host.

Raid Group: A collection of up to 16 drives of the same drive type, from which LUNs are created.

Storage Pool: A collection of drives, of the same or different drive types, from which LUNs are created.

Masking: Means that a particular LUN is visible only to a particular host; that is, a LUN is visible only to the storage group/host to which it has been added.

Storage Group: Essentially represents a host on the array. A storage group is a collection of one or more LUNs (or meta LUNs) to which you connect one or more servers.

Meta LUN: The meta LUN feature allows traditional LUNs to be aggregated in order to increase the size or performance of the base LUN. A LUN is expanded by the addition of other LUNs. The LUNs that make up a meta LUN are called meta members, and the base LUN is known as the meta head. We can add up to 255 meta members to 1 meta head (256 LUNs in total).

Access logix: Access Logix provides LUN masking that allows sharing of storage system.
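
As a sketch of how a storage group (and therefore LUN masking) is handled from the command line (the SP IP address, group name, host name, and LUN numbers below are placeholders):

naviseccli -h <SP_IPaddress> storagegroup -create -gname Host1_SG

naviseccli -h <SP_IPaddress> storagegroup -connecthost -host Host1 -gname Host1_SG

naviseccli -h <SP_IPaddress> storagegroup -addhlu -gname Host1_SG -hlu 0 -alu 25

naviseccli -h <SP_IPaddress> storagegroup -list -gname Host1_SG

This creates the storage group, connects the host's registered initiators to it, presents array LUN 25 to the host as host LUN 0, and lists the result.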

PSM: The Persistent Storage Manager LUN stores configuration information about the VNX/Clariion, such as disks, RAID groups, LUNs, Access Logix information, and the SnapView, MirrorView, and SAN Copy configurations.

Migration: With storage migration you can move storage from one location to another without interrupting the workload of the virtual machine, if it is running. You can also use storage migration to move, service, or upgrade storage resources, or to migrate a standalone or clustered virtual machine.

Replication: Remote replication is the process of copying production data to a device at a remote location for data protection or disaster recovery purposes. Remote replication may be either synchronous or asynchronous. Synchronous replication writes data to the primary and secondary sites at the same time.

Archiving: Data archiving is the process of moving data that is no longer actively used to a separate storage device for long-term retention. Archive data consists of older data that is still important to the organization and may be needed for future reference, as well as data that must be retained for regulatory compliance.

The FLARE code is broken down as follows:

 1.14.600.5.022 (32 Bit)
 2.16.700.5.031 (32 Bit)
 2.24.700.5.031 (32 Bit)
 3.26.020.5.011 (32 Bit)
 4.28.480.5.010 (64 Bit)

The first digit (1, 2, 3, or 4) indicates the generation of machine this code level can be installed on. For the 1st and 2nd generation of machines (CX600 and CX700), you should be able to use standard 2nd-generation code levels; CX3 code levels have a 3 in front, and so forth.
These numbers will always increase as new Generations of VNX/Clariion machines are added.

The next two digits are the release numbers; these release numbers are very important and really give you additional features related to the VNX/Clariion FLARE Operating Environment. When someone comes up to you and says, my VNX/Clariion CX3 is running Flare 26, this is what they mean. These numbers will always increase, 28 being the latest FLARE Code Version.

The next 3 digits are the model number of the VNX/Clariion, like the CX600, CX700, CX3-20 and CX4-480. These numbers can be all over the map, depending what the model number of your VNX/Clariion is.

The 5 here is of unknown origin; it carries over from previous FLARE releases. Going back to the pre-CX days (FC series), this 5 was already used. It is believed to be some sort of code used internally at Data General indicating a FLARE release.

The last 3 digits are the Patch level of the FLARE Environment. This would be the last known compilation of the code for that FLARE version.
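
As a worked example of this breakdown, using the last code level listed above:

4.28.480.5.010

4 -> generation (CX4 hardware)
28 -> FLARE release (FLARE 28)
480 -> array model (CX4-480)
5 -> the constant carried over from earlier FLARE releases
010 -> patch level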

Failover modes: There are four types:
                1. Failover mode 1 or Passive/Passive mode

                2. Failover mode 2 or Passive/Active mode

                3. Failover mode 3 or Active/Passive mode

                4. Failover mode 4 or Active/Active mode

As per best practice, failover mode 4 is more suitable than the others.

With Failover Mode 4, in the case of a path, HBA, or switch failure, when I/O routes to the non-owning SP, the LUN may not trespass immediately.

To learn about the components of a storage system environment, refer to the link below.

http://www.sanadmin.net/2015/10/fc-storage.html
