
Cisco Switch Configuration


Initial Switch Configuration

                        Once the Customer Engineer (CE) has racked the switch and all cabling is done, the next task is the switch configuration.

Configuration Prerequisites

Before you configure a switch in the Cisco MDS 9000 Family for the first time, make sure you have the following information:

Administrator password

Switch name — this name is also used as your switch prompt.

IP address for the switch's management interface.

Subnet mask for the switch's management interface.

IP address of the default gateway.

The network team will assist you in getting the IP address and subnet mask details.

Procedure to configure switch

1.         Verify the physical connections for the new Cisco MDS 9000 Family switch.

2.         Power on the switch. The switch boots automatically.

Note: If the switch boots to the loader> or switch (boot) prompt, contact your storage vendor support organization for technical assistance.


After powering on the switch, you see the following output:

General Software Firmbase[r] SMM Kernel 1.1.1002 Aug 6 2003 22:19:14 Copyright (C) 2002 General Software, Inc.

Firmbase initialized.

00000589K Low Memory Passed
01042304K Ext Memory Passed
Wait.....

General Software Pentium III Embedded BIOS 2000 (tm) Revision 1.1.(0)
(C) 2002 General Software, Inc.
Pentium III-1.1-6E69-AA6E

+------------------------------------------------------------------------------+
| System BIOS Configuration, (C) 2002 General Software, Inc. |
+---------------------------------------+--------------------------------------+
| System CPU : Pentium III | Low Memory : 630KB |
| Coprocessor : Enabled | Extended Memory : 1018MB |
| Embedded BIOS Date : 10/24/03 | ROM Shadowing : Enabled |
+---------------------------------------+--------------------------------------+
Loader Loading stage1.5.

Loader loading, please wait...
Auto booting bootflash:/m9500-sf1ek9-kickstart-mz.2.1.1a.bin bootflash:/m9500-sf1ek9-mz.2.1.1a.bin...
Booting kickstart image: bootflash:/m9500-sf1ek9-kickstart-mz.2.1.1a.bin...................Image verification OK

Starting kernel...
INIT: version 2.78 booting
Checking all filesystems..... done.
Loading system software
Uncompressing system image: bootflash:/m9500-sf1ek9-mz.2.1.1a.bin
CCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCCC
INIT: Entering runlevel: 3


3.         Enter the password you wish to assign to the admin username.

Tip: If you create a password that is short and easy to decipher, then your password is rejected. Be sure to configure a strong password. Passwords are case-sensitive.


4.         Enter yes to enter setup mode.

This setup utility will guide you through the basic configuration of the system. Setup configures only enough connectivity for management of the system.
*Note: setup is mainly used for configuring the system initially,
when no configuration is present. So setup always assumes system
defaults and not the current system configuration values.
Press Enter at any time to skip a dialog. Use ctrl-c at anytime
to skip the remaining dialogs.

Would you like to enter the basic configuration dialog (yes/no): yes

The switch setup utility guides you through the basic configuration process. Press Ctrl-C at any prompt to end the configuration process.


5.         Enter no (no is the default) to not create any additional accounts.

Create another login account (yes/no) [n]: no


6.         Enter no (no is the default) to not configure any read-only SNMP community strings.

Configure read-only SNMP community string (yes/no) [n]: no


7.         Enter no (no is the default) to not configure any read-write SNMP community strings.

Configure read-write SNMP community string (yes/no) [n]: no


8.         Enter a name for the switch.

Note: The switch name is limited to 32 alphanumeric characters. The default is switch.

Enter the switch name: switch_name


9.         Enter yes (yes is the default) to configure the out-of-band management configuration.

Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: yes

a.         Enter the IP address for the mgmt0 interface.

Mgmt0 IP address: mgmt_IP_address

b.         Enter the netmask for the mgmt0 interface in the xxx.xxx.xxx.xxx format.

Mgmt0 IP netmask : xxx.xxx.xxx.xxx


10.       Enter yes (yes is the default) to configure the default gateway (recommended).

Configure the default-gateway: (yes/no) [y]: yes


11.       Enter the default gateway IP address.

IP address of the default-gateway: default_gateway


12.       Enter no (no is the default) to configure advanced IP options such as in-band management, static routes, default network, DNS, and domain name.

Configure Advanced IP options (yes/no)? [n]: no


13.       Enter yes (yes is the default) to enable Telnet service.

Enable the telnet service? (yes/no) [y]: yes


14.       Enter no (no is the default) to not enable the SSH service.

Enable the ssh service? (yes/no) [n]: no


15.       Enter no (no is the default) to not configure the NTP server.

Configure the ntp server? (yes/no) [n]: no


16.       Enter noshut (shut is the default) to configure the default switch port interface to the noshut state.

Configure default switchport interface state (shut/noshut) [shut]: noshut


17.       Enter on (on is the default) to configure the switch port trunk mode.

Configure default switchport trunk mode (on/off/auto) [on]: on


18.       Enter deny (deny is the default) to configure a default zone policy configuration.

Configure default zone policy (permit/deny) [deny]: deny

This step denies traffic flow for all members of the default zone.


19.       Enter yes (no is the default) to enable a full zone set distribution (refer to the Cisco MDS 9000 Family Configuration Guide).

Enable full zoneset distribution (yes/no) [n]: yes

You see the new configuration. Review and edit the configuration that you have just entered.


20.       Enter no (no is the default) if you are satisfied with the configuration.

The following configuration will be applied:
switchname switch_name
interface mgmt0
ip address mgmt_IP_address
subnetmask mgmt0_ip_netmask
no shutdown
ip default-gateway default_gateway
telnet server enable
no ssh server enable
no system default switchport shutdown
system default switchport trunk mode on
no zone default-zone permit vsan 1-4093
zoneset distribute full vsan 1-4093
Would you like to edit the configuration? (yes/no) [n]: no


21.       Enter yes (yes is the default) to use and save this configuration.




Use this configuration and save it? (yes/no) [y]: yes
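
After the setup utility finishes, it is worth verifying management connectivity and saving the configuration. The following is only a minimal sketch (the gateway address is a placeholder), and the output varies by NX-OS release:

Switch # show interface mgmt0
Switch # ping 209.165.200.1
Switch # copy running-config startup-config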



------------------------------------------------------

To know about Switch Installation, click the URL below










Troubleshooting the Zoning Information - Cisco


Troubleshooting Zone Configuration Issues with the CLI

 
Troubleshooting_commands
Troubleshooting commands

Note: To issue commands with the internal keyword, you must have a network-admin group account.

Example for Full Zoning Analysis

Switch # show zone analysis vsan 1
Zoning database analysis vsan 1
Full zoning database
Last updated at: 15:57:10 IST Feb 20 2006
Last updated by: Local [ CLI ]
Num zonesets: 1
Num zones: 1
Num aliases: 0
Num attribute groups: 0
Formatted size: 36 bytes / 2048 Kb
Unassigned Zones: 1
zone name z1 vsan 1

Example for Active Zoning Database Analysis

Switch # show zone analysis active vsan 1
Zoning database analysis vsan 1
Active zoneset: zs1 [*]
Activated at: 08:03:35 UTC Nov 17 2005
Activated by: Local [ GS ]
Default zone policy: Deny
Number of devices zoned in vsan: 0/2 (Unzoned: 2)
Number of zone members resolved: 0/2 (Unresolved: 2)
Num zones: 1
Number of IVR zones: 0
Number of IPS zones: 0
Formatted size: 38 bytes / 2048 Kb

Example for Zone Set Analysis

Switch # show zone analysis zoneset zs1 vsan 1
Zoning database analysis vsan 1
Zoneset analysis: zs1
Num zonesets: 1
Num zones: 0
Num aliases: 0
Num attribute groups: 0
Formatted size: 20 bytes / 2048 Kb

Resolving Host Not Communicating with Storage Using the CLI

To verify that the host is not communicating with storage using the CLI, follow these steps:

1.         Verify that the host and storage device are in the same VSAN.
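
A quick way to check this is to list the VSAN membership of the interfaces and the name-server entries for the VSAN. This is only a sketch; VSAN 1 below is a placeholder:

Switch # show vsan membership
Switch # show fcns database vsan 1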

2.         Use the show zone status vsan-id command to determine whether the default zone policy is set to deny, and configure zoning if necessary.

Switch # show zone status vsan 1

VSAN: 1 default-zone: deny distribute: active only Interop: default
mode: basic merge-control: allow session: none
hard-zoning: enabled
Default zone:
qos: low broadcast: disabled ronly: disabled
Full Zoning Database :
Zonesets:0 Zones:0 Aliases: 0
Active Zoning Database:
Name: Database Not Available
Status:

The default zone policy of permit means all nodes can see all other nodes. Deny means all nodes are isolated when not explicitly placed in a zone.
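
If required, the default zone policy can be changed from configuration mode. The following is a sketch for VSAN 1 only; permitting the default zone allows all unzoned devices in that VSAN to see each other, so use it with care:

Switch # config t
Switch (config) # zone default-zone permit vsan 1
Switch (config) # no zone default-zone permit vsan 1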

3.         Use the show zone member command for host and storage device to verify that they are both in the same zone.
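
For example (a sketch with a placeholder pWWN; substitute the host or storage pWWN from your fabric):

Switch # show zone member pwwn 10:00:00:00:77:99:7a:1b

The output lists the zone(s) and VSAN that contain this member.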

4.         Use the show zoneset active command to determine whether the zone from Step 3, together with the host and disk, appears in the active zone set.

Switch # show zoneset active vsan 2

zoneset name ZoneSet3 vsan 2
zone name Zone5 vsan 2
pwwn 10:00:00:00:77:99:7a:1b
pwwn 21:21:21:21:21:21:21:21

5.   If there is no active zone set, use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate ZoneSet1 vsan 2

6.         Verify that the host and storage can now communicate

Resolving Host and Storage Not in the Same Zone Using the CLI

To move the host and storage device into the same zone using the CLI, follow these steps:

1.         Use the zone name zonename vsan-id command to create a zone in the VSAN if necessary, and add the host or storage into this zone.

Switch (config) # zone name NewZoneName vsan 2
Switch (config-zone) # member pwwn 22:35:00:0c:85:e9:d2:c2
Switch (config-zone) # member pwwn 10:00:00:00:c9:32:8b:a8

Note:   The pWWNs for zone members can be obtained from the device or by issuing the show flogi database vsan-id command.
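
For illustration, a sketch of the FLOGI database for VSAN 2 with placeholder values is shown below; the PORT NAME column gives the pWWNs to use as zone members:

Switch # show flogi database vsan 2

---------------------------------------------------------------------------
INTERFACE  VSAN  FCID      PORT NAME                NODE NAME
---------------------------------------------------------------------------
fc1/1      2     0x7500e4  10:00:00:00:c9:32:8b:a8  20:00:00:00:c9:32:8b:a8
fc1/5      2     0x7500e8  22:35:00:0c:85:e9:d2:c2  22:35:00:0c:85:e9:d2:c1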


2.             Use the show zone command to verify that host and storage are now in the same zone.

Switch # show zone

zone name NewZoneName vsan 2
pwwn 22:35:00:0c:85:e9:d2:c2
pwwn 10:00:00:00:c9:32:8b:a8

zone name Zone2 vsan 4
pwwn 10:00:00:e0:02:21:df:ef
pwwn 20:00:00:e0:69:a1:b9:fc

zone name zone-cc vsan 5
pwwn 50:06:0e:80:03:50:5c:01
pwwn 20:00:00:e0:69:41:a0:12
pwwn 20:00:00:e0:69:41:98:93

3.         Use the show zoneset active command to verify that you have an active zone set. If you do not have an active zone set, use the zoneset activate command to activate the zone set.

4.         Use the show zoneset active command to verify that the zone in Step 2 is in the active zone set. If it is not, use the zoneset name command to enter the zone set configuration submode, and use the member command to add the zone to the active zone set.

Switch (config) # zoneset name zoneset1 vsan 2
Switch (config-zoneset) # member NewZoneName

5.         Use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate ZoneSet1 vsan 2

6.         Verify that the host and storage can now communicate.


 Resolving Zone is Not in Active Zone Set Using the CLI

To add a zone to the active zone set using the CLI, follow these steps:

1.         Use the show zoneset active command to verify that you have an active zone set. If you do not have an active zone set, use the zoneset activate command to activate the zone set.

2.         Use the show zoneset active command to verify that the zone in Step 1 is not in the active zone set.

3.         Use the zoneset name command to enter the zone set configuration submode, and use the member command to add the zone to the active zone set.

Switch (config) # zoneset name zoneset1 vsan 2
Switch (config-zoneset) # member NewZoneName

4.         Use the zoneset activate command to activate the zone set.

Switch (config) # zoneset activate ZoneSet1 vsan 2

5.         Verify that the host and storage can now communicate.


---------------------------------------------------------------------------------------------------------------

Brocade Zoning


Mc-Data Zoning

http://www.sanadmin.net/2015/11/mc-data-switch-zoning-and-other-useful.html

Cisco Zoning





Cisco - Configuring Portchannel


PortChannels

PortChannels refer to the aggregation of multiple physical interfaces into one logical interface to provide higher aggregated bandwidth, load balancing, and link redundancy. PortChannels can connect interfaces across switching modules, so a failure of a switching module does not
bring down the PortChannel link.


Cisco_PortChannel
PortChannel

About PortChanneling and Trunking

Trunking is a commonly used storage industry term. However, the Cisco NX-OS software and
 switches in the Cisco MDS 9000 Family implement trunking and PortChanneling as follows:

• PortChanneling enables several physical links to be combined into one aggregated logical link.


Trunking
Trunking

Portchanneling_and_Trunking
Portchanneling and Trunking


• Trunking enables a link transmitting frames in the EISL format to carry (trunk) multiple VSAN
traffic. For example, when trunking is operational on an E port, that E port becomes a TE port.

• PortChanneling—Interfaces can be channeled between the following sets of ports:

 – E ports and TE ports

– F ports and NP ports

– TF ports and TNP ports

 • Trunking—Trunking permits carrying traffic on multiple VSANs between switches. See the Cisco MDS 9000 Family NX-OS Fabric Configuration Guide. A trunking configuration sketch follows below.

• Both PortChanneling and trunking can be used between TE ports over EISLs.
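
As a minimal configuration sketch (interface fc1/10 and the VSAN range are placeholders), trunking is enabled and the allowed VSANs restricted on an ISL as follows:

Switch # config t
Switch (config) # interface fc1/10
Switch (config-if) # switchport trunk mode on
Switch (config-if) # switchport trunk allowed vsan 10-20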

About PortChannel Modes

There are two PortChannel modes:

1.         ON (Default)

2.         Active

Differences_between_ON_and_Active_modes
Differences between ON and Active modes

Restrictions

Cisco MDS 9000 Family switches support the following number of PortChannels per switch:

 • Switches with only Generation 1 switching modules do not support F and TF PortChannels.

• Switches with Generation 1 switching modules, or a combination of Generation 1 and
Generation 2 switching modules, support a maximum of 128 PortChannels. Only Generation 2
ports can be included in the PortChannels.

• Switches with only Generation 2 switching modules or Generation 2 and Generation 3
modules support a maximum of 256 PortChannels with 16 interfaces per PortChannel.

• A PortChannel number refers to the unique identifier for each channel group. This number
ranges from 1 to 256.

PortChannel Configuration

PortChannels are created with default values. You can change the default configuration just like
any other physical interface.

PortChannel_Configurations
PortChannel Configurations

About PortChannel Configuration

Before configuring a PortChannel, consider the following guidelines:

 • Configure the PortChannel across switching modules to implement redundancy on switching
module reboots or upgrades.

 • Ensure that one PortChannel is not connected to different sets of switches. PortChannels
require point-to-point connections between the same set of switches.

If you misconfigure PortChannels, you may receive a misconfiguration message. If you receive
this message, the PortChannel’s physical links are disabled because an error has been detected.

A PortChannel error is detected if the following requirements are not met:

 • Each switch on either side of a PortChannel must be connected to the same number of
interfaces.

• Each interface must be connected to a corresponding interface on the other side.

• Links in a PortChannel cannot be changed after the PortChannel is configured. If you change
the links after the PortChannel is configured, be sure to reconnect the links to interfaces within
the PortChannel and reenable the links.

If any of these requirements is not met, the faulty link is disabled.

Enter the show interface command for that interface to verify that the PortChannel is
functioning as required.
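
For example (a sketch; port-channel 1 is a placeholder):

Switch # show interface port-channel 1
Switch # show port-channel database interface port-channel 1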

Creating a PortChannel

1.         Enters configuration mode.

Switch # config t

2.         Configures the specified PortChannel (1) using the default ON mode.

Switch (config) #interface port-channel 1

3.   Configures the ACTIVE mode.

Switch (config-if) #channel mode active

4.   Reverts to the default ON mode.

Switch (config-if) #no channel mode active

5.   Deletes the specified PortChannel (1), its associated interface mappings, and the hardware associations for this PortChannel.

Switch (config) #no interface port-channel 1
port-channel 1 deleted and all its members disabled
please do  the same operation on the switch at the other end of
the port-channel

You can add or remove a physical interface (or a range of interfaces) to an existing PortChannel.
The compatible parameters on the configuration are mapped to the PortChannel. Adding an
interface to a PortChannel increases the channel size and bandwidth of the PortChannel.
Removing an interface from a PortChannel decreases the channel size and bandwidth of the
PortChannel.

 A port can be configured as a member of a static PortChannel only if the following configurations are the same in the port and the PortChannel:

Speed

Mode

Rate mode

Port VSAN

Trunking mode

Allowed VSAN list or VF-ID list

Compatibility Check

A compatibility check ensures that the same parameter settings are used in all physical ports in
the channel. Otherwise, they cannot become part of a PortChannel. The compatibility check is
performed before a port is added to the PortChannel.

The check ensures that the following parameters and settings match at both ends of a PortChannel:

 • Capability parameters (type of interface, Gigabit Ethernet at both ends, or Fibre Channel at
both ends).

 • Administrative compatibility parameters (speed, mode, rate mode, port VSAN, allowed VSAN
list, and port security).
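
If a port fails the compatibility check because of a parameter mismatch, the port parameters can be forced to the PortChannel values when the port is added. This is only a sketch (channel group 15 and interface fc1/15 are placeholders), and forcing overwrites the port's own settings:

Switch (config) # interface fc1/15
Switch (config-if) # channel-group 15 force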


To add an interface to a PortChannel, follow these steps:

1.         Configures the specified port interface (fc1/15).

Switch (config) #interface fc1/15

2.   Adds physical Fibre Channel port 1/15 to channel group 15. If channel group 15 does not
 exist, it is created. The port is shut down.

Switch (config-if) #channel-group 15


To add a range of ports to a PortChannel, follow these steps:


1.         Configures the specified range of interfaces. In this example, interfaces from 1/1 to 1/5 are configured.

Switch (config) #interface fc1/1 - 5

2.         Adds physical interfaces 1/1, 1/2, 1/3, 1/4, and 1/5 to channel group 2. If channel group
2 does not exist, it is created.

If the compatibility check is successful, the interfaces are operational and the corresponding states apply to these interfaces.

Switch (config-if) #channel-group 2

3.   Deletes the physical Fibre Channel interfaces in channel group 2.

Switch (config-if) #no channel-group 2

Enabling and Configuring Autocreation

1.         Enters the configuration mode for the selected interface(s).

Switch (config) #interface fc8/13

2.   Automatically creates the channel group for the selected interface(s).

Switch (config-if) # channel-group auto

3.   Disables the autocreation of channel groups for this interface, even if the system default
configuration may have autocreation enabled.

Switch (config-if) # no channel-group auto

Some useful commands

 Displays the PortChannel Summary

Switch #show port-channel summary

----------------------------------------------------------------
Interface           Total Ports    Oper Ports    First Oper Port
----------------------------------------------------------------
port-channel 77     2              0              --
port-channel 78     2              0              --
port-channel 79     2              2              fcip200


Displays the PortChannel Configured in the Default ON Mode

Switch #show port-channel database

port-channel 77
Administrative channel mode is on
Operational channel mode is on
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fcip1 [down]
  fcip2 [down]

port-channel 78
Administrative channel mode is on
Operational channel mode is on
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fc2/1 [down]
  fc2/5 [down]

port-channel 79
Administrative channel mode is on
Operational channel mode is on
Last membership update succeeded
First operational port is fcip200
2 ports in total, 2 ports up
Ports: fcip101 [up]
  fcip200 [up] *


 Displays the PortChannel Configured in the ACTIVE Mode

port-channel 77
Administrative channel mode is active
Operational channel mode is active
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fcip1 [down]
  fcip2 [down]

port-channel 78
Administrative channel mode is active
Operational channel mode is active
Last membership update succeeded
2 ports in total, 0 ports up
Ports: fc2/1 [down]
  fc2/5 [down]

port-channel 79
Administrative channel mode is active
Operational channel mode is active
Last membership update succeeded
First operational port is fcip200
2 ports in total, 2 ports up
Ports: fcip101 [up]
  fcip200 [up] *

Displays the Consistency Status without Details

Switch #show port-channel consistency
Database is consistent

Displays the Consistency Status with Details

Switch #show port-channel consistency detail

 Displays the PortChannel Usage

Switch #show port-channel usage

 Displays the PortChannel Compatibility

Switch #show port-channel compatibility-parameters

 Displays Autocreated PortChannels

Switch #show interface fc1/1

Displays the Specified PortChannel Interface

Switch #show port-channel database interface port-channel 128




-----------------------------------------------------------------

Cisco Virtual Storage Area Network


Cisco ISL Trunking



Switch Initial Configuration


Cisco - Troubleshooting zone configuration







FC Switch Installation guide


Fibre Channel Switch Installation Procedure

The client contacts the FC SAN switch vendor to procure a new switch for the IT environment with the required specifications.

Once the switch is delivered to the data center, we have to do the BOM (Bill of Materials) verification.

After that, the switch vendor will assign an engineer to install it; if not, we can do it ourselves as follows.

Let us assume we are going to install a Cisco MDS 9148 switch.

The Cisco MDS 9148 Multilayer Fabric Switch has 48 Fibre Channel ports with speeds of 8, 4, 2, and 1 Gbps. The Cisco MDS 9148 Switch is a top-of-rack (TOR) Fibre Channel switch based on System-on-a-Chip (SOC) technology, which is a Cisco innovation. The Cisco MDS 9148 Multilayer Fabric Switch has these features:

         16, 32, or 48 default licensed ports and an 8-port on-demand license.

         8-,4-, 2-, 1-Gbps full line rates.

         128 buffers available as a shared pool to each port group: 32 buffers per Fibre Channel (FC) port. A maximum of 125 buffers per port in a port group.

         Fair bandwidth arbiters.

         Device Manager Quick Config Wizard for the Cisco MDS 9148 Switch. 

         Redundant power supplies and fans.

        Enterprise class features such as In-Service Software Upgrades (ISSU), Virtual SANs (VSANs),               security features, and quality of service (QoS).


Front_view_of_a_switch
Front view
Back_view_of_a_switch
Back view
Verifying Your Shipping Contents

Verify that you have received all items, including the following:

         Rack-mount kit

         ESD wrist strap

         Cables and connectors

        Any optional items ordered

Installing the Switch

Install the switch in one of the following enclosures:

                     An open EIA rack

                     A perforated or solid-walled EIA cabinet

                    A two-post Telco rack


Racking_the_Switch_in_the_cabinet
Racking the Switch in the cabinet


Installing the SFPs (Small Form-factor Pluggable transceivers)

 Install one of the following SFPs in each empty port:

      A Fibre Channel shortwave 1-, 2-, 4-, or 8-Gbps SFP transceiver, part number DS-SFP-FC8G-SW

      A Fibre Channel longwave 1-, 2-, 4-, or 8-Gbps SFP transceiver, part number DS-SFP-FC8G-LW

      A Fibre Channel shortwave 1-, 2-, or 4-Gbps SFP transceiver, part number DS-SFP-FC4G-SW



Installing_SFP
Installing SFP


Powering Up the Switch

To power up the switch, follow these steps:
  •  Ground the switch.

Switch_Ground
Switch Ground
  •  Connect the power cable to the AC power receptacle.

 •  The Cisco MDS 9148 Switch supports only AC power supply. The power supply status is indicated on front panel LED.

     The Cisco MDS 9148 Switch includes a front panel reset button that resets the switch without cycling the power.

•   Power up the switch.

Power_On/Off_and_Power_receptacle
Power On/Off and Power receptacle

Setting Up a Network

To set up a network, follow these steps:

•   Ensure that the Mgmt0 port is connected to the management network.

•   Ensure that the console port is connected to the PC serial port (or to a terminal server).
For example, on a Windows PC used as a terminal emulator, you can use HyperTerminal. The default baud rate on the console port is 9600.

Connection_to_the_terminal
Connection to the terminal

 •   Use the switch setup utility that appears on the console connection.

•   Use the switch setup utility to do the following:

a.         Set the admin password for the switch.
 b.         Assign an IP address and a netmask to the switch.

 IP address step in the setup utility:

 Continue with Out-of-band (mgmt0) management configuration? (yes/no) [y]: yes

 Mgmt0 IPV4 address: 209.165.200.225

 Mgmt0 IPV4 netmask: 255.255.255.224

 c.         Set up the default gateway.

 Note: The switch is now ready to be managed via the Mgmt0 port using Telnet, Device Manager, or Fabric Manager.
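
As a rough sketch, the usual serial settings for the console connection and a quick check of management access after setup are shown below (the IP address is the placeholder used earlier):

Console (serial) settings: 9600 baud, 8 data bits, no parity, 1 stop bit, no flow control

C:\> telnet 209.165.200.225

switch # show interface mgmt0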

---------------------------------------------------------------------------------------------------------------------


Cisco ISL Trunking



Switch Initial Configuration


Cisco – Portchanneling










Introduction to EMC Clariion/VNX


EMC Clariion/VNX

Clariion is a SAN disk array manufactured and sold by EMC Corporation; it occupied the entry-level and mid-range of EMC's SAN disk array products. In 2011, EMC introduced the EMC VNX series, designed to replace both the Clariion and Celerra products.

 Upon launch in 2008, the latest-generation CX4 series storage arrays supported Fibre Channel and iSCSI front-end bus connectivity. The Fibre Channel back-end bus offered 4 Gbit/s of bandwidth with FC-SCSI disks or SATA II disks.

The EMC Celerra NAS device is based on the same X-blade architecture as the CLARiiON storage processor.
In 2011, EMC introduced the new VNX series of unified storage disk arrays intended to replace both the CLARiiON and Celerra products.
In early 2012, both CLARiiON and Celerra were discontinued.


EMC_Clariion
EMC Clariion

Specifications of Clariion models are below

CX series 

CX_series_specification_sheet
CX series specification sheet

 CX3 series

CX3_series_specification_sheet
CX3 series specifications sheet

CX4 UltraFlex series

CX4_series_specification_sheet
CX4 series specifications sheet

VNX 

EMC_VNX
EMC VNX
The EMC VNX family has three series.


VNX_series
VNX series
Each VNX series has different models covering entry-level, mid-range, and high-end storage systems.

For more details about the VNX family, please refer to the link below.

http://www.ais-cur.com/vnx-family%20presentation.pdf

VNX 1 Series



VNX_1_models
VNX 1 models
To know more about the VNX 1 models, please refer to the link below.


VNX 2 Series

VNX_2_models
VNX 2 models
To know more about the VNX 2 models, please refer to the link below.

https://www.emc.com/collateral/white-papers/h12145-intro-new-vnx-series-wp.pdf

VNXe Series

VNXe_Models
VNXe 3100                                                   VNXe 3300
To know more about the VNXe models, please refer to the link below.

http://www.emc.com/collateral/hardware/white-papers/h8178-vnxe-storage-systems-wp.pdf

Specifications of VNX series are below

VNX 1 Series



VNX_1_Physical_Specifications

VNX_1_Physical_Specifications

VNX_1_Physical_Specifications
VNX 1 Physical Specifications
For more details about the VNX 1 series models, please refer to the link below.


VNX 2 Series

VNX_2_Physical_Specifications

VNX_2_Physical_Specifications

VNX_2_Physical_Specifications

VNX_2_Physical_Specifications
VNX 2 Physical Specifications
For more details about the VNX 2 series models, please refer to the link below.


VNXe Series

VNXe_specifications

VNXe_specifications

VNXe_specifications
VNXe Physical specifications

For more details about the VNXe series models, please refer to the link below.



---------------------------------------------------------------------------------------------------------------------

EMC Clariion/VNX (Block Level) Architecture


Storage Terminology


EMC VNX Installation Guide


EMC VNX – LUN Provisioning







Clariion/VNX Architecture (Block Level)


Clariion/VNX Block Architecture


Clariion_VNX_Architecture
Clariion/VNX  Architecture

Note: EMC introduced the VNX series, designed to replace both the Clariion and Celerra products.

Note: Clariion and VNX block-level storage share the same architecture; the VNX unified storage architecture includes both the SAN and NAS parts.

                 Clariion and VNX share the same architecture, and the main differences are in the specifications, such as capacity, ports, drives, and connectivity.

Clariion/VNX is a mid-range storage array. It uses an active-passive architecture.

It has different modules,

SPE– Storage Processor Enclosure

DPE– Disk Processor Enclosure

DAE– Disk Array Enclosure

SPS– Stand-by Power Supply

Each SPE has two storage processors, named SP A and SP B; they are connected through the Clariion Messaging Interface (CMI).

Each SP has front-end ports, back-end ports, and cache memory.

Front-end ports serve host I/O requests; back-end ports communicate with the disks.

Cache is of two types: write cache, which is mirrored, and read cache, which is not mirrored.

The first DAE that is connected is known as the DAE OS.

In this DAE, the first five drives are known as vault drives or code drives. They are used to save critical data in case of power failure and also store data such as the SP A and SP B boot information, which is mirrored.

All the drives are connected through the Link Control Cards (LCC).

FLARE is triple mirrored, and the Persistent Storage Manager (PSM) is also triple mirrored.

Each DAE has primary and expansion ports, which are used to connect other DAEs.

---------------------------------------------------------------------------------------------------------------------

Basically, VNX has three main storage configurations:

VNX Block Level Storage,

File Level Storage, and

Unified Storage Array.

For more details about the VNX series and models, please refer to the link below.

                                   https://www.emc.com/en-us/storage/vnx.htm

VNX 1 Series

The VNX 1 series includes five models that are available in block, file, and unified configurations:

VNX 5100, VNX 5300, VNX 5500, VNX 5700, and VNX 7500.


The block configuration for the VNX 5500 and VNX 7500 is shown below:

Block_configuration_for_VNX_5500_&_VNX_7500
Block configuration for VNX 5500 & VNX 7500

The file configuration for the VNX 5300 and VNX 5700 is shown below:

File_configuration_for_VNX_5300_&_VNX_ 5700
File configuration for VNX 5300 & VNX 5700
The unified configuration for the VNX 5300 and VNX 5700 is shown below:

Unified_configuration_for_VNX_5300_&_VNX_5700
Unified configuration for VNX 5300 & VNX 5700

The VNX series uses updated components that make it significantly denser than earlier arrays.


Block_dense_configuration
                                Example of a Block dense configuration 

Unified_dense_configuration
                             Example of a Unified dense configuration


A 25 drive 2.5” SAS-drive DAE

Front_view_of_25_SAS_drive_DAE
Front view of 25 SAS-drive DAE

Back_view_of_25_SAS_drive_DAE
 Back view of 25 SAS-drive DAE

A close up of the Back view with the ports naming.

A_close_up_of_the_back_view
A close up of the back view
A 15-drive DAE

Front_view_of_a_15_drives_DAE
Front view of a 15 drives DAE

Back_view_of_15_drives_DAE
Back view of 15 drives DAE

 A close up of the Back view with the ports naming.

Close_up_view_of_a_15_drives_DAE
Close up view of a 15 drives DAE
A picture of Link Control Cards LCC cards connectivity.

Link_Control_Cards
Link Control Cards
A picture of a cooling module.

Cooling_module
Cooling module

VNX 2 Series

The VNX 2 series includes six models that are available in block, file, and unified configurations:

VNX 5200, VNX 5400,  VNX 5600 , VNX 5800, VNX 7600 and VNX 8000.

There are two existing Gateway models:

VNX VG2 and  VNX VG8

There are two VMAX® Gateway models:

VNX VG10 and VNX VG50 


A model comparison chart for VNX 2 series.

comparison_chart
A model comparison chart

The block configuration for the VNX 5600 and VNX 8000 is shown below:

Block_configuration_for_VNX_5600_&_VNX_8000
Block configuration for VNX 5600 & VNX 8000
The file configuration for the VNX 5600 and VNX 8000 is shown below:

File_configuration_VNX_5600_&_VNX_8000
File configuration VNX 5600 & VNX 8000

The unified configuration for the VNX 5600 and VNX 8000 is shown below:

Unified_configuration_VNX_5600_&_VNX_8000
Unified configuration VNX 5600 & VNX 8000
As noted earlier, the VNX 2 series uses updated components that make it significantly denser than earlier arrays.

VNX2_Block_dense_configuration
          Example of Block dense configuration

VNX2_Unified_dense_configuration
         Example of Unified dense configuration

The below picture shows the back view of the DPE with SP A (on the right) and SP B (on the left).

Back_view_of_DPE
Back view of DPE 
Picture shows a close-up of the back of the DPE-based storage processor.

Close_up_view_of_the_back_view_of_the_DPE_based_Storage_Processor
Close up view of the back view of the DPE based Storage Processor

Power_fault_activity_link_and_status_LED
Power, fault, activity, link and status LED

A picture of the storage processor management and base module ports

Storage_processor_management_and_base_module_ports
Storage processor management and base module ports

For more details about the hardware and connectivity issues, please refer to the link below.







VNX Installation Guide


VNX Installation Guide

INTRODUCTION

                      The VNX series is designed for a wide range of environments, from mid-tier through enterprise. VNX provides file-only, block-only, and unified (block and file) implementations. The VNX series is managed through a simple and easy-to-use interface called Unisphere. The VNX software environment offers significant advancements in efficiency, simplicity, and performance.

ARCHITECTURE
VNX_Unified_Architecture
VNX Unified Architecture
BASIC HARDWARE
Basic_hardware_details
Basic hardware details
INSTALLATION PROCEDURE

Installing Rails

  Installing the Standby power supply (SPS) rails

          The Standby power supply (SPS) is a 1U component and uses a 1 U adjustable kit.

1.      Insert the adjustable rail slide and seat the rail extensions into the rear channel of your cabinet.

2.      Extend the rail and align the front of the rails as shown below. Ensure that they are level front to         back and with the companion rail, left to right.

3.      Insert two retention screws in the front and two retention screws in the back of each rail as shown       below.

Installing_the_SPS_rails
Installing the SPS rails
 Installing the Disk Processor Enclosure (DPE) rail

              The Disk processor enclosure (DPE) rails should be installed immediately above the SPS            rails.

1.      Insert the adjustable 3 U rail slide and seat both alignment pins into the rear channel of your               cabinet.

2.      Extend the rail and align the front of the rails as shown below.

3.      Insert two retention screws in the middle two holes in the front.

4.      Insert two retention screws in the back of each rail.
Installing_the_DPE_rails
Installing the DPE rails
Installing the components

         Installing the standby power supply

1.      Slide the SPS enclosure into the cabinet rails at the bottom of the rack. Ensure that the enclosure          is fully in the cabinet.

2.      When the SPS is in place, insert the screws and bezel brackets to hold the enclosure in place in the        cabinet. Do not tighten the screws completely until all of the components are in place.
Installing_the_standby_power_supply
Installing the standby power supply
         Installing the Disk Processor Enclosure (DPE)

                 There are two types of disk processor enclosures (DPEs). Your disk processor enclosure can be either a 3U, 25 x 2.5" drive DPE or a 3U, 15 x 3.5" drive DPE.
Types_of_DPEs
Types of DPE(s)

1.      Locate the product ID/SN from the product serial number tag (PSNT) located at the back side of         the DPE.

2.      Record this number to use when you register the product during system setup steps.

3.      Slide the DPE into the 3U DPE rails in the cabinet. Ensure that the enclosure is fully in the                 cabinet.

4.      When the DPE is in place, insert and tighten all of the screws.
Installing_the_DPE
Installing the DPE

Cabling the standby power supply to the SP serial ports

         The SPS serial cables have RJ45 connectors on one end and a 9-pin mini-connector on the other end, as shown below.


Cables_connecting_SPS_to_SP
Cables connecting SPS to SP

1.      Connect SPS A to SP A.

2.      Connect SPS B to SP B.

Cabling_to_management_ports
Cabling to management ports

        Attaching Storage Processor to the network

          The storage processor and the windows host from which you initialize the storage system must       share the same subnet on your public LAN.

1.      Locate your Ethernet cables.


Customer_supplied_management_cables
Customer-supplied management cables

2.      Connect your public LAN using an Ethernet cable to the RJ45 ports on SP A and SP B.

Attaching_the_SPs_to_the_network
Attaching the SPs to the network

Power up

        Before you power up

1.      Ensure that the switches for SPS A and B are turned off.

2.      Ensure that all cabinet circuit breakers are in the on position, all necessary PDU switches are               switched on, and power is connected.


      Connecting or verifying power cables

1.      Connect the SPS power A to power distribution unit (PDU) A.

2.      Connect SPS A to SP A.

3.      Connect SPS power B to PDU B.

4.      Connect SPS B to SP B.

5.      Lock each power cable in place.

6.      Turn the SPS switches ON. Wait 15 minutes for the system to power up completely.

7.      Monitor the system as it powers up.
Connecting_power_cables
Connecting power cables
SETUP

               After you have completed all of the installation steps, continue to set up your system by              performing the post-installation tasks.


        Connect a management station

           You must connect a management station to your system directly or remotely over a                        subnetwork. This computer will be used to continue setting up your system and must be on the            same subnet as the storage system to complete the initialization.

         Downloading the Unisphere Storage System Initialization Wizard

         Go to https://support.emc.com and select VNX series > Install and Configure

1.      From VNX installation tools, download the Unisphere Storage System Initialization Wizard.

2.      Double click the downloaded executable and follow the steps in the wizard to install the utility.

3.      On the install complete screen, make sure that the Launch Unisphere Storage System                     Initialization Wizard check box is selected.

4.      Click done. The initialization utility opens. Follow the online instructions to discover and assign         IP address to your storage system.


        CHECK SYSTEM HEALTH 

            Login to Unisphere to check the health of your system, including alerts, events and                  statistics.

1.      Open a browser and enter the IP address of SP.

2.      Use the sysadmin credentials to log in to Unisphere. You may be prompted by certificate-related         warnings. Accept all the certificates as “Always trust”.

3.      Select your storage system and select System> Monitoring and alerts

    Set the storage system cache values

           You must allocate cache on the system. Allocate 10 % of the available cache (with a                     minimum of 100 MB and a maximum of 1024 MB) to read and the rest to write.

1.      Open a browser and enter the IP address of SP.

2.      Use the sysadmin credentials to log in to Unisphere. You may be prompted by certificate-related        warnings. Accept all the certificates as “Always trust”

3.      Select your storage system and select System > Hardware> Storage Hardware.

4.      From the task list, under system management, select manage cache.

       Install ESRS and configure ConnectHome

           You can ensure that your system communicates with your service provider by installing the           VNX ESRS IP Client.

         Go to https://mydocs.emc.com/VNX/

1.      Under VNX tasks, select initialize and register VNX for block and configure ESRS.

2.      Select the appropriate options for your configuration.

3.      Select install and configure ESRS to generate a customized version of EMC Secure Remote               Support IP Client.

CONFIGURE SERVERS FOR VNX SYSTEM

        Installing HBAs in the server

           For the server to communicate with the system Fibre Channel data ports, it must have one          or more supported HBAs installed.

1.      If the server is powered up:

         a. Shut down the server's operating system.

         b. Power down the server.

         c. Unplug the server's power cord from the power outlet.

2.      Put on an ESD wristband, and clip its lead to bare metal on the server's chassis.

3.      For each HBA that you are installing:

         a. Locate an empty PCI bus slot or a slot in the server that is preferred for PCI cards.

         b. Install the HBA following the instructions provided by the HBA vendor.

         c. If you installed a replacement HBA, reconnect the cables that you removed in the exact same              way as they were connected to the original HBA.

4.      Plug the server's power cord into the power outlet, and power up the server.

      Installing or updating the HBA driver

                 The server must run a supported operating system and a supported HBA driver.                      EMC recommends that you install the latest supported version of the driver.

            If you have an Emulex driver, download the latest supported version and instructions.
        For installing the driver from the vendor’s website

       http://www.emulex.com/products/fibre-channel-hbas.html

            If you have a QLogic driver, download the latest supported version and instructions.
        For installing the driver from the vendor’s website

       http://support.qlogic.com/support/oem_emc.asp

         Installing the HBA driver

1.      Install any updates, such as hot fixes or service packs, to the server’s operating system  that are           required for the HBA driver version you are installing.

2.      If the hot fix or patch requires it, reboot the server.

3.      Install the driver following the instructions on the HBA vendor’s website

4.      Reboot the server when the installation program prompts you to do so. If the installation program       did not prompt you to reboot, then reboot the server when the driver installation is complete.

        Installing the Unisphere Host Agent on a windows server

1.      Log in as the administrator or a user who has administrative privileges.

2.      If your server is behind a firewall, open TCP/IP port 6389. This port is used by the host agent. If       this port is not opened, the host agent will not function properly.

3.   If you are running a version prior to 6.26 of the host agent, you must remove it before
      continuing with the installation.

Download the software:

         a.      From the EMC Online Support website, select the appropriate VNX Series Support by                        Product page and select Downloads.

         b.      Select the Unisphere Host Agent, and then select the option to save the software
               to your server.

         c.       Double-click the executable file listed below to start the installation wizard.
               Unisphere HostAgent-Win-32-x86-en_US-version-build.exe.

4.    Follow the instructions on the installation screens to install the Unisphere Host Agent. The Unisphere Host Agent software is installed on the Windows server. If you selected the default destination folder, the software is installed in C:\Program Files\EMC\HostAgent.

              Once the Unisphere Host Agent installation is complete, the Initialize Privileged User List         dialog box is displayed.

5.      In the Initialize Privileged User List dialog box, perform one of the following:

         a.      If the Config File field contains a file entry, then a host agent configuration files already                    exists on the server from a previous agent installation. Select Use Existing file to use this                  configuration file or select Browse to use a different file.

         b.      The host agent configuration file contains a list of login names for this server. Only users                  whose usernames are listed in the Privileged User List can send CLI commands to the                      system.

6.      To add a user to the list:

         a.      Click Add to open the Add Privileged User dialog box.

         b.      In the Add Privileged User dialog box, under User Name, enter the person’s account                      username, for example, Administrator.

         c.       Under System Name, enter the name of the host running Unisphere (for example,                         Host4) and click OK.

7.      To remove a privileged user from the list:

         a. Select the privileged username, and click Remove

8.      Click OK to save the new privileged user list and/or the new configuration file.

                The program saves the host agent configuration file with the new privileged user entries           and starts the host agent.

9.      Click Finish.

      A command line window opens indicating that the host agent service is starting.

10.      If the system prompts you to reboot the server, click yes.


        Connecting the VNX to the server in a Fibre Channel switch configuration

             Use optical cables to connect the VNX Fibre Channel host ports to the Fibre Channel
      switch ports, and to connect the switch ports to the server HBA ports.

             For the highest availability in a multiple-HBA server, connect one or more HBA ports to ports
      on the switch and connect the same number of HBA ports to ports on the same switch or on
      another switch, if two switches are available.


Cabling_the_VNX_to_the_servers_through_FC_SAN_Switch
Cabling the VNX to the servers through FC SAN Switch

      Zoning the switches

                    We recommend single-initiator zoning as a best practice. In single-initiator zoning, each
          HBA port has a separate zone that contains it and the SP ports with which it communicates;
          a zoning sketch follows after the figures below.

Single_FC_cabling
Single FC cabling

Double_FC_cabling
Double FC cabling
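
As a sketch only (the zone name, VSAN, and pWWNs below are placeholders), a single-initiator zone on a Cisco MDS switch would contain one HBA port plus the SP ports it communicates with:

Switch (config) # zone name Host1_HBA0_SPA0_SPB0 vsan 2
Switch (config-zone) # member pwwn 10:00:00:00:c9:32:8b:a8
Switch (config-zone) # member pwwn 50:06:01:60:10:60:08:f2
Switch (config-zone) # member pwwn 50:06:01:68:10:60:08:f2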
For more details, please refer to the link below.

http://www.storagenetworks.com/documents/emc/vnx-documentation-set-rev-1a1/VNX5300Block.pdf

-----------------------------------------------------------------------------------------------------------------

Introduction to EMC Clariion/VNX


EMC Clariion/VNX (Block Level) Architecture








Storage Terminology


Introduction

 Here we can see details of some of the vendors who are leading in IT infrastructure.

Storage vendors: EMC, NetApp, HP, Hitachi, IBM, Dell, Tintri, Oracle, etc.

HBA vendors: QLogic and Emulex.

Server vendors: Oracle, Dell, IBM, HP, and Hitachi.

Types of drives: SATA, SAS, NL-SAS, FC, and EFD/SSD/Flash drives.

Some familiar terms used on the storage platform are explained below.

LUN:  LUN is known as Logical Unit Number. It’s a slice of space from a hard drive.

Raid Group:  A collection of up to 16 drives of the same drive type from which LUNs are created.

Storage Pool: A collection of drives in a pool, with the same or different types of drives, from which LUNs are created.

Masking:  It means a particular LUN is visible to a particular host. In other words, a LUN can be made visible to only one storage group/host.

Storage Group: It is essentially a container associated with a host. A storage group is a collection of one or more LUNs (or meta LUNs) to which you connect one or more servers.

Meta LUN: The meta LUN feature allows traditional LUNs to be aggregated in order to increase the size or performance of the base LUN. A LUN is expanded by the addition of other LUNs. The LUNs that make up a meta LUN are called meta members, and the base LUN is known as the meta head. We can add 255 meta members to one meta head (256 LUNs in total).

Access Logix: Access Logix provides LUN masking that allows sharing of the storage system among multiple hosts.

PSM: The Persistent Storage Manager LUN stores the configuration information about the VNX/Clariion, such as disks, RAID groups, LUNs, Access Logix information, SnapView configuration, MirrorView, and SAN Copy configuration.

The FLARE Code is broken down as follows:-

 1.14.600.5.022 (32 Bit)
 2.16.700.5.031 (32 Bit)
 2.24.700.5.031 (32 Bit)
 3.26.020.5.011 (32 Bit)
 4.28.480.5.010 (64 Bit)

The first digit: 1, 2, 3 and 4 indicate the Generation of the machine this code level can be installed on. For the 1st and the 2nd generation of machines (CX600 and CX700), you should be able to use standard 2nd Generation code levels. CX3 code levels would have a 3 in front of it and so forth. 
These numbers will always increase as new Generations of VNX/Clariion machines are added.

The next two digits are the release numbers; these release numbers are very important and really give you additional features related to the VNX/Clariion FLARE Operating Environment. When someone comes up to you and says, my VNX/Clariion CX3 is running Flare 26, this is what they mean.
These numbers will always increase, 28 being the latest FLARE Code Version.

The next 3 digits are the model number of the VNX/Clariion, like the CX600, CX700, CX3-20 and CX4-480.
These numbers can be all over the map, depending what the model number of your VNX/Clariion is.

The 5 here is unknown, it’s coming across from previous FLARE releases. Going back to the pre CX days (FC), this 5 was still used in there. I believe this was some sort of code internally used at Data General indicating it’s a FLARE release.

The last 3 digits are the Patch level of the FLARE Environment. This would be the last known compilation of the code for that FLARE version.
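
As a worked example using the list above, the code 4.28.480.5.010 reads as follows: the leading 4 indicates a 4th-generation (CX4) array, 28 is the FLARE release, 480 identifies the model (CX4-480), 5 is the internal FLARE designator, and 010 is the patch level.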

Failover mode: There are 4 types:
                1. Failover mode 1 or Passive/Passive mode

                2. Failover mode 2 or Passive/Active mode

                3. Failover mode 3 or Active/Passive mode

                4. Failover mode 4 or Active/Active mode

As per best practice, failover mode 4 is more suitable than the others.

With Failover Mode 4, in the case of a path, HBA, or switch failure, when I/O routes to the non-owning SP, the LUN may not trespass immediately.





VNX LUN Allocation


Clariion/VNX LUN Provisioning or Allocation

Log in to Unisphere with the specific IP address and authorized user credentials.

For example: 10.XX.XX.X.XX

The VNX Unisphere dashboard will look as shown below.

Dashboard_page
Dashboard page
Go to Storage Tab and select the LUN option

LUN_option_in_Unisphere
Slide 1

Click the Create option; a popup window will open.

LUN_details
Slide 2

Fill in all the columns with the required information, click Apply, and hit the OK button.

LUN_creation_page
Slide 3
Select the newly created LUN and then select the Add to Storage Group option at the bottom right of the window.

LUN_details
Slide 4
Select the specific storage group (host) listed in the Available Storage Groups column and click the right-side arrow; the selected storage group will be listed in the Selected Storage Groups column. Then hit the OK option.

Adding_LUN_to_storage_group
Slide 5

Now we have to pass this information to the platform team so they can check the visibility of the LUN.
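
The same LUN creation and storage-group assignment can also be done with the Navisphere/Unisphere CLI. The commands below are only a rough sketch; the SP address, pool name, LUN number, size, and storage-group name are placeholders, and the exact options should be verified against the naviseccli reference for your VNX OE release:

naviseccli -h 10.XX.XX.X.XX lun -create -poolName Pool_0 -capacity 100 -sq gb -name New_LUN -l 25
naviseccli -h 10.XX.XX.X.XX storagegroup -addhlu -gname Host_SG -alu 25 -hlu 0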

---------------------------------------------------------------------------------------------------------------

Introduction to EMC Clariion/VNX


EMC Clariion/VNX (Block Level) Architecture



EMC VNX Installation Guide




How trespassing works using ALUA (Failover mode 4) on a VNX/CLARiiON storage system ?


Trespassing works using ALUA (Failover mode 4) on a VNX/CLARiiON 

Product:

CLARiiON CX3 & CX4 Series/VNX

Description:

 How trespassing works using ALUA (Failover mode 4) on a VNX/CLARiiON storage system?

Resolution:

Since FLARE 26, Asymmetric Active/Active has provided a new way for CLARiiON arrays to present LUNs to hosts, eliminating the need for hosts to deal with the LUN ownership model. Prior to FLARE 26, all CLARiiON arrays used the standard active/passive presentation, in which one SP "owns" the LUN and all I/O to that LUN is sent only to that SP. If all paths to that SP fail, the ownership of the LUN is 'trespassed' to the other SP, and the host-based path management software adjusts the I/O path accordingly.
Asymmetric Active/Active introduces a new initiator failover mode (Failover Mode 4) in which initiators are permitted to send I/O to a LUN regardless of which SP actually owns the LUN.

Manual trespass:

When a manual trespass is issued (using Navisphere Manager or CLI) for a LUN on an SP that is accessed by a host with Failover Mode 1, subsequent I/O for that LUN is rejected on the SP on which the manual trespass was issued. The failover software redirects I/O to the SP that owns the LUN.

A manual trespass operation causes the ownership of a given LUN owned by a given SP to change. If this LUN is accessed by an ALUA host (Failover Mode set to 4), and I/O is sent to the SP that does not currently own the LUN, this causes I/O redirection. In such a situation, the array will change the ownership of the LUN based on how many I/Os the LUN processes on each SP (a threshold of roughly 64,000 I/Os).
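
As a rough CLI sketch (the SP address and LUN number 25 are placeholders; verify the syntax against the naviseccli reference for your release), a manual trespass and a check of the LUN's current owner would look like this:

naviseccli -h 10.XX.XX.X.XX trespass lun 25
naviseccli -h 10.XX.XX.X.XX getlun 25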

Path, HBA, switch failure:

If a host is configured with Failover Mode 1 and all the paths to the SP that owns a LUN fail, the LUN is  trespassed to the other SP by the host’s failover software.

With Failover Mode 4, in the case of a path, HBA, or switch failure, when I/O routes to the non-owning SP, the LUN may not trespass immediately (depending on the failover software on the host). If the LUN is not trespassed to the owning SP, FLARE will trespass the LUN to the SP that receives the most I/O requests to  that LUN. This is accomplished by the array keeping track of how many I/Os a LUN processes on each SP. If the non-optimized SP processes 64,000 or more I/Os than the optimal SP, the array will change the ownership to the non-optimal SP, making it optimal.   

SP failure

In case of an SP failure for a host configured as Failover Mode 1, the failover software trespasses the LUN to the surviving SP.

With Failover Mode 4, if an I/O arrives from an ALUA initiator on the surviving SP (non-optimal), FLARE initiates an internal trespass operation. This operation changes ownership of the target LUN to the surviving SP since its peer SP is dead. Hence, the host (failover software) must have access to the secondary SP so that it can issue an I/O under these circumstances.  

Single backend failure

Before FLARE Release 26, if the failover software was misconfigured (for example, a single attach  configuration), a single back-end failure (for example, an LCC or BCC failure) would generate an I/O error since the failover software would not be able to try the alternate path to the other SP with a stable backend.

With release 26 of FLARE, regardless of the Failover Mode for a given host, when the SP that owns the LUN cannot access that LUN due to a back-end failure, I/O is redirected through the other SP by the lower redirector. In this situation, the LUN is trespassed by FLARE to the SP that can access the LUN. After the  failure is corrected, the LUN is trespassed back to the SP that previously owned the LUN.  See the “Enabler for masking back-end failures” section for more information.   

Note: The information in this solution is taken from the white paper "EMC CLARiiON Asymmetric Active/Active Feature".

                        For more information, refer to Primus article emc202744.


---------------------------------------------------------------------------------------------------------------

Storage Terminology


EMC VNX Installation Guide


EMC VNX – LUN Provisioning










VNX LUN Expansion


VNX LUN Expansion

Procedure:

The platform team will raise a request for the LUN expansion activity. The workflow follows the pattern below; a hedged CLI alternative is shown after the steps:

  • When a request is raised to expand a LUN, check prerequisites such as the LUN naa ID and LUN name.

  • Log in to VNX Unisphere with authorized credentials.

  • Go to the Storage tab and select the LUN option.

  • Select the specific LUN, click the Properties option, and verify the LUN naa ID and LUN name.

  • Once verified, right-click the specific LUN and select the Expand option.

  • Enter the additional size required for the LUN and click OK.
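For pool LUNs, the same expansion can be done from Navisphere CLI. This is a minimal sketch, assuming naviseccli is installed and that <SP_IP_address>, <LUN_ID>, and <new_size_GB> are placeholders (classic RAID-group LUNs are expanded with metaLUNs instead, and switches can vary by Block OE release):

naviseccli -h <SP_IP_address> lun -expand -l <LUN_ID> -capacity <new_size_GB> -sq gb

This expands the pool LUN to the new capacity; the host still needs to rescan and extend its file system afterwards.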

---------------------------------------------------------------------------------------------------------


To create a LUN in VNX Unisphere, click the URL below:

http://www.sanadmin.net/2015/12/vnx-lun-allocation.html

How trespassing works using ALUA (Failover mode 4):

http://www.sanadmin.net/2015/12/how-trespassing-works-using-alua.html





Putty Download


Putty

PuTTY is an SSH and telnet client, developed originally by Simon Tatham for the Windows platform. 

PuTTY is open source software that is available with source code and is developed and supported by a group of volunteers.


(Image: PuTTY window)

You can download PuTTY for Windows 32-bit and 64-bit; refer to the link below to download:

http://www.putty.org/

-----------------------------------------------------------------------------------------------------------------

Switch Initial Configuration


Cisco - Troubleshooting zone configuration


Cisco – Portchanneling







Storage Group Creation in VNX Unisphere


Creation of Storage Group

The procedure is as follows:

Login to the Unisphere.

Go to Host Tab and select the Storage Group option.

(Image: Unisphere)

Select the Create option at the bottom of the page.

(Image: Storage Group)

Name the storage group and click the OK button.

(Image: Creation Window)
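If you prefer the CLI, the storage group can also be created from Navisphere CLI. This is a minimal sketch, where <SP_IP_address> and <StorageGroup_Name> are placeholders:

naviseccli -h <SP_IP_address> storagegroup -create -gname <StorageGroup_Name>

naviseccli -h <SP_IP_address> storagegroup -list -gname <StorageGroup_Name>

The second command verifies that the new storage group exists.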


--------------------------------------------------------------------------------------------------------------------


Procedure to do zoning in Cisco switch, refer this below link:


Procedure to do zoning in Brocade switch, refer this below link:


Procedure to create a LUN in VNX Unisphere, refer this below link:








Storage Pool creation in VNX Unisphere


Creation of Storage Pool in VNX Unisphere


Storage Pool: 

                  It is a physical collection of disks on which logical units (LUNs) are created.

Pools are dedicated for use by pool (thin or thick) LUNs. Where a RAID group can contain only up to 16 disks, a pool can contain hundreds of disks.

Login to the VNX Unisphere.

(Image: VNX Unisphere)

Go to Storage Tab at the menu bar and select the “Storage Pool” option.

(Image: Storage Pool tab)

Click on Create button to create a new storage pool.

(Image: Storage Pool Create option)

A pop-up window will open; fill in the fields such as the storage pool name, the ID, and the type of pool you want to create (Extreme Performance, Performance, or Capacity), select the Automatic disk selection option, and click the OK button.

(Image: Creation of Storage Pool)
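The equivalent Navisphere CLI sketch, assuming disk IDs in Bus_Enclosure_Disk form, a RAID 5 pool, and placeholder names (switches can vary by Block OE release):

naviseccli -h <SP_IP_address> storagepool -create -disks 0_0_4 0_0_5 0_0_6 0_0_7 0_0_8 -rtype r_5 -name <Pool_Name>

naviseccli -h <SP_IP_address> storagepool -list -name <Pool_Name>

The second command verifies the pool and shows its capacity details.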

------------------------------------------------------------------------------------------------------


For creating a Storage Group in VNX Unisphere, please refer to the link below:


For creating a LUN in VNX Unisphere, please refer to the link below:





Gather SP Collects in VNX Unisphere GUI


Gather SP Collects in VNX Unisphere GUI

The procedure is as follows:

The main purpose of collecting the SP Collect logs is to analyze the storage array performance.
They also help to find errors such as disk soft media errors, hardware failures, and so on.

Login to the VNX Unisphere.

Click on the System tab on the menu bar.

(Image: Storage tab in VNX Unisphere)
On the right side of the main screen you can see the Wizards column.

Go to Diagnostic Files tab.

Select the Generate Diagnostic Files – SP A to generate the logs for Storage Processor A

(Image: Diagnostic Files column)
Also select Generate Diagnostic Files – SP B to generate the logs for Storage Processor B, as shown above.

Select the Get Diagnostic Files – SP A to retrieve the logs

(Image: Diagnostic Files column)
Also select Get Diagnostic Files – SP B to retrieve the logs for Storage Processor B, as shown above.

A page will open showing the logs being generated; sort the logs by date range, and your log file name will be shown as XXX.runlog.txt.

(Image: SP File Transfer Manager)
It will take 10-15 minutes to gather the logs for each SP.

Once it completes, the log file changes from runlog.txt to data.zip, as shown below.

(Image: SP File Transfer Manager)
Transfer the file to your desired location and upload the logs to the EMC support team to analyze the array performance.

------------------------------------------------------------------------------------------------------------


To gather the SP Collects from NaviCLI, please refer to the link below:


To gather the .NAR files, please refer to the link below:



Gather SP Collects from NaviCLI


Commands to gather SP Collects from NaviCLI

Open a command prompt on the Management Station.

Type cd c:\program files\emc\navisphere cli

Type navicli -h <SP_IP_address> spcollect -messner            

This starts the SPcollect script, which gathers the logs on the SP.

Type navicli -h <SP_IP_address> managefiles -list     

This will list the files created by SPcollect.

Type navicli -h <SP_IP_address> managefiles -retrieve

This will display the files that can be moved from the SP to the Management Station.

Example:

(Image: Gathering SP Collects through NaviCLI)
Enter files to be retrieved with index separated by comma (1,2,3,4,5) OR by a range (1-3) OR enter 'all' to retrieve all file OR 'quit' to quit> 13.

This will pull the index number 13 from the corresponding SP and copy it to the c:\program files\emc\navisphere cli directory with a filename of SPA__APM00023000437_9c773_05-27-2004_46_data.zip.
 
Upload the file or files to an FTP site as directed by EMC Support.  
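The same flow also works with the secure CLI (naviseccli). This is a minimal sketch, assuming the -path and -file switches of recent Block CLI releases (values in angle brackets are placeholders):

naviseccli -h <SP_IP_address> spcollect

naviseccli -h <SP_IP_address> managefiles -list

naviseccli -h <SP_IP_address> managefiles -retrieve -path <local_folder> -file <file_name>_data.zip -o

The -o switch suppresses the confirmation prompt when retrieving the file.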

-------------------------------------------------------------------------------------------------------------


To gather SP Collects from the VNX Unisphere GUI, please refer to the link below:

http://www.sanadmin.net/2015/12/gather-sp-collects-in-vnx-unisphere-gui.html







Email Alert Notification in VNX


Email Alert Notification

Notification alerts are a proactive measure to get ahead of future hardware or software failures.

Whenever a change occurs in the storage array, whether it is a critical, warning, or informational message, an alert notification is triggered to your specified email address or to a group of members.

Login to the VNX Unisphere.

Click on the System tab on the menu bar.

Click on Monitoring and Alert option.

Select Notification option.

Click on Notification template tab.

Go to Engineering Mode by pressing the Shift+Ctrl+F12 buttons on your keyboard.

Type the password as “messner” and click on Ok.

Select the Call_Home_Template and click on Properties.

(Image: Notification Template)

A new window will open; in the “General” tab, click the “Advanced” option to select the event codes (Critical Error, Error, Warning, Information) that will trigger alerts to your email address.

Go to E-Mail tab and specify all the parameters for the email notification.

To trigger the email notification alerts, the SMTP server IP address is mandatory.

(Image: Template Properties)
The message will follow this pattern:

Time Stamp %T% (GMT) Event Number %N%

Severity %SEV% Host %H%

Storage Array %SUB% %SP% Device %D%

Description %E%

Company Name:

Contact Name:

Contact Phone Number:

Contact Email Address:

Secondary Contact Name:

Secondary Contact phone Number:

Secondary Contact Email Address:

Additional Comments:

IP Address SP A: 10.XX.XX.XXX SP B:10.XX.XX.XXX

Once all the parameters are specified, click on Test to verify that the email notification alerts are delivered to your specified email address.


If the test completes successfully, click on OK.

-------------------------------------------------------------------------------------------------------------------

To gather SP Collects through NaviCLI


To gather SP Collects through VNX Unisphere GUI


LUN Allocation for a New Server in EMC VNX


VNX LUN Provisioning for a New Server

            Whenever a new server is deployed in the environment, the platform team (whether it is from the Windows, Linux, or Solaris domain) will contact you for the free ports on the switch (for example, a Cisco switch) to connect the server with the storage.

We will log in to the switch with authorized credentials via PuTTY.

To download PuTTY, please find the link below:

Once logged in, we check the free port details by using the command below.

Switch # sh interface brief
 
Note: As storage admins, we also have to know the server HBA details. Based on that information, we fetch the free port details on the two switches.

We will share the free port details with the platform team; the platform team will then contact the data center folks to lay the physical cabling between the new server and the switches.

Note:  The Storage ports are already connected to the switches.

Once the cable connectivity is complete, the platform team will ask us to do the zoning.

Zoning:

Grouping of the host HBA WWPNs and the storage front-end port WWPNs so that they can communicate with each other.

All the commands should be run in Configuration Mode.

SwitchName# config t     (enter configuration mode)

SwitchName(config)# zone name ABC vsan 2

SwitchName(config-zone)# member pwwn 50:06:01:60:08:60:37:fb

SwitchName(config-zone)# member pwwn 21:00:00:24:ff:0d:fc:fb

SwitchName(config-zone)# exit

SwitchName(config)# zoneset name XYZ vsan 2

SwitchName(config-zoneset)# member ABC

SwitchName(config-zoneset)# exit

SwitchName(config)# zoneset activate name XYZ vsan 2
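
After activating the zoneset, it is good practice to verify the zoning and save the configuration. A minimal sketch using standard Cisco MDS NX-OS commands:

SwitchName# show zoneset active vsan 2

This confirms that zone ABC with both pWWNs is present in the active zoneset.

SwitchName# copy running-config startup-config

This saves the configuration so the zoning survives a switch reload.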


For more details about the zoning, please refer to the link below.

http://www.sanadmin.net/2015/11/cisco-zoning-procedure-with-commands.html

Once the zoning is completed, we have to check the initiator status by logging in to Unisphere (on the VNX storage array).

The procedure is as follows below:

Go to the Host Tab and select the Initiators tab.

Search for the host for which you have done the zoning activity.

Verify the host name, IP address, and host HBA WWPN and WWNN numbers.

In the Initiators window, check the Registered and Logged In columns. If “Yes” appears in both columns, your zoning is correct and the host is connected to the storage box.

If “Yes” is in one column and “No” in the other, the zoning is not correct. Re-check the zoning steps; if they are correct and the issue persists, check the host WWPN and WWNN and the cable connectivity. A hedged CLI check is shown below.
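The same check can also be done from Navisphere CLI. This is a minimal sketch, with <SP_IP_address> as a placeholder:

naviseccli -h <SP_IP_address> port -list -hba

This lists each registered HBA (WWNN/WWPN), the server name, and the SP ports it is logged in to, which mirrors the Registered and Logged In columns in Unisphere.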

Now we have to create a New Storage Group for the New Host.

The procedure is as follows:

Go to Host Tab and select the Storage group option.

Click on the Create option to create a new storage group.

Name the storage group for your identification and click the OK button to complete the task.


Before creating a LUN of the required size, we have to check prerequisites such as:

Check the availability of free space in the storage pool from which you are going to create the LUN.

If free space is not available in that storage pool, share the information with your reporting manager or your seniors.

Now we will create a LUN of the specified size.

Login to the VNX Unisphere.

Go to Storage Tab and select the LUN option.

Fill in all the fields such as the storage pool, LUN capacity, number of LUNs to be created, and name of the LUN, and specify whether the LUN is thick or thin.

Click the OK button to complete the LUN creation task.
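
The equivalent Navisphere CLI sketch for a pool LUN, where the pool name, LUN name, and size are placeholders (switches can vary slightly by Block OE release):

naviseccli -h <SP_IP_address> lun -create -type Thin -capacity <size_GB> -sq gb -poolName "<Pool_Name>" -name "<LUN_Name>"

naviseccli -h <SP_IP_address> lun -list -name "<LUN_Name>"

The second command verifies the new LUN and shows its LUN ID, which is needed for masking.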

Now we have to add the newly created LUN to the newly created Storage Group (Masking). 

To know more about storage terminology, refer to the link below:

http://www.sanadmin.net/2015/12/storage-terminology.html

On the LUN creation page, there is an option called “ADD TO STORAGE GROUP” at the bottom of the page.

Click on it and a new page will open.

Two columns will appear on the page: “Available Hosts” and “Connected Hosts”.

Select the new storage group in the Available Hosts column and click the right arrow; it will appear in the Connected Hosts column. Then click the OK button.

Inform the platform team that the LUN has been assigned to the host, sharing a screenshot of the page, and ask them to rescan the disks at the platform level.
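
If you prefer the CLI for this masking step, a minimal sketch, where the host name, storage group name, and HLU/ALU numbers are placeholders:

naviseccli -h <SP_IP_address> storagegroup -connecthost -host <Host_Name> -gname <StorageGroup_Name>

This connects the registered host to the storage group.

naviseccli -h <SP_IP_address> storagegroup -addhlu -gname <StorageGroup_Name> -hlu <host_LUN_number> -alu <array_LUN_ID>

This presents the array LUN (ALU) to the host as the given host LUN number (HLU).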



How to generate .NAR Files in VNX Unisphere


.NAR Files

Whenever we face performance issues at the server, storage, or switch level, we have to generate the .NAR files and upload them to EMC. The performance team will analyze the files and give recommendations to resolve the issue.

To generate the logs, please follow the steps below.

Login to the Unisphere

Go to the System tab and select the Monitoring and Alerts option.

Select the Statistics option.

Go to Performance Data Logging under the Settings column.

(Image: Performance Data Logging option under the Statistics tab)


Check the status of the data logging. If the status is Stopped, start it; we then have to wait 24-48 hours for the .NAR file to be generated.

If the status is Started, go to the Retrieve Archives option under the Archive Management column.

Select the file, browse to the desired location, and then click the OK button.
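
The same data can also be handled from Navisphere CLI. This is a minimal sketch, assuming the analyzer switches of recent Block CLI releases (the archive name is a placeholder):

naviseccli -h <SP_IP_address> analyzer -status

This shows whether statistics logging is running.

naviseccli -h <SP_IP_address> analyzer -start

This starts performance data logging if it is stopped.

naviseccli -h <SP_IP_address> analyzer -archive -list

This lists the .NAR archives available on the SP.

naviseccli -h <SP_IP_address> analyzer -archive -file <archive_name>.nar -o

This retrieves the selected .NAR file to the current directory.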


----------------------------------------------------------------------------------------------

Email alert notification in VNX


Gather SP Collects through NaviCLI


Gather SP Collects through VNX Unisphere GUI





LUN Migration in VNX


Initiating LUN Migration

Login to the Unisphere, 

Go to the Storage tab and select the LUN which you want to migrate.

Right-click on the source LUN (for example, LUN 145) and choose Migrate.

Select the target destination LUN (for example, LUN 149) from the Available Destination LUN field.

Set the Migration Rate drop-down menu to Low.

NOTE:  The Migration Rate drop-down menu offers four levels of service: ASAP, High, Medium, and Low. 

Source LUN > 150GB; Migration Rate: Medium

Source LUN < 150GB; Migration Rate: High

Click OK, click Yes to confirm, and then click Done.

The Destination LUN assumes the identity of Source LUN.

The Source LUN is unbound when the migration process is complete.

Note: During migration, the destination LUN is shown under ‘Private LUNs’.

IMP: Only one LUN migration per array at any time. The size of the target LUN must be equal to or greater than that of the source LUN.
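
The migration can also be started and monitored from Navisphere CLI. This is a minimal sketch, where the LUN IDs are placeholders:

naviseccli -h <SP_IP_address> migrate -start -source <source_LUN_ID> -dest <destination_LUN_ID> -rate low

naviseccli -h <SP_IP_address> migrate -list

The second command shows the state and percent complete of running migrations.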

----------------------------------------------------------------------------------------------

To know more about 

EMC VNX – LUN Expansion


EMC VNX – Storage Group Creation


EMC VNX – Storage Pool Creation






