Channel: SAN Admin - Detailed about SAN, NAS, VMAX, VNX, NetApp Storage, HP Storage

LUN Provisioning in Symmetrix DMX | Symmetrix - LUN Provisioning | DMX - LUN Allocation


LUN Provisioning


Today we will discuss LUN provisioning on the DMX through the CLI.


Symmetrix_LUN_Provisioning
Symmetrix LUN Provisioning

Before going for LUN provisioning, make sure you understand the Symmetrix architecture.

Basically, LUN allocation has 4 simple steps, as below:

1. Creating STD device
2. Meta device creation
3. Mapping
4. Masking

The step-by-step procedure for LUN provisioning in Symmetrix DMX is as follows:

1. Open a text file and add the following entry to create the STD devices:

create dev count=7, size=10240, emulation=FBA, config=2-way-mir, disk_group=2;

Execute the text file using the symconfigure command with the preview, prepare and commit options.

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify the newly created devices by using the command

symdev -sid XXX list -noport
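The three symconfigure phases are often scripted together. The sketch below is illustrative only: the file name create_devs.txt is an assumption, and each invocation is echoed rather than executed so it runs safely without SYMCLI installed.

```shell
# Hypothetical sketch: create_devs.txt is an assumed file name, and the loop
# only echoes the symconfigure invocations instead of running them.
cat > create_devs.txt <<'EOF'
create dev count=7, size=10240, emulation=FBA, config=2-way-mir, disk_group=2;
EOF

# Run each symconfigure phase in order: preview, then prepare, then commit.
for phase in preview prepare commit; do
    echo symconfigure -sid XXX -f create_devs.txt -v -noprompt "$phase"
done
```

Removing the echo and substituting the real array serial for XXX would execute each phase in order; you would normally stop if preview or prepare reported an error.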

2. Open a text file to form the meta and add devices to the meta head:

form meta from dev 27CA, config=striped, stripe_size=1920;
add dev 27CB:27E4 to meta 27CA;


Execute the text file using the symconfigure command with the preview, prepare and commit options.

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify the newly created meta devices by using the command

symdev -sid XXX list -noport

Find the host-connected directors and port details by using the command

symcfg -sid XXX list -connections

Find the available addresses on that port by using the command

symcfg -sid XXX list -address -available -dir 6d -p 1

3. Open a text file with the following entry to map the device to the FA port:

map dev 27CA to dir 6d:1, lun=023;

Execute the text file using the symconfigure command with the preview, prepare and commit options.

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

4. Mask the devices to the host HBA:

symmask -sid XXX -wwn 10000000c94d35cd -dir 6d -p 1 add devs 27CA -nop

Refresh the Symmetrix masking configuration by using the command

symmask -sid XXX refresh
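The masking step and its verification can be collected into a small command file, as in this hedged sketch. The WWN, director and device reuse the example above, and verifying with symmaskdb list database is my assumption, so confirm it against your SYMCLI version.

```shell
# Sketch only: the generated script is printed, not executed, so SYMCLI is not
# required. Values reuse the article's example WWN, director and device.
cat > mask_cmds.sh <<'EOF'
symmask -sid XXX -wwn 10000000c94d35cd -dir 6d -p 1 add devs 27CA -nop
symmask -sid XXX refresh
symmaskdb -sid XXX list database -dir 6d -p 1
EOF
cat mask_cmds.sh
```

The last command lists the masking database for the port so you can confirm that device 27CA now appears under the HBA's WWN.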


To know about LUN provisioning in EMC VNX and CLARiiON, refer to the link below

http://www.sanadmin.net/2015/12/LUN-provisioning.html






EMC Storage Products Guide | List all the EMC Storage arrays | EMC Storage Arrays


All EMC Storage Products


Today we will get to know all the storage products released by EMC Corporation.


EMC_Storage_products
EMC Storage Products

Atmos:

EMC Atmos is a cloud storage services platform, Atmos can be deployed as either a hardware appliance or as software in a virtual environment. The Atmos technology uses an object storage architecture designed to manage petabytes of information and billions of objects across multiple geographic locations as a single system.

Atmos was organically developed by EMC Corporation and was made generally available in November 2008. A second major release in February 2010 added a "GeoProtect" distributed data protection feature, faster processors and denser hard drives.

During EMC World in May 2011, EMC announced the 2.0 version of Atmos with better performance, more efficient "GeoParity" data protection and expanded access with Windows client software (Atmos GeoDrive) and an Atmos SDK with Centera/XAM and Apple iOS compatibility.

Avamar:

Avamar is used for backing up data and it uses backup-to-disk technology. Backup-to-disk refers to technology that allows one to back up large amounts of data to a disk storage unit. The backup-to-disk technology is often supplemented by tape drives for data archival or replication to another facility for disaster recovery. Additionally, backup-to-disk has several advantages over traditional tape backup for both technical and business reasons.

Another advantage that backup-to-disk offers is data de-duplication and compression. The disk appliances offer either de-duplication at the source or at the destination. The de-duplication at the destination is faster and requires less performance overhead on the source host. The de-duplication requires less disk space on the disk appliance as it stores only one copy of the possible multiple copies of one file on the network.

Centera:

Content-addressable storage, also referred to as associative storage or abbreviated CAS, is a mechanism for storing information that can be retrieved based on its content, not its storage location. It is typically used for high-speed storage and retrieval of fixed content, such as documents stored for compliance with government regulations.

Content-addressed vs. location-addressed

When being contrasted with content-addressed storage, a typical local or networked storage device is referred to as location-addressed. In a location-addressed storage device, each element of data is stored onto the physical medium, and its location recorded for later use. The storage device often keeps a list, or directory, of these locations. When a future request is made for a particular item, the request includes only the location (for example, path and file names) of the data. The storage device can then use this information to locate the data on the physical medium, and retrieve it. When new information is written into a location-addressed device, it is simply stored in some available free space, without regard to its content. The information at a given location can usually be altered or completely overwritten without any special action on the part of the storage device.

When information is stored into a CAS system, the system will record a content address, which is an identifier uniquely and permanently linked to the information content itself. A request to retrieve information from a CAS system must provide the content identifier, from which the system can determine the physical location of the data and retrieve it. Because the identifiers are based on content, any change to a data element will necessarily change its content address. In nearly all cases, a CAS device will not permit editing information once it has been stored. Whether it can be deleted is often controlled by a policy.

Clariion:

CLARiiON is a discontinued SAN disk array manufactured and sold by EMC Corporation; it occupied the entry-level and mid-range of EMC's SAN disk array products. In 2011, EMC introduced the EMC VNX Series, designed to replace both the CLARiiON and Celerra products.

Upon launch in 2008, the latest generation CX4 series storage arrays supported fibre channel and iSCSI front end bus connectivity. The fibre channel back end bus offered 4 Gbit/s of bandwidth with FC-SCSI disks or SATAII disks.
The EMC Celerra NAS device is based on the same X-blade architecture as the CLARiiON storage processor.

In 2011, EMC introduced the new VNX series of unified storage disk arrays intended to replace both CLARiiON and Celerra products. Internally the VNX is labeled the CX5. In early 2012, both CLARiiON and Celerra were discontinued.

CloudBoost:

CloudBoost technology enables EMC NetWorker and Avamar users to reduce CapEx and eliminate tape in their environments by using a private, public, or hybrid cloud for long-term retention. Specifically, NetWorker with CloudBoost and Avamar with CloudBoost enable long-term retention of monthly and yearly backups to the cloud.

Data Domain:

Data Domain Corporation was an information technology company, active from 2001 to 2009, specializing in target-based deduplication solutions for disk-based backup; it has been part of EMC Corporation since its acquisition in 2009.

Data Domain was founded by Kai Li, Ben Zhu, and Brian Biles. The goal of the company was to minimize the tape automation market with a disk-based substitute. It did this by inventing a very fast implementation of lossless data compression, optimized for streaming workloads, which compares incoming large data segments against all others stored in its multi-TB store. Originally categorized as "capacity optimization" by industry analysts, it later became more widely known as inline "data deduplication".

Data Protection Advisor:

Data Protection Advisor improves recovery confidence and ensures service levels with EMC's powerful data protection management software. With Data Protection Advisor, you'll be able to unify and automate monitoring, analysis and reporting across backup and recovery environments while reducing complexity, lowering costs, and eliminating manual effort.

Greenplum:

Greenplum was a big data analytics company headquartered in San Mateo, California. Greenplum was acquired by EMC Corporation in July 2010.

Greenplum, the company, was founded in September 2003 by Scott Yara and Luke Lonergan.

In July 2006 a partnership with Sun Microsystems was announced. Sun Microsystems was a reference architecture and used by the majority of Greenplum's customers to run its database until a transition was made to Linux in the 2009 timeframe. Greenplum was acquired by EMC Corporation in July 2010 becoming the foundation of EMC's Big Data Division. 

Isilon:


Isilon is a scale out network-attached storage platform offered by EMC Corporation for high-volume storage, backup and archiving of unstructured data. It provides a cluster-based storage array based on industry standard hardware, and is scalable to 50 petabytes in a single filesystem using OneFS file system.

An Isilon clustered storage system is composed of three or more nodes. Each node is a self-contained, rack-mountable device that contains industry standard hardware, including disk drives, CPU, memory and network interfaces, and is integrated with proprietary operating system software called OneFS, which unifies a cluster of nodes into a single shared resource.

Isilon Systems was a computer hardware and software company founded in 2001 by Sujal Patel. Isilon Systems was acquired by EMC Corporation in November 2010 for $2.25 billion.

NetWorker:


EMC NetWorker (formerly Legato NetWorker) is a suite of enterprise-level data protection software that unifies and automates backup to tape, disk-based and flash-based storage media across physical and virtual environments for granular and disaster recovery. Cross-platform support is provided for Linux, Windows, Mac OS X, NetWare, OpenVMS and Unix environments. Deduplication of backup data is provided by integration with EMC Data Domain (DD Boost) and EMC Avamar storage solutions.

Prosphere:

EMC ProSphere is a cloud storage management software to monitor and analyze storage service levels across a virtual infrastructure. Built to meet the demands of the cloud computing era, ProSphere enables enterprises to enhance performance and improve storage utilization as they adopt technologies and processes for the cloud.

RecoverPoint:


RecoverPoint is a continuous data protection solution offered by EMC Corporation which supports asynchronous and synchronous data replication of block-based storage.

Models:
RecoverPoint continuous data protection (CDP)
RecoverPoint continuous remote replication (CRR)
RecoverPoint  concurrent local and remote (CLR)

ScaleIO:

EMC ScaleIO is a software-defined storage product from EMC Corporation that creates a server-based storage area network (SAN) from local application server storage, converting direct-attached storage into shared block storage. It uses existing host-based internal storage to create a scalable, high-performance, low-cost server SAN. EMC promotes its ScaleIO server storage-area network software as a way to converge computing resources and commodity storage into a “single-layer architecture".

SRM:

In computing, Storage Resource Management (SRM) involves optimizing the efficiency and speed with which a storage area network (SAN) utilizes available drive space. Data growth averages around 50% to 100% per year, so organisations face rising hardware costs and the increased cost of managing their storage. Storage professionals who face out-of-control data growth are looking at SRM to help them navigate the storage environment. SRM identifies underutilized capacity, identifies old or non-critical data that could be moved to less-expensive storage, and helps predict future capacity requirements.

Symmetrix:

Symmetrix arrays, EMC's flagship product at that time, began shipping in 1990 as a storage array connected to an IBM mainframe via the block multiplexer channel. Newer generations of Symmetrix brought additional host connection protocols, including ESCON, SCSI and Fibre Channel-based storage area networks. The Symmetrix product was initially popular within the airline industry and with companies that were willing to deviate from the safety of IBM's 3390 disk subsystem and take a risk with the unproven Symmetrix array. This product is the main reason for the rapid growth of EMC in the 1990s, both in size and value, from a company valued at hundreds of millions of dollars to a multi-billion-dollar company.

VBLOCK:


Vblock is the brand name VCE uses for racks containing the components of its data center products. Prepackaging, called converged infrastructure, allows customers to select preconfigured and integrated solutions, with predictable units of power, weight, cooling, and geometry for data center planning purposes.

Vblock systems consist of storage and provisioning from EMC, switches and servers from Cisco, and VMware virtualization software running on the servers. In addition, Vblock system customers' support calls are handled by VCE.

Models:
Vblock had two series based on the following compositional elements.

EMC provides storage and provisioning:
- VNX
- VMAX
- Ionix UIM/P

Cisco provides compute and networking:
- UCS
- Nexus

VMware provides virtualization:
- vSphere
- with vDS provided via Cisco Nexus 1000V
- with MPIO provided via EMC PowerPath/VE

VPLEX:


EMC VPLEX is a virtual computer data storage software product introduced by EMC Corporation in May 2010. VPLEX implements a distributed "virtualization" layer within and across geographically disparate Fibre Channel storage area networks and data centers. 

A previous virtual storage product from EMC Corporation called Invista was announced in 2005. Five months after the announcement, Invista had not shipped, and was expected to not have much impact until 2007. By 2009, some analysts suggested the Invista product might best be shut down. Another product called the Symmetrix Remote Data Facility (SRDF) also was marketed when VPLEX was announced in May 2010.

Refer to the link below for storage vendors in the world market, and to know more about Storage RAIDs, click the link below.





How to do LUN Deallocation in Symmetrix DMX | How to do LUN Deallocation in DMX | Steps to do for LUN Deallocation in Symmetrix


LUN De-allocation


Hello Guys,

Today we will let you know about the LUN deallocation procedure in Symmetrix DMX. To know about LUN provisioning in DMX, refer to the link below
http://www.sanadmin.net/2016/04/Symmetrix-lun-provisioning.html
Symmetrix DMX LUN Deallocation

Steps:

Below are the 5 simple steps to perform the LUN deallocation in Symmetrix DMX.

1. Unmasking
2. Write disable
3. Un-mapping
4. Dissolve meta
5. Deleting hypers


Procedure:


1. Unmask the devices from the host, by using the command below:

symmask -sid XXX -wwn 10000003efgae62 -dir 6d -p 1 remove devs 27CA

Refresh the Symmetrix array, by using the command:

symmask -sid XXX refresh


2. Write disable the device before unmapping it from the director port, by using the command below:

symdev -sid XXX write_disable 27CA -sa 6d -p 1 -noprompt

3. Open a text file to unmap the device, with the entry:

unmap dev 27CA from dir all:all;

Perform the preview, prepare and commit operations using the symconfigure command:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify that the device has been unmapped, by using the command below:

symdev -sid XXX list -noport

4. Open a text file to dissolve the meta head, with the entry:

dissolve meta dev 27CA;


Perform the preview, prepare and commit operations using the symconfigure command:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify that the meta has been dissolved:

symdev -sid XXX list -noport

5. Open a text file to delete the device, with the entry:

delete dev 27CA;

Perform the preview, prepare and commit operations using the symconfigure command:

symconfigure -sid XXX -f "name of the text file" -v -noprompt preview

symconfigure -sid XXX -f "name of the text file" -v -noprompt prepare

symconfigure -sid XXX -f "name of the text file" -v -noprompt commit

Verify that the hypers have been deleted:

symdev -sid XXX list -noport
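Taken together, steps 3 to 5 can be scripted as one pass, as in this hedged sketch. The file names are assumptions, and the symconfigure invocations are echoed rather than executed so the sketch runs without SYMCLI installed.

```shell
# Hedged sketch: file names are assumptions, and each symconfigure invocation
# is echoed rather than executed, so the sketch runs without SYMCLI installed.
cat > unmap.txt <<'EOF'
unmap dev 27CA from dir all:all;
EOF
cat > dissolve.txt <<'EOF'
dissolve meta dev 27CA;
EOF
cat > delete.txt <<'EOF'
delete dev 27CA;
EOF

# Each command file goes through the same preview/prepare/commit cycle.
for f in unmap.txt dissolve.txt delete.txt; do
    for phase in preview prepare commit; do
        echo symconfigure -sid XXX -f "$f" -v -noprompt "$phase"
    done
done
```

The order matters: a device must be unmapped before its meta is dissolved, and the meta dissolved before the hypers are deleted.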


The EMC storage products are detailed in the post above; to know more about the FC SAN switch installation procedure, click the link below.

How to register host initiators in the VNX | How to add Initiators HBA WWPN in the VNX Unisphere | How to do Host connectivity in VNX Unisphere


Registering Host Initiators in the VNX Unisphere

Now we will see how to register the host initiators in the VNX; before that, we will discuss why it is important.

Once the zoning task is completed for a newly deployed server in the data center, we have to log in to VNX Unisphere to check whether the host initiators are logged in. If the Unisphere Host Agent is already installed on the server, the initiators will be registered in VNX Unisphere automatically; if not, we have to register the server's initiators manually.

To know about zoning in Brocade model switches, click here: http://www.sanadmin.net/2015/11/zoning-brocade-fabric.html

To know about the VNX specifications, click the link http://www.sanadmin.net/2015/11/VNX-specifications.html

Procedure:

Login to the VNX Unisphere with the authorized login credentials.

Select the Initiators tab under the Hosts tab in the menu bar.

Initiator_Tab_in_Unisphere
Initiator Tab in Unisphere

Click the Create option located at the bottom left-hand side.

Initiators_logged_in
Initiator logged in

A popup will open; fill in details such as the host WWN, SP port, initiator type and failover mode, and then click the OK button.

Registering_the_initiators
Registering the Initiators


Repeat the same procedure for all the host WWNs we used while performing the zoning.

Once all the WWNs are registered, check the "Registered" and "Logged In" columns in the main Initiators tab. If these two columns show "Yes - Yes", the host is able to see the storage; in other words, the zoning we did earlier is correct. If they show "Yes - No", the host is not able to see the storage, so the zoning has to be checked and other troubleshooting procedures carried out.

Register_and_Logged_in_Tabs
Register and Logged in Tabs
This is the procedure to register the host initiators in the VNX Unisphere.
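Where a CLI is preferred, the same checks can also be scripted from a management host with the Navisphere/Unisphere CLI. The sketch below only builds and prints the commands: the SP address is a placeholder, and the exact flags should be confirmed against your naviseccli version.

```shell
# Hypothetical sketch: 10.0.0.1 stands in for your SP management address, and
# the commands are only printed, so the sketch runs without naviseccli.
sp="10.0.0.1"
hba_check="naviseccli -h $sp port -list -hba"    # list HBA initiators logged in per SP port
sg_check="naviseccli -h $sp storagegroup -list"  # confirm storage-group membership
echo "$hba_check"
echo "$sg_check"
```

The port listing is the CLI equivalent of the "Registered / Logged In" check in Unisphere described above.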

To know about the VNX installation and implementation, refer to the link below

http://www.sanadmin.net/2015/11/vnx-installation.html





Detailed about EMC VMAX generations | Introduction to VMAX Generations


EMC VMAX Generations


Hi All,

Today we will have a look at the VMAX generations and their architectural differences.

Among the VMAX generations, the 20K was launched first, then the 10K, and finally the 40K.

A VMAX array is built from engines. Every engine contains front-end cards, back-end cards and a cache board.

The back-end directors and cache are global across the array.

The 10K model supports a maximum of 4 engines.

In the 20K and 40K, engines are populated in the order 4, 5, 3, 6, 2, 7, 1 and 8.

In the 10K, each engine has 2 cards, with 2.8 GHz Intel Xeon processors, 24 CPUs per engine and 256 GB of global memory.

The card number is physically printed on the back side of the engine.

Each engine has these 2 cards, and each card has 6 quad-core Intel CPUs running at 2.8 GHz.

Each of the quad cores is hyper-threaded into 2 logical cores.



EMC_VMAX
EMC VMAX

VMAX 20K


Each engine has 2 separate cards, and each card physically has 2 CPUs. These CPUs are quad-core.

Each CPU is an Intel-based 2.3 GHz CPU.

CPU 1 is dedicated to the back end and CPU 2 is dedicated to the front end.

On the back end we have directors A, B, C and D.

Each back-end director has 2 ports, named port 0 and port 1.

Those ports share CPU core A, B, C or D.


VMAX 40K


In the 20K, each engine has 2 separate cards with 2 quad-core CPUs each.

In the 40K, there are 2 cards with 6 quad-core CPUs each.

Each CPU is Intel-based with a speed of 2.8 GHz.

Quad-core CPUs 1 and 2 act as front-end directors E, F, G and H.

The other CPU pairs (3-3), (4-4), (5-5) and (6-6) are grouped together and act as back-end directors.

The 2 remaining cores in each of CPUs 3, 4, 5 and 6 act as back-end directors.
Each back-end director has 2 ports, named port 0 and port 1.

In the 40K, each back-end port has a dedicated core to manage the workload.

Coming to cache, it is installed on each engine in the form of cache chips, and these chips participate in the global cache.

In each 40K engine the cache capacity is 256 GB.

In the 40K: 8 x 256 GB = 2 TB of cache capacity.

In the 20K: 8 x 128 GB = 1 TB of cache capacity.
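The cache totals above can be verified with quick shell arithmetic:

```shell
# Quick check of the cache math above: per-engine cache (GB) times 8 engines.
echo "VMAX 40K: $((8 * 256)) GB of cache"   # 2048 GB = 2 TB
echo "VMAX 20K: $((8 * 128)) GB of cache"   # 1024 GB = 1 TB
```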


VMAX 3


With VMAX 3, the industry-leading tier 1 array has evolved into a thin hardware platform with a complete set of rich software data services, servicing internal and now external block storage. VMAX data services are delivered by a highly resilient, agile hardware platform that offers global cache, CPU processing flexibility, performance and HA at scale to meet the most demanding enterprise or hybrid cloud infrastructures. The redefined HYPERMAX OS is the industry's first open platform for efficiently delivering mission-critical application and data services.

VMAX 3 100K, 200K and 400K Specification details


Does anyone need EMC dumps? We also provide EMC vouchers at a flat 50% off; if needed, just shoot an email to shaik.techi@gmail.com



Detailed Introduction about VMAX | Overview to VMAX Models and Components


VMAX Introduction 


Hi Guys,

Today we will see some of the important specifications and components through which EMC VMAX became a worldwide leader in SAN storage.

To know more about the VMAX generations, refer to the page http://www.sanadmin.net/2016/06/detailed-about-emc-vmax-generations.html


This Symmetrix product is the main reason for the rapid growth of EMC in the 1990s, both in size and value, from a company valued at hundreds of millions of dollars to a multi-billion-dollar company.

EMC_VMAX
EMC VMAX


VMAX Models:


Single Engine (SE)

VMAX Series

VMAX_Model_comparsion
Model comparison


Types of Bays:


Storage Bay / Disk Bay – consists of disks, link control cards, power supplies and a standby power supply.

System Bay – consists of engines, directors (back-end I/O modules and front-end I/O modules), fan modules, power supplies and a KVM (keyboard, video and mouse).


Engine Front view


Engine_specifications
Engine Rear view Specifications 2


System_and_storage_bay
System and Storage Bay


We can configure a maximum of 10 Storage Bays (5 on the left and 5 on the right side of the System Bay).

Components of a Storage Bay:

Components_of_a_storage_bay
Components of a Storage Bay


A Storage Bay consists of either eight or sixteen Drive Enclosures and eight SPS modules. Symmetrix V-Max arrays are configured with capacities of up to 120 disk drives for a half-populated bay or 240 disk drives for a fully populated bay. Each Drive Enclosure includes the following components: redundant power and cooling modules for the disk drives, two Link Control Cards, and 5 to 15 disk drives.

In simple terms:

1 Storage Bay holds a maximum of 16 DAEs.

10 Storage Bays = 10 x 16 = 160 DAEs.

1 DAE = 15 disks.

1 Storage Bay = 15 x 16 = 240 drives.

10 Storage Bays = 240 x 10 = 2,400 drives.
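The drive math above can be sanity-checked in the shell:

```shell
# Sanity check of the storage-bay drive math: 15 disks per DAE, 16 DAEs per
# bay, and up to 10 storage bays per array.
disks_per_dae=15
daes_per_bay=16
bays=10
per_bay=$((disks_per_dae * daes_per_bay))   # 240 drives per fully populated bay
total=$((per_bay * bays))                   # 2400 drives across 10 bays
echo "$per_bay drives per bay, $total drives in total"
```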


Key benefits over DMX:
Difference_between_DMX_and_VMAX
Difference between DMX & VMAX


1 VMAX Engine with Storage Bays
4th_Engine_Installation
1 Engine installation

The Symmetrix V-Max array requires at least one V-Max Engine in the System Bay. As shown, the first engine in the System Bay will always be Engine 4, counting from 1 at the bottom of the System Bay. In this example, Engine 4 has two half-populated Storage Bays: one bay is directly attached and the second is a daisy-chain-attached Storage Bay. This allows for a total of 240 drives. To populate the upper half of these Storage Bays with drives, you will need to add another V-Max Engine.

2 VMAX Engines with Storage Bays


2_engines_installation
2 Engines installation

V-Max Engines are added from the middle out, starting with 4, then 5, then 3.

3 VMAX Engines with Storage Bays


3_engines_installation
3 Engines installation 

8 VMAX Engines with Storage Bays


vmax_engines
Fully populated engines installation

Engines are populated in the order 4, 5, 3, 6, 2, 7, 1 and 8.

Bay Numbering: Comparison to DMX


Bay_numbering_in_VMAX
Bay Numbering
Virtual Matrix Architecture: Comparison to DMX


Virtual_matrix_architecture
Virtual Matrix Architecture









Foundations of Microsoft Azure | Introduction to Microsoft Azure | About Microsoft Azure


Foundations to Cloud Computing and Microsoft Azure


Cloud Computing 

Cloud computing is the delivery of computing services - servers, storage, databases, networking, software, analytics and more - over the Internet (“the cloud”). Companies offering these computing services are called cloud providers and typically charge for cloud computing services based on usage, similar to how you are billed for water or electricity at home.

Uses of Cloud Computing

You are probably using cloud computing right now, even if you don’t realise it. If you use an online service to send email, edit documents, watch movies or TV, listen to music, play games or store pictures and other files, it is likely that cloud computing is making it all possible behind the scenes.
Here are a few of the things you can do with the cloud:
·         Create new apps and services
·         Store, back up and recover data
·         Host websites and blogs
·         Stream audio and video
·         Deliver software on demand
·         Analyse data for patterns and make predictions

With Microsoft Azure, any developer or IT professional can be productive. The integrated tools, pre-built templates and managed services make it easier to build and manage enterprise, mobile, Web and Internet of Things (IoT) apps faster, using skills you already have and technologies you already know.



Top benefits of cloud computing


1. Cost
Cloud computing eliminates the capital expense of buying hardware and software and setting up and running on-site datacenters - the racks of servers, the round-the-clock electricity for power and cooling, the IT experts for managing the infrastructure. It adds up fast.
 
2. Speed
Most cloud computing services are provided self service and on demand, so even vast amounts of computing resources can be provisioned in minutes, typically with just a few mouse clicks, giving businesses a lot of flexibility and taking the pressure off capacity planning.

3. Global scale
The benefits of cloud computing services include the ability to scale elastically. In cloud speak, that means delivering the right amount of IT resources (for example, more or less computing power, storage or bandwidth) right when it's needed and from the right geographic location.

4. Productivity
On-site datacenters typically require a lot of “racking and stacking”—hardware set up, software patching and other time-consuming IT management chores. Cloud computing removes the need for many of these tasks, so IT teams can spend time on achieving more important business goals.

5. Performance
The biggest cloud computing services run on a worldwide network of secure datacenters, which are regularly upgraded to the latest generation of fast and efficient computing hardware. This offers several benefits over a single corporate datacenter, including reduced network latency for applications and greater economies of scale.

6. Reliability
Cloud computing makes data backup, disaster recovery and business continuity easier and less expensive, because data can be mirrored at multiple redundant sites on the cloud provider’s network.

Types of cloud services: IaaS, PaaS, SaaS

Most cloud computing services fall into three broad categories: infrastructure as a service (IaaS), platform as a service (PaaS) and software as a service (SaaS). These are sometimes called the cloud computing stack, because they build on top of one another. Knowing what they are and how they differ makes it easier to accomplish your business goals.
Infrastructure-as-a-service (IaaS)

Infrastructure as a service (IaaS) is an instant computing infrastructure, provisioned and managed over the Internet. Quickly scale up and down with demand and pay only for what you use.
IaaS helps you avoid the expense and complexity of buying and managing your own physical servers and other datacenter infrastructure. Each resource is offered as a separate service component and you only need to rent a particular one for as long as you need it. The cloud computing service provider manages the infrastructure, while you purchase, install, configure and manage your own software—operating systems, middleware and applications.

To know more about IaaS, refer the link https://azure.microsoft.com/en-in/overview/what-is-iaas/
Platform as a service (PaaS)

Platform as a service (PaaS) is a complete development and deployment environment in the cloud, with resources that enable you to deliver everything from simple cloud-based apps to sophisticated, cloud-enabled enterprise applications. You purchase the resources you need from a cloud service provider on a pay-as-you-go basis and access them over a secure Internet connection.
Like IaaS, PaaS includes infrastructure—servers, storage and networking—but also middleware, development tools, business intelligence (BI) services, database management systems and more. PaaS is designed to support the complete web application lifecycle: building, testing, deploying, managing and updating.
PaaS allows you to avoid the expense and complexity of buying and managing software licenses, the underlying application infrastructure and middleware or the development tools and other resources. You manage the applications and services you develop and the cloud service provider typically manages everything else.


Software as a service (SaaS)

Software as a service (SaaS) allows users to connect to and use cloud-based apps over the Internet. Common examples are email, calendaring and office tools (such as Microsoft Office 365).
SaaS provides a complete software solution which you purchase on a pay-as-you-go basis from a cloud service provider. You rent the use of an app for your organisation and your users connect to it over the Internet, usually with a web browser. All of the underlying infrastructure, middleware, app software and app data are located in the service provider’s data center. The service provider manages the hardware and software and with the appropriate service agreement, will ensure the availability and the security of the app and your data as well. SaaS allows your organisation to get quickly up and running with an app at minimal upfront cost.


Types_of_cloud_services
Types of Cloud Services

Microsoft Azure provides a complete packaged software stack, from which you can choose services to suit your project environment and purpose.

The picture above shows that, under IaaS, the green-colored services are managed by Microsoft Azure and the blue-colored services are managed by you.

Under PaaS, the green-colored services are managed by Microsoft Azure and the blue-colored services are managed by you.

Under SaaS, the entire packaged software stack is managed by Microsoft Azure.
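As a rough illustration, the responsibility split in the picture can be sketched in a few lines of Python. The layer names and boundary indexes here are our own assumptions for the sketch, not an official Azure definition:

```python
# Hypothetical sketch of the management-responsibility split for each
# cloud service model. Layer names and boundaries are illustrative only.
STACK = ["Applications", "Data", "Runtime", "Middleware", "O/S",
         "Virtualization", "Servers", "Storage", "Networking"]

# Index into STACK below which the provider (Microsoft Azure) takes over.
PROVIDER_BOUNDARY = {"On-premises": len(STACK), "IaaS": 5, "PaaS": 2, "SaaS": 0}

def managed_by_you(model):
    """Return the layers you manage under a given service model."""
    return STACK[:PROVIDER_BOUNDARY[model]]

print(managed_by_you("IaaS"))  # ['Applications', 'Data', 'Runtime', 'Middleware', 'O/S']
print(managed_by_you("SaaS"))  # []
```

Running it shows that under SaaS you manage nothing, while under IaaS everything from the O/S upward is yours.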


Introduction to Microsoft Azure | Advantage of Microsoft Azure | Azure Resources


Introduction to Azure

            Now everyone is moving towards Azure. It's Microsoft's implementation of a cloud, and it's built around what Microsoft feels their customers need, based on surveys and research they've done. So the first thing to bear in mind when we think about Azure is that Azure is a hybrid cloud. It's not a solution designed to live only up in the cloud; Azure is designed to supplement what you already have in your existing investments in both infrastructure and application development. So some basic things you can do with Azure: you can create customized dashboards.

For example, here's my Azure subscription. I like to keep mine really simple: I've got a dashboard that shows me the health of everything I've got deployed, and what I'm being charged and billed per month. That's what I want to see when I quickly log on to Azure. As an administrator, you can build dashboards like these that give you a really quick look at the current state of health of both Azure and the applications and services that you care about.

Some other examples are resources that are not necessarily things you deploy on Azure. Take the marketplace: anyone can build a solution for the Azure Marketplace. So if you are a company and you have a product, you can sell that product through the Azure Marketplace. You simply build it and submit it; it gets certified, tested, and checked, and then it's added to the marketplace.



What is Azure?


Microsoft Azure is Microsoft’s public cloud offering that enables individuals and organizations to create, deploy, and operate cloud-based applications and infrastructure services. The Microsoft Azure public cloud platform offers IaaS, PaaS, and SaaS services to organizations worldwide.

Microsoft Azure can also be used in conjunction with various Microsoft solutions, such as Microsoft System Center, and can be leveraged together to extend an organization's current datacenter into a hybrid cloud that expands capacity and provides capabilities beyond what could be delivered solely from an on-premises standpoint.

Uses for Microsoft Azure

Microsoft Azure offers many services and resource offerings. For example, you can use the Azure Virtual Machines compute services to build a network of virtual servers to host an application, database, or custom solution, which would be an IaaS based offering. Other services can be categorized as PaaS because you can use them without maintaining the underlying operating systems. For example, when you run a website in Azure Web Apps, or a SQL database in Azure SQL Databases it is not necessary to ensure that you are using the latest version of Internet Information Services (IIS) or SQL and have the latest patches and updates installed, as this is the responsibility of the Microsoft Azure platform.

An example of a SaaS service on Microsoft Azure would be Operations Management Suite, which is set up and accessed via Operational Insights services. Here, you just set up your management tools and connect into the services you wish to manage all through Microsoft Azure, so no local infrastructure is required or needed to be managed. The Microsoft Azure platform is responsible for all of that and provides direct access to the Management software. Microsoft Azure provides lots of services which fall into IaaS, PaaS or SaaS contexts and these services are constantly being added to and evolving. 

Other Azure Resources

Now is a good time to become familiar with some valuable resources to help you with managing your Azure environment. This page briefly describes those options with links to relevant sites for more information.

azure_resources
Azure Resources


Azure Marketplace

The Azure Marketplace is an online applications and services marketplace that currently offers virtual machine images, virtual machine extensions, APIs, applications, Machine Learning services and data services. You can search for and purchase solutions from a wide range of startups and independent software vendors (ISVs).

VM Depot

VM Depot is a community-based catalog of open source virtual machine images that you can deploy directly from within Azure. VM Depot is available through Microsoft Open Technologies.

GitHub

GitHub is a web-based Git repository hosting service that is free to use for public and open source projects. There are various paid options for businesses 
and teams that want a greater degree of control over their projects.

Azure Trust Center

If you invest in a cloud service, you must be confident that your customers' data is secure, that its privacy is protected, and that you comply with whatever government and regulatory controls are required.
Take a look at the Azure Trust Center where you'll find technical resources, whitepapers, and frequently asked questions.

Considerations in Moving to a Cloud Model

Providing a third party with data and business sensitive information requires a lot of trust. Typically, businesses take time to build up a relationship with cloud providers and evaluate their trustworthiness and ability to deliver what they promise.

To learn more about cloud computing, please refer to the link below:

http://www.sanadmin.net/2016/12/foundations-to-cloud-computing.html 






CIFS SHARE CREATION IN NETAPP STORAGE ARRAY | SMB SHARE & VOLUME CREATION IN NETAPP


CIFS SHARE CREATION IN NETAPP STORAGE ARRAY


Introduction


The Common Internet File System (CIFS) is the standard way that computer users share files across corporate intranets and the Internet. A CIFS share works over the Server Message Block (SMB) protocol. CIFS is the native file-sharing protocol in Windows 2000.

The Server Message Block (SMB) Protocol is a network file sharing protocol, and as implemented in Microsoft Windows is known as Microsoft SMB Protocol. The set of message packets that defines a particular version of the protocol is called a dialect.

The Server Message Block (SMB) protocol is a network file sharing protocol that allows applications on a computer to read and write to files and to request services from server programs in a computer network. The SMB protocol can be used on top of TCP/IP or other network protocols.

TCP port 445 (rather than TCP port 139) has to be enabled to access the CIFS share through the SMB protocol.
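Before digging into share-level permissions, it can help to confirm that port 445 is actually reachable from the client. Here is a minimal Python sketch; the helper name is our own, and it checks plain TCP reachability only, not SMB itself:

```python
# Check whether a TCP port is reachable, e.g. port 445 on the filer
# before troubleshooting CIFS access. This is a connectivity probe only.
import socket

def tcp_port_open(host, port, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical filer name): tcp_port_open("filer01", 445)
```

If a call like `tcp_port_open("filer01", 445)` returns False, the problem is usually a firewall or network path rather than share permissions.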

Procedure to Create a CIFS Share


Login to the filer with the Putty session 

To check the available space in the aggregate, run the command 

Netapp>df -Ah

Aggregate_in_Netapp_filer
Aggregate List

To create a volume of size 10 GB, run the command 

Netapp>vol create CIFS_TEST -s volume agg2 10g

Volume_creation
Volume creation

NOTE: By default, when a volume is created, a snapshot reserve of 5% of the volume size is set aside.
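A quick back-of-the-envelope check of what that reserve means for usable space (simple arithmetic, not a NetApp API):

```python
# With the default 5% snapshot reserve, a 10 GB volume exposes 9.5 GB.
def usable_space(volume_gb, snap_reserve_pct=5):
    """Usable GB after subtracting the snapshot reserve percentage."""
    return volume_gb * (100 - snap_reserve_pct) / 100

print(usable_space(10))      # 9.5  -> default 5% reserve
print(usable_space(10, 0))   # 10.0 -> after setting the reserve to zero
```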

To check the snapshot reserve details, run the command

Netapp>snap reserve CIFS_TEST

Snap_Reserve_details
Snap Reserve

If you don't need a snapshot reserve for a particular volume, you can change it to zero by using the command

Netapp> snap reserve CIFS_TEST 0

Snap_Reserve_details
Snap Reserve

To check whether snapshot schedules are created, run the command

Netapp> snap sched CIFS_TEST

Snap_Schedule_details
Snap Schedule

To set all snapshot schedules to zero, run the command

Netapp> snap sched CIFS_TEST 0 0 0

Snap_Schedules
Snap Schedules

To create a Qtree, run the command

Netapp> qtree create /vol/CIFS_TEST/test_qtree

Qtree_Creation_in_Netapp
Qtree Creation

To check the Qtree security, run the command

Netapp>qtree status CIFS_TEST

Qtree_Status_in_Netapp
Qtree Status

To change the volume security style to NTFS, run the command

Netapp>qtree security /vol/CIFS_TEST ntfs

Volume_security
Volume Security

To check the security status, run the command

Netapp> qtree status CIFS_TEST

Qtree_Security_status
Qtree Security Status

To change the Qtree security style to NTFS, run the command

Netapp> qtree security /vol/CIFS_TEST/test_qtree ntfs

Qtree_Security_status
Qtree Security status

To check the Qtree status, run the command

Netapp> qtree status CIFS_TEST

Qtree_Security_status
Qtree Security status

To create a CIFS share, run the command

Netapp>cifs shares -add test_share /vol/CIFS_TEST/test_qtree

Cifs_share_creation
Cifs share creation

To grant a user Full Control access to the CIFS share, run the command

Netapp>cifs access test_share <user_id> "Full Control"

Cifs_share_access
Cifs share access
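For reference, the CLI portion of the walkthrough above can be assembled into one reviewable sequence. This hypothetical Python helper only formats the 7-Mode commands; it does not talk to the filer, and the syntax should be verified against your ONTAP version:

```python
# Assemble the CIFS provisioning commands from the walkthrough so the whole
# flow can be reviewed before pasting into the filer session. Illustrative
# helper only; adjust names and sizes for your environment.
def cifs_provision_commands(volume, aggregate, size, qtree, share, user):
    return [
        f"vol create {volume} -s volume {aggregate} {size}",
        f"snap reserve {volume} 0",
        f"snap sched {volume} 0 0 0",
        f"qtree create /vol/{volume}/{qtree}",
        f"qtree security /vol/{volume}/{qtree} ntfs",
        f"cifs shares -add {share} /vol/{volume}/{qtree}",
        f'cifs access {share} {user} "Full Control"',
    ]

for cmd in cifs_provision_commands("CIFS_TEST", "agg2", "10g",
                                   "test_qtree", "test_share", "user1"):
    print(cmd)
```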

To provide user access to the CIFS share via the GUI, connect to the filer (\\<filer_IP>) using the Run option.

GUI_Cifs_share_access
GUI Cifs Share access

Right click on the created share and select the properties option.

GUI_Cifs_share_access
GUI Cifs share access

Go to the Security tab and click the Edit option.

security_tab
Security tab


Click on the Add option to enter the user or group ID details that should have access to the share, and finally click OK.

GUI_cifs_share_access
Access page




About Microsoft Azure Data Centers and It's Services


Azure Data Centers and Services


Azure services are hosted in physical Microsoft-managed data centers throughout the world. The data centers are located in multiple geographic areas, with a pair of regional data centers in each geographic region.

Azure_Data_Centers
Azure Data Centers

Azure Services: Compute, Storage, and Identity


Microsoft Azure provides cloud services for accomplishing various tasks and functions across the IT spectrum and those services can be organized in several broad categories. There are services for different usage scenarios and a wide range of services that can be used as building blocks to create custom cloud solutions.

Compute and Networking Services

Azure_Compute_and_Networking_Services
Azure Compute & Networking Services

Azure Virtual Machines - Create Windows® and Linux virtual machines from pre-defined templates, or deploy your own custom server images in the cloud.

Azure RemoteApp - Provision Windows applications on Azure and run them from any device.

Azure Cloud Services - Define multi-tier PaaS cloud services that you can deploy and manage on Windows Azure.

Azure Virtual Networks - Provision networks to connect your virtual machines, PaaS cloud services, and on-premises infrastructure.

Azure ExpressRoute - Create a dedicated high-speed connection from your on-premises data center to Azure.

Traffic Manager - Implement load-balancing for high scalability and availability.

Storage and Backup Services

Azure Storage - Store data in files, binary large objects (BLOBs), tables, and queues.

Azure Import/Export Service - Transfer large volumes of data using physical media.

Azure Backup - Use Azure as a backup destination for your on-premises servers.

Azure Site Recovery - Manage complete site failover for on-premises and Azure private cloud infrastructures.

Identity and Access Management Services

Azure Active Directory - Integrate your corporate directory with cloud services for a single sign-on (SSO) solution.

Azure Multi-Factor Authentication - Implement additional security measures in your applications to verify user identity.

Azure Services: Web, Data, and Media


Azure_Web_Data_and_Media_Services
Azure Web, Data and Media Services


Web and Mobile Services

Azure Websites - Create scalable websites and services without the need to manage the underlying web server configuration.

Mobile Services - Implement a hosted back-end service for mobile applications that run on multiple mobile platforms.

API Management - Publish your service APIs securely.

Notification Hubs - Build highly-scalable push-notification solutions.

Event Hubs - Build solutions that consume and process high volumes of events.

Data and Analytics Services

SQL Database - Implement relational databases for your applications without the need to provision and manage a database server.

HDInsight® - Use Apache Hadoop to perform big data processing and analysis.

Azure Redis Cache - Implement high-performance caching solutions for your applications.

Azure Machine Learning - Apply statistical models to your data and perform predictive analytics.

DocumentDB - Implement a NoSQL data store for your applications.

Azure Search - Provide a fully managed search service.

Media and Content Delivery Services

Azure Media Services - Deliver multimedia content such as video and audio.

Azure CDN - Distribute content to users throughout the world.

Azure BizTalk Services - Build integrated business orchestration solutions that integrate enterprise applications with cloud services.

Azure Service Bus - Connect applications across on-premises and cloud environments.


Grouping and Colocating Services

Grouping_and_Colocating_Services
Grouping and Colocating Services

Grouping Related Services

When provisioning Azure services, you can group related services that exist in multiple regions to more easily manage those services. Resource groups are logical groups and can therefore span multiple regions.

Colocating Services by Using Regions

Although resource groups provide a logical grouping of services, they do not reflect the geographical location of the data centers in which those services are deployed. You can specify the region in which you want to host your services. This is known as colocating the services, and it is a best practice to colocate interdependent Azure services in the same region. In some cases, Azure actually enforces colocation by requiring a related resource to reside in the same region.


Introduction to Commvault Enterprise Data Protection And Management | About Commvault Backup Technology


Commvault Backup Technology


Foundation


                   Commvault is a publicly traded data protection and information management software company headquartered in Tinton Falls, New Jersey. It was formed in 1988 as a development group in Bell Labs, and later became a business unit of AT&T Network Systems. It was incorporated in 1996.
Commvault software assists organizations with data backup and recovery, cloud and infrastructure management, and retention and compliance.

Commvault_Backup_Technology
Commvault 

Commvault software is an enterprise-level data platform that contains modules to backup, restore, archive, replicate, and search data. It is built from the ground-up on a single platform and unified code base.
Data is protected by installing agent software on the physical or virtual hosts, which use operating system or application native APIs to protect data in a consistent state. Production data is processed by the agent software on client computers and backed up through a data manager, the MediaAgent, to disk, tape, or cloud storage. All data management activity in the environment is tracked by a centralized server, the CommServe, and can be managed by administrators through a central user interface. End users can access protected data using web browsers and mobile devices.

Key features of the software platform include


  • Simplified management through a single console: view, manage, and access all functions and all data and information across the enterprise.
  • Multiple protection methods including backup and archive, snapshot management, replication, and content indexing for eDiscovery.
  • Efficient storage management using deduplication for disk and tape.
  • Integrated with the industry's top storage arrays to automate the creation of indexed, application-aware hardware snapshot copies across multi-vendor storage environments.
  • Complete virtual infrastructure management that supports many hypervisors, including VMware and Hyper-V.
  • Advanced security capabilities to limit access to critical data, to provide granular management capabilities, and to provide single sign-on access for Active Directory users.
  • Policy-based data management, which transcends the limitations of legacy backup products by managing data based on business needs and not on physical location.
  • Self-service for end users, which allows them to protect, find, and recover their own data using common tools such as web browsers, mobile devices, Microsoft Outlook, and File Explorer.

To learn Commvault, we first have to know about the following:

  • Simpana Software
  • Backup and Recovery
  • OnePass™ and Archive
  • Virtual Machine Integration
  • Snapshot Management
  • Edge Endpoint Solutions
  • Security and Encryption
  • Deduplication
  • Reporting and Insight
  • Analytics
  • CommVault Backup Appliance
  • Replication
  • Disaster Recovery
  • Cloud Services


Introduction to Commvault Simpana Software | Features of Simpana Software


Simpana Software


Overview



Commvault_Simpana_Software
Commvault Simpana Software

                     The Simpana software platform is an enterprise-level, integrated data and information management solution, built from the ground up on a single platform and unified code base. All functions share the same back-end technologies to deliver unparalleled advantages and benefits in protecting, managing, and accessing data.

The Simpana software platform contains modules to protect and archive, analyze, replicate, and search your data, which all share a common set of back-end services and advanced capabilities, seamlessly interacting with one another. The Simpana software platform addresses all aspects of data management in the enterprise, while providing infinite scalability and unmatched control of data and information.

Architecture_of_Commvault_Simpana_Software
Architecture


Production data is protected by installing agent software on the physical or virtual hosts which use operating system or application native APIs to properly protect data in a consistent state. Production data is processed by the agent software on client computers and backed up through a data manager, the MediaAgent, to disk, tape, or cloud storage. All data management activity in the environment is tracked by a centralized server, the CommServe, and can be managed by administrators through a central user interface.

Key features of the Simpana software platform

  • Complete data protection solution supporting all major operating systems, applications, and databases on virtual and physical servers, NAS shares, cloud-based infrastructures, and mobile devices.
  • Simplified management through a single console; view, manage, and access all functions and all data and information across the enterprise.
  • Multiple protection methods including backup and archive, snapshot management, replication, and content indexing for eDiscovery.
  • Efficient storage management using deduplication for disk and tape.
  • Integrated with the industry's top storage arrays to automate the creation of indexed, application-aware hardware snapshot copies across multi-vendor storage environments.
  • Complete virtual infrastructure management supporting both VMware and Hyper-V.
  • Advanced security capabilities to limit access to critical data, provide granular management capabilities, and provide single sign on access for Active Directory users.
  • Policy-based data management, transcending limitations of legacy backup products by managing data based on business needs and not the physical location.
  • Cutting edge end-user experience empowering them to protect, find and recover their own data using common tools such as web browsers, Microsoft Outlook and File Explorer.

Features of Simpana Software

Backup and Recovery

Simpana software provides smooth and efficient backup and restore of data and information in your enterprise from most mainstream operating systems, databases, and applications. The backup and recovery system uses agents to interface with file systems and applications to facilitate the transfer of data from production systems to the protected environment. Data protection is available in three areas:
  • File Systems
  • Applications
  • Databases


OnePass and Archive

Using data archiving, you can retain, store, classify, and access information according to its business and compliance value, with one method of access and preservation across all ESI (Electronically Stored Information).

  • Simpana OnePass is the industry's first converged process for backup, archive, and reporting. It incorporates both the backup and archive processes in a single, low-impact data collection operation, moving data to secondary storage where the data functions as both a backup and archive copy.
  • Database archive agents securely archive inactive data to both an archiving database and backup media, providing smooth access to the archived data from the production database.
  • Classic archive agents move infrequently used mailbox items from primary to secondary storage to optimize storage space and provide lower-cost long-term data retention.

Virtual Machine Integration

Virtualization demands a data management solution that is aware of dynamic workloads, consolidated resources, and cloud-based computing models. Simpana software is built with this in mind to let you virtualize even the most demanding applications, leveraging deep integration into the virtual infrastructure to deliver advanced data management capabilities and automate the protection of VMs.  Simpana software protects all of your VMs quickly and unifies the data protection of physical and virtual environments.

Snapshot Management

IntelliSnap technology integrates the complex lifecycle of snapshot management seamlessly into the Simpana software framework. This integrated approach makes it quicker, easier, and more affordable to harness the power of multiple vendor array-based snapshots, accelerating backup and recovery of applications, systems, virtual machines, and data. IntelliSnap automates the creation of application-aware hardware snapshot copies across a multi-vendor storage environment, and catalogs snapshot data to simplify the recovery of individual files without the need for a collection of scripts and disparate snapshot, backup, and recovery tools.

Edge Endpoint Solutions 

Edge Endpoint Solutions offer data protection, security, access from anywhere, and search capabilities for end users, to protect against data breaches and increase productivity while providing self-service capabilities. End users have immediate access to their files, regardless of where they create them, and can securely share, search, and restore files using their own mobile, desktop and laptop devices, without assistance.

Security and Encryption

Simpana software securely protects data and information - whether it's on premises, at the edge, or in the cloud. Security is baked in to the platform to secure data on desktops or laptops, in the office or on the road, at rest or in flight, utilizing efficient encryption, granular and customizable access controls for content and operations, role-based security, single sign-on, alerting, and audit trails to keep your information secure. Protected data is efficiently stored in the Simpana ContentStore—the virtual repository of all Simpana software-managed information. Simpana security will reduce privacy breaches and exposure events, and reduce costs by efficiently securing stored data.

Deduplication

The deduplication integrated into Simpana software reduces backup times while saving on storage and network resources by identifying and eliminating duplicate blocks of data during backups. All data types from Windows, Linux, and UNIX operating systems are deduplicated before moving the data to secondary storage, reducing the time and bandwidth required to move data by up to 90 percent, reducing the space required for storage, and reducing the time required to restore data. Deploy deduplication where it makes the most sense: at the source, on the target, or both. 
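The idea can be illustrated with a toy Python sketch: split the data into fixed-size blocks, hash each block, and store a block only once. Real Simpana deduplication is far more sophisticated; this just shows why duplicate blocks add no storage cost:

```python
# Toy block-level deduplication: hash fixed-size blocks and keep one copy
# of each unique block, plus a "recipe" of hashes to rebuild the stream.
import hashlib

def dedupe(data, block_size=4):
    store = {}    # hash -> unique block contents
    recipe = []   # ordered hashes needed to reconstruct the stream
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        h = hashlib.sha256(block).hexdigest()
        store.setdefault(h, block)   # duplicates cost nothing extra
        recipe.append(h)
    return store, recipe

store, recipe = dedupe(b"ABCDABCDABCDEFGH")
print(len(recipe), "blocks referenced,", len(store), "stored")
# prints: 4 blocks referenced, 2 stored
```

Restoring is simply replaying the recipe against the block store, which is why deduplicated data can still be recovered in full.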

Reporting and Insight

Access to actionable information is critical to informed decision-making and operational excellence. Simpana software has robust, built-in reporting analytics, with deep operational reporting integrated with data management operations, eliminating the need for third-party reporting tools. Global, web-based reporting provides a rich understanding of operations with deep views into data, usage and environmental characteristics, business intelligence for infrastructure cost planning, and simplified compliance audits. Live instrumentation and dashboard views provide summary and analytic views of utilization, success rates, and a host of other parameters designed to simplify data management, while historical operations data is available for regular status reporting, trend analysis and best practice comparison to achieve operational excellence. And, yes, there's an app for that! Install the CommVault Monitor app on your smart device to view reports and event details, and even monitor and manage jobs in the CommCell.

Analytics

Discover meaningful insights and unleash your data's full potential with our analytics software. Analyze your data to gain insight to the underlying processes and meaningful patterns to gain a business advantage. Simpana software provides data analytics to view statistical information about your data, web analytics to improve the usability and the content of a website or application, and data connectors to collect the information residing in various data repositories throughout your enterprise.

CommVault Backup Appliance 

The CommVault Appliance A600 combines Simpana software with NetApp’s simple, fast, scalable E-Series Storage System to address key challenges around scalability, flexibility, and manageability. With simple configuration and management designed to meet enterprise data protection requirements, you can go from power-up to backup in less than an hour.

Replication

Replicate data from a source computer to a destination computer in near real-time by logging all file write activity to a replication log in the source computer, transferring it to the destination computer, and replaying it, thus ensuring that the destination remains a nearly real-time replica of the source.
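The mechanism above can be sketched in a few lines of Python (an illustrative toy, not Commvault's implementation): every write on the source is journaled to a log, and replaying the log in order brings the destination into sync:

```python
# Toy log-based replication: journal each file write on the source,
# then ship and replay the log on the destination in order.
source, destination, log = {}, {}, []

def write(path, data):
    source[path] = data
    log.append((path, data))        # every write is journaled

def replicate():
    while log:
        path, data = log.pop(0)     # ship and replay in original order
        destination[path] = data

write("/data/a.txt", "v1")
write("/data/a.txt", "v2")
write("/data/b.txt", "hello")
replicate()
print(destination == source)  # True: destination mirrors the source
```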

Disaster Recovery

The potential impact of application outages and data loss can be staggering. That is why disaster recovery is among the most vital of operations for every enterprise that requires the agility and availability to handle unplanned outages caused by anything from intrusions to unforeseen natural events. Simpana software dramatically simplifies business continuity and disaster recovery operations with an integrated, flexible, and efficient platform that improves business resiliency while reducing costs and the risks of data loss.

Cloud Services

The Software Store is an online store available in the Cloud Services site where you can download the software installer, service packs, hot fixes, reports, limited distribution tools, and workflows. In addition, several Simpana software tools are hosted on our cloud site.

CommCell Management Group Overview | CommVault Architecture


CommCell Management Group Overview

 


A CommCell management group is the logical grouping of all Simpana software components that protect, move, store, and manage the movement of data and information. A CommCell group contains one CommServe server, one or more MediaAgents, and one or more Clients.
CommCell_management_group
CommCell Management Group

CommServe 

The CommServe server is the central management component of the CommCell group. It coordinates and executes all CommCell group operations, maintaining Microsoft SQL Server databases that contain all configuration, security, and operational history for the CommCell group. There can be only one CommServe server in a CommCell group. The CommServe software can be installed in physical, virtual, and clustered environments. 

MediaAgent 

The MediaAgent is the data transmission manager in the CommCell group. It provides high-performance data movement and manages the data storage libraries. The CommServe server coordinates MediaAgent tasks. For scalability, there can be more than one MediaAgent in a CommCell. The MediaAgent software can be installed in physical, virtual, and clustered environments. 

Client

A client is a logical grouping of the Simpana software agents that facilitate the protection, management, and movement of data associated with the client. 

Agent

An agent is a Simpana software module that is installed on a client computer to protect a specific type of data. Different agent software is available to manage different data types on a client, such as Windows file system data, Oracle databases, etc. Agent software can be installed in physical, virtual, and clustered environments, and may be installed either on the computer or on a proxy server.

CommCell Console

The CommCell Console is the central management user interface for managing the CommCell group - monitoring and controlling active jobs, and viewing events related to all activities. The CommCell Console allows centralized and decentralized organizations to manage all data movement activities through a single, common interface.
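The grouping described above can be summarized as a simple data model. Class and attribute names here are illustrative, mirroring the text rather than any Commvault API:

```python
# Illustrative data model of a CommCell group: one CommServe server,
# one or more MediaAgents, and Clients that host data-type-specific Agents.
from dataclasses import dataclass, field

@dataclass
class Agent:
    data_type: str                  # e.g. "Windows File System", "Oracle"

@dataclass
class Client:
    name: str
    agents: list = field(default_factory=list)

@dataclass
class CommCell:
    commserve: str                  # exactly one CommServe per CommCell
    media_agents: list = field(default_factory=list)
    clients: list = field(default_factory=list)

cell = CommCell("cs01", media_agents=["ma01", "ma02"])
cell.clients.append(Client("fileserver1", [Agent("Windows File System")]))
print(len(cell.media_agents), "MediaAgents,", len(cell.clients), "clients")
```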

Commcell Logical Architecture


A CommCell management group consists of one CommServe server and any number of MediaAgents and Clients. There is also a logical architecture to a CommCell, which can be defined in two main areas:

  • Production data being used by servers and computers in the enterprise
  • Protected data that has been backed up, archived, or replicated to storage media

Glossary

Client
A computer for which Simpana agents are protecting data.

Agent
A Simpana software component that is installed to protect a specific type of data on a client, e.g., Windows File System, Oracle databases, etc.

Subclient
A logical container that identifies and manages specific production data (drives, folders, databases, mailboxes) to be protected.

Backup Set
One or more logical groupings of subclients, which are the containers of all of the data managed by the agent. For some agents, this might be called an archive set or replication set. For a database agent, the equivalent of a backup set is generally a database instance.

Storage Policy
A logical data management entity with rules that define the lifecycle management of the protected data in a subclient's content.

Logical Management of Production Data

Production data is managed using agents, which interface natively with the file system or application and can be configured based on the specific functionality of data being protected. Data within these agents is grouped into backup sets. Within the backup set, one or more subclients can be used to map to specific data.
Logical_Management_of_Production_Data
Logical Management of Production Data

Logical Management of Protected Data

Simpana’s concept of data management utilizes a data protection strategy based on logical policies. The flexibility gained from policy-driven data protection and management is the ability to group data based on protection and retention needs, rather than by the physical location of the data, which greatly simplifies the organization and management of protected data.
In a CommCell group, subclients define the actual data that will be protected. Subclients can contain an entire server, drive, folders, database, user mailboxes, virtual machines, or even document repositories. The data defined in these subclients is protected through backup, archive, or snapshot operations into Simpana protected storage. Once in protected storage, the data from these subclients can be independently managed regardless of what production server they came from.
A storage policy manages subclient data based on business requirements, even when the subclients' content resides on different servers in the CommCell. It defines a specific set of rules to manage the associated data: which data will be protected (which subclients); where it will reside (the data path and library); how long it will be kept (retention settings); and other media management options such as deduplication, compression, and encryption of the data in protected storage. The first storage policy defines the primary copy of the backed-up data, which can be stored on local libraries for quick access. Additional copies of the backed-up data can be automatically created from existing copies already in the protected storage environment to other libraries and locations for consolidation, auditing, and business continuity.
Examples:
  • A project may have different types of data that reside on numerous servers and storage devices. Agents for each type of data are installed, and subclients are defined to access the data in all of its locations. All of these subclients can be associated with a single storage policy to manage the business related data as a single entity.
  • Financial and legal data from different servers or locations can be combined into a storage policy for compliance reasons. Databases can be managed in a storage policy and sent to a disaster recovery location. User files can be kept in an on-site copy for quick file recovery.




Commvault


Introduction to Commvault Enterprise Data Protection And Management | About Commvault Backup Technology http://www.sanadmin.net/2017/05/introduction-to-commvault.html

Configuring the Email Alert Notification in VNX | VNX - Email notification alerts configuration


Email Alert Notification

Hi Amigos,

Today I will let you know the configuration of email notification alerts on your VNX or Clariion Storage Arrays.

Purpose:


Notification alerts are a proactive way to learn about hardware or software failures that occur on the storage array.

Whenever a change occurs in the storage array, whether it is a critical, informational, or warning message, an alert notification will be triggered to your specified email address or to a group of members.

To set the notification alerts on your VNX storage, the procedure is below.

Procedure:


Login to the VNX Unisphere.

Click on the System tab on the menu bar.

Click on Monitoring and Alert option.

Select Notification option.

Click on Notification template tab.

Go to Engineering Mode by pressing the Shift+Ctrl+F12 buttons on your keyboard.

Type the password as “messner” and click on Ok.

Select the Call_Home_Template and click on Properties.

Notification_Template
Notification Template

A new window will open. In the “General” tab, click the “Advanced” option to select the event codes (Critical Error, Error, Warning, Information) whose alerts should be sent to your email address.

Go to E-Mail tab and specify all the parameters for the email notification.

To trigger the email notification alerts, the SMTP server IP address is mandatory.

Template_Properties
Template Properties
The Message will look like in this pattern:

Time Stamp %T% (GMT) Event Number %N%

Severity %SEV% Host %H%

Storage Array %SUB% %SP% Device %D%

Description %E%

Company Name:

Contact Name:

Contact Phone Number:

Contact Email Address:

Secondary Contact Name:

Secondary Contact phone Number:

Secondary Contact Email Address:

Additional Comments:

IP Address SP A: 10.XX.XX.XXX SP B:10.XX.XX.XXX
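The %…% fields in the message above are placeholder tokens that the array substitutes with event details when the alert is sent. A minimal sketch of that substitution in Python (the event values below are hypothetical, not taken from a real array):

```python
# Sketch of how the %TOKEN% placeholders in the notification template
# are filled in at send time. The event values here are hypothetical.
TEMPLATE = (
    "Time Stamp %T% (GMT) Event Number %N%\n"
    "Severity %SEV% Host %H%\n"
    "Storage Array %SUB% %SP% Device %D%\n"
    "Description %E%"
)

def render(template, values):
    # Replace each %KEY% token with its value from the event dictionary.
    for key, val in values.items():
        template = template.replace("%" + key + "%", val)
    return template

event = {
    "T": "2017-05-01 10:15:00", "N": "0x712d", "SEV": "Warning",
    "H": "vnx-cs0", "SUB": "APM00123456789", "SP": "SP A",
    "D": "Disk 0_1_5", "E": "Soft media error detected",
}
print(render(TEMPLATE, event))
```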

Once all the parameters are specified, click Test to verify that the email notification alerts are delivered to your specified email address.

If the test completes successfully, click OK.

In this way, we can configure the email notification alerts on the VNX or Clariion storage arrays.







Detailed Introduction About Deduplication in Commvault Backup | Detailed Overview On Deduplication in Commvault Backup


Deduplication Overview


Deduplication provides an efficient method to transmit and store data by identifying and eliminating duplicate blocks of data during backups.

All data types from Windows, Linux, UNIX operating systems and multiple platforms can be deduplicated when data is copied to secondary storage. 

Deduplication offers the following benefits:
·        Optimizes use of storage media by eliminating duplicate blocks of data.
·        Reduces network traffic by sending only unique data during backup operations.


Deduplication_in_commvault_backup
Deduplicate reduces the amount of data

How Deduplication Works

The following is the general workflow for deduplication:

·        Generating signatures for data blocks

A block of data is read from the source and a unique signature for the block of data is generated by using a hash algorithm.
Data blocks can be compressed (default), encrypted (optional), or both. Data block compression, signature generation, and encryption are performed in that order on the source or destination host.

·        Comparing signatures

The new signature is compared against a database of existing signatures for previously backed up data blocks on the destination storage. The database that contains the signatures is called the Deduplication Database (DDB).

o   If the signature exists, the DDB records that an existing data block is used again on the destination storage. The associated MediaAgent writes the index information to the DDB on the destination storage, and the duplicate data block is discarded.

o   If the signature does not exist, the new signature is added to the DDB. The associated MediaAgent writes both the index information and the data block to the destination storage.

Signature comparison is done on a MediaAgent. For improved performance, you can use a locally cached set of signatures on the source host for the comparison. If a signature does not exist in the local cache set, it is sent to the MediaAgent for comparison.

·        Using MediaAgent roles

During the deduplication process, two different MediaAgent roles are used. These roles can be hosted by the same MediaAgent or by different MediaAgents.

o   Data Mover Role: The MediaAgent has write access to disk libraries where the data blocks are stored.

o   Deduplication Database Role: The MediaAgent has access to the DDB that stores the data block signatures.

An object (file, message, document, and so on) written to the destination storage may contain one or many data blocks. These blocks might be distributed on the destination storage, and their locations are tracked by the MediaAgent index. This index allows the blocks to be reassembled so that the object can be restored or copied to other locations. The DDB is not involved in the restore process.
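The workflow above — signature generation, DDB lookup, and write-or-discard — can be sketched in Python. This is an illustrative model only (fixed 128 KB blocks and SHA-256 signatures are assumptions for the example; Simpana's actual block handling, compression, and DDB format differ):

```python
import hashlib

def deduplicate(stream, block_size=128 * 1024, ddb=None):
    """Split a byte stream into fixed-size blocks and keep only blocks
    whose signature is not already in the deduplication database (DDB).
    Returns (stored_blocks, total_blocks). Illustrative sketch only."""
    ddb = {} if ddb is None else ddb      # signature -> reference count
    stored = total = 0
    for off in range(0, len(stream), block_size):
        block = stream[off:off + block_size]
        sig = hashlib.sha256(block).hexdigest()  # signature generation
        total += 1
        if sig in ddb:
            ddb[sig] += 1    # duplicate: record the reuse, discard the block
        else:
            ddb[sig] = 1     # new: write both the index entry and the block
            stored += 1
    return stored, total

# Four 128 KB blocks, of which only two are unique.
data = b"A" * 256 * 1024 + b"B" * 128 * 1024 + b"A" * 128 * 1024
stored, total = deduplicate(data)
print(stored, total)  # 2 4
```

Reusing the same `ddb` dictionary across calls models how later backups deduplicate against blocks already in protected storage.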

Strategies for Deduplication Implementation

Source-Side (Client-Side) Deduplication

(Recommended.) Use source-side deduplication when the MediaAgent and the clients are in a high-latency or low-bandwidth network environment such as a WAN. You can also use source-side deduplication for remote office backup solutions.
MediaAgent-Side (Storage-Side) Deduplication

Use MediaAgent-side deduplication when the MediaAgent and the clients are in a fast network environment such as LAN and if you do not want any CPU utilization on client computers.
When the signature generation option is enabled on the MediaAgent, MediaAgent-side deduplication reduces the CPU usage on the client computers by moving the processing to the MediaAgent.
Global Deduplication

Global deduplication provides greater flexibility in defining retention policies when protecting the data.
Use global deduplication storage policies in the following scenarios:
·        To consolidate Remote Office backup data in one location.
·        When you must manage data types, such as file system data and virtual machine data, by different storage policies but in the same disk library.

Deduplication to Tape (Silo Storage)

Deduplication to Tape can copy deduplicated data to tape in a deduplicated format.
Deduplication to Tape extends the primary disk storage by managing the disk space and periodically moving the deduplicated data to the secondary storage.
Deduplicated data on tape responds automatically to restore requests by copying only necessary data back to the disk library and then restoring the data.
DASH Copy

An Auxiliary Copy job uses DASH (Deduplication Accelerate Streaming Hash) Copy, which is an option for a deduplication-enabled storage policy copy, to send only unique data to that copy. DASH Copy uses network bandwidth efficiently and minimizes the use of storage resources.
DASH Copy transmits only unique data blocks, which reduces the volume and time of an Auxiliary Copy job by up to 90%.
Use DASH Copy when remote secondary copies are reachable only over low-bandwidth connections.
DASH Full (Accelerated Synthetic Full Backups)

DASH Full is a Synthetic Full operation that updates the DDB and index files for existing data rather than physically copying data like a normal Synthetic Full backup.
Use DASH Full backup operations to increase performance and reduce network usage for full backups.
DASH Full is used with Simpana OnePass to manage the retention of archived data.


VNX - LUN Provisioning for a New host | LUN Provisioning in EMC Clariion


VNX LUN Provisioning for a New Host


Hello Friends,

Today I am going to explain LUN provisioning for a new host.

Introduction


Whenever a new server is deployed in the data center, the platform team (whether Windows, Linux, or Solaris) will contact you for free ports on the switch (for example, a Cisco switch) to connect with the storage.

We will log in to the switch with authorized credentials via PuTTY.

Once logged in, we check the free port details by using the command:

Switch # sh interface brief

Note: As storage admins, we have to know the server HBA details too. Based on that information, we fetch the free port details on two switches for redundancy.

We will share the free port details with the platform team; the platform team will then contact the data center folks to lay the physical cable connectivity between the new server and the switches.

Note:  The Storage ports are already connected to the switches.

Once the cable connectivity is complete, the platform team will ask us to do the zoning.

Zoning:



Grouping of the host HBA WWPN and the storage front-end port WWPNs so that they can speak to each other.


All the commands should be run in Configuration Mode.


SwitchName # config t (configuration terminal)


SwitchName # zone name ABC vsan 2


SwitchName – zone # member pwwn 50:06:01:60:08:60:37:fb


SwitchName – zone # member pwwn 21:00:00:24:ff:0d:fc:fb


SwitchName – zone # exit


SwitchName # zoneset name XYZ vsan 2


SwitchName – zoneset # member ABC


SwitchName – zoneset # exit


SwitchName # zoneset activate name XYZ vsan 2
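The zoning session above follows a fixed pattern: create the zone, add the member WWPNs, wrap the zone in a zoneset, and activate it. A small Python sketch that generates this command sequence for arbitrary members (the zone/zoneset names and WWPNs are the example values from above, not real ones):

```python
def zoning_commands(zone, zoneset, vsan, wwpns):
    """Generate the Cisco MDS zoning command sequence used above:
    define a zone with its member WWPNs, place it in a zoneset,
    and activate the zoneset."""
    cmds = ["config t", f"zone name {zone} vsan {vsan}"]
    cmds += [f"member pwwn {w}" for w in wwpns]   # one line per member
    cmds += ["exit",
             f"zoneset name {zoneset} vsan {vsan}",
             f"member {zone}",
             "exit",
             f"zoneset activate name {zoneset} vsan {vsan}"]
    return cmds

for cmd in zoning_commands("ABC", "XYZ", 2,
                           ["50:06:01:60:08:60:37:fb",
                            "21:00:00:24:ff:0d:fc:fb"]):
    print(cmd)
```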

Once the zoning is completed, we have to check the initiators' status by logging in to Unisphere.

The procedure is as follows below:

Go to the Host Tab and select the Initiators tab.

Search for the Host for which you have done the zoning activity

Verify the host name, IP address, and the host HBA WWPN and WWNN numbers.

In the Initiators window, check the Registered and Logged In columns. If both columns show “YES”, your zoning is correct and the host is connected to the storage box.

If one column shows “YES” and the other shows “NO”, your zoning is not correct. Recheck the zoning steps; if they are correct and the issue persists, check the host WWPN and WWNN and the cable connectivity.


Now we have to create a New Storage Group for the New Host.

The procedure is as follows:

Go to Host Tab and select the Storage group option.

Click on the Create option to create a new storage group.

Name the storage group for your identification and hit the OK button to complete the task.

Before creating a LUN of the specified size, we have to check prerequisites such as:

Check the availability of free space in the storage pool from which you are going to create the LUN.

If free space is not available in that specific storage pool, share that information with your reporting manager or your seniors.

Now we will create a LUN with the specified size.

Login to the VNX Unisphere.

Go to Storage Tab and select the LUN option.

Fill in all the fields, such as storage pool, LUN capacity, number of LUNs to be created, and name of the LUN, and specify whether the LUN is THICK or THIN.

Then hit the OK button to complete the LUN creation task.

Now we have to add the newly created LUN to the newly created Storage Group (Masking).

In the LUN creation page, there is an option labeled “ADD TO STORAGE GROUP” at the bottom of the page.

Click on it and a new page will open.

Two columns will appear on the page: “Available Hosts” and “Connected Hosts”.

Select the new storage group in the Available Hosts column and click the right-side arrow so that the host appears in the Connected Hosts column, and then hit the OK button.

Inform the platform team that the LUN has been assigned to the host by sharing a screenshot of the page, and also ask them to rescan the disks at the platform level.

As mentioned above, these are the points to be considered when provisioning a LUN to a newly deployed server in the data center.



VNX - How to gather .NAR Files | How to retrieve the data logging files in VNX


.NAR Files

Hi Everyone,

Today we will discuss how to gather or generate the .NAR files on a VNX/Clariion storage array.

Introduction


Whenever we face performance issues at the server, storage, or switch level, we have to generate the .NAR files and upload them to the EMC support team.

The EMC performance team will analyze the files and give recommendations or fixes to resolve the issue.

Procedure


To generate the logs, please follow the steps below.

Login to the Unisphere

Go to the System tab and select the Monitoring and Alerts option.

Select the Statistics option.
Go to Performance Data Logging under the Settings column.

Performance_Data_Logging_option_under_the_Statistics_tab
Performance Data Logging option under the Statistics tab


Check the status of the data logging. If the status is Stopped, start it; we then have to wait 24–48 hours for the .NAR files to be generated.

If the status is in Start mode, go to the Retrieve Archives option under the Archive Management column.

If EMC requested the .NAR files for a certain date and time period, we have to check the log files. If the files exist, we simply browse them to the desired location on the desktop, click the OK button, and then upload them to the given EMC Service Request or to an FTP site.

Note: 

1) If we can't find the .NAR files for the requested period, there is nothing we can do.

2) If we keep the data logging in Start mode at all times, there will be a minimal performance impact on the storage end.

In this way, we can gather the .NAR files from the VNX.









VNX - LUN Migration | How to perform the LUN migration in VNX


LUN Migration

Hi All,

Today I am going to discuss LUN migration in VNX or EMC Clariion.

Introduction about the LUN Migration


LUN migration permits data to be moved from one LUN to another, regardless of the RAID type, disk type, LUN type, speed, or number of disks in the RAID group or storage pool, with some restrictions.

LUN migration is an internal migration within the storage array, whether VNX or Clariion, from one location to another. It is a two-step process: first, a block-by-block copy of the “Source LUN” to its new location, the “Destination LUN”; once the copy is complete, the Source LUN's identity is moved to the new location.


Procedure 

Login to the Unisphere

Go to the Storage tab and select the LUN which you want to migrate.

Right-click on the source LUN (e.g., LUN 145) and choose the Migrate option.

Select the target destination LUN (e.g., LUN 149) from the Available Destination LUN field.

The Migration Rate drop-down menu: Low

NOTE:  The Migration Rate drop-down menu offers four levels of service: ASAP, High, Medium, and Low. 

Source LUN > 150GB; Migration Rate: Medium

Source LUN < 150GB; Migration Rate: High

Click OK 

YES

DONE

The Destination LUN assumes the identity of Source LUN.

The Source LUN is unbound when the migration process is complete.

Note: During Migration destination LUN will be shown under ‘Private LUNS’

IMP: Only one LUN migration per array at any time. The size of the target LUN should be equal to or greater than that of the source LUN.
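The rate rule of thumb and the size restriction above can be expressed as a small helper (a sketch of the rules stated in this post, not an EMC API):

```python
def pick_migration_rate(source_gb):
    """Rule of thumb from above: source LUNs over 150 GB migrate at
    Medium, smaller ones at High. (ASAP and Low remain manual choices.)"""
    return "Medium" if source_gb > 150 else "High"

def can_migrate(source_gb, destination_gb):
    # The destination LUN must be equal to or greater than the source.
    return destination_gb >= source_gb

print(pick_migration_rate(200))  # Medium
print(can_migrate(200, 100))     # False
```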

As discussed above, this is how we perform the LUN migration activity in VNX or EMC Clariion storage arrays.










VNX - How to download the NavisecCLI | How to manage the VNX/Clariion storage arrays through command line interface


NavisecCLI Download


Hello Everyone,

Today will let you know how to download the NavisecCLI and how to use it.

Introduction

NaviCLI, originally known as Classic CLI, is over 10 years old. Classic CLI does not support user authentication or Navisphere roles. Some degree of security is provided by the privileged user list, which can be configured to limit the users and IP addresses that can manage the array.

NaviCLI is used for accessing the VNX or Clariion Storage arrays through Command Line Interface.

Without this tool, we cannot access the VNX storage from the command line. The setup file is available in 32-bit and 64-bit versions.

To download NavisecCLI, just click the link below and it will take you to the download page.


Procedure

The procedure for downloading the NaviCLI is as follows:

1. Download the Setup file and click on run option to start the installation.

2. There are 5 simple steps to install the setup file; we just have to hit the Next button.

NaviCLI_Download
NaviCLI Download
3. In the 2nd step, it will ask for the path where we want to install the software. A default path will be used, or we can choose our own desired path.

NaviCLI_Download_page
NaviCLI Download page
4. The installation will now start.

Once it completes, open the command prompt and change to the specific path where the setup is installed.

Specified_path_with_command_to_see_ the_LUN_Details
Specified path with command to see the LUN Details

For example:

From Naviseccli:

For the VNX1 series, use the following command to copy data from online disk 0_1_5 to any available hot spare:

>> naviseccli –h <SP_IPaddress> copytohotspare 0_1_5 –initiate

***If command above does not work, attempt the following command***

>> naviseccli -h <SP_IPaddress> -user <username> -password <password> copytohotspare 0_1_5 –initiate

Note: The default username and password are sysadmin/sysadmin. If for some reason this does not work, request the username and password from the customer.


**To verify that disk is actually copying to hotspare use command:

>> naviseccli getdisk <disk location>

*** If the above naviseccli command does not work, use naviseccli.exe instead of naviseccli.
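Putting the variants above together, a small Python helper can assemble the right command line, adding credentials or switching to the .exe name only when needed (a sketch using only the flags shown above):

```python
def copytohotspare_cmd(sp_ip, disk, user=None, password=None,
                       exe="naviseccli"):
    """Assemble the copytohotspare command line shown above.
    Pass user/password only if the unauthenticated form is rejected;
    pass exe='naviseccli.exe' if the bare name is not recognized."""
    parts = [exe, "-h", sp_ip]
    if user is not None:
        parts += ["-user", user, "-password", password]
    parts += ["copytohotspare", disk, "-initiate"]
    return " ".join(parts)

print(copytohotspare_cmd("10.0.0.1", "0_1_5"))
# naviseccli -h 10.0.0.1 copytohotspare 0_1_5 -initiate
```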







