CommVault Partitioned Deduplication without shared storage

One of CommVault’s greatest assets is deduplication. On average, savings of 85% can be achieved when writing to disk, savings which can be seen not only in raw storage capacity but also in data transfer times. In order to take full advantage of this technology, it’s necessary to understand and properly plan the CommVault architecture to best suit your requirements.

Sizing

In order to achieve these data protection goals, hardware must be sized accordingly. CommVault provides a “Deduplication Building Block Guide” to aid in picking the correct configuration for your data protection needs. Sizing is based primarily on 3 factors:

  1. How many front end terabytes (FET) do you need to protect?
  2. How long (what retention) will the data be kept on disk?
  3. What kind of data are you protecting?

The topic of this post is sourced from point 1: how many front end terabytes (FET) do you need to protect? Front end terabytes equate to the amount of data being protected, i.e. if you have 5 servers consuming 1TB each, that’s 5 FET.

Commvault “Standard Mode” sizing guidelines range from Extra Small to Extra Large, with the latter supporting 90-120TB (depending on the type of data). This means that potentially 120TB of data could be deduplicated down to 18TB or less. The issue is: what if you have more than 120TB of data? You could have multiple MediaAgents each reducing their own 120TB’s worth, but this would result in some data being duplicated across MediaAgents, as the MediaAgents have no way to reference what each other are protecting. This is where Partitioned Mode deduplication comes in.

Partitioned Dedupe

With partitioned deduplication, up to 400 FET can be deduplicated across 4 MediaAgents. The storage savings gleaned from this are shown in the following table:

FET (Front-End Terabytes) | Partitioned deduplicated data size, assuming 85% (TB) | Equivalent non-partitioned deduplicated data, assuming 85% per node & duplication across nodes (TB)
100 | 15 | 15
200 | 30 | 60
300 | 45 | 135
400 | 60 | 240
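
If you want to sanity-check these figures, the arithmetic is simple enough to script. The sketch below (Python) reproduces the table under its stated assumptions: an 85% deduplication saving, roughly 100 FET handled per MediaAgent, and, for the non-partitioned column, the worst case where each MediaAgent ends up holding its own copy of every unique block because the nodes cannot reference each other’s deduplication databases. The 100 FET-per-node figure is my assumption, chosen to match the table.

    # Sketch: reproduce the deduplication savings table above.
    # Assumptions (mine, chosen to match the table): 85% dedupe savings,
    # 100 FET handled per MediaAgent, and -- in the non-partitioned case --
    # every MediaAgent ends up storing its own copy of every unique block.
    DEDUPE_SAVINGS = 0.85   # CommVault's quoted average reduction when writing to disk
    FET_PER_NODE = 100      # assumed front-end TB handled per MediaAgent

    def partitioned_size(fet_tb):
        """Global (partitioned) dedupe: unique blocks are stored exactly once."""
        return fet_tb * (1 - DEDUPE_SAVINGS)

    def non_partitioned_size(fet_tb):
        """Independent MediaAgents: each node stores a copy of the unique data."""
        nodes = max(1, round(fet_tb / FET_PER_NODE))
        return partitioned_size(fet_tb) * nodes

    for fet in (100, 200, 300, 400):
        print(f"{fet} FET -> partitioned: {partitioned_size(fet):.0f} TB, "
              f"non-partitioned: {non_partitioned_size(fet):.0f} TB")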

This works well, but in some situations the hardware requirements may not be within budget. The MediaAgents for a 4-node, 400 FET configuration are already highly spec’d, but factor in enterprise NAS shared storage and your costs can start to look scary. During a recent project the question was raised:

Can we get most of the benefits of partitioned deduplication without using expensive enterprise NAS hardware?

The answer is yes, and it’s not that difficult to achieve. Shared storage can easily be replaced with direct-attached block-level storage. There are some restrictions, which I will cover later, but to achieve load-balanced, globally deduplicated data protection, local or SAS-attached storage (such as the Dell PowerVault MD series) will achieve the desired result.

Storage Data Flow

In the following example, the storage was shared among the MediaAgents as follows:

[Image: StorageDataFlow]

The simplified diagram above shows how each MediaAgent accesses the multiple mount paths that make up the storage.

  • Each MediaAgent is connected to the SAS-attached storage enclosure via 2 SAS connections. MPIO is configured on the MediaAgent.
  • The storage enclosure (in this case a Dell MD series) is aware of each of the MediaAgents, and its available storage is configured, formatted and divided amongst the connected servers.
  • The MediaAgents see their allocation of LUNs, which are formatted as NTFS (a 64k block size is strongly recommended) and configured as mount paths inside Windows.
  • The Windows mount paths are shared at the Windows level, using either a domain or local security account to restrict access accordingly.

MediaAgent Physical Connectivity

Each MediaAgent (in this case a PowerEdge R630) is equipped with the following connectivity:

  • 4 x 1GbE onboard NICs
  • 2 x 4-port 10GbE network adapters
  • 2-port 12Gbps SAS HBA
  • 1 x out-of-band management port (iDRAC)

The primary LAN connectivity is achieved over the 10GbE cards. LACP is configured between port 0 of each card, allowing load-balanced, multi-stream traffic in addition to accounting for card failure. These ports are connected to a routable VLAN to allow backup traffic to reach the MediaAgents.

A second LACP pair is configured on port 1 of each 10GbE card. This forms the GRID connectivity, creating a dedicated link for the MediaAgents to communicate with each other directly. Splitting the traffic across the two cards minimises the impact of a card failure. GRID configuration is discussed later in the post. These ports should be connected to a non-routable VLAN; traffic on this VLAN will only be used for inter-MediaAgent communication. It is a good idea to prevent this connection from registering with DNS.

The 2 SAS connections are split across the 2 storage controllers. MPIO is configured at the Windows layer to avoid duplicate LUNs appearing in Disk Management (and to provide multipathing functionality).

MediaAgent Software Configuration

Shares

Each MediaAgent will have a number of LUNs presented to it. These should be formatted as NTFS with a 64k block size and presented as mount paths to the operating system. My method for doing this is to carve a small partition off one of the system drives, give it a drive letter and create an “MP” directory under that. Each mount path can then be mapped to a subdirectory of that path. For example:

  • M:
    • MP
      • MP01
      • MP02
      • MP03
      • …..

Note that the latest version of CommVault does not require mount paths to be divided into 2-8TB chunks. Administration can be reduced by using larger, easier-to-deploy mount paths.

Each of the mount paths should be published as a hidden share, with the share security restricted to a MediaAgent service account.
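
If you’re building several MediaAgents the same way, the folder skeleton is easy to script. The sketch below (Python) simply creates the MP01…MPnn mount-point directories and prints the hidden share each one should be published under; the drive letter, directory names and share suffix are assumptions taken from the example layout above, and the actual volume mounting and share creation are still done with the usual Windows tools.

    # Sketch: create the mount-point folder skeleton described above and list
    # the hidden shares to publish. Paths and names are assumptions taken from
    # the example layout; mounting the NTFS volumes and creating the restricted
    # shares themselves is done with the normal Windows tooling.
    import os

    MOUNT_ROOT = r"M:\MP"   # small partition with a drive letter, "MP" directory beneath it
    MOUNT_PATHS = 3         # number of LUNs presented to this MediaAgent

    os.makedirs(MOUNT_ROOT, exist_ok=True)
    for i in range(1, MOUNT_PATHS + 1):
        mp = os.path.join(MOUNT_ROOT, f"MP{i:02d}")
        os.makedirs(mp, exist_ok=True)   # mount the corresponding LUN here
        print(f"{mp} -> publish as hidden share MP{i:02d}$ "
              f"(read/write restricted to the MediaAgent service account)")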

Disk Library

[Image: mountpaths1]

Each mount path from each MediaAgent will form the disk library. The image above (from expert storage configuration) shows a mount path local to one MediaAgent, with alternative paths via CIFS from the remaining 2 MediaAgents. When viewed from the mount path properties under the disk library it shows as follows:

[Image: mountpaths2]

REMEMBER: The IP address used for accessing the CIFS path is the GRID IP. There’s no point using the LAN network when you have dedicated links at your disposal. The table below details the mount paths for a 6 mount path disk library.

Name    | Local MA | Local Path | CIFS Path  | Share Permissions       | CV Mount Path
MA1-MP1 | MA1      | M:\MP1     | \\MA1\MP1$ | RW: [local svc account] | All MAs RW
MA1-MP2 | MA1      | M:\MP2     | \\MA1\MP2$ | RW: [local svc account] | All MAs RW
MA2-MP1 | MA2      | M:\MP1     | \\MA2\MP1$ | RW: [local svc account] | All MAs RW
MA2-MP2 | MA2      | M:\MP2     | \\MA2\MP2$ | RW: [local svc account] | All MAs RW
MA3-MP1 | MA3      | M:\MP1     | \\MA3\MP1$ | RW: [local svc account] | All MAs RW
MA3-MP2 | MA3      | M:\MP2     | \\MA3\MP2$ | RW: [local svc account] | All MAs RW
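
The layout above follows a simple rule: a mount path is accessed locally on the MediaAgent that owns it, and over CIFS (via the owning MediaAgent’s GRID address) from everywhere else. The sketch below (Python) just generates that matrix for illustration; the MediaAgent names, paths and GRID IP addresses are placeholders, not values from a real environment.

    # Sketch: generate the disk library path matrix shown in the table above.
    # MediaAgent names, local paths and GRID IPs are illustrative placeholders;
    # the key point is that other MediaAgents reach a mount path over CIFS using
    # the owning MediaAgent's GRID address, never the routable LAN.
    MEDIA_AGENTS = ["MA1", "MA2", "MA3"]
    MOUNT_PATHS_PER_MA = 2
    GRID_IP = {"MA1": "192.168.100.1",   # hypothetical non-routable GRID addresses
               "MA2": "192.168.100.2",
               "MA3": "192.168.100.3"}

    for owner in MEDIA_AGENTS:
        for i in range(1, MOUNT_PATHS_PER_MA + 1):
            local_path = rf"M:\MP{i}"
            cifs_path = rf"\\{GRID_IP[owner]}\MP{i}$"   # hidden share, RW for the svc account
            print(f"{owner}-MP{i}: local on {owner} at {local_path}; "
                  f"other MediaAgents use {cifs_path}")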

Inter-Host Networking

To ensure any inter-MediaAgent communication uses the dedicated NIC team, it’s necessary to create a number of DIPs (Data Interface Pairs). This can be done via the Control Panel; follow CommVault’s documentation if you’re unsure. In the scenario of 3 MediaAgents requiring dedicated communication paths, the following example DIPs would be used.

Client 1 NIC    | Client 2 NIC
MA1 GRID (Team) | MA2 GRID (Team)
MA1 GRID (Team) | MA3 GRID (Team)
MA2 GRID (Team) | MA1 GRID (Team)
MA2 GRID (Team) | MA3 GRID (Team)
MA3 GRID (Team) | MA1 GRID (Team)
MA3 GRID (Team) | MA2 GRID (Team)
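
With n MediaAgents you need n x (n-1) entries to cover every direction between every pair of GRID interfaces, which is exactly what the table above lists for 3 nodes. A quick sketch (Python) to enumerate them; the interface labels are the ones used in the table:

    # Sketch: enumerate the DIPs needed for a full mesh of GRID interfaces.
    # With n MediaAgents this produces n * (n - 1) ordered pairs, matching the
    # six rows in the table above for three nodes.
    from itertools import permutations

    GRID_INTERFACES = {"MA1": "MA1 GRID (Team)",
                       "MA2": "MA2 GRID (Team)",
                       "MA3": "MA3 GRID (Team)"}

    dips = [(GRID_INTERFACES[a], GRID_INTERFACES[b])
            for a, b in permutations(GRID_INTERFACES, 2)]

    for client1_nic, client2_nic in dips:
        print(f"{client1_nic}  <->  {client2_nic}")
    print(f"Total DIPs: {len(dips)}")   # 3 * 2 = 6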

Partitioned Global Deduplication

Now that all MediaAgents can see the same disk library, it is possible to create a partitioned global deduplication database (GDDB). For reference, the following image represents a partitioned deduplication database across 2 nodes:

[Image: partddb]

To configure the partitioned global deduplication policy:

  1. Right click Storage Policies and choose “New Global Deduplication Policy”.

[Image: createddb1]

  2. Name the policy appropriately and click Next.

[Image: createddb2]

  3. Select the disk library created earlier and click Next.

  4. Select one of the MediaAgents with paths to that disk library and click Next (we’ll ensure those paths are used properly later).

  5. Check the box to use partitioned global deduplication.

[Image: createddb3]

  6. On the next page you will configure the number and location of the partitions. The number should match the number of MediaAgents sharing the disk library. It is best practice to use a similar path on each MediaAgent to avoid confusion.

[Image: mountpaths2]

  7. There is no need to adjust the DDB network interface, as the DIPs created in the previous section will ensure traffic is routed accordingly. Click Next, review the settings and click Finish.

  8. Right click the primary copy of the new GDDB and ensure the GDDB settings appear as follows:

[Image: createddb5]

These settings ensure that:

  1. You are using a transactional GDDB (far better recoverability in the event of a dirty shutdown)
  2. Jobs will not run whilst one of the MediaAgents (and therefore part of the library) is offline. It would be possible to let jobs continue, but the errors generated by trying to access unavailable blocks would be more trouble than they’re worth.

Multi-Pathing

It’s all very well having multiple paths configured, but it would be less than efficient if all the traffic went through one MediaAgent. Any primary copy associated with the previously created GDDB will automatically have multiple paths; however, it will favour the first MediaAgent to access those paths. To distribute traffic evenly, selecting the right multi-pathing options is essential.

[Image: multipathing]

Round Robin between data paths will ensure traffic is distributed equally across the MediaAgents.

Restrictions & Disadvantages

There is one primary area that suffers as a result of not using a true NAS disk library target: Redundancy. In the event that one of the MediaAgents goes offline:

  • You will lose the partition of the global DDB associated with that MediaAgent. Backups can be configured to continue whilst n partitions are offline; however, any blocks owned by the offline partition that are re-protected during the outage will be duplicated.
  • You will lose the mount paths of the disk library to which the MediaAgent is directly connected. This can be tolerated for backups, but will very likely cause issues during a restore.

In reality, most enterprise environments will have 4-hour replacement warranties in place, meaning you should be able to resolve most issues within a 24-hour period. This, of course, is still longer than the zero downtime associated with true NAS shared storage.

For most people this isn’t an issue; missing backups for one night may be seen as an acceptable risk. A notable exception is if you are using the system for archiving (file and/or email stubbing) or compliance search (typical in the legal and medical industries). If these more production-critical workloads exist in your environment, it may be worth paying the extra for shared storage.
