Securing Access to the Web & Admin consoles using a SAN certificate.

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

HTTPS is used by default to secure the consoles; however, because the default certificate is self-signed, users are presented with the following warning:


Official documentation for the procedure outlined in this post can be found here. However, the official documentation does not instruct on creating a certificate that allows the use of multiple SAN (Subject Alternative Name) aliases. This guide includes the fields necessary for creating a SAN cert request, in addition to ensuring the cert complies with the SHA256withRSA signature algorithm.


  • CommCell with Web & Admin Console
  • Certificate Authority
    • This can be either internal or external. If you are allowing access to the web console externally, an external authority is recommended.
    • If you are using an internal CA, ensure it is capable of issuing SHA2 certs rather than the deprecated SHA1. Details can be found here. SHA1 certs are accepted by IE; however, Chrome and Firefox will complain.


  1. In order to ensure the necessary Java version is in place you may need to update Java. The minimum version required is JRE 1.8.0_65. Check which version you have installed; the SP10 release of Commvault is packaged with Java version “1.8.0_121” so does not require updating. To check your Java version, run “java -version” from the command prompt of the web server. If you need to update Java, follow the official doco here.
  2. From the web console computer, navigate to the following directory via an elevated command prompt (replace the java version if different):
    C:\Program Files\Java\jre1.8.0_121\bin
  3. You must create a keystore file using the keytool utility contained in the above directory. Run the following command:
    keytool -genkey -alias tomcat -keyalg RSA -sigalg SHA256withRSA -keystore "C:\mykeystore.jks" -keysize 2048
  4. You will be prompted for the following details:
    1. Keystore password: Be sure to make this a strong password and keep it safe.
    2. First & Last Name: This is the fully qualified domain name for which you are creating the certificate.
    3. Org Unit: Leave this blank or use company name
    4. Org Name: use company name
    5. City or Locality, State, Country Code: As Described
  5. At the “Is this correct?” prompt, type yes.
  6. At the “Enter password for tomcat” prompt, press ENTER to use the same password as the keystore file.
  7. Now use the created keystore to generate a CSR. Additional SAN aliases are supplied as a comma-separated list to the -ext argument:
    keytool -certreq -file C:\somename.csr -keystore C:\mykeystore.jks -keyalg RSA -sigalg SHA256withRSA -alias tomcat -ext SAN=dns:alias1.mydomain.local,dns:alias2.mydomain.local -keysize 2048
  8. You will be prompted for the password you created earlier. Once entered the certificate request will be saved as specified (C:\somename.csr if you used the above command).
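Before uploading the request, it's worth confirming the SAN entries actually made it into the CSR. A quick verification sketch using the same keytool (look for the SubjectAlternativeName extension in the output):

```
keytool -printcertreq -v -file C:\somename.csr
```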
  9. Use this csr to generate the certificate. Upload the request to your certificate authority and download the signed certificates. The files you will require for the next step are as follows:
    1. root certificate
    2. intermediate certificate (if available)
    3. issued certificate
  10. All of these files can be in either .cer or .crt format. If you have been given a *.p7b file, don’t panic; this can be opened to show the issued certs. In the example below you can see only the root and issued certificate are available.
  11. Right click the certificates one by one and choose All Tasks –> Export. Use the default option as pictured below to export the individual certs as *.cer files.
  12. Once you have your 2 or 3 certificates, head back to the command prompt. It’s now time to import your root, intermediate (if you have one) and issued certs.
  13. First import the root certificate:
    keytool -import -alias root -keystore C:\mykeystore.jks -trustcacerts -file C:\root.cer
  14. Next the intermediate:
    keytool -import -alias intermed -keystore C:\mykeystore.jks -trustcacerts -file C:\intermediate.cer
  15. And finally the issued SAN cert:
    keytool -import -alias tomcat -keystore C:\mykeystore.jks -trustcacerts -file C:\actual.cer
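With all three imports done, you can sanity-check the keystore before handing it to Commvault; the root and intermed entries should be listed as trustedCertEntry, and tomcat as a PrivateKeyEntry:

```
keytool -list -v -keystore C:\mykeystore.jks
```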
  16. You now have a keystore (in this case “mykeystore.jks”) which can be used by Commvault to secure its web console traffic. To make Commvault aware of this file you’ll need to copy it into the Program Files/Commvault/ContentStore/Apache/Conf folder on the web server. Once the file is copied, stop the “Commvault Tomcat Service” using Commvault Process Manager.
  17. If you are using a Commvault version prior to v11 SP9 you’ll need to refer to the official doco here; if not, carry on…
  18. Using a text editor (such as Notepad++) edit the “server.xml” file. It’s wise at this point to take a copy of the file first in case things go pear-shaped.
  19. Find the line containing “<Certificate certificateKeystoreFile=” and edit the path and password to match your keystore file:
  20. Ensure the path is correct, if you placed the file in the conf folder as instructed, the path should be “conf/mykeystore.jks”.
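For reference, the relevant section of server.xml ends up looking something like the sketch below (Tomcat 8.5-style connector; the port, the other connector attributes and the real keystore password will be specific to your install):

```xml
<Connector port="443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" scheme="https" secure="true">
    <SSLHostConfig>
        <Certificate certificateKeystoreFile="conf/mykeystore.jks"
                     certificateKeystorePassword="YourKeystorePassword"
                     certificateKeyAlias="tomcat" />
    </SSLHostConfig>
</Connector>
```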
  21. Start the Tomcat service using the Commvault Process Manager, give the web server a couple of minutes to start and then browse to the server using one of your specified SAN aliases :-).


Commvault v11 SP10 – New Features

Commvault SP10 is out and the new features deserve a mention:

Server Retirement Workflow

This workflow, once downloaded and installed; will handle the following retirement functions:

  • Disables all activity on the client
  • Updates audit notes
  • Sets the job retention time period
  • Releases the client license
  • (Optional): Trigger an approval email

More info here

Data Encryption Configuration for Global Policies

This is extremely useful and ensures all data deduplicated using the global deduplication (or global secondary) policy will be encrypted. Prior to SP10 the encryption had to be specified at either the storage policy copy or client level.

More info here

SQL Server agent support for SQL 2017 on Windows & Linux

Supported Linux distros are:

  • Red Hat Enterprise Linux 7.3 or 7.4 Workstation, Server, and Desktop
  • SuSE Linux 12 SP2 Enterprise Server
  • Ubuntu 16.04 LTS

All supported Windows versions for SQL 2017 will work. The SQL Management Studio Plug-In is also supported for SQL 2016 & 2017.

“Proxyless” Intellisnap Backups

Intellisnap backups are becoming more and more relevant considering the ever increasing size of virtual machines. It is not always practical to perform daily disk backups of multi-TB VMs. In previous service packs intellisnap backups required a proxy ESXi host in order to mount the hardware snapshot for backup copy. SP10 has introduced the process of retrieving the VMFS and CBT information during the snapshot operation. This adds a small time penalty to the snapshot operation but, considering the amount of time saved during the proxy's mount and rescan operations, it is inconsequential. For backup copy operations the snapshot is mounted directly to the VSA proxy; File System APIs are then used to stream a copy to disk.

More info here

SP10 can be obtained now, or via the CommCell console after January 15th.

For more information and to read about additional new features, click here.

Commvault Mailbox backup using the new pseudoclient architecture

* This process has changed significantly since the release of v11 SP12. I’ll be writing an updated blog post once I’ve had a chance to play with the revised architecture. 

In a previous post I detailed the process for implementing O365 mailbox backups using the “Classic” mailbox agent. Unfortunately support for RPC over HTTP will be deprecated on the 31st October 2017; this makes it necessary to migrate any O365 mailbox backups to the new mailbox agent. Commvault’s support site shows the following warning when viewing the documentation relating to the classic agent and O365:

“On October 31, 2017, Microsoft deprecates RPC over HTTP for Office 365. For more information, see the Microsoft support article: “RPC over HTTP deprecated in Office 365 on October 31, 2017”.
You can no longer use Office 365 with Exchange for the Exchange Mailbox (Classic) Agent or OnePass for Exchange Mailbox (Classic).
We recommend that you transition to the Exchange Mailbox Agent. The Exchange Mailbox Agent uses EWS for archiving instead of MAPI. With EWS, the archiving throughput is increased.
For information, see Transitioning to the Exchange Mailbox Agent.”

The new agent does promise better throughput than the legacy method; however, it is worth noting that additional licensing may be required. Licensing is consumed per mailbox as opposed to the previous capacity-based (or in some cases per-proxy) model. If you do not currently have this included you will get 30 days of evaluation licensing.


The new architecture has some important differences when compared to the legacy “Classic” agent. The following prerequisites are necessary:

  • SP9
    • This document requires features only introduced in SP9
  • Index Server
    • This can be either interactively installed or pushed from the CommCell Console. The package is Index Store.
    • This should be visible to each of the mailbox proxies allowing index data to be shared.
    • You can use only one index server per Exchange Mailbox client. You must create an index server for each Exchange Mailbox client in your CommCell environment.
      • If you are running the solution as an MSP it is possible to use 1 mailbox index for multiple clients, however this is based on the assumption that mailboxes cannot migrate between clients. More details on the additional setting required here.
    • This should ideally be a different server from your MediaAgent; however, for smaller clients (500 mailboxes or less) it is possible to combine the two.
    • The CommVault recommended sizing for the Index Server is significant. Based on up to 500 million messages (of approx 150KB each), the following server specifications are advised for an index server without content indexing:
      • 16 cores
      • Index storage space of 3 to 5 percent of application size
      • 48GB RAM
      • Minimum 800 IOPS, recommended 1600 IOPS
    • As with previous index node hardware prereqs; I would advise starting small and scaling up as required.
  • Job Results directory – This must be a network share visible to all mailbox proxies. If you have multiple proxies they are used in a round-robin order and each will require access to the Job Results directory.
  • Archiving, cleanup, and retention policies
    • Policies around mailbox retention have changed. Even if you are not using the archiving & cleanup features of CommVault, you will want to pay close attention here.
    • Retention (Primary Copy) has been moved away from the storage policy and into the retention policy. The Exchange Mailbox Agent uses the retention rules that are defined in the Exchange retention policy. The agent does not use the retention rules that are defined in the storage policy.

Configuring the Index Server

Index server setup is relatively easy. The following steps will have it ready:

  • Install the Index Store binaries using either push or interactive install.
  • From the CommCell Console, Right click Index Servers and select New Index Server.
  • Give the Index Server a meaningful name (in the Cloud Name field), assign it to a storage policy (optional) and specify the index directory. Ensure the Index directory is a properly sized dedicated drive.
  • On the Roles tab, add the Exchange Index role.
  • On the Nodes tab, be sure to include your new installed index node.

Click OK and the index directories will be populated; Index Server creation is complete.

Configuring the Proxy (or proxies)

This is similar to my previous post Backing up Office365 Mailboxes with CommVault however, as there are some important differences I have included the steps below.

The proxy client is used to connect to O365 and use its installed Outlook client to stream messages back to the ContentStore. Ensure the following prerequisites are available/in place before proceeding.

  • Windows service account/Office365 Account
    • This account should be synced between o365 & the local Active Directory
    • It should be a local admin account on the proxy VM
    • It should be a global administrator on o365
  • Office 2013 x64 SP1 or above + Updates
  • If you are using a hybrid (On-prem & Online) you must use a separate proxy for both. This guide is aimed at an online-only deployment.
  • dotNET framework 3.5
  • Disable UAC

Connect to Office365 and apply permissions to mailboxes. Log onto the proxy VM using the Windows service account/Office365 Account mentioned above. Using an elevated powershell prompt run the following commands:

Set-ExecutionPolicy RemoteSigned
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session

This will allow you to run powershell commands on your exchange online environment.

Apply Full access rights to all mailboxes for the service account you will be using for mailbox backup. If you are using more than one proxy, you will need more than one service account.

Get-Mailbox -ResultSize unlimited | Add-MailboxPermission -User "<service_account>" -AccessRights FullAccess -InheritanceType all -AutoMapping: $false
  • Note: These permissions are applied directly and will not affect any mailboxes added after this command is run. I would recommend a scheduled task to run the permissions application on a regular basis to ensure no mailboxes are missed.  To do this you will need a way to securely store the credential for use in the script, details here.
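A minimal sketch of such a scheduled script is below. It assumes the credential was previously saved with Export-Clixml by the same Windows account that will run the task (Import-Clixml can only decrypt it under that account), and svc_cvbackup is a placeholder for your backup service account:

```powershell
# Load the stored credential (created earlier with: Get-Credential | Export-Clixml C:\Scripts\o365cred.xml)
$cred = Import-Clixml -Path 'C:\Scripts\o365cred.xml'

# Connect to Exchange Online
$session = New-PSSession -ConfigurationName Microsoft.Exchange `
    -ConnectionUri https://outlook.office365.com/powershell-liveid/ `
    -Credential $cred -Authentication Basic -AllowRedirection
Import-PSSession $session | Out-Null

# Re-apply FullAccess for the backup service account to every mailbox
Get-Mailbox -ResultSize Unlimited |
    Add-MailboxPermission -User 'svc_cvbackup' -AccessRights FullAccess `
        -InheritanceType All -AutoMapping:$false -ErrorAction SilentlyContinue

Remove-PSSession $session
```

Register the script with a daily trigger in Task Scheduler, running as the same service account that exported the credential.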

The following command needs to be run to allow impersonation permissions to the service accounts.

New-ManagementRoleAssignment -Name:<NameOfAssignment> -Role:ApplicationImpersonation -User:<service_account>
  • NameOfAssignment = A unique name for the assignment
  • service_account = The Exchange Online administrator service account

It is not necessary for the powershell session to remain open during the backup process. Once finished setting permissions you need to close the powershell session using the following command. This will avoid using up the sessions available to you.

Remove-PSSession $Session

You can now configure the Outlook profile:

  • Create the Outlook profile using Control Panel – Mail. Name the profile as you see fit (try to keep it simple i.e. Outlook – you will need to know this later)
  • Ensure the profile doesn’t use cached mode. Once you have created the profile, start Outlook and when prompted for credentials, ensure the “Remember Credentials” option is selected.
  • You now need to ensure Outlook uses RPC/HTTP as opposed to MAPI/HTTP to connect. Using regedit, navigate to HKEY_CURRENT_USER –> Software –> Microsoft –> Exchange. Right click Exchange and choose New DWORD.
  • Name the new DWORD MapiHttpDisabled and give it a value of 1.
  • Open Outlook and use Ctrl+Right click to select the Outlook icon in the taskbar. Click Connection Status.
  • The Protocol column in the Outlook Connection Status window should read “RPC/HTTP”. If it doesn’t, double check the previous registry change and restart Outlook.
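If you're preparing several proxies, the same registry change can be scripted from an elevated command prompt rather than made through regedit (identical key and value to the steps above):

```
reg add "HKCU\Software\Microsoft\Exchange" /v MapiHttpDisabled /t REG_DWORD /d 1 /f
```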

The Commvault Client Manager & Commvault Communications Service services need to be running under the same service account you are currently logged on with (The account with local admin and o365 privileges). Open Services.msc, adjust the Log On parameters as shown below for each service and then restart both services.


Setting up the new Client

From the CommCell Console, Right click the CommCell name and create a new mailbox client as shown below.


This starts the wizard for creating a mailbox pseudoclient. Just as with VMware pseudoclients, this creates a logical client which is backed by one or more physical proxies, allowing for load balancing and scalability.

General Tab


The fields required are as follows:

  • Client Name – Whatever you want
  • Storage Policy – Where the protected data will be stored
  • Index Server – As configured earlier
  • Recall Service – Used for recall of mailbox items. Web server must be accessible from the proxies.
  • Job Results – A CIFS share accessible to all proxies.


Add one or more configured proxies here. These must have Outlook & the mailbox client installed in addition to all other customisation covered earlier.

Exchange Server/Azure AD Settings

Details of any on-prem Exchange or Azure AD subscriptions should be entered here. If you don’t have on-prem Exchange or Azure AD, these details aren’t mandatory.

Service Account Settings


The fields required are as follows:

  • Email Address – Global Administrator Account for o365
    • For this example the account is synced with the local AD. You can view the accounts in the Admin portal.
  • Username – The same account as above but in the DOMAIN\Username format
  • Password – The password for the above account.
  • Service Type – Exchange Online
  • Use Static Profile = true
  • Profile Name = profile created earlier

Configuring the Policies

The four new policy types are as follows:

  • Archiving – Archive is the new Backup. This policy dictates what messages will be protected, it has no effect on stubbing.
  • Cleanup – If you are archiving, this is where it is configured.
  • Retention – Primary Copy retention is configured here and will override any retention settings configured in the storage policy. Secondary copies will still adhere to the Storage Policy values.
  • Journal – The new compliance archive. Use this for journal mailboxes.

Policies are configured under Policies, Configuration Policies, Exchange policies as shown below:


Only configure the policies you need, for a standard mailbox backup (no archive) setup, your policies listing may look like this:


Subclient Configuration

As with the previous steps, this area of the configuration has had a complete overhaul. I will be associating the mailboxes based on AD Groups, so ensure the AD username and password are configured correctly. NOTE: If you are transitioning from an existing classic mailbox agent, you must follow the transition steps here.

From the Exchange mailbox pseudoclient; navigate to Exchange Mailbox then User Mailbox.


At the bottom of the right hand side of the screen, select Auto Discover Associations.


Right click in the white space above and choose New Association then AD Groups.


Click Configure then Discover, and accept the warning (Yes). Select the Group(s) you want included in this Association and click OK. These will be configured with the same policies (Archive, Cleanup etc). Select the Policies tab and select a policy for each; Cleanup is not necessary if not stubbing. Remember: Retention concerns only the primary copy; secondary copies are retained according to the storage policy config.

Once you’ve selected your policies click OK. Click Mailboxes at the bottom of the screen to see what mailboxes have been discovered.

You are now in a position to test the mailbox backups. Select a mailbox from the Mailbox tab of the User Mailbox backup set. Right click and choose Archive. Keep in mind that all archive (backup) operations are incremental. If all goes well, feel free to schedule as you see fit.

Protecting SharePoint Online with Commvault

CommVault has the ability to protect SharePoint Online. To configure it you will need the following.

  • Commvault v11 SP9 (SP8 works but SP9 is revamped & recommended)
  • A proxy client (cannot be the same client used to protect mailboxes)
  • AD service account with admin rights to the proxy VM
  • A Global Administrator account for Office365
    • This must also be a Site Collection Administrator
      • This can be done manually or via a script
  • An Azure Storage account
  1. Create a proxy VM with internet or direct access to SharePoint Online. Server 2012R2 seems to be the safest option compatibility-wise (at the time of writing). Update, domain join & add the AD service account to the local administrators group.
  2. Ensure Microsoft .NET Framework 4.5, and PowerShell 3.0 are installed
  3. Install the SharePoint agent, this can be done either interactively or pushed via the CommCell console.
  4. Log onto the proxy VM with your AD service account. Install the SharePoint Online Management Shell: navigate to C:\Program Files\Commvault\ContentStore\Base (or wherever the SharePoint agent is installed) and run spoms.msi
  5. Using the CommCell console, right click on the SharePoint client under the proxy and click Properties. Change the account to use the service account with admin rights to the proxy VM.
  6. Back on the proxy VM, run the following command from an elevated command prompt (navigate to the Commvault\ContentStore\Base directory first).
    CVSPWebPartInstallerLauncher.exe /registerassemblies -vm Instance001
  7. Restart the Commvault services on the Proxy VM.
  8. Create the o365 backup set. From the CommCell console, right click on the SharePoint Server client beneath the proxy and choose Create New BackupSet. Select Office365 as the ‘Document Level’.
  9. Right click on the new backup set and select properties. Use the Office 365 tab to enter the credentials and Tenant admin site URL.
    1. Login Credential username should be entered in email format.
    2. Tenant Admin site URL should be in the format https://<tenant>-admin.sharepoint.com
    3. Azure Credentials refer to the storage account created earlier. Note this requires specific Azure subscriptions and is used during the restore process only.
  10. You can now configure the subclient content & assign a storage/schedule policy; splitting the subclients can help with throughput. If you experience failures, check that the global administrator account has Site Collection Administrator rights to each of the site collections. This can be done manually via the SharePoint admin center or via a PowerShell script (however I haven’t tried the script at the time of writing).

Understanding One-Way Firewalls with CommVault

Networking is by far my least skilled area, and hence getting firewall rules to work in CommVault has always been a pain. The most common cause of problems is usually outside CommVault, in the network layer itself; however, ensuring the CommVault config is correct in the first place is a good start.

The most common use for a one-way firewall is for clients outside your primary network, whether on your DMZ or connected via the internet. CommVault v11 introduced “Firewall Topologies” which go a long way to simplifying the process. With firewall topologies you can choose either one-way, two-way or via-proxy as template configurations, which in most cases will work fine. This post aims to highlight the manual components of the firewall configuration in an effort to understand how the one-way firewall is configured, helping to troubleshoot it if necessary.


This is one area where I believe better terminology could’ve been used; it makes sense once you’ve got your head round it, but it can lead to confusion. A good way to translate the terms is as follows:

  • BLOCKED = Cannot Initiate a connection to this object/group
  • RESTRICTED = Can initiate a connection on specific ports

Examples of this used in Client groups are shown below:



Example Setup

Port Forwarding

In the above example CommVault has been configured to allow communications to be initiated by Laptop group members over specific ports. As there is only one public IP available; the CommServe and MediaAgents are allocated specific ports, which are translated by the NAT firewall into the actual ports & IPs used internally.

Configuration Steps

These rules can be configured on a per-client basis; however, it’s simpler to use the client groups. Ensure your infrastructure (CommServe/MediaAgents) and external (in this case “Laptop Clients”) groups are configured.

  • Enter the properties of the Laptop Clients group and choose ‘Network’. Select ‘Configure Firewall Settings’ followed by ‘Advanced’.
  • Click ‘Add’. Select the ‘Infrastructure’ client group and assign it the state of BLOCKED.


  • We will now specify how members of this group will initiate communication with systems on the inside of the firewall. On the ‘Outgoing Routes’ tab click ‘Add’. Select the CommServe as the remote client & ‘Via Gateway’ as the Route. Specify a gateway hostname (or IP). The port should be configured on your network hardware to forward traffic to the CommServe.


  • Repeat the same process for your MediaAgent. Ensure the tunnel port is different from the CommServe and that your network hardware is configured to translate the external port to the one used internally by the MediaAgent. Once configured; click OK 3 times to return to the main console.
  • Enter the properties of the infrastructure group & choose ‘Network’. Select ‘Configure Firewall Settings’ followed by ‘Advanced’.
  • Click ‘Add’. Select the ‘Laptop Clients’ client group and assign it the state of RESTRICTED.


  • Click OK twice to return to the console. Right click the Infrastructure group –> All Tasks –> ‘Push Firewall Configuration’. Repeat for the ‘Laptop Clients’ group.

That’s it! Now any client configured in the Infrastructure group will not try to open a communication tunnel to the ‘Laptop Clients’ group, and the ‘Laptop Clients’ group will initiate tunnels on startup with the Infrastructure servers on specific ports.

When installing a new client outside of your network, ensure you use the external address and port of the CommServe. You will be given the option to select a client group, at this point select the correct group (in this case ‘Laptop Clients’) and the firewall configuration will be pushed to the client.

Backing up Office365 Mailboxes with CommVault

If litigation hold isn’t enough of a retention method for your Office365 mailboxes, Commvault v11 gives you the option of mailbox backups using a traditional mailbox agent. At the time of writing this process has much room for improvement; it’s slow and fairly cumbersome to configure; however, it does work.

SP8 also introduces the new pseudoclient style agent for mailbox backups, where multiple proxies can be placed under a single client (similar to the virtual server agent) however; as this is still early release this post focuses on the traditional mailbox agent method.


Proxy Client

The proxy client is used to connect to O365 and use its installed Outlook client to stream messages back to the ContentStore.

  • Windows service account. Needs to be local admin on the proxy.
  • Office 2013 x64 SP1 or above + Updates
  • If you are using a hybrid (On-prem & Online) you must use a separate proxy for both. This guide is aimed at an online-only deployment.
  • dotNET framework 3.5
  • Disable UAC

Commvault Books Online provides the following examples when sizing how many proxies to use:

Example 1
If your Office 365 with Exchange environment includes 300 mailboxes:

  • Create 5 subclients.
  • Assign 60 mailboxes to each subclient.
  • Use one proxy server.
  • Use one Online Service account.

Example 2
If your Office 365 with Exchange environment includes 3000 mailboxes:

  • Create 30 subclients.
  • Assign 100 mailboxes to each subclient.
  • Use two proxy servers (15 subclients per proxy server).
  • Use two Online Service accounts (one per proxy server).


Office365 requires a few changes to allow mailboxes to be accessed and protected.

  • Global Administrator account with access to all mailboxes
    • Update: Make sure you limit the number of “special” characters in the password. Stick with @ and # if possible, as of SP8 some of the others cause issues.
  • Connection initiated from proxy via powershell – details here



Proxy Client

  • Log on using the windows service account.
  • Install the OS & Outlook x64. In this example I’m using Server 2012R2 with Outlook 2013 64bit SP1. Ensure all updates are installed before continuing. For supported OSes and Outlook editions click here; note: at present “click to run” editions of Office are not supported.
  • Install .NET framework 3.5
  • Connect to Office365 and apply permissions to mailboxes. Using an elevated powershell prompt run the following commands:
Set-ExecutionPolicy Remotesigned
$UserCredential = Get-Credential
$Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
Import-PSSession $Session

This will allow you to run powershell commands on your exchange online environment.

  • Apply Full access rights to all mailboxes for the service account you will be using for mailbox backup. If you are using more than one proxy, you will need more than one service account.
Get-Mailbox -ResultSize unlimited | Add-MailboxPermission -User "<service_account>" -AccessRights FullAccess -InheritanceType all -AutoMapping: $false
  • If you want to apply permissions to a specific mailbox you can use:
Add-MailboxPermission -Identity "<mailbox_name>" -User "<service_account>" -AccessRights FullAccess -InheritanceType all -AutoMapping:$false

It is not necessary for the powershell session to remain open during the backup process.  Once finished setting permissions you need to close the powershell session using the following command. This will avoid using up the sessions available to you.

Remove-PSSession $Session
  • Create the Outlook profile using Control Panel – Mail. Name the profile the same as the O365 account alias (before the @).
  • Ensure the profile doesn’t use cached mode. Once you have created the profile, start Outlook and when prompted for credentials, ensure the “Remember Credentials” option is selected.
  • You now need to ensure Outlook uses RPC/HTTP as opposed to MAPI/HTTP to connect. Using regedit, navigate to HKEY_CURRENT_USER –> Software –> Microsoft –> Exchange. Right click Exchange and choose New DWORD.
  • Name the new DWORD MapiHttpDisabled and give it a value of 1.
  • Open Outlook and use Ctrl+Right click to select the Outlook icon in the taskbar. Click Connection Status.


  • The Protocol column in the Outlook Connection Status window should read “RPC/HTTP”. If it doesn’t, double check the previous registry change and restart Outlook.

CommVault Mailbox Agent

The CommVault mailbox agent (traditional, not archiver) can be pushed from the central CommServe. Before doing so, you’ll need to add the “bEnableExchangeOnline” additional setting to the CommCell (top level) as shown below:

  • Right click on the CommCell name in the CommCell browser window and choose Properties.
  • On the additional settings tab; add the bEnableExchangeOnline key with a value of true


Once that’s done you’re ready to deploy the mailbox agent to your proxy VM (the same place you installed Outlook). To deploy, follow the instructions below:

  • Choose Tools from the CommCell ribbon and Select Add/Remove software.
  • Use the wizard to deploy the Exchange Mailbox software to the proxy, selecting defaults along the way. When prompted for exchange details, ensure the entries are blank & simply click Next.
  • You can monitor the installation progress from the job monitor.

Once the software has been installed; you can configure the proxy client as follows. Do this from the Commvault console, you may have to press F5 to refresh the clients view.

  • Right click the Exchange Mailbox (Classic) agent under the proxy client and choose Properties


  • Enter the Exchange Administrator account using the “Change Account” button higher up the page. This is the service account you used to log onto the proxy and configure the Outlook profile.
  • Select “Exchange Online”, click OK on the warning.
  • Enter the profile name used on the proxy.
  • Click the lower “Change Account” button. Enter the full SMTP address and password for the exchange online service account.
  • Click OK to return to the main CommServe screen. Right-click on the proxy client and choose Properties.
  • Click “Advanced” and select the “Additional Settings” tab. Add the following Keys:
    • nMailboxesperSession = 1
    • nRestartMAPIOnNetworkError = 1
    • nSkipUserImpersonation = 1
    • nExchangeOnlineOnly = 1


Disable User Impersonation

Now that the mailbox client & agent are configured, the logon user for the CommVault services on the proxy needs to be adjusted.

  • Log onto the proxy VM. Open services.msc from the Run command.
  • Adjust the Log On As properties for the following services to reflect the Windows service account:
    • Commvault Communications Service
    • Commvault Client Event Manager
  • If prompted to allow Log On As rights, click OK. Restart the services once complete.
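If you prefer to script the service account change, a hedged sketch from an elevated prompt is below. The internal service names (GxCVD/GxEvMgrC for Instance001) are typical Commvault names but are an assumption here; confirm the exact names in services.msc first, and substitute your own account and password:

```shell
# Set the Log On As account for the Commvault services (names assumed -
# verify in services.msc). Note: sc requires a space after each "=".
sc config "GxCVD(Instance001)" obj= "DOMAIN\svc_commvault" password= "YourPassword"
sc config "GxEvMgrC(Instance001)" obj= "DOMAIN\svc_commvault" password= "YourPassword"

# Restart the services for the change to take effect.
sc stop "GxCVD(Instance001)"
sc start "GxCVD(Instance001)"
```

Unlike the services.msc GUI, sc does not automatically grant the Log On As A Service right, so the GUI route is simpler if the account has never been used as a service account before.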


Add Content

Content is added via subclients in the same way as many other agents. As described in the prerequisites section, in order to get the best throughput from your mailbox backups, it would be wise to split the mailboxes into multiple subclients. In this example I have split the content into 4 subclients; in a real-life scenario I would recommend considerably more.


The subclients are split by first letter, resulting in any mailboxes with an alias starting with A-F being picked up by one subclient, any with their alias starting with H-K selected by another, etc. This is achieved by enabling “Auto Discover” on the defaultBackupSet properties as shown below.

Auto Discover

Once that’s done you can use wildcards (more detail here) to specify the content. The wildcards used for my H-K subclient are shown below:

Subclient Wildcard Content
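As an illustration only (check the wildcard documentation linked above for the exact syntax supported by your service pack), character-range patterns along these lines would select mailboxes by the first letter of their alias:

```
[a-f]*    A-F subclient: aliases beginning a through f
[h-k]*    H-K subclient: aliases beginning h through k
```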

Run Backup

Now that the content is configured you can run the first backup & schedule accordingly. As stated previously, if you have a lot of mailboxes this will take a while. Make sure you have sufficient subclients & proxies to suit the size of your environment. If you have multiple proxies, that means multiple accounts need to be created in O365. SP8 brings new functionality to mailbox backup which will be discussed in a later post; some simplification of the process would be a very good thing!

CommVault Partitioned Deduplication without shared storage

One of CommVault’s greatest assets is deduplication. On average, savings of 85% can be achieved when writing to disk; savings which can be seen not only in raw storage capacity, but also in data transfer times. In order to take full advantage of this technology, it’s necessary to understand and properly plan the CommVault architecture to best suit your requirements.


In order to achieve these data protection goals, hardware must be sized accordingly. CommVault provides a “Deduplication Building Block Guide” to aid in picking the correct configuration for your data protection needs. Sizing is based primarily on 3 factors:

  1. How many front end terabytes (FET) do you need to protect?
  2. How long (what retention) will the data be kept on disk?
  3. What kind of data are you protecting?

The topic of this post is sourced from point 1: how many front end terabytes (FET) do you need to protect? Front End Terabytes equates to the amount of data being protected, i.e. if you have 5 servers consuming 1TB each, that’s 5 FET.

Commvault “Standard Mode” sizing guidelines range from Extra Small to Extra Large, with the latter supporting 90-120TB (depending on the type of data). This means that potentially 120TB of data could be deduplicated down to 18TB or less. The issue is: what if you have more than 120TB of data? You could have multiple MediaAgents, each reducing their 120TB’s worth; but this would result in some data being duplicated across MediaAgents, as the MediaAgents have no way to reference what each other are protecting. This is where Partitioned Mode deduplication comes in.


With partitioned deduplication, up to 400 FET can be deduplicated across 4 MediaAgents. The storage savings gleaned from this are shown in the following table:


Front-End Terabytes (FET)   Partitioned deduplicated size,   Equivalent non-partitioned size, assuming 85%
                            assuming 85% (TB)                per node & duplication across nodes (TB)
100                         15                               15
200                         30                               60
300                         45                               135
400                         60                               240
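The model behind these figures (my reading of the table, not an official Commvault sizing formula) can be sketched with some quick arithmetic: partitioned storage is 15% of the FET across the whole grid, while the non-partitioned worst case has each 100TB node holding its own deduplicated copy of the common blocks:

```shell
# Assumed model: partitioned = FET * 15%; non-partitioned = nodes * FET * 15%,
# where each 100TB MediaAgent independently stores the deduplicated data.
for fet in 100 200 300 400; do
  nodes=$((fet / 100))
  part=$((fet * 15 / 100))
  nonpart=$((nodes * part))
  echo "${fet} FET -> partitioned: ${part}TB, non-partitioned: ${nonpart}TB"
done
```

At 400 FET the gap is 60TB versus 240TB, which is where partitioned deduplication earns its keep.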

This works well, but in some situations the hardware requirements may not be within budget. The MediaAgents for a 4 node 400FET configuration are already highly spec’d; but factor in enterprise NAS shared storage and your costs can start to look scary. During a recent project the question was raised:

Can we get most of the benefits of partitioned deduplication without using expensive enterprise NAS hardware?

The answer is yes, and it’s not that difficult to achieve. Shared storage can be easily replaced with direct attached block-level storage. There are some restrictions which I will cover later; but to achieve load-balanced, globally deduplicated data protection, local or SAS attached storage (such as the Dell PowerVault MD series) will achieve the desired result.

Storage Data Flow

In the following example the storage architecture was shared among the MediaAgents as follows:


The simplified diagram above gives an explanatory view of how each MediaAgent accesses the multiple mount paths making up the storage.

  • Each Media Agent is connected to the SAS attached storage enclosure via 2 SAS connections. MPIO is configured on the MediaAgent.
  • The storage enclosure (in this case a Dell MD series) is aware of each of the MediaAgents, and its available storage is configured, formatted and divided amongst the connected servers.
  • The MediaAgents see their allocation of LUNs; these are formatted as NTFS (a 64k block size is strongly recommended) and configured as mount paths inside Windows.
  • The Windows mount paths are shared at the Windows level, using either a domain or local security account to restrict access accordingly.

MediaAgent Physical Connectivity

Each MediaAgent (in this case a PowerEdge R630) is equipped with the following connectivity:

  • On board 4 x 1GbE NIC
  • 2 x 4 port 10GbE Network Adapter
  • 2 port 12Gbps SAS HBA
  • 1 x Out of Band (iDRAC)

The primary LAN connectivity is achieved over the 10GbE cards. LACP is configured between port 0 of each card, allowing both load balanced multi stream traffic, in addition to accounting for card failure. These will be connected to a routable VLAN to allow backup traffic to reach the MediaAgents.

A second LACP pair is configured on port 1 of each 10GbE card. This forms the GRID connectivity, creating a dedicated link for the MediaAgents to communicate with each other directly. Splitting traffic across the cards will minimise the impact of a card failure. Grid configuration is discussed later in the post. These should be connected to a non-routable VLAN; traffic on this VLAN will only be used for inter-MediaAgent communication. It is a good idea to prevent this connection from registering with DNS.

The 2 SAS connections are split across the 2 storage controllers. MPIO is configured at the Windows layer to avoid duplicate LUNS appearing in disk manager (and provide multi pathing functionality).

MediaAgent software configuration


Each MediaAgent will have a number of LUNs presented to it. These should be formatted as NTFS with a 64k block size and presented as mount paths to the operating system. My method for doing this is to carve a small partition off one of the system drives, give it a drive letter and create an “MP” directory under that. Each mount path can then be mapped to a subdirectory of that path. For example:

  • M:
    • MP
      • MP01
      • MP02
      • MP03
      • …..

Note that the latest version of CommVault does not require mount paths to be divided into 2-8TB chunks. Administration can be reduced by using larger, easier-to-deploy mount paths.

Each of the Mount Paths should be published as hidden shares, with the share security restricted to a MediaAgent service account.
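The format/mount/share steps can be scripted per mount path. A hedged sketch using standard Windows commands from an elevated prompt; the drive letter, volume GUID, share name and account are placeholders to substitute with your own values:

```shell
# 1. Format the LUN as NTFS with a 64k allocation unit size (placeholder E:).
format E: /FS:NTFS /A:64K /Q /V:MP01

# 2. Mount the volume under the MP directory rather than keeping a drive
#    letter (find the volume GUID with "mountvol" run on its own).
mountvol M:\MP\MP01 \\?\Volume{xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx}\

# 3. Publish it as a hidden share ($ suffix) restricted to the MediaAgent
#    service account (placeholder svc_commvault).
net share MP01$=M:\MP\MP01 /GRANT:svc_commvault,FULL
```

Repeat for each mount path (MP02, MP03, and so on) on each MediaAgent.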

Disk Library


Each mount path from each MediaAgent will form the disk library. The image above (from the expert storage configuration screen) shows a mount path local to one MediaAgent, with alternative paths via CIFS from the remaining 2 MediaAgents. When viewed from the mount path properties under the disk library, it shows as follows:


REMEMBER: The IP address used for accessing the CIFS path is the GRID IP. There’s no point using the LAN network when you have dedicated links at your disposal. The table below details the mount paths for a 6 mount path disk library.

Name      MediaAgent   Local Mount Path   CIFS Path     Access
MA1-MP1   MA1          M:\MP1             \\MA1\MP1$    RW: [local svc account]
MA1-MP2   MA1          M:\MP2             \\MA1\MP2$    RW: [local svc account]
MA2-MP1   MA2          M:\MP1             \\MA2\MP1$    RW: [local svc account]
MA2-MP2   MA2          M:\MP2             \\MA2\MP2$    RW: [local svc account]
MA3-MP1   MA3          M:\MP1             \\MA3\MP1$    RW: [local svc account]
MA3-MP2   MA3          M:\MP2             \\MA3\MP2$    RW: [local svc account]

Inter-Host Networking

To ensure any inter-MediaAgent communication uses the dedicated NIC team, it’s necessary to create a number of DIPs (Data Interface Pairs). This can be done via the Control Panel; follow CommVault’s documentation if you’re unsure. In the scenario of 3 MediaAgents requiring dedicated communication paths, the following example DIPs would be used:

Client1 NIC        Client2 NIC
MA1 GRID(Team)     MA2 GRID(Team)
MA1 GRID(Team)     MA3 GRID(Team)
MA2 GRID(Team)     MA1 GRID(Team)
MA2 GRID(Team)     MA3 GRID(Team)
MA3 GRID(Team)     MA1 GRID(Team)
MA3 GRID(Team)     MA2 GRID(Team)

Partitioned Global Deduplication

Now that all MediaAgents can see the same disk library, it is possible to create a partitioned global deduplication database (GDDB). For reference; the following image represents a partitioned deduplication database across 2 nodes:


To configure the partitioned global deduplication policy:

  1. Right-click Storage Policies and choose “New Global Deduplication Policy”


2. Name the policy appropriately, and click Next.


3. Select the disk library created earlier & click Next.

4. Select one of the MediaAgents with paths to that disk library and click Next (we’ll ensure those paths are used properly later).

5. Check the box to use partitioned global deduplication.

6. On the next page you will configure the number & location of the partitions. The number should match the number of MediaAgents sharing the disk library. It is best practice to use a similar path on each MediaAgent to avoid confusion.


7. There is no need to adjust the DDB network interface, as the DIPs created in the previous section will ensure traffic is routed accordingly. Click Next, review the settings and click Finish.

8. Right-click the primary copy of the new GDDB and ensure the GDDB settings appear as follows:


These settings ensure that:

  1. You are using a transactional GDDB (far better recoverability in the event of a dirty shutdown)
  2. Jobs will not run whilst one of the MediaAgents (and part of the library) is offline. It’s possible the jobs may still run, but the errors generated by trying to access unavailable blocks would be more trouble than it’s worth.


It’s all very well having multiple paths configured, but it would be less than efficient if all the traffic was going through one MediaAgent. Any primary copy associated with the previously created GDDB will automatically have multiple paths; however, it will favour the first MediaAgent to access those paths. To help with traffic distribution, selecting your multi-pathing options properly is essential.


Round Robin between data paths will ensure traffic is distributed equally across the MediaAgents.

Restrictions & Disadvantages

There is one primary area that suffers as a result of not using a true NAS disk library target: Redundancy. In the event that one of the MediaAgents goes offline:

  • You will lose the partition of the global DDB associated with that MediaAgent. The DDB can be configured to continue whilst n partitions are offline; however, you will duplicate any blocks reprotected (and owned by the offline partition) during the outage.
  • You will lose the mount paths of the disk library to which the MediaAgent is directly connected. This can be tolerated for backups, but will very likely cause issues during a restore.

In reality most enterprise environments will have 4-hour replacement warranties in place, meaning you should be able to resolve most issues within a 24-hour period. This of course is still longer than the zero downtime associated with true NAS shared storage.

For most people this isn’t an issue; if they miss out on backups for one night this may be seen as an acceptable risk. A notable exception is if you are using the system for archiving (file and/or email stubbing) or compliance search (typical in legal/medical industries). If these more production-critical workloads are used in your environment, it may be worth forking out the extra for shared storage.