Activate – Sensitive Data Analysis

Following on from my previous post on Commvault File System Optimization, Sensitive Data Analysis is the second feature pack under Commvault Activate licensing. The Commvault Complete license guide describes Sensitive Data Governance as a feature that “Extends indexing and analytics into content and provides details on data that may contain information that could be considered sensitive and may need further attention”.

Licensing for these features is covered by the aforementioned license guide. Sensitive Data Analysis is currently licensed per TB for file/VM data, or per user account analysed for on-premises or cloud mailboxes.

Sensitive Data Analysis enables organisations to seek out PII (Personally Identifiable Information) across multiple data types throughout their organisation. Examples of supported data types include:

  • Exchange User Mailbox
  • File System
  • Gmail
  • Google Drive
  • Microsoft OneDrive for Business
  • Microsoft SharePoint Server
  • Microsoft SQL Server databases
  • Oracle databases
  • Virtual Machines (when Collect File Details is enabled)

Examples of PII (Personally Identifiable Information) include:

  • Credit Card Numbers
  • Email Addresses
  • Driving Licenses
  • Hostnames
  • Social Security Numbers
  • IP Addresses
  • NHS Number

The ability to identify where PII sits in your organisation can aid compliance with regulations such as the EU’s General Data Protection Regulation (GDPR).
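To make “entity detection” a little more concrete: tools in this space match content against patterns for each entity type. The following is a deliberately simplified illustration using plain regular expressions — the file name and patterns are invented for this sketch, and Commvault’s actual detection logic is considerably more sophisticated:

```shell
# Hypothetical sample file standing in for scanned content
printf 'owner: jane.doe@example.com\nserver: 192.168.10.25\n' > sample.txt

# Naive patterns for two of the entity types listed above
grep -Eo '[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}' sample.txt  # email address
grep -Eo '([0-9]{1,3}\.){3}[0-9]{1,3}' sample.txt                     # IPv4 address
```

Production-grade detection also validates candidates with checksums (for example the Luhn check for credit card numbers, or the modulus 11 check for NHS numbers) to keep false positives down.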

System Requirements

  • CommServe
    • V11 SP18 for the examples given here
  • Server with the Index Store, Content Analyzer, and Web Server packages installed.

    • System requirements for 20 TB of file data (40 million objects) are as follows:
      • 16 vCPU cores
      • 32GB RAM
      • 1TB Index volume (SSD class)
    • System requirements for 15 TB of mailbox data (150 million objects or 2,000 mailboxes) are as follows:
      • 16 vCPU cores
      • 32GB RAM
      • 6TB Index Space
      • 1,600 IOPS for the index drive
    • Full range of sizing options can be seen here.


  1. From the Java GUI, right-click the Index Servers group and click New Index Server.
  2. Specify a Cloud Name and location for the Index Data.
  3. Add the following roles:
    1. Data Analytics
    2. EDGE Drive
    3. Exchange Index (If Required)
  4. Select the server with the Index Store package from the Nodes tab.
  5. Click OK.
  6. You should now see the name of the server with the Index Store package installed, listed under the Content Analyzer Cloud computer group. It will have a “_ContentAnalyzer” suffix added.
  7. The next step is to add a domain. This is done from the Command Center. You may have already done this if you followed my previous post for File System Optimization. If not follow the official documentation here.
  8. Use the Guided Setup wizard to configure Sensitive Data Analysis. This can be done by clicking Guided Setup from the Command Center, clicking the Activate tab and selecting Sensitive Data Analysis.
  9. Create a Data Classification Plan. Select the Index Server you created earlier. Click Next.
  10. On the Content Search tab, select Enable and choose Metadata and content. Click Next.
  11. On the Content Analysis page, check Entity detection and select a few relevant entities. Only select the entities that are required; the more you select, the bigger the resource load. Select the Content Analyzer and click Next.
  12. Adjust the Advanced options (File Extensions, exclusions, File Sizes etc) if required and click Save.
  13. The next step is to create an Inventory. Inventories are the logical groupings of resources from your CommCell environment. Choose a Name and select the Index Server and Name Server created earlier. Click Save. 
  14. You now need to add a File Server (assuming this is what you want to analyze) to the Inventory. Navigate to Activate –> Inventory Manager. Select the “…” to the right of the inventory name and click Details. Click Add –> File Server.
  15. Enter the details for the file server and click Save.
  16. You can now add a project. From Activate –> Sensitive Data Analysis, create a new project.
  17. Add a File System data source to the project. Select the File Server you added to the Inventory earlier.
  18. On the configuration page, enter the credentials and paths as required. Click Finish.
  19. Collection should start immediately; you can view the current progress back on the project data sources page.
  20. Once the analysis has completed, navigate back to Activate –> Sensitive Data Analysis –> (Project Name) to see the results!

Example Results

Once your data has been analysed you start to get real insight into the levels of PII across your dataset. Some examples are shown below:

This shows a dashboard view of discovered sensitive data for a given dataset:


This shows results filtered by the exception “Accessible by everyone”:

This shows results filtered by files containing IP addresses (an example of PII)


The following options are available for discovered files:



Activate – File Storage Optimization

Commvault describes Activate as their “flagship product for turning information into action”. In recent service packs a lot of work has gone into the functionality and usability of the three main areas of this feature set:

  • File Storage Optimization
  • Sensitive Data Governance
  • Compliance Search and eDiscovery

With Forbes estimating that 175 zettabytes of data will be generated annually by 2025, it should come as no surprise that a significant sprawl of unstructured data can be seen across many companies’ file systems. File Storage Optimization aims to provide insight into that data and optimize for volume, availability, performance, and risk. This post will focus on File Storage Optimization.

Supported Data Sources

Multiple data sources are supported, with more being added all the time. At present the following data sources are supported:

  • File System (Agent or NAS data source)
  • SharePoint (from backup data only)
  • OneDrive
  • Google Drive (from backup data only)
  • Gmail (from backup data only)


  • Commvault V11 SP18
  • Activate License
  • Web Server Installed and configured
    • For a lab or small environment, you can make use of the existing CommServe web server. For larger environments, install both the Index Store and Web Server packages on a dedicated host.
  • Name Server configured in Commvault.
  • Index Store Package Installed on Index Node

Procedure – Guided Setup – File Storage Optimization

In order to analyse data and view the dashboards, you’ll first need to create a data classification plan and inventory. This is all configured using one of the Guided Setup wizards.

From the Command Center, select Guided Setup

Select Activate then Sensitive Data Analysis.


Select Use existing Index Server and give the plan a name.


On the Content Search page, check Enable and ensure only Metadata is selected. Click Next.


On the next page ensure Entity detection is unchecked. This is used for finding PII (Personally Identifiable Information) data and is not necessary for this guide. Click Next.


On the Advanced Options page, adjust the options as required. Keep in mind these settings are used for content indexing and entity extraction, so they won’t make much difference for FS optimization tasks. Exclude Paths will be adhered to, so adjust if needed. Click Save. Please note: I had issues here using Chrome; the Save button was unresponsive. I switched over to Edge and it all went through fine.

Inventories are the logical groupings of resources from your CommCell environment.

Enter a name of the Inventory you wish to create and select the Index server used earlier. Click Save.


Procedure – Adding a Client Data Source

From the Command Center, select Activate –> File Storage Optimization.


Click Add Client. Select the Plan and Inventory created earlier. Click Next.


Select the client(s) containing the data you wish to analyse. Click Next.

On the Select File Server page, select the source country and Type. Here you have two options:

  1. To use the data collected from a content indexing job or a backup job, click Analyze from backup.
  2. To crawl data from a local directory that is not content indexed or backed up, click Analyze from source, and in the Directory path box, enter the path to the data on the server you want to analyse.

For this example, we’ll use data that we have previously protected. Select Analyze from backup.


Click Finish.

An Analytics job should run within the next 24 hours; however, if you want to kick this off manually it can be done via the Java GUI by right-clicking the client and selecting All Tasks –> Run Analytics.


Viewing Reports

Once the Analytics job has run, you’ll be able to view the results via Activate –> File Storage Optimization. From here you have a range of dashboards as shown below:


Once you have selected a view, you can choose which clients (you can select multiple) you wish to view the analysis of:


Some example reports are shown below:

Size Distribution


Tree Size


In future posts we’ll go through the other areas of Activate: Sensitive Data Governance and Compliance Search & eDiscovery.




Protecting VM Workloads in Azure using Managed Identity authentication with Azure Active Directory

When protecting VM workloads in Azure you have two deployment options for Azure Resource Manager deployments:

  1. Traditional method of deployment with Azure Active Directory where you must set up the application and tenant. This is a less secure method of deployment and requires more parameters when configuring the pseudoclient in Commvault. This process is covered in my previous post.
  2. Managed Identity authentication with Azure Active Directory. This is a more secure method of deployment. Using this method ensures that your Azure subscription is accessed only from authorized Managed Identity-enabled virtual machines. It requires only the subscription ID and proxy details for the pseudoclient; however, the VSA proxy must be located on the Azure platform.

For this post we are going with the more secure option (2). The following prerequisites should be in place before you begin:

  • Azure Subscription
    • Take a note of the Subscription ID
    • User with Service Administrator capabilities.
  • Azure VSA Proxy/Proxies with Commvault software installed, connected to your CommServe
    • Deploy the VSA proxy and MediaAgent on virtual machines in the Azure cloud.
    • Deploy the VSA proxy on an Azure VM that is optimized for I/O intensive workloads to support faster backups. See Sizes for virtual machines in Azure for more information.
    • Enable Azure accelerated networking on the VSA proxy/MediaAgent machines in Azure. This step must be completed at the time of deploying the virtual machine.
    • Enable service endpoints for Microsoft Storage on the Azure virtual network subnet where the proxy and MediaAgent are connected. This will ensure that all network traffic from the proxy machine to the Azure storage account is securely flowing through the Microsoft Azure backbone network.
    • Commvault official documentation also recommends enabling CBT; unfortunately this was only available for unmanaged disks, which in most cases are not used. EDIT: This is now available for managed disks as of SP18.
  • MediaAgent with associated storage and (optional) GDDB.
  • Storage Policy configured
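The storage service endpoint prerequisite above can also be applied from the Azure CLI rather than the portal. A minimal sketch — the resource group, vNet, and subnet names are placeholders:

```shell
# Enable the Microsoft.Storage service endpoint on the subnet hosting the
# VSA proxy/MediaAgent so storage traffic stays on the Azure backbone
az network vnet subnet update --resource-group rg-commvault \
  --vnet-name vnet-commvault --name default \
  --service-endpoints Microsoft.Storage
```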


  1. From the Azure portal, go to Virtual Machines and select the VM on which you have the VSA installed. Record the Resource Group and Subscription ID.
  2. Click the Identity tab and adjust the System Assigned status to On.
  3. Click Yes at the prompt.
  4. Repeat steps 1-3 for any additional VSA proxies.
  5. Navigate to the Subscriptions area of the Azure portal.
  6. Select the subscription in which the VSA proxy VM resides.
  7. Click Access Control (IAM), then Add.
  8. Select Add Role Assignment.
    1. Role: Contributor (more restrictive permissions can be achieved by following the documentation here).
    2. Assign access to: Virtual Machine. In my case the MA and VSA are on the same VM; however, these would usually be separated for large deployments.
    3. Subscription: the subscription where the VSA resides.
    4. Select: the VMs with System Assigned status enabled should be listed here; select your VSA VM.
  9. Click Save.
  10. Open the Commvault console and add an Azure Virtual Machine client.
  11. Select Connect Using Managed Service Identity. Enter the subscription ID and the VSA proxy or proxies configured earlier.
  12. Click OK.
  13. You should now be able to configure subclients and associate them with your chosen storage policy. If you are protecting the virtual machines to BLOB storage, ensure the Virtual Network attached to your VSA and MA has a service endpoint configured.
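The portal steps above (enabling the system-assigned identity and granting it Contributor on the subscription) can also be scripted with the Azure CLI. A hedged sketch — the resource group, VM name, and subscription ID below are placeholders:

```shell
# Enable the system-assigned managed identity on the VSA proxy VM
az vm identity assign --resource-group rg-commvault --name vsa-proxy-01

# Look up the identity's principal ID and grant it Contributor on the
# subscription (use a more restrictive role in production)
principalId=$(az vm show --resource-group rg-commvault --name vsa-proxy-01 \
  --query identity.principalId --output tsv)
az role assignment create --assignee "$principalId" --role Contributor \
  --scope "/subscriptions/<subscription-id>"
```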



Protecting VM workloads in Azure


Commvault’s ability to protect workloads in public cloud is nothing new, however with the increased popularity of hybrid cloud architectures it makes sense to protect both on & off-premises workloads through a single management platform.

Block-level protection of Azure virtual machines has been available since v11 SP4, further enhanced by SP5 with the introduction of CBT (Changed Block Tracking) and by SP10 with the support for managed disks. At present CBT is available for unmanaged disks only but is roadmapped to support managed disks soon.




The following items should be in place prior to implementing this solution:

  • Commvault infrastructure/test lab installation. V11 SP14 or above.
  • Azure Subscription (keep in mind you have a limit of 4 vCPUs on a trial)
    • User credentials with Service Administrator capabilities, for logging in to your Azure account.
  • Resource Group created with:
    • Virtual Network Created
      • Azure Virtual Network configured with a Service Endpoint for Storage on the default subnet (or subnet used for the VSA VM)
    • Network Security Group allowing remote access and Commvault firewall ports access to Azure VSA. Typically this would be ports 3389 (RDP) and 8400-8403 (Commvault).
  • Azure Virtual machine with MediaAgent and Virtual Server Agent (VSA) software installed & connected to on-premises CommServe.
    • Connectivity verified with the “Check Readiness” function.
    • Best practice for this Azure VM can be found here.
    • Ensure a Premium SSD disk is allocated for the GDDB. This should be formatted and have a drive letter assigned.
    • Ensure your VSA can resolve the required Azure endpoints. If you are using your own internal DNS this may not be the case. More detail here.

It is also advisable to read the cloud architecture guide located here to ensure you have the latest best practice information.


Configure VSA

To configure the Azure VM to function as a VSA:

  1. Configure an Application and Tenant for Azure Resource Manager
    1. Note down your Azure account subscription ID
    2. Submit a new Application Registration
      You can choose the name and Sign-In URL; keep it simple. This will be used in a later step to reference this app registration.
    3. Adjust the required permissions
      1. Select the new application and select the settings blade.
        From the Required Permissions tab select Add:
      2. Click Select an API.
        Find “Windows Azure Service Management API” and select it.
      3. Select “Access Azure Service Management as organization users (preview)” and click Select.
    4. Create a new Key by selecting the Keys blade.
      1. Give the key a name and expiry and choose Save. Take note of the generated key value (it’s the only chance you’ll get).
    5. Enter the subscriptions tab.
      1. Select Access Control (IAM) and then Add a role assignment.
        Select Contributor & the web app you created earlier and click Save.
        NOTE: You can add a custom JSON role here, see the official documentation for details.
    6. To configure the VSA pseudoclient within Commvault (Next Step) you will need:
      1. Your subscription ID
      2. The Key you assigned to the application.
      3. Your Tenant ID. This can be found under Azure Active Directory –> Properties –> Directory ID
  2. From the CommCell Console, create a new Azure Virtualization client.
    1. Enter the credentials as follows:
      Subscription ID: From Subscription Properties
      Tenant ID: From Azure Active Directory
      Application: From App Registrations
      App Password: Key Created during App Registration.
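The application and tenant configuration above can also be done from the Azure CLI rather than the portal. This is a sketch with a placeholder display name; the output values map onto the four pseudoclient fields:

```shell
# Register the application and create its service principal
appId=$(az ad app create --display-name "CommvaultBackup" --query appId --output tsv)
az ad sp create --id "$appId"

# Generate a key (the password is shown once, as in the portal)
az ad app credential reset --id "$appId"

# Contributor role on the subscription for the new application
az role assignment create --assignee "$appId" --role Contributor

# Subscription ID and Tenant (Directory) ID for the pseudoclient
az account show --query "{subscriptionId:id, tenantId:tenantId}" --output tsv
```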

Configure Disk Library

We are using Azure BLOB (Binary Large OBject) storage to store the protected VSA data.

  1. From the Azure Portal, Select Storage Accounts.
  2. Create a New Storage Account

    1. The name should be unique (and will be used when creating the cloud disk library later on).
    2. Account Kind should be Blob Storage.
    3. Replication can be chosen based on requirements; for a lab, LRS is fine.
    4. Hot is best for primary backup targets.
    5. Click Review + Create. (You can add Tags & encryption options etc via advanced options if desired)
    6. Click Create
  3. Go to the newly created storage account and select Access keys.
  4. Select one of the access keys and copy for use later.
  5. Create a new container within the blob storage and configure as follows (choose your own name):
  6. From the CommCell Console, create a new Cloud Disk library, configure as follows:

    1. MediaAgent: Your Azure VSA/MediaAgent
    2. Service Host: leave default (in most cases)
    3. Access Key ID: The Key you copied earlier
    4. Account Name: The unique name you created earlier.
    5. Click Detect. Select the container you created earlier.
    6. Click OK.
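The storage account, key, and container steps above map onto a few Azure CLI calls. A sketch with invented names and location (the account name must be globally unique):

```shell
# Blob storage account with LRS replication and the Hot access tier
az storage account create --name cvlabblob01 --resource-group rg-commvault \
  --location uksouth --kind BlobStorage --sku Standard_LRS --access-tier Hot

# Fetch an access key for the Commvault cloud library configuration
key=$(az storage account keys list --account-name cvlabblob01 \
  --resource-group rg-commvault --query "[0].value" --output tsv)

# Container that the cloud disk library will write to
az storage container create --name commvault-data \
  --account-name cvlabblob01 --account-key "$key"
```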

Configure Global Deduplication Database (GDDB)

To make the most efficient use of your newly created Blob library, you’re gonna want it deduplicated. To configure the GDDB you’ll use the standard process as follows:

  1. From Storage Resources, right-click Deduplication Engines and select New Global Deduplication Policy.
  2. Give it a name and choose Next.
  3. Select the previously created BLOB library and click Next.
  4. Select the Azure VSA MediaAgent and choose Next.
  5. Choose whether you would like to encrypt your data; keep in mind that Azure Blob storage is encrypted by default anyway.
  6. Click Next until the Specify location page appears. Specify the location of the Premium SSD allocated for the GDDB.
  7. Click Next until the process is completed. As this is a cloud library, the deduplication block size will automatically be set to 512 KB.

Configure Storage Policies

You are now free to create a storage policy using the new Global Dedupe policy as its primary. You can also (from SP14) add this GDDB as an additional copy in existing 128 KB-based policies. See Change in Default Block Size on this link. I won’t go into the details of creating a storage policy here; if you need a guide, check out Commvault’s Books Online.

Create subclients and kick off your first backup!

You created your Azure pseudoclient earlier on; now it’s time to create a subclient to protect your selected VMs. The default subclient will automatically pick up anything it has the permission to see, so it’s usually best to create separate subclients for granular control of retention & scheduling.

Azure subclients can select their content by:

  • Name (Wildcard if needed)
  • Resource Group
  • Region
  • Storage Account
  • Tag
  • Power State
  • A combination of the above 🙂

How you organise things depends on your environment. Tags can work well because, once set up, backup requirements can be assigned when creating the Azure VMs.

Associate your subclient with your Azure Storage Policy and kick it off. If you want higher throughput try increasing the instance size or adding additional proxies.




Creating a Site-to-Site VPN between your Lab & Azure

Almost every project I am involved in demands a cloud design element. Typically this will be with either the MS Azure or AWS public clouds; however, it is not uncommon for the Google Cloud, Alibaba or Oracle cloud offerings to require consideration. In order to stay current with these technologies it is necessary to practice or “lab” these skills, and what better way to make a start than to connect your existing lab to an Azure subscription?

In this post I will:

  • Configure the necessary Azure networking components.
  • Deploy an Azure VPN Gateway
  • Deploy an Azure virtual machine
  • Configure a Server 2016 Routing & Remote Access (RRAS) server in my home lab
  • Demonstrate connectivity between the two environments.



A new Azure virtual network will be created with two subnets; the first (default) to be used by the Azure virtual machines & the second to be used as the “Gateway Subnet”. A virtual machine will be created in the default subnet.

A Server 2016 RRAS server will be installed in my home lab & configured to connect to the public IP of the Azure VPN Gateway. To ensure both the home lab & Azure can communicate, static routes will be introduced to ensure traffic is routed correctly.


  • Created an Azure subscription
  • Have a home lab with a domain configured
  • Have a new Server 2016 virtual machine deployed.
    • Configure 2 vNICs on the virtual machine:
      • Internal
        • This should have a static address in your home lab address range.
        • No default gateway
      • External
        • This should have a static address in your home address range.
        • The default gateway should be set to your standard gateway.
  • Your home router should be configured to allow VPN passthrough.
    • Configure ports UDP 500 & UDP 4500 to forward through to your RRAS VM.


Configure Azure Networking

  1. From the Azure portal create a Resource group in which to place the Azure components. 
  2. Create a new Virtual Network, use the following fields as a guide:
    1. Name: CVLab
    2. Address Space:
    3. Subscription: Your Subscription/Free Trial
    4. Resource Group: As created earlier.
    5. Location: As Desired
    6. Subnet
      1. Name: default
      2. Address Range:
    7. DDoS Protection: Standard
    8. Service Endpoints: Disabled
    9. Firewall: Disabled
  3. Once the virtual network has deployed, create a Gateway subnet. Ensure the Gateway subnet is configured accordingly:

    1. Address Range:
    2. Route Table – None
    3. Services: None
    4. Delegate subnet: None
  4. Add a virtual network gateway. This will be used by your RRAS server as a VPN connection target.
    Configure as follows:

    1. Name: CVGateway
    2. Type: VPN
    3. VPN Type: Route Based
    4. Virtual Network: CVLab
    5. Public IP: Create New
      1. This is auto-generated and will be the endpoint targeted by your on-premises RRAS server.
    6. Public IP Name: CVPublicIP
    7. Subscription: Your Subscription/Free Trial
    8. Location: Same as where you created your vNet.
  5. Create a local Network Gateway. This will be used to allow your Azure VMs to connect to your on-premises VMs
    Configure as follows:

    1. Name: CVLabGateway
    2. IP Address: the IP address of your home internet connection
      1. NOTE: if you have a dynamic IP address, this will need to be updated whenever it changes.
    3. Address space: your lab (on-premises) network. You may add multiple address spaces here if appropriate.
    4. Subscription: Your Subscription/Free Trial
    5. Resource Group: Your Resource Group
    6. Location: Same as where you created your vNet.
  6. Once the local gateway has finished deploying (this can take a while), you can create a connection object.
    Configure as follows:

    1. Name: CVLabHomeCon
    2. Virtual Network Gateway: CVGateway
    3. PSK: Make one up, the longer and more complicated the better. This will be used by your RRAS Server.
    4. Resource Group: Your Resource Group

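For anyone who prefers scripting, the Azure networking components above can be built with the CLI. This sketch assumes example address ranges (10.1.0.0/16 for the Azure side, 192.168.1.0/24 for the home lab) and placeholder names where the post doesn’t specify them:

```shell
# Virtual network with the default subnet, plus the gateway subnet
az network vnet create --resource-group rg-cvlab --name CVLab \
  --address-prefix 10.1.0.0/16 --subnet-name default --subnet-prefix 10.1.0.0/24
az network vnet subnet create --resource-group rg-cvlab --vnet-name CVLab \
  --name GatewaySubnet --address-prefix 10.1.255.0/27

# Virtual network gateway with a new public IP (deployment takes a while)
az network public-ip create --resource-group rg-cvlab --name CVPublicIP
az network vnet-gateway create --resource-group rg-cvlab --name CVGateway \
  --vnet CVLab --public-ip-address CVPublicIP \
  --gateway-type Vpn --vpn-type RouteBased --sku VpnGw1

# Local network gateway representing the home lab side
az network local-gateway create --resource-group rg-cvlab --name CVLabGateway \
  --gateway-ip-address <home-public-ip> --local-address-prefixes 192.168.1.0/24

# Connection object tying the two together with the PSK
az network vpn-connection create --resource-group rg-cvlab --name CVLabHomeCon \
  --vnet-gateway1 CVGateway --local-gateway2 CVLabGateway --shared-key <psk>
```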
Configure RRAS

  1. Install Routing & Remote Access on your Server 2016 VM
    1. From server manager, add Roles & Features
    2. Click Next until you reach Roles. Select Remote Access and click Next until you reach the Select Role Services page.
    3. Select DirectAccess and VPN (RAS) and Routing. Click Add Features when prompted.
    4. Click Next until the confirmation page, then click Install.
  2. Once RRAS is installed, click Open the getting started wizard.
  3. Select Deploy VPN Only
  4. Right-click the name of your server in the RRAS console and select Configure and Enable Routing and Remote Access.
  5. On the Configuration page, click Custom configuration.
  6. Select VPN Access and LAN Routing. Click Next and Finish. If warned about the Windows firewall, click OK.
  7. Click Start Service when prompted.
  8. Create a new demand dial interface as shown below:

    1. Name: Azure On-Demand
    2. Connection Type: VPN
    3. Type: IKEv2
    4. Host Name: the public IP of your Azure virtual network gateway.
    5. Protocols and Security:
      1. Route IP Packets on this interface
    6. Static Routes:
      1. Destination:
      2. Mask:
      3. Metric: 10
    7. Dial Out Credentials: just write “Nothing” in the Username field and click Next. We’ll be using the PSK created earlier.
  9. Once the interface has been created, right click on the Azure On-Demand interface and choose properties.
  10. Select Security and Use preshared key for authentication. Enter your PSK in the box.
    Click OK
  11. Right Click your connection and choose Connect.

Configure Azure VM

  1. Back in the Azure portal, deploy a new virtual machine. The size is up to you, but ensure it has the following properties:
    1. Region: Same as where you deployed the previous components
    2. OS: Windows Server
    3. Public Inbound Ports: None
    4. Virtual Network: CVLab
    5. Subnet: default
    6. The other areas can be left as default or adjusted if necessary.
  2. Wait for the VM to be deployed then proceed.
  3. From the Networking tab on the new VM, create a new rule to allow traffic from your home lab network. This can be for specific ports or a blanket rule between your home & Azure environments. Ensure at least RDP (3389) is allowed. In the example below I have allowed all traffic from my local lab subnet.

Configure Static Routes

In the current state, your RRAS server will be able to connect to your Azure VM, as the required route has been added by the RRAS configuration. The next step is to add routes to the other VMs (both Azure and local lab) to allow communication between subnets. The route add commands would be written as follows:

For local lab VMs:
ROUTE ADD <azure-address-space> MASK <netmask> <RRAS-internal-IP> METRIC 2 -p
For Azure VMs:
ROUTE ADD <lab-address-space> MASK <netmask> <next-hop-IP> METRIC 2 -p

Ping (ICMP) is not allowed through the windows firewall by default, however RDP on the Azure VMs is. From your RRAS server, RDP to the private IP of your new Azure VM, if you get a connection that proves RRAS is working. You will need to add the appropriate static routes to the test VMs before you can repeat the same test outside of the RRAS VM.

Next Steps

If you would like to make this Azure VM (and others) part of your domain for an extended period, it would be advisable to get a few other things tidied up, for example:

  • DNS Zones
  • Centralising the static Routes
    • Via Group Policy
    • Via Physical/Virtual network hardware.
  • Locking down the Azure ports using  network security groups.



Configuring o365 mailbox protection with Commvault

With each new service pack released, the process for protecting mailbox content hosted in Microsoft o365 is further refined. The release of SP12 brings the following changes:

  • Combined & simplified agent install. Rather than choosing “Mailbox” or “Database” etc., the binaries are combined into a single “Exchange” client.
  • If you need the OWA proxy enabler or Offline mining tool, these are available as separate installs.
  • No need for the scheduled tasks to update the service account permissions on new mailboxes.

This post will focus on “Exchange Mailbox Agent User Mailbox Office 365 with Exchange Using Azure AD Environment” as detailed on the official documentation here.


There are a few things to set up before you can start protecting the mailboxes. These are summarised below and detailed in the following sections.


  • Server to install the Commvault Exchange agent
  • Administration account for Office365
  • Exchange Online Service account
    • Must be an online mailbox
    • Setup via
  • Local System Account
    • Member of Local administrators on machine


  • Index Server
  • Job Results Directory
    • Must be a UNC Path
  • Mailbox Configuration Policies
  • Storage Policy


Microsoft Tenant Configuration

  1. Connect to o365 with Powershell
    $UserCredential = Get-Credential
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
    Import-PSSession $Session

    You’ll be prompted for a username and password, use the Administration account for Office365 mentioned in the prerequisites.

  2. Run the following commands:
    Set-ExecutionPolicy RemoteSigned
    Enable-OrganizationCustomization

    You may at this point see a message that organization customization has already been enabled. Not a problem; just continue to the next step.

  3. You now need to grant the service account (the one with the online mailbox mentioned earlier) the ApplicationImpersonation and View-Only Recipients roles:
    New-RoleGroup -Name "ExchangeonlinebackupGrp" -Roles "ApplicationImpersonation", "View-Only Recipients" -Members svcCommvault
  4. Register the o365 application with Exchange using Azure AD. Do this via the Azure portal:
    1. Go to Azure Active Directory (left-hand side)
    2. Choose App Registrations
    3. Select New Application Registration
    4. Complete the fields as follows (feel free to use a different name)
    5. Click Create.
    6. Note down the Application ID (you’ll need this later to set up the pseudoclient)
    7. Click the Settings button once the app is created, Select the properties button on the right hand side.
    8. Scroll to the bottom and change Multi-tenanted to yes.
    9. Click Save
    10. Select Settings–>Keys
    11. You’re going to create a new key, complete the form as follows, adjust the first two fields as you see fit:
      Click Save.
    12. Copy the Key value & description, you’ll need this later. May be worth remembering the expiry date too.
    13. Now select the Required Permissions menu. Click Add.
    14. Choose Select an API then Microsoft Graph
      Click Select.
    15. Scroll down to Read directory data, check the box and click Select.
    16. Click Done then Grant Permissions. Click Yes when prompted.
    17. On the left hand side, click Azure Active Directory then Properties. Note down the value in the Directory ID field.

Commvault Configuration

Now it’s time to ensure you have the Exchange policies, Index Server, Job Results folder & Storage Policy set up. The latter three of these tasks are already well documented; however, the following should be noted:

  • The job results directory needs to be a shared (UNC) path visible to all mailbox backup proxies.
  • Retention for the mailboxes is governed by the Retention Policy and not by the Storage Policy, as is typical with other agents. For this reason it is worth having a separate storage policy for the mailbox backups.
  • The mailbox agent index server should not be the MediaAgent responsible for your library and storage policies.
  • The mailbox pseudoclient to index server relationship is 1:1. It is possible via a registry key to have multiple pseudoclients use the same index server; however, if the multiple pseudoclients have any crossover you will very likely experience data loss.
  • Review the index store requirements before deploying the index server. If you’re doing this in a lab you can usually start small and ramp up the specs on the fly.
  • You must have a web server installed in your environment. Typical CommCells have this installed on the CommServe; larger CommCells split this role out to a dedicated server.

Exchange Policies

This is a copy from my previous post but the information is still valid:

The four new policy types are as follows:

  • Archiving – Archive is the new Backup. This policy dictates what messages will be protected, it has no effect on stubbing.
  • Cleanup – If you are archiving, this is where it is configured.
  • Retention – Primary Copy retention is configured here and will override any retention settings configured in the storage policy. Secondary copies will still adhere to the Storage Policy values.
  • Journal – The new compliance archive. Use this for journal mailboxes.

Policies are configured under Policies, Configuration Policies, Exchange policies as shown below:


Only configure the policies you need, for a standard mailbox backup (no archive) setup, your policies listing may look like this:


Creating the Index Server

To create the logical Index Server (assuming you’ve installed the index store package) do the following:

  1. From the Commvault Console, right click Client Computers –> Index Servers and select New Index Server.
  2. Complete the fields on the General tab. If possible, ensure the drive nominated for the Index Directory is formatted with a 64KB block size. The Storage Policy, although optional, is used to back up the index.
  3. On the Roles tab, click Add, select Exchange Index, and move it to the Include field.
  4. On the Nodes tab, add the server on which you installed the Index Store package.
  5. Click OK. There will be a delay while the index is created; you may notice the following status on the bar at the bottom of the screen:
    Cloud creation in Progress

Creating the Mailbox Client

You’ll need to have the Index server & policies ready before continuing to the mailbox client creation.

  1. Right click the CommServe name at the top of the CommCell Browser on the right hand side and select All Tasks –> Add/Remove Software –> New Client.
  2. Expand Exchange Mailbox and select User Mailbox.
  3. Complete the fields as shown below. Note: I have used the index server to host the job results directory which isn’t best practice, but OK for a lab.
  4. On the access nodes page, select a client (or clients) that has the Commvault Exchange package installed.
  5. On the Environment Type page, choose Exchange Online (Access Through Azure Active Directory).
  6. On the Azure App Details page enter the following:
    1. Application ID: as noted down earlier
    2. Key Value: the auto-generated key you copied earlier
    3. Azure Directory ID: as noted down earlier
  7. On the service account settings page you’ll need to add 2 accounts:
    1. The Exchange online service account, this is the one we granted permission to earlier.
    2. A local system account. This needs to have local admin rights on your exchange proxy(ies).
  8. Optional: On the Recall Service, enter the URL for your web server as shown below. This is only used if Archiving or Content store viewer is implemented.
  9. Click Finish!
  10. To test that you are able to query the instance for its mailboxes, navigate to MailboxAgent –> Exchange Mailbox –> User Mailbox and click the Mailbox tab at the bottom of the screen.
  11. Right-click in the white space above the tabs and choose New Association – User.
  12. In the Create New Association box, click Configure then Discover (choose Yes at the prompt).
  13. If your expected list of mailboxes appears, you’re doing it right!

The next step is to configure the auto associations. This can be easily achieved by following the official instructions here.


Securing Access to the Web & Admin consoles using a SAN certificate.

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

HTTPS is used by default to secure the consoles; however, as it uses a self-signed certificate, users are presented with the following:


Official documentation for the procedure outlined in this post can be found here. The official documentation does not, however, instruct on creating a certificate that will allow the use of multiple SAN (Subject Alternative Name) aliases. This guide will include the necessary fields for creating a SAN cert request, in addition to ensuring the cert uses the SHA256withRSA signature algorithm.


  • CommCell with Web & Admin Console
  • Certificate Authority
    • This can be either internal or external. If you are allowing access to the web console externally, an external authority is recommended.
    • If you are using an internal CA, ensure it is capable of issuing SHA2 certs rather than the deprecated SHA1. Details can be found here. SHA1 certs are accepted by IE, however Chrome and Firefox will complain.


  1. In order to ensure the necessary Java version is in place you may need to update Java. The minimum version required is JRE 1.8.0_65. Check which version you have installed; the SP10 release of Commvault is packaged with Java version “1.8.0_121” so does not require updating. To check your Java version, run “java -version” from the command prompt of the web server. If you need to update Java, follow the official doco here.
  2. From the web console computer, navigate to the following directory via an elevated command prompt (replace the java version if different):
    C:\Program Files\Java\jre1.8.0_121\bin
  3. You must create a keystore file using the keytool utility contained in the above directory. Run the following command:
    keytool -genkey -alias tomcat -keyalg RSA -sigalg SHA256withRSA -keystore "C:\mykeystore.jks" -keysize 2048
  4. You will be prompted for the following details:
    1. Keystore password: Be sure to make this a strong password and keep it safe.
    2. First & Last Name: This is the domain name for which you are creating a certificate, i.e. the name users will browse to (such as alias1.mydomain.local)
    3. Org Unit: Leave this blank or use company name
    4. Org Name: use company name
    5. City or Locality, State, Country Code: As Described
  5. At the “Is this correct?” prompt, type yes.
  6. At the “Enter password for tomcat” prompt, press ENTER to use the same password as the keystore file.
  7. Now we use the created keystore to generate a CSR. Additional SAN aliases are comma-separated within the -ext argument:
    keytool -certreq -file C:\somename.csr -keystore C:\mykeystore.jks -keyalg RSA -sigalg SHA256withRSA -alias tomcat -ext SAN=dns:alias1.mydomain.local,dns:alias2.mydomain.local -keysize 2048
  8. You will be prompted for the password you created earlier. Once entered the certificate request will be saved as specified (C:\somename.csr if you used the above command).
  9. Use this csr to generate the certificate. Upload the request to your certificate authority and download the signed certificates. The files you will require for the next step are as follows:
    1. root certificate
    2. intermediate certificate (if available)
    3. issued certificate
  10. All of these files can be in either cer or crt format. If you have been given a *.p7b file, don’t panic; it can be opened to show the issued certs. In the example below you can see only the root and issued certificate are available.
  11. Right click the certificates one by one and choose All Tasks –> Export. Use the default option as pictured below to export the individual certs as *.cer files.
  12. Once you have your 2 or 3 certificates, head back to the command prompt. It’s now time to import your root, intermediate (if you have one) and issued certs.
  13. First import the root certificate:
    keytool -import -alias root -keystore C:\mykeystore.jks -trustcacerts -file C:\root.cer
  14. Next the intermediate:
    keytool -import -alias intermed -keystore C:\mykeystore.jks -trustcacerts -file C:\intermediate.cer
  15. And finally the issued SAN cert:
    keytool -import -alias tomcat -keystore C:\mykeystore.jks -trustcacerts -file C:\actual.cer
  16. You now have a keystore (in this case “mykeystore.jks”) which can be used by Commvault to secure its web console traffic. To make Commvault aware of this file you’ll need to copy it into the Program Files/Commvault/ContentStore/Apache/Conf folder on the web server. Once the file is copied, stop the “Commvault Tomcat Service” using Commvault Process Manager.
  17. If you are using a Commvault version prior to v11 SP9 you’ll need to refer to the official doco here, if not carry on…
  18. Using a text editor (such as Notepad++) edit the “server.xml” file. It’s wise at this point to take a copy of the file first in case things go pear-shaped.
  19. Find the line "<Certificate certificateKeystoreFile=" and edit the path and password to match your keystore file:
  20. Ensure the path is correct, if you placed the file in the conf folder as instructed, the path should be “conf/mykeystore.jks”.
  21. Start the Tomcat service using the Commvault Process Manager, give the web server a couple of minutes to start and then browse to the server using one of your specified SAN aliases :-).
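When building the CSR command in step 7, a stray trailing comma in the -ext argument is an easy mistake to make. A small sketch (alias names are examples only) of assembling the SAN list safely from a set of aliases:

```shell
# Example aliases only — substitute your own SAN names.
aliases="webconsole.mydomain.local adminconsole.mydomain.local"

# Build the keytool -ext argument, e.g. SAN=dns:a,dns:b
san="SAN="
for a in $aliases; do
  san="${san}dns:${a},"
done
san="${san%,}"   # strip any trailing comma before passing to keytool
echo "$san"
```

The resulting string is what goes after `-ext` in the `keytool -certreq` command.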


Protecting Salesforce with Commvault

System administrators are seeing more and more business processes becoming reliant on SaaS solutions. Salesforce is, to many organisations, the lifeblood of their CRM & sales lifecycle, providing efficiencies of customer management essential to an effective organisation.

The mistake many companies make is to assume that just because a solution is “in the cloud” it has the same level of protection as their on-premises systems. In many cases this assumption is false: data is very often replicated (such as the multi-zone replication of AWS S3), but rarely has any sort of historical recovery points. Salesforce, for example, at the time of writing only holds 2 weeks’ worth of restore points; any changes or deletions not spotted within this time are likely permanent. Another consideration is the fees charged by Salesforce in the event you do have to roll back. These will easily run into thousands of dollars ($10k is what I’ve read), money that could be saved through the configuration of a Salesforce agent.

This capability is still relatively new and may be simplified in later releases. This guide is written with Commvault v11 SP11 in mind. The configuration presently is made up of the following components:

  • Commserve
  • MediaAgent
  • Salesforce Virtual Client (pseudoclient)
  • Cloud Connector (Linux)
  • SQL Server (Optional but required for restoration to Salesforce instance)

Data Flow

Not everything is protected, as data protection is limited to what is made available via the Salesforce API. For a list of items that are not protected, check out Books Online here. The high-level steps required to implement the Salesforce backup are as follows:

  1. Ensure you have a supported version of Salesforce
  2. Ensure you have sufficient capacity license available on your Commcell
  3. Configure a Connected App in Salesforce (click here for a guide).
  4. Install the cloud connector on a linux VM
  5. Create a MSSQL database on a separate DB server for use by the cloud connector
  6. Create the Salesforce pseudoclient.
  7. Backup!

Obviously this guide is no substitute for the official documentation; however, hopefully it will aid in testing out the Salesforce backup functionality for yourself. During my testing I used a free Salesforce developer account, available here.

Required information

During the configuration of the Commvault Salesforce pseudoclient you will need to complete the following fields:

  • Client Name: The name which appears in the Commvault Console
  • Instance Name: Usually the same as the client name.
  • Backup Client: The linux proxy with “Cloud Apps” installed
  • Storage Policy: The storage policy through which data will be stored.
  • Number of Data Backup Streams: Start with 1 and scale if required.
  • Salesforce login URL: typically https://login.salesforce.com (you may use a custom URL here if configured)
  • Username: Your Salesforce service account
  • Password: As described
  • API Token: The Salesforce security token; this can be reset once the service account has been created.
  • Consumer Key: Associated with the Salesforce Connected App
  • Consumer Secret: Associated with the Salesforce Connected App
  • Download to Cache Path: This should be a folder on the Cloud Connector; plan for 120% of the total Salesforce data.
  • Database Type: SQL Server (you can also use PostgreSQL)
  • Database Host: The database server containing the database preconfigured for use by the Commvault Salesforce agent. I had difficulty when using DNS names here; try IP if DNS doesn’t work.
  • Server Instance Name: As described
  • DB Name: Name of the SQL database set up on the SQL Server
  • Database Port: 1433
  • Username: sysadmin user for the configured database
  • Password: As described.


As described above, you will need to plan for both the Cloud Connector cache and SQL Server storage. Required storage is as follows:

In addition to this you should consider the amount of back-end storage required. This should be planned in the same way planning for any additional client is performed.
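As a worked example of the 120% cache-sizing rule mentioned above (the data size below is hypothetical):

```shell
# Hypothetical total Salesforce data size, in GB.
sf_data_gb=500

# The guide suggests planning the Cloud Connector cache at 120% of that.
cache_gb=$(( sf_data_gb * 120 / 100 ))
echo "$cache_gb"
```

So 500GB of Salesforce data would call for roughly 600GB of cache space on the Cloud Connector.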

Creating a Connected App

  1. Login to Salesforce using your administrator account
  2. Click Setup
  3. On the left select Build then Create –> Apps
  4. Scroll down to Connected Apps and click New
  5. Complete the required fields, ensuring “Enable OAuth Settings” is enabled.
  6. OAuth Scope should be “Full Access”
  7. Callback URL should be “https://test.salesforce.com/services/oauth2/token” if you are using a test account. If this is a production config it would be “https://login.salesforce.com/services/oauth2/token”.
  8. Click Save and then click the connected app name, note the following fields:
    1. Consumer Key
    2. Consumer Secret
    3. Callback URL
      1. Commvault requires this during setup, minus the /services/oauth2/token
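Trimming the /services/oauth2/token suffix from the callback URL is trivially mechanical; a quick sketch (assuming the production login URL):

```shell
# Full callback URL from the Connected App (production example).
url="https://login.salesforce.com/services/oauth2/token"

# Commvault wants this value minus the /services/oauth2/token suffix.
base="${url%/services/oauth2/token}"
echo "$base"
```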


Generating the API Key

The API key is associated with the administration account (service account) used by Commvault to connect to Salesforce. To generate it:

  1. Login as the service account.
  2. Click the username at the top of the screen, select “My Settings”
  3. On the left of the screen, select “Reset My Security Token”
  4. Click the button to reset the token, it will be sent to the email address registered to the service account.
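With the Connected App details and the API token in hand, it helps to see how they fit together. A sketch of the request body for Salesforce’s username-password OAuth flow (all values are placeholders; this illustrates the relationship between the fields rather than exactly what the connector sends):

```shell
# Placeholder credentials — substitute your own service account details,
# the Connected App's consumer key/secret, and the token emailed above.
SF_USER="svc@example.com"
SF_PASS="Password1"
SF_TOKEN="abc123"
KEY="your-consumer-key"
SECRET="your-consumer-secret"

# In the username-password OAuth flow, the security token is appended
# directly to the password — a frequent cause of login failures.
BODY="grant_type=password&client_id=${KEY}&client_secret=${SECRET}&username=${SF_USER}&password=${SF_PASS}${SF_TOKEN}"
echo "$BODY"
```

Note the password and token are concatenated with nothing between them; entering the token in the wrong field is one of the most common setup mistakes.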

Configure the Pseudoclient

You should now have all the components and details required to create a Salesforce pseudoclient (also described as a virtual client). In addition to the items configured above, you should have a Linux client installed with the Cloud Apps package, sufficient space on that client for the cached Salesforce data, and a SQL Server with a preconfigured SQL database (and associated credentials).

Steps are as follows:

  1. From the CommCell console, Create a new Salesforce client.
  2. Complete the details on each tab, use the “Required Information” section above if you need clarification on any of the fields. Be sure to Test Connection on the “Connection Details” and “Backup Options” tabs.
  3. Your new client will appear as displayed below. Right click on the default subclient to adjust what is to be protected. You’ll notice the defaultbackupset reflects the name of the service account used.
  4. In the Contents page of the default subclient you have the option to include metadata, selecting this option will give you access to the metadata view when performing a “browse & restore”.

Standard vs Metadata views

Not being a Salesforce administrator, I can’t provide much insight into the significance of these two views; I can, however, provide screenshots of the different browse and restore results:





Next Steps

Now it’s time to run a backup operation, which can be performed by right-clicking on the default subclient and choosing Backup. You’ll notice an incremental backup is automatically performed following the completion of the full backup. Once this has completed you can test the solution by deleting accounts/opportunities etc. and testing the restoration process.

If you are restoring directly to Salesforce you’ll notice the requirement for a configured database. The screenshot below shows the restore options window when restoring directly from the most recent database sync.


If you are restoring from another restore point, you may need to restore from Media, as shown in the screenshot below:


You’ll notice this clears the database details. All restores to Salesforce require a staging database (be that MSSQL or PostgreSQL). You would need to create and specify a SQL database to restore to (from media), prior to that data being restored to your Salesforce instance.

Implementing a floating CommServe name for faster DR recovery

UPDATE: As of v11 SP11 this method is not supported on the CommServe due to possible communication issues. I’ll leave the post up for reference and will update when an alternative method is available.

In the event of losing your primary CommServe (CS) you’ll need to restore your database to the DR CS. This is a simple enough process involving the following (high level) steps:

  1. Ensure your DR CS is installed and at the same patch level as your current (now unavailable) CS.
  2. Restore the latest DR database dump to the DR CommServe using CSRecoveryAssistant.exe
  3. Use the Name Management Wizard to update all clients & MediaAgents to use the DR CS name.

This works well in most scenarios; however, if you are keen to get the process finished as quickly as possible, Step 3 can take some time. Books Online advises that it can take “30 seconds per online physical client and 20 seconds per offline physical client”. If you have thousands of clients, this can be a long wait. It is for this reason that changing the CS name on the clients ahead of time (during a defined outage window) can save precious time during an actual disaster.

Official documentation for this procedure can be found here. This document aims to achieve the CommCell rename without the requirement to rename the host name of the CommServe computer. In this example I’ll be renaming cs01.test.local to a floating alias, allowing a common name for both internal and external backup clients.

The steps below assume you already have a working single CommServe (no SQL cluster) environment. Before making any changes you should:

  1. Ensure all clients (MediaAgents included) are referencing the CommServe by DNS name, not IP.
  2. Create a CNAME in your internal (and external, if appropriate) DNS services.
  3. Stop all jobs
  4. Disable the scheduler
  5. Perform a DR backup
  6. Ensure there is no current DDB activity on the MediaAgents (Check SIDBEngine.log on the MediaAgents)

Before starting, the properties page of your CommCell should display your current actual CommCell host name.


Open a command prompt and navigate to the base directory. Login with the qlogin command.


Run the following command (CSNAME is the new floating name; adjust the instance and the two names as appropriate):

SetPreImagedNames.exe CSNAME -instance Instance001 -hostname youroldfullyqualifiedname.local -skipODBC


Close the Commcell properties (if it was still open) and reopen it. You should see the new floating name reflected on the General tab:


The next step is to update the clients to reference the new name. Restart the CommCell Console GUI, then open the Commvault Control Panel and select Name Management. Select “Update CommServe for Clients (use this option after DR restore)” and click Next. Ensure the New CommServe name matches your desired FQDN and click Next.


Move all clients requiring the updated name across, being sure to leave the checkbox unchecked.


Some clients may have issues updating; these will need to be investigated individually. Try restarting the services and checking client readiness from the CommCell Console. In the case of the PC below, it was the firewall at fault (revealed with help from CVInstallMgr.log).


Check the client properties (you’ll need to refresh the client list first using F5) and you should see that the CommServe hostname has been updated to reflect the new FQDN.


Once all clients are pointing to the new CNAME, you can simply update DNS rather than using Name Management during a DR failover.



Securing Access to the Web & Admin consoles with Lets Encrypt!

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

HTTPS is used by default to secure the consoles; however, as it uses a self-signed certificate, users are presented with the following:


Official documentation for the procedure outlined in this post can be found here. This guide focuses on using Let’s Encrypt as the certificate authority and is more suited to lab environments due to the 3-month expiry of certificates. I’m looking at ways to allow for auto-renewal of public certs (as Let’s Encrypt is designed to support); however, this will give a good starting point.


  • Commvault Commserve installed with web server.
  • ACMESharp powershell modules (installation covered in procedure section)
  • Internet access
  • Access to public DNS management (i.e. Route53)

Procedure – Obtain certificate

    1. First we need to install and configure ACMESharp, which will be used to request the certificate from Let’s Encrypt. The steps included below are modified from the official quick start guide here. This post uses the manual method, and as such the certificate is only valid for 3 months.
    2. Install the powershell modules using an elevated powershell window. You will be prompted twice, answer the 2 prompts with either “Y” or “A”.
      Install-Module ACMESharp
    3. Install the extension modules. Answer “A” when prompted for each of the extensions.
      Install-Module ACMESharp.Providers.AWS
      Install-Module ACMESharp.Providers.IIS
      Install-Module ACMESharp.Providers.Windows
    4. Enable the extension modules.
      Import-Module ACMESharp
      Enable-ACMEExtensionModule ACMESharp.Providers.AWS
      Enable-ACMEExtensionModule ACMESharp.Providers.IIS
      Enable-ACMEExtensionModule ACMESharp.Providers.Windows
    5. Verify the providers have been added correctly
      Get-ACMEExtensionModule | select Name

      You should receive the following output:

    6. Initialize the ACME vault as follows:
      Initialize-ACMEVault
    7. Register with Let’s Encrypt (substitute a real contact address):
      New-ACMERegistration -Contacts mailto:you@example.com -AcceptTos
  1. Submit a new domain identifier. This is the DNS name you wish to secure (a placeholder is shown here).
    New-ACMEIdentifier -Dns webconsole.example.com -Alias dns1
  2. You now need to prove you own the domain. The easiest way to do this is to automate the process using IIS; unfortunately this needs to use port 80, which is already bound to the Tomcat service. The workaround I’m using is to prove ownership via DNS instead.
    Complete-ACMEChallenge dns1 -ChallengeType dns-01 -Handler manual
  3. Run the following command to request the required details to prove DNS ownership.
    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}

    When you get a response similar to the following, you can continue to the next step.

  4. Add the TXT record to your DNS as indicated in the challenge. For Route53 it would appear as follows:
  5. Once you have completed the DNS entry, run the following to submit the challenge:
    Submit-ACMEChallenge dns1 -ChallengeType dns-01
  6. Run the following to check whether the challenge was successful.
    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}

    If successful, you will be presented with the following:

  7. If the status is shown as valid (highlighted above) you can now request & retrieve your new certificate. It is possible to request a SAN (Subject Alternative Name) at this point, however for this example we’re sticking with one. Run the following two commands:
    New-ACMECertificate dns1 -Generate -Alias cert1
    Submit-ACMECertificate cert1
  8. If almost all fields are populated in the response, the certificate is now ready to be stored in the vault
    Update-ACMECertificate cert1
  9. You can now export the certificate in pkcs12 format:
    Get-ACMECertificate cert1 -ExportPkcs12 "C:\certs\cert1.pfx" -CertificatePassword 'myPassw0rd'
  10. Check to ensure the new pfx file is visible in the chosen location.
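The TXT record added in step 4 always lives at a well-known name derived from the domain being validated; a quick sketch (domain is a placeholder):

```shell
# For a dns-01 challenge, the TXT record is published at
# _acme-challenge.<domain being validated>.
domain="webconsole.example.com"
txt_record="_acme-challenge.${domain}"
echo "$txt_record"
```

The record value itself comes from the challenge response returned by Update-ACMEIdentifier above.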

Procedure – Update Commvault configuration

The next stage is to instruct the web server to use the created certificate bundle as its certificate source.

  1. Stop the Commvault Tomcat process via Process Manager.
  2. Copy the pfx file created earlier to [programdrive]:\Program Files\Commvault\ContentStore\Apache.
  3. Edit conf\server.xml (Notepad++ is a good choice for this)
  4. The official documentation indicates that the default connector redirect port should be adjusted; however, new installations should already be configured to redirect to 443. Either way, it should look like this:
    <Connector port="80" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="443" server="Commvault WebServer" compression="on" noCompressionUserAgents="gozilla,traviata" compressionMinSize="500" compressableMimeType="text/html,text/json,application/json,text/xml,text/plain,application/javascript,text/css,text/javascript,text/js" useSendfile="false"/>
  5. Add a second connector, beneath the line you have just edited. This will reference the pfx file created earlier.
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="443" URIEncoding="UTF-8" maxPostSize="40960000" maxHttpHeaderSize="1024000" maxThreads="2500" enableLookups="true" SSLEnabled="true" scheme="https" secure="true" server="Commvault WebServer" compression="on" noCompressionUserAgents="gozilla,traviata" compressableMimeType="application/javascript,text/css,text/javascript,text/js" useSendfile="false">
     <SSLHostConfig certificateVerification="none" honorCipherOrder="true" protocols="TLSv1,TLSv1.1,TLSv1.2" ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA">
      <Certificate certificateKeystoreFile="E:\Program Files\Commvault\ContentStore\Apache\cert1.pfx" certificateKeystorePassword="myPassw0rd" certificateKeystoreType="PKCS12"/>
     </SSLHostConfig>
    </Connector>
  6. You can test the certificate by visiting your web console using the web console button in the CommCell Console. You’ll still see the certificate error, but upon further inspection you should see the certificate issuer is Let’s Encrypt. The next step is to adjust Commvault to use the public domain name to access the web console (ensure your internal DNS is configured to direct the DNS name to the internal IP).
  7. As described here, add the following additional setting to the CommCell.
    Category CommServDB.GxGlobalParam
    Type String
    Value: https://hostname:port/webconsole/clientDetails/
    hostname:port should match the name you associated with the certificate.
  8. Restart the CommCell services. The web console link should now reference the new name.
  9. If you have a windows shortcut to the Admin Console, that will also need its properties adjusted to reflect the new link.