Activate – Sensitive Data Analysis

Following on from my previous post on Commvault File System Optimization, Sensitive Data Analysis is the second feature pack under Commvault Activate licensing. The Commvault Complete license guide describes Sensitive Data Governance as follows: “Extends indexing and analytics into content and provides details on data that may contain information that could be considered sensitive and may need further attention”.

Licensing for these features is covered by the aforementioned license guide. The current model for Sensitive Data Analysis is per TB for file/VM data, or per user account analysed for on-premises or cloud mailboxes.

Sensitive Data Analysis enables organisations to seek out PII (Personally Identifiable Information) across multiple data types throughout their organisation. Examples of supported data types include:

  • Exchange User Mailbox
  • File System
  • Gmail
  • Google Drive
  • Microsoft OneDrive for Business
  • Microsoft SharePoint Server
  • Microsoft SQL Server databases
  • Oracle databases
  • Virtual Machines (when the Collect File Details option is used)

Examples of PII include:

  • Credit Card Numbers
  • Email Addresses
  • Driving Licenses
  • Hostnames
  • Social Security Numbers
  • IP Addresses
  • NHS Number
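To illustrate the kind of work entity detection does, the sketch below flags candidate credit card numbers with a regular expression and then validates them with the Luhn checksum. This is purely illustrative — Commvault's Content Analyzer uses its own entity definitions, not this code:

```powershell
# Illustrative only: a rough regex + checksum approach to entity detection,
# not Commvault's actual Content Analyzer logic.
function Test-Luhn {
    param([string]$Number)
    $digits = $Number -replace '\D', ''
    $sum = 0
    $double = $false
    # Walk the digits right-to-left, doubling every second digit
    for ($i = $digits.Length - 1; $i -ge 0; $i--) {
        $d = [int][string]$digits[$i]   # [string] cast so '4' parses as 4, not char code 52
        if ($double) { $d *= 2; if ($d -gt 9) { $d -= 9 } }
        $sum += $d
        $double = -not $double
    }
    return ($sum % 10) -eq 0
}

$text = 'Order ref 1234, card 4111 1111 1111 1111, contact user@example.com'
# Candidate 13-16 digit sequences, allowing spaces or dashes between digits
$found = [regex]::Matches($text, '\b(?:\d[ -]?){13,16}\b')
foreach ($m in $found) {
    if (Test-Luhn $m.Value) { "Possible card number: $($m.Value)" }
}
```

The checksum step is what separates a real entity detector from a naive regex scan — it is why a random 16-digit order reference is far less likely to be flagged than a genuine card number.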

The ability to identify where PII sits in your organisation can aid compliance with regulations such as the EU’s General Data Protection Regulation (GDPR).

System Requirements

  • CommServe
    • V11 SP18 for the examples given here
  • Server with the Index Store, Content Analyzer, and Web Server packages installed.

    • System Requirements for 20 TB of file data (approximately 40 million objects) are as follows:
      • 16 vCPU cores
      • 32GB RAM
      • 1TB Index volume (SSD class)
    • System Requirements for 15 TB of mailbox data (approximately 150 million objects or 2,000 mailboxes) are as follows:
      • 16 vCPU cores
      • 32GB RAM
      • 6TB Index Space
      • 1,600 IOPS for the index drive
    • Full range of sizing options can be seen here.


Procedure

  1. From the Java GUI, right-click the Index Servers group and click New Index Server.
  2. Specify a Cloud Name and location for the Index Data.
  3. Add the following Roles
    1. Data Analytics
    2. EDGE Drive
    3. Exchange Index (If Required)
  4. Select the server with the Index Store package from the Nodes tab.
  5. Click OK.
  6. You should now see the name of the server with the Index Store package installed, listed under the Content Analyzer Cloud computer group. It will have a “_ContentAnalyzer” suffix added.
  7. The next step is to add a domain. This is done from the Command Center. You may have already done this if you followed my previous post for File System Optimization. If not follow the official documentation here.
  8. Use the Guided Setup Wizard to configure Sensitive data analysis. This can be done by clicking Guided Setup from the command center, clicking the Activate tab and selecting Sensitive Data Analysis.
  9. Create a Data Classification Plan. Select the Index Server you created earlier. Click Next.
  10. On the Content Search tab, select Enable and choose Metadata and content. Click Next.
  11. On the Content Analysis page, check Entity detection and select a few relevant entities. Only select entities that are required; the more you select, the bigger the resource load. Select the Content Analyzer and click Next.
  12. Adjust the Advanced options (File Extensions, exclusions, File Sizes etc) if required and click Save.
  13. The next step is to create an Inventory. Inventories are the logical groupings of resources from your CommCell environment. Choose a Name and select the Index Server and Name Server created earlier. Click Save. 
  14. You now need to add a File Server (assuming this is what you want to analyse) to the Inventory. Navigate to Activate –> Inventory Manager. Select the “…” to the right of the inventory name and click Details. Click Add –> File Server.
  15. Enter the details for the file server and click Save.
  16. You can now add a project. From Activate –> Sensitive Data Analysis, create a new project.
  17. Add a File System data source to the project. Select the File Server you added to the Inventory earlier.
  18. On the configuration page, enter the credentials and paths as required. Click Finish.
  19. Collection should start immediately; you can view the current progress back on the project data sources page.
  20. Once the analysis has completed, navigate back to Activate –> Sensitive Data Analysis –> (Project Name) to see the results!

Example Results

Once your data has been analysed, you start to get real insight into the levels of PII across your dataset. Some examples are shown below:

This shows a dashboard view of discovered sensitive data for a given dataset:


This shows results filtered by the exception “Accessible by everyone”:

This shows results filtered by files containing IP addresses (an example of PII)


The following options are available for discovered files:



Activate – File Storage Optimization

Commvault describes Activate as their “flagship product for turning information into action”. In recent service packs a lot of work has gone into the functionality and usability of the three main areas of this feature set:

  • File Storage Optimization
  • Sensitive Data Governance
  • Compliance Search and eDiscovery

With Forbes estimating that 175 zettabytes of data will be generated annually by 2025, it should come as no surprise that a significant sprawl of unstructured data can be seen across many companies’ file systems. File Storage Optimization aims to provide insight into the data and optimize for volume, availability, performance, and risk. This post will focus on File Storage Optimization.

Supported Data Sources

Multiple data sources are supported, with more being added all the time. At present the following data sources are supported:

  • File System (Agent or NAS data source)
  • SharePoint (from backup data only)
  • OneDrive
  • Google Drive (from backup data only)
  • Gmail (from backup data only)


Prerequisites

  • Commvault V11 SP18
  • Activate License
  • Web Server Installed and configured
    • For a lab or small environment, you can make use of the existing CommServe web server. For larger environments, install both the Index Store and Web Server packages on a dedicated host.
  • Name Server configured in Commvault.
  • Index Store Package Installed on Index Node

Procedure – Guided Setup – File Storage Optimization

In order to analyse data and view the dashboards, you’ll first need to create a data classification plan and inventory. This is all configured using one of the Guided Setup wizards.

From the Command Center, select Guided Setup

Select Activate then Sensitive Data Analysis.


Select Use existing Index Server and give the plan a name.


On the Content Search page, check Enable and ensure only Metadata is selected. Click Next.


On the next page ensure Entity detection is unchecked. This is used for finding PII (Personally Identifiable Information) data and is not necessary for this guide. Click Next.


On the Advanced Options page, adjust the options as required. Keep in mind these are used for Content Indexing and Entity extraction, so they won’t make much difference for FS optimization tasks. Exclude Paths will be adhered to, so adjust if needed. Click Save. Please note: I had issues here using Chrome; the Save button was unresponsive. Switching over to Edge worked fine.

Inventories are the logical groupings of resources from your CommCell environment.

Enter a name of the Inventory you wish to create and select the Index server used earlier. Click Save.


Procedure – Adding a Client Data Source

From the Command Center, select Activate –> File Storage Optimization.


Click Add Client. Select the Plan and Inventory created earlier. Click Next.


Select the client(s) containing the data you wish to analyse. Click Next.

On the Select File Server page, select the source country and Type. Here you have two options:

  1. To use the data collected from a content indexing job or a backup job, click Analyze from backup.
  2. To crawl data from a local directory that is not content indexed or backed up, click Analyze from source, and in the Directory path box, enter the path to the data on the server you want to analyse.

For this example, we’ll use data that we have previously protected. Select Analyze from backup.


Click Finish.

An Analytics job should run within the next 24 hours. If you want to kick this off manually, it can be done via the Java GUI by right-clicking the client and selecting All Tasks –> Run Analytics.


Viewing Reports

Once the Analytics job has run, you’ll be able to view the results via Activate –> File Storage Optimization. From here you have a range of dashboards as shown below:


Once you have selected a View, you can choose which clients (you can select multiple) you wish to view the analysis of:


Some example reports are shown below:

Size Distribution


Tree Size


In future posts we’ll go through the other areas of Activate: Sensitive Data Governance and Compliance Search & eDiscovery.




Protecting VM Workloads in Azure using Managed Identity authentication with Azure Active Directory

When protecting VM workloads in Azure you have two deployment options for Azure Resource manager deployments:

  1. Traditional method of deployment with Azure Active Directory where you must set up the application and tenant. This is a less secure method of deployment and requires more parameters when configuring the pseudoclient in Commvault. This process is covered in my previous post.
  2. Managed Identity authentication with Azure Active Directory. This is a more secure method of deployment. Using this method ensures that your Azure subscription is accessed only from authorized Managed Identity-enabled virtual machines. This requires only the subscription Id and proxy details for the pseudoclient however the VSA proxy must be located on the Azure platform.

For this post we are going with the more secure option (2). The following prerequisites should be in place before you begin:

  • Azure Subscription
    • Take a note of the Subscription ID
    • User with Service Administrator capabilities.
  • Azure VSA Proxy/Proxies with Commvault software installed, connected to your CommServe
    • Deploy the VSA proxy and MediaAgent on virtual machines in the Azure cloud.
    • Deploy the VSA proxy on an Azure VM that is optimized for I/O intensive workloads to support faster backups. See Sizes for virtual machines in Azure for more information.
    • Enable Azure accelerated networking on the VSA proxy/MediaAgent machines in Azure. This step must be completed at the time of deploying the virtual machine.
    • Enable service endpoints for Microsoft Storage on the Azure virtual network subnet where the proxy and MediaAgent are connected. This will ensure that all network traffic from the proxy machine to the Azure storage account is securely flowing through the Microsoft Azure backbone network.
    • Commvault official documentation also recommends enabling CBT; unfortunately this was only available for unmanaged disks, which in most cases are not used. EDIT: This is now available for managed disks as of SP18.
  • MediaAgent with associated storage and (optional) GDDB.
  • Storage Policy configured


Procedure

  1. From the Azure portal, go to Virtual Machines and select the VM on which you have the VSA installed. Record the Resource Group and Subscription ID.
  2. Click the Identity tab. Set the System Assigned status to On.
  3. Click Yes at the prompt.
  4. Repeat steps 1-3 for any additional VSA proxies.
  5. Navigate to the Subscriptions area of the Azure portal.
  6. Select the subscription in which the VSA proxy VM resides.
  7. Click Access Control (IAM), then Add.
  8. Select Add Role Assignment.
    1. Select Contributor (more restrictive permissions can be achieved by following the documentation here).
    2. Assign Access to: Virtual Machine. In my case the MA and VSA are on the same VM; these would usually be separated for large deployments.
    3. Subscription: the subscription where the VSA resides.
    4. Select: the VMs with a System Assigned status should be listed here; select your VSA VM.
  9. Click Save.
  10. Open the Commvault Console and add an Azure Virtual Machine client.
  11. Select Connect Using Managed Service Identity. Enter the subscription ID and the VSA proxy or proxies configured earlier.
  12. Click OK.
  13. You should now be able to configure subclients and associate them with your chosen storage policy. If you are protecting the virtual machines to blob storage, ensure the Virtual Network attached to your VSA and MA has a service endpoint configured.
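The portal steps above can also be scripted. Below is a minimal sketch using the Az PowerShell module — it assumes the module is installed and you have signed in, and the resource group and VM names are examples, not values from this environment:

```powershell
# Sketch only: enable a system-assigned managed identity on the VSA VM and
# grant it Contributor on the subscription. Names are examples.
Connect-AzAccount
$sub = (Get-AzContext).Subscription.Id

# Turn on the system-assigned managed identity for the VSA proxy VM
$vm = Get-AzVM -ResourceGroupName 'rg-commvault' -Name 'vsa-proxy-01'
Update-AzVM -ResourceGroupName 'rg-commvault' -VM $vm -IdentityType SystemAssigned

# Refresh the VM object to pick up the new identity's principal ID,
# then assign the Contributor role at subscription scope
$vm = Get-AzVM -ResourceGroupName 'rg-commvault' -Name 'vsa-proxy-01'
New-AzRoleAssignment -ObjectId $vm.Identity.PrincipalId `
    -RoleDefinitionName 'Contributor' -Scope "/subscriptions/$sub"
```

Repeat the identity and role-assignment steps for each additional VSA proxy, as in the portal procedure.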



Protecting VM workloads in Azure


Commvault’s ability to protect workloads in public cloud is nothing new, however with the increased popularity of hybrid cloud architectures it makes sense to protect both on & off-premises workloads through a single management platform.

Block-level protection of Azure virtual machines has been available since v11 SP4, further enhanced by SP5 with the introduction of CBT (Changed Block Tracking) and by SP10 with the support for managed disks. At present CBT is available for unmanaged disks only but is roadmapped to support managed disks soon.




The following items should be in place prior to implementing this solution:

  • Commvault infrastructure/test lab installation. V11 SP14 or above.
  • Azure Subscription (keep in mind you have a limit of 4 vCPU cores on a trial)
    • User credentials with Service Administrator capabilities, for logging in to your Azure account.
  • Resource Group created with:
    • Virtual Network Created
      • Azure Virtual Network configured with a Service Endpoint for Storage on the default subnet (or subnet used for the VSA VM)
    • Network Security Group allowing remote access and Commvault firewall ports access to Azure VSA. Typically this would be ports 3389 (RDP) and 8400-8403 (Commvault).
  • Azure Virtual machine with MediaAgent and Virtual Server Agent (VSA) software installed & connected to on-premises CommServe.
    • Connectivity verified with “Check Readiness” function.
    • Best Practice for this AzureVM can be found here.
    • Ensure a Premium SSD disk is allocated for the GDDB. This should be formatted and have a drive letter assigned.
    • Ensure your VSA can resolve the required Azure endpoints. If you are using your own internal DNS this may not be the case. More detail here.

It is also advisable to read the cloud architecture guide located here to ensure you have the latest best practice information.


Configure VSA

To configure the Azure VM to function as a VSA:

  1. Configure an Application and Tenant for Azure Resource Manager
    1. Note down your Azure account subscription ID
    2. Submit a new Application Registration
      You can choose the name and Sign-In URL, keep it simple. This will be used in a later step to reference this app registration.
    3. Adjust the required permissions
      1. Select the new application and select the settings blade.
        From the Required Permissions tab select Add:
      2. Click Select an API
        Find “Windows Azure Service Management API” and select it.
      3. Select “Access Azure Service Management as organization users (preview)” and click Select.
    4. Create a new Key by selecting the Keys blade.
      1. Give the Key a name and expiry and choose Save. Take note of the generated key value (it’s the only chance you’ll get).
    5. Enter the subscriptions tab.
      1. Select Access Control (IAM) and then Add a role assignment.
        Select contributor & the webapp you created earlier and click save.
        NOTE: You can add a custom JSON role here, see the official documentation for details.
    6. To configure the VSA pseudoclient within Commvault (Next Step) you will need:
      1. Your subscription ID
      2. The Key you assigned to the application.
      3. Your Tenant ID. This can be found under Azure Active Directory –> Properties –> Directory ID
  2. From the CommCell Console, create a new Azure Virtualization client.
    1. Enter the credentials as follows:
      Subscription ID: From Subscription Properties
      Tenant ID: From Azure Active Directory
      Application ID: From App Registrations
      App Password: The Key created during App Registration.
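For reference, the application and tenant setup can be approximated from the command line with the Az PowerShell module. This is a sketch under assumptions — the display name is an example, the exact parameters and return properties vary between Az module versions, and the Service Management API permission still needs granting in the portal:

```powershell
# Sketch: create the app registration, a client secret, a service principal,
# and a Contributor role assignment. The display name is an example.
$app = New-AzADApplication -DisplayName 'commvault-vsa'
$secret = New-AzADAppCredential -ObjectId $app.Id -EndDate (Get-Date).AddYears(1)
$sp = New-AzADServicePrincipal -ApplicationId $app.AppId
New-AzRoleAssignment -ApplicationId $app.AppId -RoleDefinitionName 'Contributor'

# The four values needed for the Commvault pseudoclient:
"Subscription ID : $((Get-AzContext).Subscription.Id)"
"Tenant ID       : $((Get-AzContext).Tenant.Id)"
"Application ID  : $($app.AppId)"
"App Password    : $($secret.SecretText)"   # property name may differ on older Az versions
```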

Configure Disk Library

We are using Azure Blob (Binary Large Object) storage to store the protected VSA data.

  1. From the Azure Portal, Select Storage Accounts.
  2. Create a New Storage Account

    1. The name should be unique (and will be used when creating the cloud disk library later on).
    2. Account Kind should be Blob Storage.
    3. Replication can be chosen based on requirements; for a lab, LRS is fine.
    4. Hot is best for primary backup targets.
    5. Click Review + Create. (You can add Tags & encryption options etc via advanced options if desired)
    6. Click Create
  3. Go to the newly created blob storage account and select Keys.
  4. Select one of the access keys and copy for use later.
  5. Create a New container within the blob storage:
    Configure as follows (choose your own name)
  6. From the CommCell Console, create a new Cloud Disk library, configure as follows:

    1. MediaAgent: Your Azure VSA/MediaAgent
    2. Service Host: leave default (in most cases)
    3. Access Key ID: The Key you copied earlier
    4. Account Name: The unique name you created earlier.
    5. Click Detect. Select the container you created earlier.
    6. Click OK.
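The storage account and container steps can likewise be scripted with the Az module. A sketch, assuming you are signed in; the resource group, location, account, and container names are examples, and the account name must be globally unique:

```powershell
# Sketch: create a blob storage account, fetch an access key, and create
# the container Commvault will write to. Names are examples.
$rg   = 'rg-commvault'
$name = 'cvbloblib01'   # must be globally unique across Azure
New-AzStorageAccount -ResourceGroupName $rg -Name $name -Location 'uksouth' `
    -SkuName Standard_LRS -Kind BlobStorage -AccessTier Hot

# Grab an access key for the Commvault cloud library configuration
$key = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $name)[0].Value

# Create the target container
$ctx = New-AzStorageContext -StorageAccountName $name -StorageAccountKey $key
New-AzStorageContainer -Name 'commvault-backups' -Context $ctx

"Account: $name"
"Key    : $key"
```

The account name and key printed at the end map directly onto the Account Name and Access Key ID fields in the cloud disk library dialog.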

Configure Global Deduplication Database (GDDB)

To make the most efficient use of your newly created Blob library, you’re gonna want it deduplicated. To configure the GDDB you’ll use the standard process as follows:

  1. From Storage Resources, right-click Deduplication Engines and select New Global Deduplication Policy.
  2. Give it a name and click Next.
  3. Select the previously created BLOB library and click Next.
  4. Select the Azure VSA MediaAgent and click Next.
  5. Choose whether you would like to encrypt your data; keep in mind that Azure blob storage is encrypted by default anyway.
  6. Click Next until the Specify location page appears. Specify the location of the Premium SSD allocated for the GDDB.
  7. Click Next until the process is completed. As this is a cloud library, the deduplication block size will automatically be set to 512 KB.

Configure Storage Policies

You are now free to create a storage policy using the new Global Dedupe policy as its primary. You can also (from SP14) add this GDDB as an additional copy in existing 128 KB-based policies. See Change in Default Block Size on this link. I won’t go into the details of creating a storage policy here; if you need a guide, check out Commvault’s Books Online.

Create subclients and kick off your first backup!

You created your Azure pseudoclient earlier on; now it’s time to create a subclient to protect your selected VMs. The default subclient will automatically pick up anything it has permission to see, so it’s usually best to create separate subclients for granular control of retention & scheduling.

Azure subclients can select their content by:

  • Name (Wildcard if needed)
  • Resource Group
  • Region
  • Storage Account
  • Tag
  • Power State
  • A combination of the above 🙂

How you organise things depends on your environment. Tags can work well: once set up, backup requirements can be applied simply by tagging new Azure VMs at creation time.

Associate your subclient with your Azure Storage Policy and kick it off. If you want higher throughput try increasing the instance size or adding additional proxies.




Creating a Site-to-Site VPN between your Lab & Azure

Almost every project I am involved in demands a cloud component in the design. Typically this will be with either the MS Azure or AWS public clouds; however, it is not uncommon for Google Cloud, Alibaba or Oracle cloud offerings to require consideration. In order to stay current with these technologies it is necessary to practice or “lab” these skills, and what better way to make a start than to connect your existing lab to an Azure subscription?

In this post I will:

  • Configure the necessary Azure networking components.
  • Deploy an Azure VPN Gateway
  • Deploy an Azure virtual machine
  • Configure a Server 2016 Routing & Remote Access (RRAS) server in my home lab
  • Demonstrate connectivity between the two environments.



A new Azure virtual network will be created with two subnets; the first (default) to be used by the Azure virtual machines & the second to be used by the “Gateway Subnet”. A Virtual machine will be created in the default subnet.

A Server 2016 RRAS server will be installed in my home lab & configured to connect to the public IP of the Azure VPN Gateway. To ensure both the home lab & Azure can communicate, static routes will be introduced so that traffic is routed correctly.


Prerequisites

  • An Azure subscription
  • A home lab with a domain configured
  • A new Server 2016 virtual machine deployed.
    • Configure 2 vNICs on the virtual machine
      • Internal
        • This should have a static address in your home lab address range.
        • No default gateway
      • External
        • This should have a static address in your home address range.
        • The default gateway should be set to your standard gateway.
  • Your home router should be configured to allow VPN pass-through.
    • Configure ports UDP 500 & UDP 4500 to forward through to your RRAS VM.


Configure Azure Networking

  1. From the Azure portal create a Resource group in which to place the Azure components. 
  2. Create a new Virtual Network, use the following fields as a guide:
    1. Name: CVLab
    2. Address Space:
    3. Subscription: Your Subscription/Free Trial
    4. Resource Group: As created earlier.
    5. Location: As Desired
    6. Subnet
      1. Name: default
      2. Address Range:
    7. DDoS Protection: Standard
    8. Service Endpoints: Disabled
    9. Firewall: Disabled
  3. Once the virtual network has deployed, create a Gateway subnet:
    Ensure the Gateway subnet is configured accordingly:

    1. Address Range:
    2. Route Table – None
    3. Services None
    4. Delegate subnet: None
  4. Add a virtual network gateway. This will be used by your RRAS server as a VPN connection target.
    Configure as follows:

    1. Name: CVGateway
    2. Type: VPN
    3. VPN Type: Route Based
    4. Virtual Network: CVLab
    5. Public IP: Create New
      1. This is auto-generated and will be the endpoint targeted by your on-premises RRAS server.
    6. Public IP Name: CVPublicIP
    7. Subscription: Your Subscription/Free Trial
    8. Location: Same as where you created your vNet.
  5. Create a local Network Gateway. This will be used to allow your Azure VMs to connect to your on-premises VMs
    Configure as follows:

    1. Name: CVLabGateway
    2. IP Address: The IP Address of your home internet connection
      1. NOTE: if you have a dynamic IP address, this IP will need to be updated whenever it changes.
    3. Address space: Your lab (on-premises) network. You may add multiple address spaces here if appropriate.
    4. Subscription: Your Subscription/Free Trial
    5. Resource Group: Your Resource Group
    6. Location: Same as where you created your vNet.
  6. Once the local gateway has finished deploying (can take a while), you can create a connection object.
    Configure as follows:

    1. Name: CVLabHomeCon
    2. Virtual Network Gateway: CVGateway
    3. PSK: Make one up, the longer and more complicated the better. This will be used by your RRAS Server.
    4. Resource Group: Your Resource Group
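The whole networking build above can be condensed into an Az PowerShell sketch. All names, address spaces, the local public IP, and the PSK below are examples — substitute your own — and note the gateway deployment itself can take 30+ minutes:

```powershell
# Sketch: vNet + gateway subnet, VPN gateway, local network gateway, and the
# site-to-site connection. All names/addresses are examples.
$rg = 'rg-cvlab'; $loc = 'uksouth'
$default  = New-AzVirtualNetworkSubnetConfig -Name 'default' -AddressPrefix '10.1.0.0/24'
$gwSubnet = New-AzVirtualNetworkSubnetConfig -Name 'GatewaySubnet' -AddressPrefix '10.1.255.0/27'
$vnet = New-AzVirtualNetwork -Name 'CVLab' -ResourceGroupName $rg -Location $loc `
    -AddressPrefix '10.1.0.0/16' -Subnet $default, $gwSubnet

# Public IP + virtual network gateway (the target for the RRAS connection)
$pip = New-AzPublicIpAddress -Name 'CVPublicIP' -ResourceGroupName $rg -Location $loc -AllocationMethod Dynamic
$ipconf = New-AzVirtualNetworkGatewayIpConfig -Name 'gwip' `
    -SubnetId ($vnet.Subnets | Where-Object Name -eq 'GatewaySubnet').Id -PublicIpAddressId $pip.Id
New-AzVirtualNetworkGateway -Name 'CVGateway' -ResourceGroupName $rg -Location $loc `
    -IpConfigurations $ipconf -GatewayType Vpn -VpnType RouteBased

# Local network gateway = your home lab (its public IP and on-premises range)
$local = New-AzLocalNetworkGateway -Name 'CVLabGateway' -ResourceGroupName $rg -Location $loc `
    -GatewayIpAddress '203.0.113.10' -AddressPrefix '192.168.0.0/24'

# Site-to-site connection secured with the pre-shared key
$gw = Get-AzVirtualNetworkGateway -Name 'CVGateway' -ResourceGroupName $rg
New-AzVirtualNetworkGatewayConnection -Name 'CVLabHomeCon' -ResourceGroupName $rg -Location $loc `
    -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $local `
    -ConnectionType IPsec -SharedKey 'use-a-long-random-psk'
```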

Configure RRAS

  1. Install Routing & Remote Access on your Server 2016 VM
    1. From server manager, add Roles & Features
    2. Click next through until you reach Roles. Select Remote Access and click Next until you reach the Select Role Services page.
    3. Select Routing and DirectAccess and VPN (RAS). Click Add Features when prompted.
    4. Click Next until the confirm page, then click Install.
  2. Once RRAS is installed, click Open the getting started wizard.
  3. Select Deploy VPN Only
  4. Right-click the name of your server in the RRAS console and select Configure and Enable Routing and Remote Access.
  5. On the Configuration page, click Custom configuration.
  6. Select VPN Access and LAN Routing. Click Next and Finish. If warned about the Windows Firewall, click OK.
  7. Click Start Service when prompted.
  8. Create a new demand dial interface as shown below:

    1. Name: Azure On-Demand
    2. Connection Type: VPN
    3. Type: IKEv2
    4. Host Name: The public IP of your Azure Virtual Network Gateway:
    5. Protocols and Security:
      1. Route IP Packets on this interface
    6. Static Routes:
      1. Destination:
      2. Mask:
      3. Metric: 10
    7. Dial Out Credentials – just write “Nothing” in the Username field and click Next. We’ll be using the PSK created earlier.
  9. Once the interface has been created, right click on the Azure On-Demand interface and choose properties.
  10. Select Security and Use preshared key for authentication. Enter your PSK in the box.
    Click OK
  11. Right-click your connection and choose Connect.
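If you prefer to script the RRAS side, the RemoteAccess PowerShell module covers the same steps. A sketch under assumptions — the gateway public IP, the PSK, and the Azure address space are examples:

```powershell
# Sketch: install RRAS, configure site-to-site VPN, and add the IKEv2
# demand-dial interface to Azure. All values are examples.
Install-WindowsFeature RemoteAccess, Routing -IncludeManagementTools
Install-RemoteAccess -VpnType VpnS2S

# Demand-dial interface to the Azure virtual network gateway's public IP;
# the IPv4Subnet entry adds the static route to the Azure vNet (metric 10)
Add-VpnS2SInterface -Name 'Azure On-Demand' -Protocol IKEv2 `
    -Destination '20.0.0.4' -AuthenticationMethod PSKOnly `
    -SharedSecret 'use-a-long-random-psk' `
    -IPv4Subnet '10.1.0.0/16:10' -Persistent

# Bring the tunnel up (equivalent to right-click -> Connect)
Connect-VpnS2SInterface -Name 'Azure On-Demand'
```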

Configure Azure VM

  1. Back in the Azure portal, deploy a new virtual machine. The size is up to you, but ensure it has the following properties:
    1. Region: Same as where you deployed the previous components
    2. OS: Windows Server
    3. Public Inbound Ports: None
    4. Virtual Network: CVLab
    5. Subnet: default
    6. The other areas can be left as default or adjusted if necessary.
  2. Wait for the VM to be deployed then proceed.
  3. From the Networking tab on the new VM, create a new rule to allow traffic from your home lab network. This can be for specific ports or a blanket rule between your home & Azure environments. Ensure at least RDP (3389) is allowed. In the below example I have allowed all traffic from my local lab subnet.

Configure Static Routes

In the current state, your RRAS server will be able to connect to your Azure VM, as the required route was added by the RRAS configuration. The next step is to add routes to the other VMs (both Azure and local lab) to allow communication between subnets. The route add commands would be written as follows:

For local lab VMs (substitute your Azure subnet, its mask, and the RRAS server’s internal IP for the placeholders):
ROUTE ADD <azure-subnet> MASK <subnet-mask> <rras-internal-ip> METRIC 2 -p
For Azure VMs (substitute your lab subnet, its mask, and the appropriate gateway IP):
ROUTE ADD <lab-subnet> MASK <subnet-mask> <gateway-ip> METRIC 2 -p
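The same persistent route can be added with the NetTCPIP PowerShell module instead of ROUTE ADD. A sketch — the destination prefix, interface alias, and next-hop address are example values:

```powershell
# Sketch: persistent static route to the Azure subnet via the RRAS server's
# internal IP. Addresses and interface alias are examples.
New-NetRoute -DestinationPrefix '10.1.0.0/16' -InterfaceAlias 'Ethernet' `
    -NextHop '192.168.0.250' -RouteMetric 2
```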

Ping (ICMP) is not allowed through the Windows Firewall by default; however, RDP on the Azure VMs is. From your RRAS server, RDP to the private IP of your new Azure VM; if you get a connection, that proves RRAS is working. You will need to add the appropriate static routes to the test VMs before you can repeat the same test outside of the RRAS VM.

Next Steps

If you would like to make this Azure VM (and others) part of your domain for an extended period, it would be advisable to get a few other things tidied up, for example:

  • DNS Zones
  • Centralising the static Routes
    • Via Group Policy
    • Via Physical/Virtual network hardware.
  • Locking down the Azure ports using  network security groups.



Configuring o365 mailbox protection with Commvault

With each new service pack released, the process for protecting mailbox content hosted in Microsoft o365 is further refined.  The release of SP12 brings the following changes:

  • Combined & simplified agent install. Rather than choosing “Mailbox” or “Database” etc the binaries are combined into a single “Exchange” client.
  • If you need the OWA proxy enabler or Offline mining tool, these are available as separate installs.
  • No need for the scheduled tasks to update the service account permissions on new mailboxes.

This post will focus on “Exchange Mailbox Agent User Mailbox Office 365 with Exchange Using Azure AD Environment” as detailed on the official documentation here.


There are a few things to set up before you can start protecting the mailboxes. These are summarised below and detailed in the following sections.


  • Server to install the Commvault Exchange agent
  • Administration account for Office365
  • Exchange Online Service account
    • Must be an online mailbox
    • Setup via
  • Local System Account
    • Member of the local Administrators group on the machine


  • Index Server
  • Job Results Directory
    • Must be a UNC Path
  • Mailbox Configuration Policies
  • Storage Policy


Microsoft Tenant Configuration

  1. Connect to o365 with PowerShell:
    $UserCredential = Get-Credential
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
    Import-PSSession $Session

    You’ll be prompted for a username and password, use the Administration account for Office365 mentioned in the prerequisites.

  2. Run the following commands:
    Set-ExecutionPolicy RemoteSigned
    Enable-OrganizationCustomization

    You may see a warning at this point. Not a problem, just continue to the next step.

  3. You now need to provide the service account (the one with the online mailbox mentioned earlier) with view only recipient permissions.
    New-RoleGroup -Name "ExchangeonlinebackupGrp" -Roles "ApplicationImpersonation", "View-Only Recipients" -Members svcCommvault
  4. Register the o365 application with Exchange using Azure AD. Do this via the Azure portal:
    1. Go to Azure Active Directory (left hand side)
    2. Choose App Registrations
    3. Select New Application Registration
    4. Complete the fields as follows (feel free to use a different name)
    5. Click Create.
    6. Note down the Application ID (you’ll need this later to set up the pseudoclient)
    7. Click the Settings button once the app is created, Select the properties button on the right hand side.
    8. Scroll to the bottom and change Multi-tenanted to yes.
    9. Click Save
    10. Select Settings–>Keys
    11. Create a new key. Complete the form as follows, adjusting the first two fields as you see fit:
      Click Save.
    12. Copy the Key value & description, you’ll need this later. May be worth remembering the expiry date too.
    13. Now select the Required Permissions menu. Click Add.
    14. Choose Select an API then Microsoft Graph
      Click Select.
    15. Scroll down to Read Directory data, check the box and click Select.
    16. Click Done then Grant Permissions. Click Yes when prompted.
    17. On the left hand side, click Azure Active Directory then Properties. Note down the value in the Directory ID field.

Commvault Configuration

Now it’s time to ensure you have the Exchange Policies, Index Server, Job Results Folder & Storage Policy set up. The latter three of these tasks are already well documented, however the following should be noted:

  • The Job Results directory needs to be shared (i.e. a UNC path) and visible to all mailbox backup proxies.
  • Retention for the mailboxes is governed by the Retention Policy rather than by the Storage Policy (as it is with other agents). For this reason it is worth having a separate storage policy for the mailbox backups.
  • The mailbox agent index server should not be the MediaAgent responsible for your library and storage policies.
  • The mailbox pseudoclient to index server relationship is 1:1. It is possible, via a registry key, to have multiple pseudoclients use the same index server; however, if the multiple pseudoclients have any crossover you will very likely experience data loss.
  • Review the index store requirements before deploying the index server. If you’re doing this in a lab you can usually start small and ramp up the specs on-the-fly.
  • You must have a web server installed in your environment. Typical CommCells have this installed on the CommServe, however larger CommCells split this role out to a dedicated server.

Exchange Policies

This is a copy from my previous post but the information is still valid:

The four new policy types are as follows:

  • Archiving – Archive is the new Backup. This policy dictates what messages will be protected; it has no effect on stubbing.
  • Cleanup – If you are archiving, this is where it is configured.
  • Retention – Primary Copy retention is configured here and will override any retention settings configured in the storage policy. Secondary copies will still adhere to the Storage Policy values.
  • Journal – The new compliance archive. Use this for journal mailboxes.

Policies are configured under Policies, Configuration Policies, Exchange policies as shown below:


Only configure the policies you need, for a standard mailbox backup (no archive) setup, your policies listing may look like this:


Creating the Index Server

To create the logical Index Server (assuming you’ve installed the index store package) do the following:

  1. From the Commvault Console, right click Client Computers –> Index Servers and select New Index Server.
  2. Complete the fields on the General tab. If possible, ensure the drive nominated for the Index Directory is formatted with a 64KB block size. The Storage Policy, although optional, is used to back up the index.
  3. On the Roles tab, click Add, select Exchange Index, and move it to the Include field.
  4. On the Nodes tab, add the server on which you installed the Index Store package.
  5. Click OK. There will be a delay while the index is created; you may notice the following status on the bar at the bottom of the screen:
    Cloud Creation in Progress

Creating the Mailbox Client

You’ll need to have the Index server & policies ready before continuing to the mailbox client creation.

  1. Right click the CommServe name at the top of the CommCell Browser on the right hand side and select All Tasks –> Add/Remove Software –> New Client.
  2. Expand Exchange Mailbox and select User Mailbox.
  3. Complete the fields as shown below. Note: I have used the index server to host the job results directory which isn’t best practice, but OK for a lab.
  4. On the access nodes page, select a client (or clients) that has the Commvault Exchange package installed.
  5. On the Environment Type page, choose Exchange Online (Access Through Azure Active Directory).
  6. On the Azure App Details page enter the following:
    1. Application ID:  as noted down earlier
    2. Key Value: As described (The auto generated key)
    3. Azure Directory ID: Noted down earlier
  7. On the service account settings page you’ll need to add 2 accounts:
    1. The Exchange online service account, this is the one we granted permission to earlier.
    2. A local system account. This needs to have local admin rights on your exchange proxy(ies).
  8. Optional: On the Recall Service, enter the URL for your web server as shown below. This is only used if Archiving or Content store viewer is implemented.
  9. Click Finish!
  10. To test that you are able to query the instance for its mailboxes navigate to MailboxAgent –> ExchangeMailbox –> User Mailbox and click the Mailbox Tab at the bottom of the screen.
  11. Right-click in the white space above the tabs and choose New Association – User.
  12. On the Create New Association box, click Configure then Discover (Choose Yes at prompt).
  13. If your expected list of mailboxes appears, you’re doing it right!

The next step is to configure the auto associations. This can be easily achieved by following the official instructions here.


Securing Access to the Web & Admin consoles using a SAN certificate.

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

HTTPS is used by default to secure the consoles; however, as the certificate is self-signed, users are presented with the following:


Official documentation for the procedure outlined in this post can be found here. The official documentation does not, however, instruct on creating a certificate that will allow the use of multiple SAN (Subject Alternative Name) aliases. This guide will include the necessary fields for creating a SAN cert request, in addition to ensuring the cert complies with the SHA256withRSA signature algorithm.


  • CommCell with Web & Admin Console
  • Certificate Authority
    • This can be either internal or external. If you are allowing access to the web console externally, an external authority is recommended.
    • If you are using an internal CA, ensure it is capable of issuing SHA2 certs rather than the deprecated SHA1. Details can be found here. SHA1 certs are accepted by IE, however Chrome and Firefox will complain.
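A quick way to confirm a CA is issuing SHA2 certs is to inspect the signature algorithm of one of its certificates with openssl. The sketch below generates a throwaway self-signed cert (hostname is a placeholder) purely to demonstrate the check; against a cert issued by your CA you would only run the second command:

```shell
# Generate a throwaway self-signed cert -- demo only, the CN is a placeholder.
openssl req -x509 -newkey rsa:2048 -sha256 -nodes -days 1 \
  -keyout demo.key -out demo.cer -subj "/CN=demo.mydomain.local" 2>/dev/null

# The actual check: a SHA2 cert reports sha256WithRSAEncryption,
# a deprecated SHA1 cert reports sha1WithRSAEncryption.
SIGALG=$(openssl x509 -in demo.cer -noout -text | grep -m1 "Signature Algorithm")
echo "$SIGALG"
```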


  1. In order to ensure the necessary Java version is in place you may need to update Java. The minimum version required is JRE 1.8.0_65; to check which version you have installed, run “java -version” from the command prompt of the web server. The SP10 release of Commvault is packaged with Java version “1.8.0_121” so does not require updating. If you need to update Java, follow the official doco here.
  2. From the web console computer, navigate to the following directory via an elevated command prompt (replace the java version if different):
    C:\Program Files\Java\jre1.8.0_121\bin
  3. You must create a keystore file using the keytool utility contained in the above directory. Run the following command:
    keytool -genkey -alias tomcat -keyalg RSA -sigalg SHA256withRSA -keystore "C:\mykeystore.jks" -keysize 2048
  4. You will be prompted for the following details:
    1. Keystore password: Be sure to make this a strong password and keep it safe.
    2. First & Last Name: This is the domain name for which you are creating a certificate i.e.
    3. Org Unit: Leave this blank or use company name
    4. Org Name: use company name
    5. City or Locality, State, Country Code: As Described
  5. At the “Is this correct?” prompt, type yes.
  6. At the “Enter password for tomcat” prompt, press ENTER to use the same password as the keystore file.
  7. Now we use the created keyfile to generate a csr:
    keytool -certreq -file C:\somename.csr -keystore C:\mykeystore.jks -sigalg SHA256withRSA -alias tomcat -ext SAN=dns:alias1.mydomain.local,dns:alias2.mydomain.local
  8. You will be prompted for the password you created earlier. Once entered the certificate request will be saved as specified (C:\somename.csr if you used the above command).
  9. Use this csr to generate the certificate. Upload the request to your certificate authority and download the signed certificates. The files you will require for the next step are as follows:
    1. root certificate
    2. intermediate certificate (if available)
    3. issued certificate
  10. All of these files can be in either cer or crt format. If you have been given a *.p7b file, don’t panic; it can be opened to show the issued certs. In the example below you can see only the root and issued certificate are available.
  11. Right click the certificates one by one and choose All Tasks –> Export. Use the default option as pictured below to export the individual certs as *.cer files.
  12. Once you have your 2 or 3 certificates, head back to the command prompt. It’s now time to import your root, intermediate (if you have one) and issued certs.
  13. First import the root certificate:
    keytool -import -alias root -keystore C:\mykeystore.jks -trustcacerts -file C:\root.cer
  14. Next the intermediate:
    keytool -import -alias intermed -keystore C:\mykeystore.jks -trustcacerts -file C:\intermediate.cer
  15. And finally the issued SAN cert:
    keytool -import -alias tomcat -keystore C:\mykeystore.jks -trustcacerts -file C:\actual.cer
  16. You now have a keystore (in this case “mykeystore.jks”) which can be used by Commvault to secure its web console traffic. To make Commvault aware of this file you’ll need to copy it into the Program Files/Commvault/ContentStore/Apache/Conf folder on the web server. Once the file is copied, stop the “Commvault Tomcat Service” using Commvault Process Manager.
  17. If you are using a Commvault version prior to v11 SP9 you’ll need to refer to the official doco here, if not carry on…
  18. Using a text editor (such as Notepad++) edit the “server.xml” file. It’s wise at this point to take a copy of the file first in case things go pear-shaped.
  19. Find the line beginning “<Certificate certificateKeystoreFile=” and edit the path and password to match your keystore file:
  20. Ensure the path is correct, if you placed the file in the conf folder as instructed, the path should be “conf/mykeystore.jks”.
  21. Start the Tomcat service using the Commvault Process Manager, give the web server a couple of minutes to start and then browse to the server using one of your specified SAN aliases :-).
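The -ext SAN=... syntax in step 7 gets fiddly once you have several aliases, since each one needs its own dns: prefix. The sketch below (alias names are placeholders) builds the argument from a space-separated list, and notes the keytool -list command that verifies the finished keystore contains the full chain:

```shell
# Placeholder aliases -- replace with the hostnames your users will browse to.
ALIASES="webconsole.mydomain.local adminconsole.mydomain.local"

# Build "dns:a,dns:b,..." -- the value keytool expects after -ext SAN=
SAN=""
for a in $ALIASES; do
  SAN="${SAN:+${SAN},}dns:${a}"
done
echo "SAN=${SAN}"

# Used when generating the CSR (step 7):
#   keytool -certreq -file C:\somename.csr -keystore C:\mykeystore.jks \
#     -sigalg SHA256withRSA -alias tomcat -ext "SAN=${SAN}"
# And after the imports (steps 13-15), verify the chain made it in:
#   keytool -list -v -keystore C:\mykeystore.jks
```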