Configuring o365 mailbox protection with Commvault

With each new service pack released, the process for protecting mailbox content hosted in Microsoft o365 is further refined.  The release of SP12 brings the following changes:

  • Combined & simplified agent install. Rather than choosing “Mailbox”, “Database”, etc., the binaries are combined into a single “Exchange” client.
  • If you need the OWA proxy enabler or Offline mining tool, these are available as separate installs.
  • No need for the scheduled tasks to update the service account permissions on new mailboxes.

This post will focus on “Exchange Mailbox Agent User Mailbox Office 365 with Exchange Using Azure AD Environment” as detailed in the official documentation here.


There are a few things to set up before you can start protecting the mailboxes. These are summarised below and detailed in the following sections.


  • Server to install the Commvault Exchange agent
  • Administration account for Office365
  • Exchange Online Service account
    • Must be an online mailbox
    • Setup via
  • Local System Account
    • Member of Local administrators on machine


  • Index Server
  • Job Results Directory
    • Must be a UNC Path
  • Mailbox Configuration Policies
  • Storage Policy


Microsoft Tenant Configuration

  1. Connect to o365 with PowerShell
    $UserCredential = Get-Credential
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
    Import-PSSession $Session

    You’ll be prompted for a username and password; use the Administration account for Office365 mentioned in the prerequisites.

  2. Run the following commands:
    Set-ExecutionPolicy RemoteSigned
    Enable-OrganizationCustomization

    You may at this point see an error from Enable-OrganizationCustomization advising that the organization is already enabled for customization. Not a problem, just continue to the next step.

  3. You now need to provide the service account (the one with the online mailbox mentioned earlier) with view only recipient permissions.
    New-RoleGroup -Name "ExchangeonlinebackupGrp" -Roles "ApplicationImpersonation", "View-Only Recipients" -Members svcCommvault
  4. Register the o365 application with Exchange using AzureAD. Do this via the Azure portal:
    1. Go to Azure Active Directory (left-hand side)
    2. Choose App Registrations
    3. Select New Application Registration
    4. Complete the fields as follows (feel free to use a different name)
    5. Click Create.
    6. Note down the Application ID (you’ll need this later to set up the pseudoclient)
    7. Once the app is created, click the Settings button, then select Properties on the right-hand side.
    8. Scroll to the bottom and change Multi-tenanted to yes.
    9. Click Save
    10. Select Settings–>Keys
    11. You’re going to create a new key, complete the form as follows, adjust the first two fields as you see fit:
      Click Save.
    12. Copy the Key value & description; you’ll need these later. It may be worth noting the expiry date too.
    13. Now select the Required Permissions menu. Click Add.
    14. Choose Select an API then Microsoft Graph
      Click Select.
    15. Scroll down to Read Directory data, check the box and click Select.
    16. Click Done then Grant Permissions. Click Yes when prompted.
    17. On the left hand side, click Azure Active Directory then Properties. Note down the value in the Directory ID field.

Commvault Configuration

Now it’s time to ensure you have the Exchange Policies, Index Server, Job Results Folder & Storage Policy set up. The latter three of these tasks are already well documented; however, the following should be noted:

  • The Job Results directory needs to be shared (i.e. accessible via a UNC path) and visible to all mailbox backup proxies.
  • Primary copy retention for the mailboxes is governed by the Retention Policy, not by the Storage Policy as with other agents. For this reason it is worth having a separate storage policy for the mailbox backups.
  • The mailbox agent index server should not be the MediaAgent responsible for your library and storage policies.
  • The mailbox pseudoclient to index server is a 1:1 relationship. It is possible via a registry key to have multiple pseudoclients use the same index server; however, if the multiple pseudoclients have any crossover you will very likely experience data loss.
  • Review the index store requirements before deploying the index server. If you’re doing this in a lab you can usually start small and ramp up the specs on-the-fly.
  • You must have a web server installed in your environment. Typical CommCells have this installed on the CommServe; larger CommCells, however, split this role out to a dedicated server.

Exchange Policies

This is a copy from my previous post but the information is still valid:

The four new policy types are as follows:

  • Archiving – Archive is the new Backup. This policy dictates which messages will be protected; it has no effect on stubbing.
  • Cleanup – If you are archiving, this is where it is configured.
  • Retention – Primary Copy retention is configured here and will override any retention settings configured in the storage policy. Secondary copies will still adhere to the Storage Policy values.
  • Journal – The new compliance archive. Use this for journal mailboxes.

Policies are configured under Policies, Configuration Policies, Exchange policies as shown below:


Only configure the policies you need, for a standard mailbox backup (no archive) setup, your policies listing may look like this:


Creating the Index Server

To create the logical Index Server (assuming you’ve installed the index store package) do the following:

  1. From the Commvault Console, right click Client Computers –> Index Servers and select New Index Server.
  2. Complete the fields on the General tab. If possible, ensure the drive nominated for the Index Directory is formatted with a 64 KB block size. The Storage Policy, although optional, is used to back up the index.
  3. On the Roles tab, click Add, select Exchange Index, and move it to the Include field.
  4. On the Nodes tab; add the server on which you installed the Index Store package.
  5. Click OK. There will be a delay while the index is created; you may notice the following status on the bar at the bottom of the screen:
    Cloud creation in Progress

Creating the Mailbox Client

You’ll need to have the Index server & policies ready before continuing to the mailbox client creation.

  1. Right click the CommServe name at the top of the CommCell Browser on the right hand side and select All Tasks –> Add/Remove Software –> New Client.
  2. Expand Exchange Mailbox and select User Mailbox.
  3. Complete the fields as shown below. Note: I have used the index server to host the job results directory which isn’t best practice, but OK for a lab.
  4. On the access nodes page, select a client (or clients) that has the Commvault Exchange package installed.
  5. On the Environment Type page, choose Exchange Online (Access Through Azure Active Directory).
  6. On the Azure App Details page enter the following:
    1. Application ID:  as noted down earlier
    2. Key Value: As described (The auto generated key)
    3. Azure Directory ID: Noted down earlier
  7. On the service account settings page you’ll need to add 2 accounts:
    1. The Exchange online service account, this is the one we granted permission to earlier.
    2. A local system account. This needs to have local admin rights on your exchange proxy(ies).
  8. Optional: On the Recall Service, enter the URL for your web server as shown below. This is only used if Archiving or Content store viewer is implemented.
  9. Click Finish!
  10. To test that you are able to query the instance for its mailboxes navigate to MailboxAgent –> ExchangeMailbox –> User Mailbox and click the Mailbox Tab at the bottom of the screen.
  11. Right Click in the white space above the Tabs and choose New Association – User.
  12. On the Create New Association box, click Configure then Discover (Choose Yes at prompt).
  13. If your expected list of mailboxes appears, you’re doing it right!

The next step is to configure the auto associations. This can be easily achieved by following the official instructions here.



Securing Access to the Web & Admin consoles using a SAN certificate

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

Https is used by default to secure the consoles; however, as it is a self-signed certificate, users are presented with the following:


Official Documentation for the procedure outlined in this post can be found here. The official documentation does not, however, instruct on creating a certificate that will allow the use of multiple SAN (Subject Alternative Name) aliases. This guide will include the necessary fields for creating a SAN cert request, in addition to ensuring the cert complies with the SHA256withRSA signature algorithm.


  • CommCell with Web & Admin Console
  • Certificate Authority
    • This can be either internal or external. If you are allowing access to the web console externally an external authority is recommended.
    • If you are using an internal CA, ensure it is capable of issuing SHA2 certs rather than the deprecated SHA1. Details can be found here. SHA1 certs are accepted by IE; however, Chrome and Firefox will complain.
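As a side note on the SHA1/SHA2 distinction above, the practical difference is digest size. This quick Python sketch is illustrative only and not part of the certificate procedure:

```python
import hashlib

# SHA-1 produces a 160-bit digest and is deprecated for certificate
# signatures; SHA-256 (the "SHA2" family member used by SHA256withRSA)
# produces a 256-bit digest.
sha1_hex = hashlib.sha1(b"example").hexdigest()
sha256_hex = hashlib.sha256(b"example").hexdigest()

print(len(sha1_hex) * 4)    # digest size in bits: 160
print(len(sha256_hex) * 4)  # digest size in bits: 256
```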


  1. In order to ensure the necessary Java versions are in place you may need to update Java. The minimum version required is JRE 1.8.0_65. Check which version you have installed; the SP10 release of Commvault is packaged with Java version “1.8.0_121” so does not require updating. To check your Java version run “java -version” from the command prompt of the web server. If you need to update Java, follow the official doco here.
  2. From the web console computer, navigate to the following directory via an elevated command prompt (replace the java version if different):
    C:\Program Files\Java\jre1.8.0_121\bin
  3. You must create a keystore file using the keytool utility contained in the above directory. Run the following command:
    keytool -genkey -alias tomcat -keyalg RSA -sigalg SHA256withRSA -keystore "C:\mykeystore.jks" -keysize 2048
  4. You will be prompted for the following details:
    1. Keystore password: Be sure to make this a strong password and keep it safe.
    2. First & Last Name: This is the domain name for which you are creating the certificate (the common name users will browse to)
    3. Org Unit: Leave this blank or use company name
    4. Org Name: use company name
    5. City or Locality, State, Country Code: As Described
  5. At the “Is this correct?” prompt, type yes.
  6. At the “Enter password for tomcat” prompt, press ENTER to use the same password as the keystore file.
  7. Now we use the created keyfile to generate a csr:
    keytool -certreq -file C:\somename.csr -keystore C:\mykeystore.jks -keyalg RSA -sigalg SHA256withRSA -alias tomcat -ext SAN=dns:alias1.mydomain.local,dns:alias2.mydomain.local -keysize 2048
  8. You will be prompted for the password you created earlier. Once entered the certificate request will be saved as specified (C:\somename.csr if you used the above command).
  9. Use this csr to generate the certificate. Upload the request to your certificate authority and download the signed certificates. The files you will require for the next step are as follows:
    1. root certificate
    2. intermediate certificate (if available)
    3. issued certificate
  10. All of these files can be in either cer or crt format. If you have been given a *.p7b file, don’t panic; this can be opened to show the issued certs. In the example below you can see only the root and issued certificate are available.
  11. Right click the certificates one by one and choose All Tasks –> Export. Use the default option as pictured below to export the individual certs as *.cer files.
  12. Once you have your 2 or 3 certificates, head back to the command prompt. It’s now time to import your root, intermediate (if you have one) and issued certs.
  13. First import the root certificate:
    keytool -import -alias root -keystore C:\mykeystore.jks -trustcacerts -file C:\root.cer
  14. Next the intermediate:
    keytool -import -alias intermed -keystore C:\mykeystore.jks -trustcacerts -file C:\intermediate.cer
  15. And finally the issued SAN cert:
    keytool -import -alias tomcat -keystore C:\mykeystore.jks -trustcacerts -file C:\actual.cer
  16. You now have a keystore (in this case “mykeystore.jks”) which can be used by Commvault to secure its web console traffic. To make Commvault aware of this file you’ll need to copy it into the Program Files/Commvault/ContentStore/Apache/Conf folder on the web server. Once the file is copied, stop the “Commvault Tomcat Service” using Commvault Process Manager.
  17. If you are using a Commvault version prior to v11 SP9 you’ll need to refer to the official doco here, if not carry on…
  18. Using a text editor (such as Notepad++) edit the “server.xml” file. It’s wise at this point to take a copy of the file first in case things go pear-shaped.
  19. Find the line “<Certificate certificateKeystoreFile=”” and edit the path and password to match your keystore file:
  20. Ensure the path is correct, if you placed the file in the conf folder as instructed, the path should be “conf/mykeystore.jks”.
  21. Start the Tomcat service using the Commvault Process Manager, give the web server a couple of minutes to start and then browse to the server using one of your specified SAN aliases :-).


Protecting Salesforce with Commvault

System administrators are seeing more and more business processes becoming reliant on SaaS solutions. Salesforce is, to many organisations, the lifeblood of their CRM & sales lifecycle, providing efficiencies of customer management essential to an effective organisation.

The mistake many companies make is to assume that just because a solution is “in the cloud” it has the same level of protection as their on-premise systems. In many cases this assumption is false; data is very often replicated (such as the multi-zone replication of AWS S3), but rarely has any sort of historical recovery points. Salesforce, for example, at the time of writing only holds two weeks’ worth of restore points; any changes or deletions not spotted within this time are likely permanent. Another consideration is the fee charged by Salesforce in the event you do have to roll back. This will easily run into thousands of dollars ($10k is what I’ve read), money that could be saved through the configuration of a Salesforce agent.

This capability is still relatively new and may be simplified in later releases. This guide is written with Commvault v11 SP11 in mind. The configuration presently is made up of the following components:

  • Commserve
  • MediaAgent
  • Salesforce Virtual Client (pseudoclient)
  • Cloud Connector (Linux)
  • SQL Server (Optional but required for restoration to Salesforce instance)

Data Flow

Not everything is protected, as the data protection is limited to what is made available via the Salesforce API. For a list of items that are not protected check out Books Online here. The high level steps required to implement the Salesforce backup are as follows:

  1. Ensure you have a supported version of Salesforce
  2. Ensure you have sufficient capacity license available on your Commcell
  3. Configure a Connected App in Salesforce (click here for a guide).
  4. Install the cloud connector on a linux VM
  5. Create a MSSQL database on a separate DB server for use by the cloud connector
  6. Create the Salesforce pseudoclient.
  7. Backup!

Obviously this guide is no substitute for the official documentation; however, hopefully it will aid in testing out the Salesforce backup functionality for yourself. During my testing I used a free Salesforce developer account, available here.

Required information

During the configuration of the Commvault Salesforce pseudoclient you will need to complete the following fields:

  • Client Name: The name which appears in the Commvault Console
  • Instance Name: Usually the same as the client name.
  • Backup Client: The linux proxy with “Cloud Apps” installed
  • Storage Policy: The storage policy through which data will be stored.
  • Number of Data Backup Streams: Start with 1 and scale if required.
  • Salesforce login URL: (You may use a custom URL here if configured)
  • Username: Your salesforce service account
  • Password: As described
  • API Token: The salesforce security token, this can be reset once the service account has been created.
  • Consumer Key: Associated with the Salesforce connected App
  • Consumer Secret: Associated with the Salesforce connected App
  • Download to Cache Path: This should be a folder on the Cloud Connector, plan for 120% of the total salesforce data.
  • Database Type: SQL Server (You can also use PostgreSQL)
  • Database Host: The database server containing the database preconfigured for use by the Commvault Salesforce agent. I had difficulty when using DNS names here; try IP if DNS doesn’t work.
  • Server Instance Name: As described
  • DB Name: Name of the SQL database set up on the SQL Server
  • Database Port: 1433
  • Username: sysadmin user for the configured database
  • Password: As described.


As described above, you will need to plan for both the Cloud Connector cache and SQL Server storage. Required storage is as follows:

In addition to this you should consider the amount of back-end storage required. This should be planned in the same way planning for any additional client is performed.
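The 120% cache figure from the field list above can be turned into a quick sizing calculation. This Python sketch is illustrative, and the 50 GB example org size is hypothetical:

```python
def cache_size_gb(salesforce_data_gb: float, overhead: float = 1.2) -> float:
    """Cloud Connector cache sizing: plan for 120% of total Salesforce data."""
    return salesforce_data_gb * overhead

# e.g. a hypothetical 50 GB Salesforce org needs roughly 60 GB of cache
print(cache_size_gb(50))
```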

Creating a Connected App

  1. Login to Salesforce using your administrator account
  2. Click Setup
  3. On the left select Build then Create –> Apps
  4. Scroll down to Connected Apps and click New
  5. Complete the required fields, ensuring “Enabled OAuth Settings” is enabled.
  6. OAuth Scope should be “Full Access”
  7. Callback URL should be https://test.salesforce.com/services/oauth2/token if you are using a test account. If this is a production config it would be https://login.salesforce.com/services/oauth2/token.
  8. Click Save and then click the connected app name, note the following fields:
    1. Consumer Key
    2. Consumer Secret
    3. Callback URL
      1. Commvault requires this during setup, minus the /services/oauth2/token suffix
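A quick sketch of that trim; the example URL is illustrative of the standard Salesforce OAuth token endpoint:

```python
def strip_oauth_suffix(callback_url: str) -> str:
    """Remove the /services/oauth2/token suffix before giving the URL to Commvault."""
    suffix = "/services/oauth2/token"
    if callback_url.endswith(suffix):
        return callback_url[: -len(suffix)]
    return callback_url

print(strip_oauth_suffix("https://login.salesforce.com/services/oauth2/token"))
# https://login.salesforce.com
```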


Generating the API Key

The API key is associated with the administration account (service account) used by Commvault to connect to Salesforce. To generate it:

  1. Login as the service account.
  2. Click the username at the top of the screen, select “My Settings”
  3. On the left of the screen, select “Reset My Security Token”
  4. Click the button to reset the token, it will be sent to the email address registered to the service account.

Configure the Pseudoclient

You should now have all the components and details required to create a Salesforce pseudoclient (also described as a virtual client). In addition to the items configured above you should have a Linux client installed with the Cloud Apps package, sufficient space on that client for the cached salesforce data & a SQL server with a preconfigured SQL database (and associated credentials).

Steps are as follows:

  1. From the CommCell console, Create a new Salesforce client.
  2. Complete the details on each tab, use the “Required Information” section above if you need clarification on any of the fields. Be sure to Test Connection on the “Connection Details” and “Backup Options” tabs.
  3. Your new client will appear as displayed below. Right click on the default subclient to adjust what is to be protected. You’ll notice the defaultbackupset reflects the name of the service account used.
  4. In the Contents page of the default subclient you have the option to include metadata, selecting this option will give you access to the metadata view when performing a “browse & restore”.

Standard vs Metadata views

Not being a Salesforce administrator I can’t provide much insight into the significance of these two views; I can however provide screenshots of the different browse and restore results:





Next Steps

Now it’s time to run a backup operation, which can be performed by right-clicking on the default subclient and choosing Backup. You’ll notice an incremental backup is automatically performed following the completion of the full backup. Once this has completed you can test the solution by deleting accounts/opportunities etc. and testing the restoration process.

If you are restoring directly to Salesforce you’ll notice the requirement for a configured database. The screenshot below shows the restore options window when restoring directly from the most recent database sync.


If you are restoring from another restore point, you may need to restore from Media, as shown in the screenshot below:


You’ll notice this clears the database details. All restores to Salesforce require a staging database (be that MSSQL or PostgreSQL). You would need to create and specify a SQL database to restore to (from media), prior to that data being restored to your Salesforce instance.

Implementing a floating CommServe name for faster DR recovery

UPDATE: As of v11 SP11 this method is not supported on the CommServe due to possible communication issues. I’ll leave the post up for reference and will update when an alternative method is available.

In the event of losing your primary CommServe (CS) you’ll need to restore your database to the DR CS. This is a simple enough process involving the following (high level) steps:

  1. Ensure your DR CS is installed and at the same patch level as your current (now unavailable) CS.
  2. Restore the latest DR database dump to the DR CommServe using CSRecoveryAssistant.exe
  3. Use the Name Management Wizard to update all clients & MediaAgents to use the DR CS name.

This works well in most scenarios; however, if you are keen to get the process finished as quickly as possible, Step 3 can take some time. Books Online advises that it can take “30 seconds per online physical client and 20 seconds per offline physical client”. If you have thousands of clients, this can be a long time to wait. It is for this reason that changing the CS name on the clients ahead of time (during a defined outage window) can save precious time during an actual disaster.
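Using the Books Online figures quoted above, a back-of-envelope estimate of the Name Management step (the client counts in the example are hypothetical):

```python
def rename_minutes(online_clients: int, offline_clients: int) -> float:
    """Estimate Name Management duration from Books Online's per-client figures:
    ~30 s per online physical client, ~20 s per offline physical client."""
    return (online_clients * 30 + offline_clients * 20) / 60

# e.g. 2000 online and 500 offline clients: ~1167 minutes (about 19.5 hours)
print(rename_minutes(2000, 500))
```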

Official documentation for this procedure can be found here. This document aims to achieve the CommCell rename without the requirement to rename the host name of the CommServe computer. In this example I’ll be renaming cs01.test.local to a floating CNAME, allowing a common name for both internal and external backup clients.

The steps below assume you already have a working single CommServe (no SQL cluster) environment. Before making any changes you should:

  1. Ensure all clients (MediaAgents included) are referencing the CommServe by DNS name, not IP.
  2. Create a CNAME in your internal (and external, if appropriate) DNS services.
  3. Stop all jobs
  4. Disable the scheduler
  5. Perform a DR backup
  6. Ensure there is no current DDB activity on the MediaAgents (Check SIDBEngine.log on the MediaAgents)
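As a rough helper for check 6, the following Python sketch flags a log file that has been written to recently. The SIDBEngine.log path shown in the comment is an assumption and will vary with your install location; inspecting the log contents is still the authoritative check:

```python
from pathlib import Path
import time

def recently_modified(log_path: str, minutes: int = 15) -> bool:
    """True if the file changed within the last `minutes` minutes --
    a crude signal that the DDB engine may still be active."""
    age_seconds = time.time() - Path(log_path).stat().st_mtime
    return age_seconds < minutes * 60

# Hypothetical MediaAgent log location -- adjust for your install path:
# recently_modified(r"C:\Program Files\Commvault\ContentStore\Log Files\SIDBEngine.log")
```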

Before starting, the properties page of your CommCell should display your current actual CommCell host name.


Open a command prompt and navigate to the base directory. Login with the qlogin command.


Run the following command (adjusting instance & the two names as appropriate):

SetPreImagedNames.exe CSNAME -instance Instance001 -hostname youroldfullqualifiedname.local -skipODBC


Close the Commcell properties (if it was still open) and reopen it. You should see the new floating name reflected on the General tab:


Next Step is to update the clients to reference the new name. Restart the Commcell console GUI then open the Commvault Control Panel and select Name Management. Select “Update CommServe for Clients (use this option after DR restore)” and click Next. Ensure the New CommServe name matches your desired FQDN and click Next.


Move all clients requiring the updated name across, be sure to leave the checkbox unchecked.


Some clients may have issues updating, these will need to be investigated individually. Try restarting the services and check client readiness from the CommCell Console. In the case of the PC below it was the Firewall at fault (revealed with help from CVInstallMgr.log).


Check the client properties (you’ll need to refresh the client list first using F5) and you should see that the CommServe hostname has been updated to reflect the new FQDN.


Once all clients are pointing to the new CNAME, you can simply update DNS rather than using name management during a DR failover.



Securing Access to the Web & Admin consoles with Lets Encrypt!

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

Https is used by default to secure the consoles; however, as it is a self-signed certificate, users are presented with the following:


Official Documentation for the procedure outlined in this post can be found here. This guide focuses on using Let’s Encrypt as the certificate authority and is more suited to lab environments due to the 3-month expiry of certificates. I’m looking at ways to allow for auto renewal of public certs (as Let’s Encrypt is designed to do); however, this will give a good starting point.


  • Commvault Commserve installed with web server.
  • ACMESharp PowerShell modules (installation covered in procedure section)
  • Internet access
  • Access to public DNS management (i.e. Route53)

Procedure – Obtain certificate

    1. First we need to install and configure ACMESharp. This will be used to request the certificate from Let’s Encrypt. The steps included below are modified from the official quick start guide here. This post uses the manual method and as such the certificate is only valid for 3 months.
    2. Install the powershell modules using an elevated powershell window. You will be prompted twice, answer the 2 prompts with either “Y” or “A”.
      Install-Module ACMESharp
    3. Install the extension modules. Answer “A” when prompted for each of the extensions.
      Install-Module ACMESharp.Providers.AWS
      Install-Module ACMESharp.Providers.IIS
      Install-Module ACMESharp.Providers.Windows
    4. Enable the extension modules.
      Import-Module ACMESharp
      Enable-ACMEExtensionModule ACMESharp.Providers.AWS
      Enable-ACMEExtensionModule ACMESharp.Providers.IIS
      Enable-ACMEExtensionModule ACMESharp.Providers.Windows
    5. Verify the providers have been added correctly
      Get-ACMEExtensionModule | select Name

      You should receive the following output:

    6. Initialize the ACME vault as follows:
      Initialize-ACMEVault
    7. Register with Let’s Encrypt (substitute your own contact address):
      New-ACMERegistration -Contacts mailto:admin@yourdomain.com -AcceptTos
  1. Submit a new domain identifier. This is the DNS name you wish to secure.
    New-ACMEIdentifier -Dns webconsole.yourdomain.com -Alias dns1
  2. You now need to prove you own the domain. The easiest way to do this is to automate the process using IIS, unfortunately this needs to be using port 80 which is already bound to the tomcat service. The workaround I’m using is to prove I have ownership of DNS.
    Complete-ACMEChallenge dns1 -ChallengeType dns-01 -Handler manual
  3. Run the following command to request the required details to prove DNS ownership.
    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}

    When you get a response similar to the following, you can continue to the next step.

  4. Add the TXT record to your DNS as indicated in the challenge. For Route53 it would appear as follows:
  5. Once you have completed the DNS entry, run the following to submit the challenge:
    Submit-ACMEChallenge dns1 -ChallengeType dns-01
  6. Run the following to check whether the challenge was successful.
    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}

    If successful, you will be presented with the following:

  7. If the status is shown as valid (highlighted above) you can now request & retrieve your new certificate. It is possible to request a SAN (Subject Alternative Name) at this point, however for this example we’re sticking with one. Run the following two commands:
    New-ACMECertificate dns1 -Generate -Alias cert1
    Submit-ACMECertificate cert1
  8. If almost all fields are populated in the response, the certificate is now ready to be stored in the vault
    Update-ACMECertificate cert1
  9. You can now export the certificate in pkcs12 format:
    Get-ACMECertificate cert1 -ExportPkcs12 "C:\certs\cert1.pfx" -CertificatePassword 'myPassw0rd'
  10. Check to ensure the new pfx file is visible in the chosen location.

Procedure – Update Commvault configuration

The next stage is to instruct the web server to use the created certificate bundle as its certificate source.

  1. Stop the Commvault Tomcat process via Process Manager.
  2. Copy the pfx file created earlier to [programdrive]:\Program Files\Commvault\ContentStore\Apache.
  3. Edit conf\server.xml (Notepad++ is a good choice for this)
  4. The official documentation indicates that the default connector redirect port should be adjusted, however new installations should already be configured to redirect to 443. Either way it should look like this:
    <Connector port="80" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="443" server="Commvault WebServer" compression="on" noCompressionUserAgents="gozilla,traviata" compressionMinSize="500" compressableMimeType="text/html,text/json,application/json,text/xml,text/plain,application/javascript,text/css,text/javascript,text/js" useSendfile="false"/>
  5. Add a second connector, beneath the line you have just edited. This will reference the pfx file created earlier.
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="443" URIEncoding="UTF-8" maxPostSize="40960000" maxHttpHeaderSize="1024000" maxThreads="2500" enableLookups="true" SSLEnabled="true" scheme="https" secure="true" server="Commvault WebServer" compression="on" noCompressionUserAgents="gozilla,traviata" compressableMimeType="application/javascript,text/css,text/javascript,text/js" useSendfile="false">
     <SSLHostConfig certificateVerification="none" honorCipherOrder="true" protocols="TLSv1,TLSv1.1,TLSv1.2" ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA">
      <Certificate certificateKeystoreFile="E:\Program Files\Commvault\ContentStore\Apache\cert1.pfx" certificateKeystorePassword="myPassw0rd" certificateKeystoreType="PKCS12"/>
     </SSLHostConfig>
    </Connector>
  6. You can test the certificate by visiting your web console using the web console button in the CommCell console; you’ll still see the certificate error but upon further inspection you should see the certificate issuer is Let’s Encrypt. The next step is to adjust Commvault to use the public domain name to access the web console (ensure your internal DNS is configured to direct the DNS name to the internal IP).
  7. As described here, add the following additional setting to the CommCell.
    Category CommServDB.GxGlobalParam
    Type String
    Value: https://hostname:port/webconsole/clientDetails/
    hostname:port should match the name you associated with the certificate.
  8. Restart the CommCell services. The web console link should now reference the new name.
  9. If you have a windows shortcut to the Admin Console, that will also need its properties adjusted to reflect the new link.

Using a proxy to simplify WAN connections


The majority of large Commvault implementations will require some sort of cross-firewall communication to clients on the WAN. Designing for this can be tricky as you’re opening pathways between trusted and untrusted networks.

As discussed in my previous post “Understanding One-Way Firewalls with Commvault”, it is possible to allow communication by opening specific ports on the firewall and using NAT to connect clients to the Commvault infrastructure. This works well in some situations; however, when multiple infrastructure components require access from external parties, it can get messy.

Utilising a proxy outside of the trusted network (typically in the DMZ) creates a central conduit for all communication between trusted and untrusted networks.


The port shown in the above graphic is 8403 (the default); this can be substituted if required.


Official documentation for the solution can be found here. Commvault offers two methods of configuring the “Proxy in Perimeter Network” topology:

  1. Using a predefined network topology (preferred method)
  2. Using the basic or advanced firewall settings (alternative method)

This post will focus on the preferred method for connecting servers outside the network. Laptop connections require slightly different steps; refer to the official documentation for guidance on those. I will cover the alternative method (using the basic or advanced firewall settings) in a later post.

Before starting you will need to ensure:

  • You have the following client groups configured in Commvault:
    • Trusted Client Group 1: a client group that will initiate connections to the proxy group. This may contain infrastructure components such as the CommServe, MediaAgents, Web Servers etc. Ensure these are added to the group once created.
    • Trusted Client Group 2: an additional client group that will initiate connections to the proxy group. This may contain the servers outside the trusted network, with which the first group communicates via the proxy.
    • Proxy/DMZ Group: the client group that you want to designate as the proxy group. This group should contain the proxy.
  • You have administrative permissions to the above groups.
  • Proxy server will require:
    • A 1 GHz dual-core processor & 8 GB RAM for <1000 clients using encryption
    • A 1 GHz quad-core processor & 8 GB RAM for >1000 clients using encryption
    • The Firewall topology must be completed before installing the proxy.
  • Proxy server should have visibility of the same DNS zones as the infrastructure components.
  • Firewall/NAT rules should be configured as follows:
    • CommCell Components (CommServe, MediaAgents, WebServers, IndexServers) –> Proxy on 8403
    • Proxy –> Internal DNS on 53
    • Proxy –> Internal, ICMP (optional, for testing ping)
    • External Clients –> Proxy on 8403
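Before pushing the topology, the firewall paths listed above can be spot-checked from each side of the firewall. A minimal sketch in Python (the hostnames are placeholders; substitute your own and run it from the host that should initiate each connection):

```python
import socket

def port_open(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def resolves(name: str) -> bool:
    """Return True if the name resolves in the local DNS zone."""
    try:
        socket.gethostbyname(name)
        return True
    except socket.gaierror:
        return False

# Placeholder names -- replace with your proxy / CommServe hostnames.
for name, port in [("proxy.dmz.example.com", 8403),
                   ("commserve.internal.example.com", 8403)]:
    print(name, "resolves:", resolves(name), "port reachable:", port_open(name, port))
```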

Configuration – Firewall & Proxy

  1. Right click Network Topologies and select New Topology. Complete the name, description (optional), Client type (Servers), Topology type (via Proxy), and the three groups configured earlier.
  2. Leave the Make clients from Trusted Client group 1 use proxies for all traffic checkbox unchecked. Click OK.
  3. Create a placeholder client for the proxy: right click the CommCell name and create a new File System client.
  4. Complete the client & host name, click Next, then Finish.
  5. Add the placeholder proxy client to the Proxy/DMZ group and click OK.
  6. Push the firewall configuration to Trusted Client Group 1.
  7. Download the media kit matching your CommServe version (at the time of writing v11 SP10), then begin the software installation on the proxy server, selecting the File System Core package.
  8. At the Configure Roles screen, select Configure as Network Proxy.
  9. At the Firewall Configuration screen, select “CommServe can open…” and click Next.
  10. Complete the Client computer information, ensuring that the Host Name is resolvable by the CommServe.
  11. At the CommServe Information screen, enter the fully qualified name.
  12. Select a port (default is 8403) to be used by the CommServe when establishing communication with the proxy.
  13. Ensure the correct client group is selected at the Additional Configuration screen.
  14. If communication was successfully established, you will see a confirmation message.
  15. Use the Check Readiness function from the CommCell console to ensure communication is working.
  16. If all is well, Check Readiness will report the client as ready.
  17. If not, check the CVD.log on the proxy; it should indicate where the communication breakdown lies.
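The CVD.log check in step 17 can be rough-triaged with a quick scan for failure-looking lines. The keyword list below is my own assumption, not an official Commvault error taxonomy, so extend it as you find patterns:

```python
# Keywords that commonly appear in failed-connection log lines
# (assumed, not an official Commvault list).
KEYWORDS = ("failed", "refused", "timed out", "unreachable", "error")

def suspect_lines(log_lines):
    """Yield (line_number, text) for lines matching any keyword."""
    for n, line in enumerate(log_lines, start=1):
        low = line.lower()
        if any(k in low for k in KEYWORDS):
            yield n, line.rstrip()

# Usage (default Windows install path -- adjust to suit your setup):
# with open(r"C:\Program Files\Commvault\ContentStore\Log Files\CVD.log") as f:
#     for n, line in suspect_lines(f):
#         print(f"{n}: {line}")
```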

Configuration – Client

The client configuration should be relatively simple. Communication to the CommCell will be made via the previously configured proxy.

  1. Start the installation as usual, using the media kit matching your CommServe version (at the time of writing v11 SP10).
  2. Follow the prompts, selecting the agents you wish to install. When you reach the Firewall Configuration screen, select “CommServe is reachable through a proxy”.
  3. Specify the Client Computer information and CommServe Name (Fully Qualified) at the next two screens.
  4. At the Firewall Connection Information screen, complete the port & proxy information. The port should match the configured rule on your perimeter firewall.
  5. Click Next at the Firewall HTTP Proxy Information & Certificate screen.
  6. At the Additional Configuration screen, ensure that Trusted Client group 2 is selected. This ensures the correct firewall rules are pushed to the client during configuration.
  7. At the confirmation page, click Finish. Use the check readiness function from the CommServe to ensure communication is working.

The next step is to test backups. Perform a standard file system backup to ensure data is being protected correctly. If you are testing this in a lab (without firewalls) you can stop the services on the proxy and retry the check readiness; if it fails you have proved that the proxy is being used for data transfer.




Commvault v11 SP10 – New Features

Commvault SP10 is out and the new features deserve a mention:

Server Retirement Workflow

This workflow, once downloaded and installed, will handle the following retirement functions:

  • Disables all activity on the client
  • Updates audit notes
  • Sets the job retention time period
  • Releases the client license
  • (Optional) Triggers an approval email
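The sequence above can be sketched as a simple ordered pipeline with an optional approval gate. The step functions below are hypothetical placeholders, since the real workflow runs inside Commvault rather than through any public API:

```python
def retire_client(client, steps, approve=None):
    """Run retirement steps in order; optionally gate on approval first."""
    if approve is not None and not approve(client):
        return f"{client}: retirement not approved"
    done = [step(client) for step in steps]
    return f"{client}: " + ", ".join(done)

# Placeholder steps mirroring the workflow's documented actions.
steps = [
    lambda c: "activity disabled",
    lambda c: "audit notes updated",
    lambda c: "job retention set",
    lambda c: "license released",
]
print(retire_client("fileserver01", steps))
```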

more info here

Data Encryption Configuration for Global Policies

This is extremely useful and ensures all data deduplicated using the global deduplication (or global secondary) policy will be encrypted. Prior to SP10, encryption had to be specified at either the storage policy copy or the client level.

More info here

SQL Server agent support for SQL 2017 on Windows & Linux

Supported Linux distros are:

  • Red Hat Enterprise Linux 7.3 or 7.4 Workstation, Server, and Desktop
  • SUSE Linux Enterprise Server 12 SP2
  • Ubuntu 16.04 LTS

All Windows versions supported by SQL 2017 will work. The SQL Management Studio Plug-In is also supported for SQL 2016 & 2017.

“Proxyless” IntelliSnap Backups

IntelliSnap backups are becoming more and more relevant given the ever-increasing size of virtual machines; it is not always practical to perform daily disk backups of multi-TB VMs. In previous service packs, IntelliSnap backups required a proxy ESXi host in order to mount the hardware snapshot for the backup copy. SP10 introduces the retrieval of VMFS and CBT information during the snapshot operation. This adds a small time penalty to the snapshot, but considering the time saved during the proxy’s mount and rescan operations, it is inconsequential. For backup copy operations the snapshot is mounted directly to the VSA proxy; file system APIs are then used to stream a copy to disk.
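The trade-off described above can be shown with some back-of-envelope arithmetic. The timings below are invented for illustration only, not measured values:

```python
# Illustrative arithmetic -- all durations (seconds) are made-up
# numbers chosen to show the shape of the trade-off.
def total_time(snapshot, metadata, mount_rescan, stream):
    return snapshot + metadata + mount_rescan + stream

# Pre-SP10: no metadata collection, but a lengthy ESXi proxy mount + rescan.
pre_sp10 = total_time(snapshot=60, metadata=0, mount_rescan=600, stream=1800)
# SP10 proxyless: small metadata penalty, direct mount to the VSA proxy.
sp10 = total_time(snapshot=60, metadata=90, mount_rescan=60, stream=1800)

print(f"pre-SP10: {pre_sp10}s, SP10 proxyless: {sp10}s, saved: {pre_sp10 - sp10}s")
```

Even with a generous metadata-collection penalty, the saving on the mount/rescan side dominates, which is the point made above.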

More info here

SP10 can be downloaded now, or obtained via the CommCell console after January 15th.

For more information and to read about additional new features, click here.