Protecting VM workloads in Azure

Introduction

Commvault’s ability to protect workloads in the public cloud is nothing new; however, with the increased popularity of hybrid cloud architectures it makes sense to protect both on- and off-premises workloads through a single management platform.

Block-level protection of Azure virtual machines has been available since v11 SP4, further enhanced in SP5 with the introduction of CBT (Changed Block Tracking) and in SP10 with support for managed disks. At present CBT is available for unmanaged disks only, but support for managed disks is on the roadmap.

Overview

Overview.PNG

Prerequisites

The following items should be in place prior to implementing this solution:

  • Commvault infrastructure/test lab installation. V11 SP14 or above.
  • Azure Subscription (keep in mind a trial is limited to 4 vCores)
    • User credentials with Service Administrator capabilities, for logging in to your Azure account.
  • Resource Group created with:
    • Virtual Network Created
      • Azure Virtual Network configured with a Service Endpoint for Storage on the default subnet (or subnet used for the VSA VM)
    • Network Security Group allowing remote access and Commvault firewall ports access to Azure VSA. Typically this would be ports 3389 (RDP) and 8400-8403 (Commvault).
  • Azure Virtual machine with MediaAgent and Virtual Server Agent (VSA) software installed & connected to on-premises CommServe.
    • Connectivity verified with “Check Readiness” function.
    • Best practice for this Azure VM can be found here.
    • Ensure a Premium SSD disk is allocated for the GDDB. This should be formatted and have a drive letter assigned (see the sketch after this list).
    • Ensure your VSA can resolve management.azure.com. If you are using your own internal DNS this may not be the case. More detail here.
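
If you prefer to prepare the GDDB disk and confirm DNS resolution from PowerShell, the following is a minimal sketch. It assumes the Premium SSD appears as disk number 2 and that drive letter F is free; adjust both to suit your VM, and check current Commvault DDB guidance for any formatting specifics.

# Assumption: the attached Premium SSD is disk 2 and F: is available.
Get-Disk -Number 2 | Initialize-Disk -PartitionStyle GPT -PassThru |
    New-Partition -DriveLetter F -UseMaximumSize |
    Format-Volume -FileSystem NTFS -NewFileSystemLabel "DDB"

# Confirm the VSA can resolve the Azure Resource Manager endpoint.
Resolve-DnsName management.azure.com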

It is also advisable to read the cloud architecture guide located here to ensure you have the latest best practice information.

Procedure

Configure VSA

To configure the Azure VM to function as a VSA:

  1. Configure an Application and Tenant for Azure Resource Manager
    1. Note down your Azure account subscription ID
      Subscriptions.PNG
    2. Submit a new Application Registration
      newappregistration
      You can choose the name and Sign-In URL, keep it simple. This will be used in a later step to reference this app registration.
    3. Adjust the required permissions
      1. Select the new application and select the settings blade.
        Settingsbutton.PNG
        From the Required Permissions tab select Add:
        addreqquiredpermissions
      2. Click Select an API
        SelectAnAPIU
        Find “Windows Azure Service Management API” and select it.
      3. Select “Access Azure Service Management as organization users (preview)” and click Select.
    4. Create a new Key by selecting the Keys blade.
      1. Give the key a name and expiry and choose Save. Take note of the generated key value (it’s the only chance you’ll get).
        CreateKey.PNG
    5. Go to the Subscriptions blade and open your subscription.
      1. Select Access Control (IAM) and then Add a role assignment.
        addroleassignment
        Select Contributor and the web app you created earlier, then click Save.
        NOTE: You can add a custom JSON role here, see the official documentation for details.
    6. To configure the VSA pseudoclient within Commvault (Next Step) you will need:
      1. Your subscription ID
      2. The Key you assigned to the application.
      3. Your Tenant ID. This can be found under Azure Active Directory –> Properties –> Directory ID
        tenantid.PNG
  2. From the CommCell Console, create a new Azure virtualization client.
    NewAzure.PNG
    1. Enter the credentials as follows:
      NewAzure2.PNG
      Subscription ID: From Subscription Properties
      Tenant ID: From Azure Active Directory
      Application: The name of the app registration created earlier
      App Password: Key Created during App Registration.
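
If you prefer scripting the app registration, the sketch below shows roughly equivalent steps using the Az PowerShell module. Treat it as a hedged example rather than the official procedure: the display name is a placeholder, and older Az/AzureRM releases expose some of these parameters and properties under different names.

# Minimal sketch, assuming the Az module is installed and your account can create app registrations.
Connect-AzAccount

$subscriptionId = (Get-AzContext).Subscription.Id      # needed for the pseudoclient
$tenantId       = (Get-AzContext).Tenant.Id            # the Directory (Tenant) ID

# App registration plus service principal (display name is an example only)
$app = New-AzADApplication -DisplayName "CommvaultVSA"
$null = New-AzADServicePrincipal -ApplicationId $app.AppId   # role assignments need a service principal

# Contributor at subscription scope (swap in a custom JSON role if preferred)
New-AzRoleAssignment -ApplicationId $app.AppId -RoleDefinitionName "Contributor" `
    -Scope "/subscriptions/$subscriptionId"

# Client secret - this is the "Key" entered in the pseudoclient dialog
$secret = New-AzADAppCredential -ApplicationId $app.AppId -EndDate (Get-Date).AddYears(1)
$secret | Format-List        # note the generated secret value now; it cannot be retrieved later

"Subscription ID: $subscriptionId"
"Tenant ID:       $tenantId"
"Application ID:  $($app.AppId)"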

Configure Disk Library

We are using Azure BLOB (Binary Large OBject) storage to store the protected VSA data.

  1. From the Azure Portal, Select Storage Accounts.
  2. Create a New Storage Account
    NewStorage.PNG

    1. The name should be unique (and will be used when creating the cloud disk library later on).
    2. Account Kind should be Blob Storage.
    3. Replication can be chosen based on requirements; for a lab, LRS is fine.
    4. Hot is best for primary backup targets.
    5. Click Review + Create. (You can add Tags & encryption options etc via advanced options if desired)
    6. Click Create
  3. Go to the newly created storage account and select Access keys.
    BlobKeys.PNG
  4. Select one of the access keys and copy for use later.
  5. Create a New container within the blob storage:
    New COntainer.PNG
    Configure as follows (choose your own name)
    newcontainer02
  6. From the CommCell Console, create a new Cloud Disk library, configure as follows:
    newblob.PNG

    1. MediaAgent: Your Azure VSA/MediaAgent
    2. Service Host: leave default (in most cases)
    3. Access Key ID: The Key you copied earlier
    4. Account Name: The unique name you created earlier.
    5. Click Detect. Select the container you created earlier.
    6. Click OK.
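
For reference, here is a hedged PowerShell equivalent of the storage account and container creation using the Az module. The resource group, region, account and container names are examples only; the account name must be globally unique.

# Example values only - substitute your own resource group, region and names.
$rg = "CVLabRG"
$sa = New-AzStorageAccount -ResourceGroupName $rg -Name "cvlabblobstore01" `
        -Location "australiaeast" -SkuName Standard_LRS -Kind BlobStorage -AccessTier Hot

# One of the access keys - this is the value pasted into the cloud library dialog
$key = (Get-AzStorageAccountKey -ResourceGroupName $rg -Name $sa.StorageAccountName)[0].Value

# Container the Commvault cloud disk library will write to
$ctx = New-AzStorageContext -StorageAccountName $sa.StorageAccountName -StorageAccountKey $key
New-AzStorageContainer -Name "commvault-backups" -Context $ctx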

Configure Global Deduplication Database (GDDB)

To make the most efficient use of your newly created Blob library, you’ll want it deduplicated. To configure the GDDB, use the standard process as follows:

  1. From Storage Resources, right-click Deduplication Engines and select New Global Deduplication Policy.
  2. Give it a name and choose Next.
  3. Select the previously created BLOB library and click Next.
  4. Select the Azure VSA MediaAgent and choose Next.
  5. Choose whether you would like to encrypt your data; keep in mind that Azure Blob storage is encrypted by default anyway.
  6. Click Next until the Specify location page appears. Specify the location of the Premium SSD allocated for the GDDB.
    newHotBlobGDDB01.PNG
  7. Click Next until the process is completed. As this is a cloud library, the deduplication block size is automatically set to 512 KB.

Configure Storage Policies

You are now free to create a storage policy using the new global dedupe policy as its primary copy. You can also (from SP14) add this GDDB as an additional copy in existing 128 KB based policies; see Change in Default Block Size on this link. I won’t go into the details of creating a storage policy here; if you need a guide, check out Commvault’s Books Online.

Create subclients and kick off your first backup!

You created your Azure pseudoclient earlier on; now it’s time to create a subclient to protect your selected VMs. The default subclient will automatically pick up anything it has permission to see, so it’s usually best to create separate subclients for granular control of retention & scheduling.

Azure subclients can select their content by:

  • Name (Wildcard if needed)
  • Resource Group
  • Region
  • Storage Account
  • Tag
  • Power State
  • A combination of the above 🙂

How you organise things depends on your environment. Tags can work well: once the subclient rules are in place, backup requirements can be assigned simply by tagging Azure VMs as they are created (see the example below).
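
As a hedged illustration of the tag approach, a VM can be tagged at (or after) creation so that a tag-based subclient rule picks it up on the next discovery. The tag name and value below are assumptions and simply need to match whatever rule you define in the subclient.

# Example only: the "Backup = Bronze" tag is an assumption - use whatever your subclient rule expects.
$vm = Get-AzVM -ResourceGroupName "CVLabRG" -Name "AppServer01"
Update-AzTag -ResourceId $vm.Id -Tag @{ Backup = "Bronze" } -Operation Merge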

Associate your subclient with your Azure Storage Policy and kick it off. If you want higher throughput try increasing the instance size or adding additional proxies.



Creating a Site-to-Site VPN between your Lab & Azure

Almost every project I am involved in demands a cloud design element. Typically this will be with either the MS Azure or AWS public clouds; however, it is not uncommon for Google Cloud, Alibaba or Oracle cloud offerings to require consideration. In order to stay current with these technologies it is necessary to practice or “lab” these skills, and what better way to make a start than to connect your existing lab to an Azure subscription?

In this post I will:

  • Configure the necessary Azure networking components.
  • Deploy an Azure VPN Gateway
  • Deploy an Azure virtual machine
  • Configure a Server 2016 Routing & Remote Access (RRAS) server in my home lab
  • Demonstrate connectivity between the two environments.

Overview

Overview.PNG

A new Azure virtual network will be created with two subnets; the first (default) to be used by the Azure virtual machines & the second to be used by the “Gateway Subnet”. A Virtual machine will be created in the default subnet.

A Server 2016 RRAS server will be installed in my home lab and configured to connect to the public IP of the Azure VPN gateway. To ensure both the home lab and Azure can communicate, static routes will be introduced so traffic is routed correctly.

Prerequisites

  • Created an Azure subscription
  • Have a home lab with domain configured
  • Have a new Server 2016 virtual machine deployed.
    • Configure 2 vNICs on the virtual machine
      • Internal
        • This should have a static address in your home lab address range. i.e. 172.16.1.56/24
        • No default gateway
      • External
        • This should have a static address in your home lab address range. i.e. 172.16.1.57/24
        • The default gateway should be set to your standard gateway (i.e. 172.16.1.1)
  • Your home router should be configured to allow VPN pass-through.
    • Configure ports UDP 500 & UDP 4500 to forward through to your RRAS VM.

Process

Configure Azure Networking

  1. From the Azure portal create a Resource group in which to place the Azure components. 
  2. Create a new Virtual Network, use the following fields as a guide:
    1. Name: CVLab
    2. Address Space: 10.10.0.0/16
    3. Subscription: Your Subscription/Free Trial
    4. Resource Group: As created earlier.
    5. Location: As Desired
    6. Subnet
      1. Name: default
      2. Address Range: 10.10.10.0/24
    7. DDoS Protection: Standard
    8. Service Endpoints: Disabled
    9. Firewall: Disabled
  3. Once the virtual network has deployed, create a Gateway subnet:
    newgateway subnet.PNG
    Ensure the Gateway subnet is configured accordingly:

    1. Address Range: 10.10.11.0/29
    2. Route Table – None
    3. Services None
    4. Delegate subnet: None
  4. Add a virtual network gateway. This will be used by your RRAS server as a VPN connection target.
    newvnetgateway
    Configure as follows:

    1. Name: CVGateway
    2. Type: VPN
    3. VPN Type: Route Based
    4. Virtual Network: CVLab
    5. Public IP: Create New
      1. This is auto-generated and will be the endpoint targeted by your on-premises RRAS server.
    6. PublicIPName:CVPublicIP
    7. Subscription: Your Subscription/Free Trial
    8. Location: Same as where you created your vNet.
  5. Create a local Network Gateway. This will be used to allow your Azure VMs to connect to your on-premises VMs
    localnetworkgateway
    Configure as follows:

    1. Name: CVLabGateway
    2. IP Address: The IP Address of your home internet connection
      1. NOTE: if you have a dynamic IP address, this value will need to be updated whenever that IP changes.
    3. Address space: Your Lab (on-premises) network. For me this is 172.16.1.0/24. You may add multiple address spaces here if appropriate.
    4. Subscription: Your Subscription/Free Trial
    5. ResourceGroup: Your Resource Group
    6. Location: Same as where you created your vNet.
  6. Once the local gateway has finished deploying (can take a while), you can create a connection object.
    NewConnection
    Configure as follows:

    1. Name: CVLabHomeCon
    2. Virtual Network Gateway: CVGateway
    3. PSK: Make one up, the longer and more complicated the better. This will be used by your RRAS Server.
    4. ResourceGroup : Your Resource Group
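
If you would rather script the networking build, the following Az module sketch mirrors the portal steps above. It is illustrative only: the home public IP and PSK are placeholders, the gateway SKU is an assumption, and the gateway deployment itself can take 30–45 minutes.

$rg = "CVLabRG"; $loc = "australiaeast"      # example resource group and region

# Virtual network with the default subnet and a GatewaySubnet
$default = New-AzVirtualNetworkSubnetConfig -Name "default" -AddressPrefix "10.10.10.0/24"
$gateway = New-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -AddressPrefix "10.10.11.0/29"
$vnet = New-AzVirtualNetwork -Name "CVLab" -ResourceGroupName $rg -Location $loc `
          -AddressPrefix "10.10.0.0/16" -Subnet $default,$gateway

# Route-based VPN gateway with a new public IP (this step is slow)
$pip = New-AzPublicIpAddress -Name "CVPublicIP" -ResourceGroupName $rg -Location $loc -AllocationMethod Dynamic
$gwSubnet = Get-AzVirtualNetworkSubnetConfig -Name "GatewaySubnet" -VirtualNetwork $vnet
$ipCfg = New-AzVirtualNetworkGatewayIpConfig -Name "gwIpCfg" -SubnetId $gwSubnet.Id -PublicIpAddressId $pip.Id
$gw = New-AzVirtualNetworkGateway -Name "CVGateway" -ResourceGroupName $rg -Location $loc `
        -IpConfigurations $ipCfg -GatewayType Vpn -VpnType RouteBased -GatewaySku VpnGw1

# Local network gateway describing the home lab, plus the IPsec connection to it
$local = New-AzLocalNetworkGateway -Name "CVLabGateway" -ResourceGroupName $rg -Location $loc `
           -GatewayIpAddress "<your-home-public-ip>" -AddressPrefix "172.16.1.0/24"
New-AzVirtualNetworkGatewayConnection -Name "CVLabHomeCon" -ResourceGroupName $rg -Location $loc `
  -VirtualNetworkGateway1 $gw -LocalNetworkGateway2 $local -ConnectionType IPsec -SharedKey "<your-psk>"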

Configure RRAS

  1. Install Routing & Remote Access on your Server 2016 VM
    1. From server manager, add Roles & Features
      addrolesfeatures1
    2. Click next through until you reach Roles. Select Remote Access and click Next until you reach the Select Role Services page.
    3. Select both DirectAccess and VPN (RAS) and Routing. Click Add Features when prompted.
      addrolesfeatures2.PNG
    4. Click Next until the confirm page, then click Install.
  2. Once RRAS is installed, click Open the getting started wizard.
    addrolesfeatures3
  3. Select Deploy VPN Only
  4. Right click the name of your server in the RRAS console and select Configure and Enable Routing and Remote Access.
    RRAS1
  5. On the Configuration page, click Custom configuration.
  6. Select VPN access and LAN routing. Click Next and Finish. If warned about the Windows Firewall, click OK.
  7. Click Start Service when prompted.
  8. Create a new demand dial interface as shown below:
    RRAS2.PNG

    1. Name: Azure On-Demand
    2. Connection Type: VPN
    3. Type: IKEv2
    4. Host Name: The public IP of your Azure Virtual Network Gateway:
      GatewayIP.PNG
    5. Protocols and Security:
      1. Route IP Packets on this interface
    6. Static Routes:
      1. Destination: 10.10.10.0
      2. Mask: 255.255.255.0
      3. Metric: 10
    7. Dial Out Credentials – just write “Nothing” in the Username and click next. We’ll be using the PSK created earlier.
  9. Once the interface has been created, right click on the Azure On-Demand interface and choose properties.
  10. Select Security and Use preshared key for authentication. Enter your PSK in the box.
    psk.PNG
    Click OK
  11. Right Click your connection and choose Connect.

Configure Azure VM

  1. Back in the Azure portal, deploy a new virtual machine. The size is up to you, but ensure it has the following properties:
    1. Region: Same as where you deployed the previous components
    2. OS: Windows Server
    3. Public Inbound Ports: None
    4. Virtual Network: CVLab
    5. Subnet: default
    6. The other areas can be left as default or adjusted if necessary.
  2. Wait for the VM to be deployed then proceed.
  3. From the Networking tab on the new VM, create a new rule to allow traffic from your home lab network (i.e. 172.16.1.0/24). This can be for specific ports or a blanket rule between your home and Azure environments. Ensure at least RDP (3389) is allowed. In the example below I have allowed all traffic from my local lab subnet.
    NetworkRules

Configure Static Routes

In its current state, your RRAS server will be able to connect to your Azure VM, as the required route was added by the RRAS configuration. The next step is to add routes to the other VMs (both Azure and local lab) to allow communication between subnets. The route add commands would be written as follows:

For Local lab VMs
ROUTE ADD 10.10.10.0 mask 255.255.255.0 172.16.1.56 metric 2 -p
For Azure VMs
ROUTE ADD 172.16.1.0 mask 255.255.255.0 172.16.1.57 metric 2 -p

Ping (ICMP) is not allowed through the Windows Firewall by default; however, RDP on the Azure VMs is. From your RRAS server, RDP to the private IP of your new Azure VM; if you get a connection, that proves RRAS is working. You will need to add the appropriate static routes to the test VMs before you can repeat the same test outside of the RRAS VM.
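
A quick, hedged alternative to the RDP test from the RRAS server (the private IP below is an example):

# Checks the tunnel and the Azure VM's RDP listener without relying on ICMP,
# which the Windows Firewall blocks by default. 10.10.10.4 is an example private IP.
Test-NetConnection -ComputerName 10.10.10.4 -Port 3389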

Next Steps

If you would like to make this Azure VM (and others) part of your domain for an extended period, it would be advisable to get a few other things tidied up, for example:

  • DNS Zones
  • Centralising the static Routes
    • Via Group Policy
    • Via Physical/Virtual network hardware.
  • Locking down the Azure ports using network security groups.


Configuring o365 mailbox protection with Commvault

With each new service pack released, the process for protecting mailbox content hosted in Microsoft o365 is further refined. The release of SP12 brings the following changes:

  • Combined & simplified agent install. Rather than choosing “Mailbox” or “Database” etc the binaries are combined into a single “Exchange” client.
  • If you need the OWA proxy enabler or Offline mining tool, these are available as separate installs.
  • No need for the scheduled tasks to update the service account permissions on new mailboxes.

This post will focus on “Exchange Mailbox Agent User Mailbox Office 365 with Exchange Using Azure AD Environment” as detailed on the official documentation here.

Prerequisites

There’s a few things to set up before you can start protecting the mailboxes. These are summarised below and detailed in the following sections.

Microsoft

  • Server to install the Commvault Exchange agent
  • Administration account for Office365
  • Exchange Online Service account
    • Must be an online mailbox
    • Setup via admin.microsoft.com
  • Local System Account
    • Member of Local administrators on machine

Commvault

  • Index Server
  • Job Results Directory
    • Must be a UNC Path
  • Mailbox Configuration Policies
  • Storage Policy

Procedure

Microsoft Tenant Configuration

  1. Connect to o365 with Powershell
    $UserCredential = Get-Credential
    $Session = New-PSSession -ConfigurationName Microsoft.Exchange -ConnectionUri https://outlook.office365.com/powershell-liveid/ -Credential $UserCredential -Authentication Basic -AllowRedirection
    Import-PSSession $Session

    You’ll be prompted for a username and password, use the Administration account for Office365 mentioned in the prerequisites.

  2. Run the following commands:
    get-ExecutionPolicy
    set-ExecutionPolicy RemoteSigned
    Enable-OrganizationCustomization

    You may at this point see the following:
    enable-orgCust
    Not a problem, just continue to the next step.

  3. You now need to provide the service account (the one with the online mailbox mentioned earlier) with ApplicationImpersonation and view-only recipient permissions; a quick verification example follows this procedure.
    New-RoleGroup -Name "ExchangeonlinebackupGrp" -Roles "ApplicationImpersonation", "View-Only Recipients" -Members svcCommvault
  4. Register the o365 application with Exchange using AzureAD, do this via https://portal.azure.com:
    1. Go to Azure Active Directory (left hand side)
    2. Choose App Registrations
    3. Select New Application Registration
    4. Complete the fields as follows (feel free to use a different name)
      NewApp
    5. Click Create.
    6. Note down the Application ID (you’ll need this later to set up the pseudoclient)
    7. Click the Settings button once the app is created, Select the properties button on the right hand side.
    8. Scroll to the bottom and change Multi-tenanted to yes.
      multitenanted
    9. Click Save
    10. Select Settings–>Keys
    11. You’re going to create a new key, complete the form as follows, adjust the first two fields as you see fit:
      newkey.PNG
      Click Save.
    12. Copy the Key value & description, you’ll need this later. May be worth remembering the expiry date too.
    13. Now select the Required Permissions menu. Click Add.
    14. Choose Select an API then Microsoft Graph
      Graph
      Click Select.
    15. Scroll down to Read Directory data, check the box and click Select.
    16. Click Done then Grant Permissions. Click Yes when prompted.
      grantpermisssions.PNG
    17. On the left hand side, click Azure Active Directory then Properties. Note down the value in the Directory ID field.
      DirID
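
As a quick sanity check (run in the same Exchange Online PowerShell session used above), you can confirm the service account landed in the role group created earlier:

# Should list svcCommvault (or whichever service account you added)
Get-RoleGroupMember -Identity "ExchangeonlinebackupGrp"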

Commvault Configuration

Now it’s time to ensure you have the Exchange policies, Index Server, job results folder & storage policy set up. The latter three of these tasks are already well documented; however, the following should be noted:

  • The job results directory needs to be shared (i.e. referenced via a UNC path) and visible to all mailbox backup proxies; see the example after this list.
  • Primary copy retention for the mailboxes is governed by the Retention Policy, not by the Storage Policy as it is for other agents. For this reason it is worth having a separate storage policy for the mailbox backups.
  • The mailbox agent index server should not be the MediaAgent responsible for your library and storage policies.
  • The mailbox pseudoclient to index server relationship is 1:1. It is possible, via a registry key, to have multiple pseudoclients use the same index server; however, if those pseudoclients have any crossover you will very likely experience data loss.
  • Review the index store requirements before deploying the index server. If you’re doing this in a lab you can usually start small and ramp up the specs on-the-fly.
  • You must have a web server installed in your environment, typical Commcells have this installed on the CommServe however larger CommCells split this role out to a dedicated server.
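
To illustrate the UNC job results requirement, here is a minimal sketch for sharing a folder on the host that will hold the job results; the path, share name and service account are assumptions, so substitute your own.

# Example only - adjust the path, share name and service account to your environment.
New-Item -ItemType Directory -Path "E:\JobResults" -Force
New-SmbShare -Name "JobResults" -Path "E:\JobResults" -FullAccess "MYDOMAIN\svcCommvault"
# The pseudoclient would then reference \\<servername>\JobResults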

Exchange Policies

This is a copy from my previous post but the information is still valid:

The four new policy types are as follows:

  • Archiving – Archive is the new Backup. This policy dictates what messages will be protected, it has no effect on stubbing.
  • Cleanup – If you are archiving, this is where it is configured.
  • Retention – Primary Copy retention is configured here and will override any retention settings configured in the storage policy. Secondary copies will still adhere to the Storage Policy values.
  • Journal – The new compliance archive. Use this for journal mailboxes.

Policies are configured under Policies, Configuration Policies, Exchange policies as shown below:

Policies

Only configure the policies you need, for a standard mailbox backup (no archive) setup, your policies listing may look like this:

PolicyExample

Creating the Index Server

To create the logical Index Server (assuming you’ve installed the index store package) do the following:

  1. From the Commvault Console, right click Client Computers –> Index Servers and select New Index Server.
    NewIndexServer
  2. Complete the fields on the General tab. If possible, ensure the drive nominated for the Index Directory is formatted with a 64 KB block size. The Storage Policy, although optional, is used to back up the index.
    NewCloud
  3. On the Roles tab, click Add, select Exchange Index, and move it to the Include field.
    IncludeExchange
  4. On the Nodes tab; add the server on which you installed the Index Store package.
  5. Click OK. There will be a delay while the index is created, you may notice the following status on the bar at the bottom of the screen
    Cloud reation in Progress

Creating the Mailbox Client

You’ll need to have the Index server & policies ready before continuing to the mailbox client creation.

  1. Right click the CommServe name at the top of the CommCell Browser on the right hand side and select All Tasks –> Add/Remove Software –> New Client.
    NewCLient.PNG
  2. Expand Exchange Mailbox and select User Mailbox.
    UserMailbox
  3. Complete the fields as shown below. Note: I have used the index server to host the job results directory which isn’t best practice, but OK for a lab.
    ClientCreation01.PNG
  4. On the access nodes page, select a client (or clients) that has the Commvault Exchange package installed.
  5. On the Environment Type page, choose Exchange Online (Access Through Azure Active Directory).
  6. On the Azure App Details page enter the following:
    1. Application ID:  as noted down earlier
    2. Key Value: As described (The auto generated key)
    3. Azure Directory ID: Noted down earlier
  7. On the service account settings page you’ll need to add 2 accounts:
    1. The Exchange online service account, this is the one we granted permission to earlier.
    2. A local system account. This needs to have local admin rights on your exchange proxy(ies).
  8. Optional: On the Recall Service, enter the URL for your web server as shown below. This is only used if Archiving or Content store viewer is implemented.
  9. Click Finish!
  10. To test that you are able to query the instance for its mailboxes navigate to MailboxAgent –> ExchangeMailbox –> User Mailbox and click the Mailbox Tab at the bottom of the screen.
    Associations1
  11. Right Click in the white space above the Tabs and choose New Association – User.
  12. On the Create New Association box, click Configure then Discover (Choose Yes at prompt).
  13. If your expected list of mailboxes appears, you’re doing it right!
    list mailboxes

The next step is to configure the auto associations. This can be easily achieved by following the official instructions here.

 

Securing Access to the Web & Admin consoles using a SAN certificate.

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

HTTPS is used by default to secure the consoles; however, as the certificate is self-signed, users are presented with the following:

untrusted.PNG

Official documentation for the procedure outlined in this post can be found here. The official documentation does not, however, cover creating a certificate that allows the use of multiple SAN (Subject Alternative Name) aliases. This guide includes the necessary fields for creating a SAN cert request, in addition to ensuring the cert complies with the SHA256withRSA signature algorithm.

Prerequisites

  • CommCell with Web & Admin Console
  • Certificate Authority
    • This can be either internal or external. If you are allowing access to the web console externally an external authority is recommended.
    • If you are using an internal CA, ensure it is capable of issuing SHA2 certs rather than the deprecated SHA1. Details can be found here. SHA1 certs are accepted by IE; however, Chrome and Firefox will complain.

Procedure

  1. In order to ensure the necessary Java version is in place you may need to update Java. The minimum version required is JRE 1.8.0_65. Check which version you have installed by running “java -version” from a command prompt on the web server; the SP10 release of Commvault is packaged with Java “1.8.0_121”, so it does not require updating. If you need to update Java, follow the official doco here.
  2. From the web console computer, navigate to the following directory via an elevated command prompt (replace the java version if different):
    C:\Program Files\Java\jre1.8.0_121\bin
  3. You must create a keystore file using the keytool utility contained in the above directory. Run the following command:
    keytool -genkey -alias tomcat -keyalg RSA -sigalg SHA256withRSA -keystore "C:\mykeystore.jks" -keysize 2048
  4. You will be prompted for the following details:
    1. Keystore password: Be sure to make this a strong password and keep it safe.
    2. First & Last Name: This is the domain name for which you are creating a certificate i.e. CMS.MyDomain.com
    3. Org Unit: Leave this blank or use company name
    4. Org Name: use company name
    5. City or Locality, State, Country Code: As Described
  5. At the “Is this correct?” prompt, type yes.
  6. At the “Enter password for tomcat” prompt, press ENTER to use the same password as the keystore file.
  7. Now we use the created keyfile to generate a csr:
    keytool -certreq -file C:\somename.csr -keystore C:\mykeystore.jks -keyalg RSA -sigalg SHA256withRSA -alias tomcat -ext SAN=dns:alias1.mydomain.local,dns:alias2.mydomain.com -keysize 2048
  8. You will be prompted for the password you created earlier. Once entered the certificate request will be saved as specified (C:\somename.csr if you used the above command).
  9. Use this csr to generate the certificate. Upload the request to your certificate authority and download the signed certificates. The files you will require for the next step are as follows:
    1. root certificate
    2. intermediate certificate (if available)
    3. issued certificate
  10. All of these files can be in either cer or crt format. If you have been given a *.p7b file, don’t panic; it can be opened to show the issued certs. In the example below you can see only the root and issued certificate are available.
    p7b
  11. Right click the certificates one by one and choose All Tasks –> Export. Use the default option as pictured below to export the individual certs as *.cer files.
    export
  12. Once you have your 2 or 3 certificates, head back to the command prompt. It’s now time to import your root, intermediate (if you have one) and issued certs.
  13. First import the root certificate:
    keytool -import -alias root -keystore C:\mykeystore.jks -trustcacerts -file C:\root.cer
  14. Next the intermediate:
    keytool -import -alias intermed -keystore C:\mykeystore.jks -trustcacerts -file C:\intermediate.cer
  15. And finally the issued SAN cert:
    keytool -import -alias tomcat -keystore C:\mykeystore.jks -trustcacerts -file C:\actual.cer
  16. You now have a keystore (in this case “mykeystore.jks”) which can be used by Commvault to secure its web console traffic. To make Commvault aware of this file you’ll need to copy it into the Program Files/Commvault/ContentStore/Apache/Conf folder on the web server. Once the file is copied, stop the “Commvault Tomcat Service” using Commvault Process Manager.
  17. If you are using a Commvault version prior to v11 SP9 you’ll need to refer to the official doco here, if not carry on…
  18. Using a text editor (such as Notepad++) edit the “server.xml” file. It’s wise at this point to take a copy of the file first in case things go pear-shaped.
  19. Find the line “<Certificate certificateKeystoreFile=”” and edit the path and password to match your keystore file:
    SSL.PNG
  20. Ensure the path is correct, if you placed the file in the conf folder as instructed, the path should be “conf/mykeystore.jks”.
  21. Start the Tomcat service using the Commvault Process Manager, give the web server a couple of minutes to start and then browse to the server using one of your specified SAN aliases :-).
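
If you want to confirm the keystore contains the full chain before handing it to Tomcat, keytool can list its contents (you will be prompted for the keystore password you created earlier):

keytool -list -v -keystore C:\mykeystore.jks

You should see entries for the root, intermed and tomcat aliases imported above.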

 

Protecting Salesforce with Commvault

System administrators are seeing more and more business processes become reliant on SaaS solutions. Salesforce is, to many organisations, the lifeblood of their CRM & sales lifecycle, providing the customer-management efficiencies essential to an effective organisation.

The mistake many companies make is to assume that just because a solution is “in the cloud” it has the same level of protection as their on-premises systems. In many cases this assumption is false; data is very often replicated (such as the multi-zone replication of AWS S3), but rarely has any sort of historical recovery points. Salesforce, for example, at the time of writing only holds two weeks’ worth of restore points; any changes or deletions not spotted within this time are likely permanent. Another consideration is the fees charged by Salesforce in the event you do have to roll back. This will easily run into thousands of dollars ($10k is what I’ve read), money that could be saved through the configuration of a Salesforce agent.

This capability is still relatively new and may be simplified in later releases. This guide is written with Commvault v11 SP11 in mind. The configuration presently is made up of the following components:

  • Commserve
  • MediaAgent
  • Salesforce Virtual Client (pseudoclient)
  • Cloud Connector (Linux)
  • SQL Server (Optional but required for restoration to Salesforce instance)

Data Flow

Not everything is protected, as data protection is limited to what is made available via the Salesforce API. For a list of items that are not protected, check out Books Online here. The high level steps required to implement the Salesforce backup are as follows:

  1. Ensure you have a supported version of Salesforce
  2. Ensure you have sufficient capacity license available on your Commcell
  3. Configure a Connected App in Salesforce (click here for a guide).
  4. Install the cloud connector on a linux VM
  5. Create an MSSQL database on a separate DB server for use by the cloud connector (a minimal example follows this list)
  6. Create the Salesforce pseudoclient.
  7.   Backup!
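
For step 5, here is a minimal sketch of creating the empty staging database with the SqlServer PowerShell module; the server name, database name and use of the module are assumptions, and a plain CREATE DATABASE in SSMS does the same job.

# Example only - run against the DB server that will host the staging database.
Invoke-Sqlcmd -ServerInstance "SQL01" -Query "CREATE DATABASE SalesforceStaging"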

Obviously this guide is no substitute for the official documentation; however, hopefully it will aid in testing out the Salesforce backup functionality for yourself. During my testing I used a free Salesforce developer account, available here.

Required information

During the configuration of the Commvault Salesforce pseudoclient you will need to complete the following fields:

  • Client Name: The name which appears in the Commvault Console
  • Instance Name: Usually the same as the client name.
  • Backup Client: The linux proxy with “Cloud Apps” installed
  • Storage Policy: The storage policy through which data will be stored.
  • Number of Data Backup Streams: Start with 1 and scale if required.
  • Salesforce login URL: https://login.salesforce.com (You may use a custom URL here if configured)
  • Username: Your salesforce service account
  • Password: As described
  • API Token: The salesforce security token, this can be reset once the service account has been created.
  • Consumer Key: Associated with the Salesforce connected App
  • Consumer Secret: Associated with the Salesforce connected App
  • Download to Cache Path: This should be a folder on the Cloud Connector, plan for 120% of the total salesforce data.
  • Database Type: SQL Server (You can also use PostGreSQL)
  • Database Host: The database server containing the database preconfigured for use by the Commvault salesforce agent. I had difficulty when using DNS names here, try IP if DNS doesn’t work.
  • Server Instance Name: As described
  • DB Name: Name of the SQL database set up on the SQL Server
  • Database Port: 1433
  • Username:sysadmin user for the configured database
  • Password: As described.

Sizing

As described above, you will need to plan for both the Cloud Connector cache and SQL Server storage. Required storage is as follows:

In addition to this you should consider the amount of back-end storage required. This should be planned in the same way planning for any additional client is performed.

Creating a Connected App

  1. Login to Salesforce using your administrator account
  2. Click Setup
  3. On the left select Build then Create –> Apps
  4. Scroll down to Connected Apps and click New
  5. Complete the required fields; ensure “Enable OAuth Settings” is enabled.
  6. OAuth Scope should be “Full Access”
  7. Callback URL should be “https://test.salesforce.com/services/oauth2/token” if you are using a test account. If this is a production config it would be “https://login.salesforce.com/services/oauth2/token”
  8. Click Save and then click the connected app name, note the following fields:
    1. Consumer Key
    2. Consumer Secret
    3. Callback URL
      1. Commvault requires this during setup, minus the /services/oauth2/token

ConnectedApp.PNG

Generating the API Key

The API key is associated with the administration account (service account) used by Commvault to connect to Salesforce. To generate it:

  1. Login as the service account.
  2. Click the username at the top of the screen, select “My Settings”
    mysettings
  3. On the left of the screen, select “Reset My Security Token”
  4. Click the button to reset the token, it will be sent to the email address registered to the service account.

Configure the Pseudoclient

You should now have all the components and details required to create a Salesforce pseudoclient (also described as a virtual client). In addition to the items configured above you should have a Linux client installed with the Cloud Apps package, sufficient space on that client for the cached salesforce data & a SQL server with a preconfigured SQL database (and associated credentials).

Steps are as follows:

  1. From the CommCell console, Create a new Salesforce client.
    newclient1
  2. Complete the details on each tab, use the “Required Information” section above if you need clarification on any of the fields. Be sure to Test Connection on the “Connection Details” and “Backup Options” tabs.
  3. Your new client will appear as displayed below. Right click on the default subclient to adjust what is to be protected. You’ll notice the defaultbackupset reflects the name of the service account used.
    completedclient
  4. In the Contents page of the default subclient you have the option to include metadata, selecting this option will give you access to the metadata view when performing a “browse & restore”.

Standard vs Metadata views

Not being a Salesforce administrator, I can’t provide much insight into the significance of these two views; I can, however, provide screenshots of the different browse and restore results:

Objects:

Objects

Metadata:

metadata

Next Steps

Now it’s time to run a backup operation which can be performed by right clicking on the default subclient and choosing backup. You’ll notice an incremental backup is automatically performed following the completion of the full backup. Once this has completed you can test the solution by deleting accounts/opportunities etc and testing the restoration process.

If you are restoring directly to Salesforce you’ll notice the requirement for a configured database. The screenshot below shows the restore options window when restoring directly from the most recent database sync.

restore-db.PNG

If you are restoring from another restore point, you may need to restore from Media, as shown in the screenshot below:

restore-media.PNG

You’ll notice this clears the database details. All restores to Salesforce require a staging database (be that MSSQL or PostGreSQL). You would need to create and specify a SQL database to restore to (from media), prior to that data being restored to your Salesforce instance.

Implementing a floating CommServe name for faster DR recovery

UPDATE: As of v11 SP11 this method is not supported on the CommServe due to possible communication issues. I’ll leave the post up for reference and will update when an alternative method is available.

In the event of losing your primary CommServe (CS) you’ll need to restore your database to the DR CS. This is a simple enough process involving the following (high level) steps:

  1. Ensure your DR CS is installed and at the same patch level as your current (now unavailable) CS.
  2. Restore the latest DR database dump to the DR CommServe using CSRecoveryAssistant.exe
  3. Use the Name Management Wizard to update all clients & MediaAgents to use the DR CS name.

This works well in most scenarios; however, if you are keen to get the process finished as quickly as possible, step 3 can take some time. Books Online advises that it can take “30 seconds per online physical client and 20 seconds per offline physical client”. If you have thousands of clients this can be a long wait: at 30 seconds each, 2,000 online clients would take over 16 hours. It is for this reason that changing the CS name on the clients ahead of time (during a defined outage window) can save precious time during an actual disaster.

Official documentation for this procedure can be found here. This document aims to achieve the CommCell rename without the requirement to rename the host name of the CommServe computer. In this example I’ll be renaming cs01.test.local to commserve.mydomain.com, allowing a common name for both internal and external backup clients.

The steps below assume you already have a working single CommServe (no SQL cluster) environment. Before making any changes you should:

  1. Ensure all clients (MediaAgents included) are referencing the CommServe by DNS name, not IP.
  2. Create a CNAME in your internal (and external, if appropriate) DNS services; see the example after this list.
  3. Stop all jobs
  4. Disable the scheduler
  5. Perform a DR backup
  6. Ensure there is no current DDB activity on the MediaAgents (Check SIDBEngine.log on the MediaAgents)
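
For step 2, a hedged example of creating the floating CNAME on a Windows DNS server (zone, record and target names are taken from this walkthrough; the DnsServer module must be available, and the command is run on or remotely against the DNS server):

# Points commserve.mydomain.com at the current CommServe host, cs01.test.local.
Add-DnsServerResourceRecordCName -ZoneName "mydomain.com" -Name "commserve" -HostNameAlias "cs01.test.local"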

Before starting, the properties page of your CommCell should display your current actual CommCell host name.

Before-Name

Open a command prompt and navigate to the base directory. Login with the qlogin command.

qlogin

Run the following command (adjusting instance & the two names as appropriate):

SetPreImagedNames.exe CSNAME -instance Instance001 -hostname yournewfloatingname.com youroldfullqualifiedname.local -skipODBC

setpreimagednames

Close the Commcell properties (if it was still open) and reopen it. You should see the new floating name reflected on the General tab:

After-Name

The next step is to update the clients to reference the new name. Restart the CommCell Console GUI, then open the Commvault Control Panel and select Name Management. Select “Update CommServe for Clients (use this option after DR restore)” and click Next. Ensure the New CommServe name matches your desired FQDN and click Next.

NameManagement1

Move all clients requiring the updated name across, be sure to leave the checkbox unchecked.

NameManagement2

Some clients may have issues updating, these will need to be investigated individually. Try restarting the services and check client readiness from the CommCell Console. In the case of the PC below it was the Firewall at fault (revealed with help from CVInstallMgr.log).

FailedCLients

Check the client properties (you’ll need to refresh the client list first using F5) and you should see that the CommServe hostname has been updated to reflect the new FQDN.

complete

Once all clients are pointing to the new CNAME, you can simply update DNS rather than using Name Management during a DR failover.


Securing Access to the Web & Admin consoles with Lets Encrypt!

The Web & Admin consoles provide a simplified interface for performing common Commvault tasks. Each service pack release brings additional functionality allowing the Admin console to replace more of the day-to-day administration tasks.

HTTPS is used by default to secure the consoles; however, as the certificate is self-signed, users are presented with the following:

untrusted.PNG

Official documentation for the procedure outlined in this post can be found here. This guide focuses on using Let’s Encrypt as the certificate authority and is more suited to lab environments due to the three-month expiry of certificates. I’m looking at ways to allow auto-renewal of public certs (as Let’s Encrypt is designed to do); however, this will give a good starting point.

Prerequisites

  • Commvault Commserve installed with web server.
  • ACMESharp powershell modules (installation covered in procedure section)
  • Internet access
  • Access to public DNS management (i.e. Route53)

Procedure – Obtain certificate

    1. First we need to install and configure ACMESharp. This will be used to request the certificate from LetsEncrypt. The steps included below are modified from the official quick start guide here. This post uses the manual method and as such the certificate is only valid for 3 months.
    2. Install the powershell modules using an elevated powershell window. You will be prompted twice, answer the 2 prompts with either “Y” or “A”.
      Install-Module ACMESharp
    3. Install the extension modules. Answer “A” when prompted for each of the extensions.
      Install-Module ACMESharp.Providers.AWS
      Install-Module ACMESharp.Providers.IIS
      Install-Module ACMESharp.Providers.Windows
    4. Enable the extension modules.
      Import-Module ACMESharp
      Enable-ACMEExtensionModule ACMESharp.Providers.AWS
      Enable-ACMEExtensionModule ACMESharp.Providers.IIS
      Enable-ACMEExtensionModule ACMESharp.Providers.Windows
    5. Verify the providers have been added correctly
      Get-ACMEExtensionModule | select Name

      You should receive the following output:
      providers.PNG

    6. Initialize the ACME vault as follows:
      Initialize-ACMEVault
    7. Register with LetsEncrypt
      New-ACMERegistration -Contacts mailto:me@mydomain.com -AcceptTos
  1. Submit a new domain identifier. This is the name of the dns name you wish to secure.
    New-ACMEIdentifier -Dns myserver.example.com -Alias dns1
  2. You now need to prove you own the domain. The easiest way is to automate the process using IIS; unfortunately, this needs to use port 80, which is already bound to the Tomcat service. The workaround I’m using is to prove ownership via DNS instead.
    Complete-ACMEChallenge dns1 -ChallengeType dns-01 -Handler manual
  3. Run the following command to request the required details to prove DNS ownership.
    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}

    When you get a response similar to the following, you can continue to the next step.
    token.PNG

  4. Add the TXT record to your DNS as indicated in the challenge. For Route53 it would appear as follows:
    dns.PNG
  5. Once you have completed the DNS entry, run the following to submit the challenge:
    Submit-ACMEChallenge dns1 -ChallengeType dns-01
  6. Run the following to check whether the challenge was successful.
    (Update-ACMEIdentifier dns1 -ChallengeType dns-01).Challenges | Where-Object {$_.Type -eq "dns-01"}

    If successful, you will be presented with the following:
    challenegesuccess

  7. If the status is shown as valid (highlighted above) you can now request & retrieve your new certificate. It is possible to request a SAN (Subject Alternative Name) at this point, however for this example we’re sticking with one. Run the following two commands:
    New-ACMECertificate dns1 -Generate -Alias cert1
    Submit-ACMECertificate cert1
  8. If almost all fields are populated in the response, the certificate is now ready to be stored in the vault
    Update-ACMECertificate cert1
  9. You can now export the certificate in pkcs12 format:
    Get-ACMECertificate cert1 -ExportPkcs12 "C:\certs\cert1.pfx" -CertificatePassword 'myPassw0rd'
  10. Check to ensure the new pfx file is visible in the chosen location.

Procedure – Update Commvault configuration

The next stage is to instruct the web server to use the created certificate bundle as its certificate source.

  1. Stop the Commvault Tomcat process via Process Manager.
  2. Copy the pfx file created earlier to [programdrive]:\Program Files\Commvault\ContentStore\Apache.
  3. Edit conf\server.xml (Notepad++ is a good choice for this)
  4. The official documentation indicates that the default connector redirect port should be adjusted, however new installations should already be configured to redirect to 443. Either way it should look like this:
    <Connector port="80" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="443" server="Commvault WebServer" compression="on" noCompressionUserAgents="gozilla,traviata" compressionMinSize="500" compressableMimeType="text/html,text/json,application/json,text/xml,text/plain,application/javascript,text/css,text/javascript,text/js" useSendfile="false"/>
  5. Add a second connector, beneath the line you have just edited. This will reference the pfx file created earlier.
    <Connector protocol="org.apache.coyote.http11.Http11NioProtocol" port="443" URIEncoding="UTF-8" maxPostSize="40960000" maxHttpHeaderSize="1024000" maxThreads="2500" enableLookups="true" SSLEnabled="true" scheme="https" secure="true" server="Commvault WebServer" compression="on" noCompressionUserAgents="gozilla,traviata" compressableMimeType="application/javascript,text/css,text/javascript,text/js" useSendfile="false">
     <SSLHostConfig certificateVerification="none" honorCipherOrder="true" protocols="TLSv1,TLSv1.1,TLSv1.2" ciphers="TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256,TLS_DHE_RSA_WITH_AES_128_GCM_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256,TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA">
     <Certificate certificateKeystoreFile="E:\Program Files\Commvault\ContentStore\Apache\cert1.pfx" certificateKeystorePassword="myPassw0rd" certificateKeystoreType="PKCS12"/>
     </SSLHostConfig>
    </Connector>
  6. You can test the certificate by visiting your web console using the web console button in the CommCell console; you’ll still see the certificate error but upon further inspection you should see the certificate issuer is Let’s Encrypt. The next step is to adjust Commvault to use the public domain name to access the web console (ensure your internal DNS is configured to direct the DNS name to the internal IP).
    cert
  7. As described here, add the following additional setting to the CommCell.
    Name: WebConsoleURL
    Category: CommServDB.GxGlobalParam
    Type: String
    Value: https://hostname:port/webconsole/clientDetails/fsDetails.do?clientName=CLIENTNAME
    hostname:port should match the name you associated with the certificate.
  8. Restart the CommCell services. The web console link should now reference the new name.
  9. If you have a windows shortcut to the Admin Console, that will also need its properties adjusted to reflect the new link.