How to install Neverfail Continuity Engine on Google Cloud Platform

Summary

This article provides information on how to deploy Neverfail Continuity Engine in a Google Cloud Platform (GCP) environment.
The first chapter covers a complete installation of Neverfail Continuity Engine in a GCP environment (GCP-to-GCP deployment).
The second chapter covers the scenario in which the Secondary (or Tertiary, if applicable) cluster node of an on-premises Neverfail Engine cluster is moved (migrated) into the Google Cloud Platform environment (on-premises-to-GCP deployment).

Continuity Engine GCP-to-GCP deployments

Starting point:

  • Production server (Primary to be) hosted on Google Cloud Platform

Supported Neverfail Engine deployment configurations:

  • Pair HA with different Engine Public IP addresses on each node (each associated with a different static/reserved GCP public IP address)

  • Pair DR with different Engine Public IP addresses on each node (each associated with a different static/reserved GCP public IP address)

  • Trio HA+DR with different Engine Public IP addresses on each node (each associated with a different static/reserved GCP public IP address)

Deployment procedure: 

  • Deploy the Engine Management Service (EMS) on a GCP VM located in the Production server's virtual network / subnet (a dedicated server or any existing server except the Primary-to-be)

  • Fulfill the requirements for Primary/Production:

    • How to create a GCP Windows VM instance: https://cloud.google.com/compute/docs/instances/windows/creating-managing-windows-instances

    • GCP console > VM creation > create separate VPCs for the channel and public connections: https://cloud.google.com/vpc/docs/create-use-multiple-interfaces

    • GCP console > VM creation > configure the channel IP address as ephemeral (custom)

    • GCP console > VM creation > the VM public IP address should be static/reserved so that it persists across VM reboots (this address is attached as the Primary's public IP address)

    • GCP console > VM creation > enable RDP for the GCP VM. Also ensure the firewall rules meet the Engine requirements.

    • VM network configuration > the VM's privately allocated IP addresses (ephemeral or static/reserved) must be configured as static inside the guest OS (they are initially assigned via DHCP). Make sure you preserve the exact network settings (gateway, DNS, etc.)

    • on Primary-to-be > verify that the Engine installation requirements are met (administrator account, firewall, etc.)
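The GCP-side requirements above can be sketched with gcloud commands. All names, zones, subnets, and addresses below are placeholders for illustration, not values mandated by Engine:

```shell
# Reserve a static external IP so the Engine Public IP survives VM reboots.
gcloud compute addresses create engine-primary-public-ip --region=us-central1

# Look up the reserved address so it can be attached at VM creation time.
ADDR=$(gcloud compute addresses describe engine-primary-public-ip \
    --region=us-central1 --format='value(address)')

# Create the Windows VM with two NICs: one on the public subnet (with the
# reserved external IP) and one on the dedicated channel subnet (no external IP).
gcloud compute instances create engine-primary \
    --zone=us-central1-a \
    --machine-type=n2-standard-4 \
    --image-family=windows-2019 --image-project=windows-cloud \
    --network-interface=subnet=public-subnet,address=$ADDR \
    --network-interface=subnet=channel-subnet,no-address

# Allow RDP to the VM; tighten --source-ranges for production use.
gcloud compute firewall-rules create allow-rdp \
    --network=public-vpc --allow=tcp:3389 --source-ranges=0.0.0.0/0
```

Inside the guest, the DHCP-allocated private IPs can then be pinned as static (for example with netsh interface ip set address), preserving the gateway and DNS settings as described above.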

  • Deploy Engine on Primary (usual setup via EMS, no additional steps)

  • Add standby HA or DR VM nodes for a Pair deployment

    • Usual setup via EMS using manual/assisted cloning procedure

      • For DR pair deployments, configure a different Engine Public IP address for the Secondary

    • Manual cloning procedure: duplicate the Secondary from the Primary VM via snapshot-based instance creation to obtain an exact copy of the source VM (including identity and SID). When the Primary VM source is ready for cloning (indicated in EMS), create a snapshot of the Primary, then create the HA/DR VM instance from that snapshot.

      • Public interface: configure a distinct internal ephemeral IP address (different from the Primary instance's) and reserve a new external IP address (this will be the IP address of the HA node when active) - set the desired (HA) or setup-configured (DR) internal public IP for the Secondary

      • Channel interface: configure an ephemeral (custom) or static (reserved) internal IP address - use the same channel IP configured during the Engine Add Standby Server setup
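The manual cloning steps above can be sketched with gcloud. Names, zones, and IP addresses are placeholders; the channel and public IPs must match what was entered during the Add Standby Server setup:

```shell
# When EMS indicates the Primary is ready for cloning, snapshot its boot disk.
gcloud compute disks snapshot engine-primary \
    --zone=us-central1-a --snapshot-names=engine-primary-clone

# Materialize a new disk from that snapshot (an exact copy, SID included).
gcloud compute disks create engine-secondary-disk \
    --zone=us-central1-b --source-snapshot=engine-primary-clone

# Boot the standby node from the cloned disk, with its own internal IPs on
# the public and channel subnets; reserve and attach a new static external
# IP for the public interface separately, as for the Primary.
gcloud compute instances create engine-secondary \
    --zone=us-central1-b \
    --machine-type=n2-standard-4 \
    --disk=name=engine-secondary-disk,boot=yes \
    --network-interface=subnet=public-subnet,private-network-ip=10.0.1.20 \
    --network-interface=subnet=channel-subnet,private-network-ip=10.0.2.20,no-address
```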

    • Post cloning configuration (required for HA pair deployment only)

      • if deploying a DR pair, you are done (no post-clone configuration is required)

      • RDP into the HA node (this can be done from any GCP VM connected to the HA/DR VM's channel virtual network / subnet, using the private IP address allocated during VM creation)

      • on HA > stop the Engine service

      • on HA > in configwiz, reconfigure the public IP address to the internal public IP for the Secondary configured during HA instance creation

      • on HA > start the Engine service

      • The HA node should connect to the Engine cluster and start replicating
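The service stop/start around the configwiz step can be done from an elevated command prompt on the HA node. The service name below is an assumption for illustration - check the actual name of the Engine service on your installation:

```shell
:: Elevated command prompt on the HA node.
:: "Neverfail Server" is a placeholder; substitute the real Engine service name.
net stop "Neverfail Server"
:: Reconfigure the public IP address in configwiz, then:
net start "Neverfail Server"
```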

  • Add an HA+DR trio:

    • since the cloning method used is manual, it is not possible to create the HA+DR trio directly: the HA node should be added first, resulting in an HA Pair, then the DR node is added to the existing HA Pair.

    • Adding the HA node follows the above procedure, with the Primary as the clone source

    • Adding the DR node follows the above procedure, with the Secondary as the clone source

Notes:

  • the above procedure also applies when deploying the Engine cluster across different subscriptions/VPCs/sites/regions: the only additional requirement is to establish connectivity between the Engine nodes (via VPN tunnels/gateways, VPC peering, etc.). The solution for this is outside the scope of the Engine deployment procedure - it strictly belongs to the customer's Google Cloud inter-connectivity configuration
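For cross-VPC deployments, VPC Network Peering is one connectivity option. A sketch with placeholder project and network names (one command per side):

```shell
# In project-a: peer vpc-a with vpc-b.
gcloud compute networks peerings create engine-peer-ab \
    --project=project-a --network=vpc-a \
    --peer-project=project-b --peer-network=vpc-b

# In project-b: create the reciprocal peering so traffic flows both ways.
gcloud compute networks peerings create engine-peer-ba \
    --project=project-b --network=vpc-b \
    --peer-project=project-a --peer-network=vpc-a
```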


Continuity Engine on-premises-to-GCP deployments

Starting point:

  • Production server (Primary to be) hosted on-premises

  • on-premises - GCP connectivity (VPN, etc)

  • existing on-premises AD - GCP federation (optional)

Supported Neverfail Engine deployment configurations:

  • Pair HA with same or different Engine Public IP addresses on each node (on the GCP HA VM, the Engine public IP address is associated with a static/reserved GCP public IP address)

  • Pair DR with same or different Engine Public IP addresses on each node (on the GCP DR VM, the Engine public IP address is associated with a static/reserved GCP public IP address)

  • Trio HA+DR (both HA and DR in GCP) with different Engine Public IP addresses for the standby GCP HA and DR VMs (on the HA and DR VMs, each public IP is associated with a different static/reserved GCP public IP address)

  • Trio HA+DR (only DR in GCP) with same or different Engine Public IP addresses for the standby GCP DR VM (on the DR VM, the public IP is associated with a static/reserved GCP public IP address)

Deployment procedure:

The same procedure as for GCP-to-GCP Engine deployments applies, except:

  • the EMS server should be installed on-premises

  • the Production server should be deployed on-premises (requirements as per the installation guide)

  • the way the to-be GCP VMs are cloned (1) and migrated (2) from on-premises to GCP:

    • (1) cloning the source on-premises server

      • if the Engine source server (Primary or Secondary) can be cloned on-premises using the automated method (via vCenter Server), then the automated HA and/or DR cloning should be used during the EMS deployment

      • otherwise, the manual cloning method should be used (the exact steps depend on the type of the source: physical, VMware, Hyper-V, Xen, KVM, ...)

    • (2) Migrate the HA and/or DR servers from on-premises to GCP

      • follow the specific procedure that applies to the type of the on-premises HA and/or DR servers (which are the subjects of the migration), or any other technique that ensures a correct migration, preserving the servers' identity, SID, etc.

    • once the migration is completed, proceed with the post-cloning configuration steps for the HA/DR node described in the GCP-to-GCP procedure
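One possible migration technique, assuming the on-premises standby can be exported as a virtual disk, is GCP's image import; the file names, bucket, and OS flag value below are placeholders:

```shell
# Upload the exported disk of the cloned standby to Cloud Storage...
gsutil cp engine-secondary.vmdk gs://my-migration-bucket/

# ...then import it as a bootable GCP image (this preserves the guest's
# identity and SID), and finally create the VM instance from this image.
gcloud compute images import engine-secondary-image \
    --source-file=gs://my-migration-bucket/engine-secondary.vmdk \
    --os=windows-2019
```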

Applies to:

Neverfail Continuity Engine 8.5 (or newer)
