Enabling Active (Unsafe) Queues in Memory for WAN Environments with the Low Bandwidth Module

Summary
-------

Neverfail Heartbeat was designed as a "High Availability" or "Disaster Recovery" solution; it was optimized for configurations with 'enough' bandwidth between the primary and secondary server. In this configuration, replication updates from the protected application (Exchange, SQL Server, etc.) are held on the active server in an unsafe queue kept in memory. This maximizes performance over fast, LAN-speed connections. The same model applied to all versions of Neverfail Heartbeat WAN deployments up to and including V4.3.2, with or without the Low Bandwidth Module.

Neverfail Heartbeat 4.3.3 and later with the Low Bandwidth Module (LBM) now ships configured to store the unsafe queue on disk rather than in memory. This has been found to be far more efficient in the following environments:

- Low bandwidth WAN links.
- WAN links competing with other applications, e.g. VoIP.
- Regular distribution of very large attachments.
- Large spikes in demand.

To change the behavior of Neverfail Heartbeat WAN deployments using the LBM so that the queue is stored in memory, refer to the section High Availability - Unsafe Queue in Memory below.


Disaster Recovery - Unsafe Queue on Disk
----------------------------------------

No configuration is required other than enabling the Low Bandwidth Module in the configuration wizard. Administrators may find it appropriate to increase Max Disk Usage from its default (1 GB in v6.0 and earlier, 10 GB in v6.2 and later). Before doing so, consider the following trade-offs:

- The larger the value, the less likely you are to experience a MaxDiskUsage event, which triggers a Full System Check.
- The larger the value, the greater the potential for lost data in the event of a server failure. The time needed to transfer the data in the unsafe queue to the passive server should remain reasonable: a full 1 GB queue on a 256 Kbps line takes roughly nine hours to empty even when the link is otherwise idle, and longer while new updates continue to arrive (see the sketch after this list). As a rule of thumb, all traffic generated during the nightly peak should be processed before the start of the working day, and all traffic generated during working hours should be processed before the nightly peak begins. Consult the SCOPE report for the predicted Channel backlog.
- For optimum performance, the Neverfail Comms Logs (default %install_drive%:\neverfail\r2\log) should be on a separate physical disk from the application data to avoid disk bottlenecks.
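
The rule-of-thumb arithmetic above is easy to check. The sketch below (Python; the helper function and its figures are illustrative, not part of Neverfail Heartbeat) estimates how long a full queue would take to drain over an otherwise idle link; real backlog predictions should come from the SCOPE report.

    # Hypothetical helper: estimate how long a full unsafe queue takes
    # to drain over an otherwise idle WAN link.
    def drain_time_hours(queue_bytes: float, link_kbps: float) -> float:
        bits = queue_bytes * 8               # bytes -> bits
        seconds = bits / (link_kbps * 1000)  # Kbps -> bits per second
        return seconds / 3600

    # Default 1 GB queue over a 256 Kbps line: ~8.7 hours.
    print(f"{drain_time_hours(1e9, 256):.1f} hours")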

Administrators should consider carefully:

- The amount of replication traffic, especially during peak hours (consult the SCOPE Report).
- Departmental targets imposed on application availability and the amount of data loss that is acceptable in the event of a failure.

These vary from company to company, and even from department to department, so Neverfail does not recommend any particular setting. Neverfail Group does, however, provide the SCOPE Tool to estimate the size of the unsafe queue.


High Availability - Unsafe Queue in Memory
------------------------------------------

This configuration minimizes data loss by storing the unsafe queue in memory. The application may be slowed during periods of high demand to ensure that the unsafe updates can be applied to the passive server. The performance of the system as a whole depends on the available/effective bandwidth between the two sites. This is the default mechanism for all installations without the Low Bandwidth Module.

To enable this behavior on LBM installations, make the configuration change below. It may be appropriate in the following circumstances:

- Sufficient bandwidth to accommodate application load.
- Preference for a High Availability Solution.

1. Shut down Neverfail Heartbeat.
2. Set the following key to the value 'false' on the active server (a scripted sketch follows these steps):

- HKEY_LOCAL_MACHINE\SOFTWARE\Javasoft\prefs\neverfail\current\/Comms/Mgr\/Unsafe/Queue/On/Disk

3. Restart Neverfail Heartbeat on the active and passive servers.
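
For administrators who prefer to script step 2, a minimal sketch using Python's standard winreg module is shown below. It assumes that the final component of the documented path (/Unsafe/Queue/On/Disk) is the value name under the \/Comms/Mgr key, and that the value is stored as a REG_SZ string, as the Java Preferences backing store normally uses; verify both in regedit before running. Run it elevated on the active server while Neverfail Heartbeat is shut down.

    import winreg

    # Assumed split of the documented path: everything up to the last
    # component is the key; the last component is the value name.
    KEY_PATH = r"SOFTWARE\Javasoft\prefs\neverfail\current\/Comms/Mgr"
    VALUE_NAME = "/Unsafe/Queue/On/Disk"

    # Open the key with write access and set the value to 'false'
    # (assumed REG_SZ, the type Java Preferences normally uses).
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, KEY_PATH, 0,
                        winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, VALUE_NAME, 0, winreg.REG_SZ, "false")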

KBID-349
