Genesys Quality Management Suite can be deployed as a cluster server installation on two or more independent servers that work collectively as a single QM solution.

This guide covers the following topics:

General steps for cluster installations

Functional Diagram

Prepare a functional diagram before every cluster installation. This diagram serves as a graphical guide to the required installation steps and module distribution.

The functional diagram reflects the stages of the installation described in the following sections. The diagram should include both the module distribution and the NFS shares between the servers.

Installation of OS and GQM

After preparing the functional diagram of your GQM solution, install the OS and GQM packages on all nodes of the cluster, including all prerequisites.

Configuration of GQM packages

Configuration of GQM packages for a cluster installation consists of the following steps:

  1. Run callrec-setup in single server mode on every node of the cluster and choose the services according to your functional diagram.
  2. Perform additional configuration in order to interconnect the GQM modules.

The first step is covered in Single Server Configuration.

In this document we focus on the second step: interconnecting the GQM modules.

Database access

Access to the server database must be configured manually if this step was not done during the callrec-setup phase. It is suggested to configure database access for all nodes of the cluster. If a replay server is to be installed, this replay server must have access to the database on the recording cluster as well.

The database is initialized in the /opt/callrec/data/psql directory by default. Access to the database is configured in the /opt/callrec/data/psql/pg_hba.conf configuration file. Reload the PostgreSQL service to apply any changes.

For example:

host    all         all         192.168.10.1/32          trust
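In a cluster, pg_hba.conf needs one such line for every node that must reach the database, including the replay server if one is installed. A sketch, assuming a second recording node at 192.168.10.2 and a replay server at 192.168.10.3 (replace these with the addresses from your functional diagram):

host    all         all         192.168.10.2/32          trust
host    all         all         192.168.10.3/32          trust

To apply the changes, reload the instance, for example with pg_ctl run as the user that owns the database:

pg_ctl reload -D /opt/callrec/data/psql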

NFS configuration

GQM works with local paths only. Because of this, it is necessary to share the folders containing the media files within the cluster installation. The database records do not contain any server names, hostnames, or IP addresses, so all modules must be able to find the media files on the file system under the same path.

Example:

The decoder runs on server callrec-XX-b while the Web UI runs on server callrec-XX-a. The decoder is configured to save mp3 files to the /opt/callrec/data/calls directory on the callrec-XX-b server (that is, the mp3 file will be saved on the filesystem as /opt/callrec/data/calls/20130801/aaa.mp3). The same location of the mp3 file is saved to the database.

The database record in the cfiles table will contain the local path of the mp3 file in the cfpath column (that is, /opt/callrec/data/calls/20130801/aaa.mp3).
When the call is played back in the GQM Web UI, the Web UI queries the database for the mp3 file location. The answer is the local path (in this example /opt/callrec/data/calls/20130801/aaa.mp3), so the Web UI looks for the mp3 file under that path on the local filesystem of the callrec-XX-a server.
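To see exactly what the Web UI will look for, the stored paths can be checked directly in the database. A sketch, assuming the default database name callrec and the postgres user (both are assumptions; adjust to your installation):

psql -U postgres callrec -c "SELECT cfpath FROM cfiles LIMIT 5;"

Each returned path must exist as a local file on every server that reads it, in this example on callrec-XX-a.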

If the folders are not shared under the same location, the Web UI reports the error message "No media file found". The /opt/callrec/data/calls directory needs to be exported from callrec-XX-b to callrec-XX-a and mounted under the same path on the callrec-XX-a server.
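A minimal sketch of this sharing with standard NFS tools; the export and mount options shown are common defaults, not GQM-specific requirements.

On callrec-XX-b, export the directory in /etc/exports and activate the export:

/opt/callrec/data/calls    callrec-XX-a(rw,sync,no_root_squash)

exportfs -ra

On callrec-XX-a, mount it under the same path, for example via /etc/fstab:

callrec-XX-b:/opt/callrec/data/calls    /opt/callrec/data/calls    nfs    rw,hard,intr    0 0

mount /opt/callrec/data/calls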

ID of callrec user

In cluster installations, it may happen that the UID and GID of the callrec user differ between the servers. This can result in a situation where the callrec user on one server cannot read or write the media files on another server because of insufficient permissions.

To eliminate this issue, unify the IDs of the callrec user across the servers. The issue mostly happens when a new server is added to the call recording cluster while the previously installed servers are upgraded to the same GQM version that is installed on the new server.

Another possible cause of the ID difference is that some servers are physical machines while others are virtual machines, or that Linux users were created after the OS was installed but before the GQM installation was performed.
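The IDs can be checked and unified with the standard Linux user tools. A sketch, assuming 1001:1001 is chosen as the common UID/GID (stop the GQM services before changing the IDs and re-own existing files afterwards):

id callrec                              # show the current UID/GID on each server
groupmod -g 1001 callrec
usermod -u 1001 -g 1001 callrec
chown -R callrec:callrec /opt/callrec   # fix ownership of files created under the old IDs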

System configuration verification

Verify that the system configuration on all of the servers in the cluster is the same. Pay particular attention to the following configuration files (a quick way to compare them across nodes is sketched after the lists below):

DNS:

  • /etc/resolv.conf
  • /etc/hosts

time/NTP:

  • /etc/ntp.conf

Email sending:

  • /etc/postfix/main.cf
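A simple way to spot differences, assuming SSH access between the nodes and callrec-XX-b as a hypothetical second server name:

for f in /etc/resolv.conf /etc/hosts /etc/ntp.conf /etc/postfix/main.cf; do
    echo "== $f =="
    ssh callrec-XX-b cat "$f" | diff - "$f"    # empty output means the files match
done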


Enable automatic startup of important services upon server boot (nfs, netfs, ntpd, postgresql, postfix).
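On a RHEL/CentOS-style init system this is typically done with chkconfig; the service names below reflect a common setup and may differ on your system (the GQM PostgreSQL instance in particular may run under a different service name):

chkconfig nfs on
chkconfig netfs on
chkconfig ntpd on
chkconfig postgresql on
chkconfig postfix on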

On virtual machines, verify that VMware Tools are installed and running, and check which network driver is used. It is suggested to use the vmxnet3 driver; if this driver is not supported, the vmxnet driver can be used as well.
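To check which driver a virtual NIC is using, assuming the interface is named eth0:

ethtool -i eth0 | grep driver    # reports vmxnet3, vmxnet or e1000
lsmod | grep vmxnet              # confirms that a vmxnet module is loaded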

GQM cluster configuration

This step usually requires manual editing of configuration files.

  • All .properties files (configuration of loggers), callrec.conf and callrec.derived are read locally by each module.
  • All .xml configuration files (except *log4j.xml files) are read by the ConfigManager module. All other modules then retrieve their configuration by communicating with the ConfigManager module.
  • All PSQL config files (postgresql.conf, pg_hba.conf) are also read locally by the PSQL instance.

GQM cluster tuning

This part requires good knowledge of the customer's environment. Several parameters can be tuned based on network and server performance. When a large volume of calls is expected to be recorded, the JVM parameters of the corresponding modules need to be increased. The JVM parameters can also be changed to support different character sets. NFS mount parameters are sometimes modified to improve the performance of file sharing between the servers.
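As an illustration only (the exact configuration files and values depend on the GQM version and on the customer's environment), JVM heap and character-set settings take the form of standard Java options, and NFS performance is typically tuned through mount options:

-Xms512m -Xmx2048m -Dfile.encoding=UTF-8

callrec-XX-b:/opt/callrec/data/calls    /opt/callrec/data/calls    nfs    rw,hard,intr,tcp,rsize=32768,wsize=32768    0 0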