Liferay 7.1: Docker Compose Cluster


In the article Liferay 7: The liferay cluster is back and how to get it, published in 2017, we saw how to build the OSGi bundles needed to add cluster support to Liferay 7.0 GA5. Shortly afterwards, Liferay published the cluster bundle JARs on the Maven repository.

 

In this article you will see how to bring up a basic Liferay 7.1 cluster configuration using Docker Compose. I put together a Docker Compose project that lets you get, within a few minutes, a Liferay cluster made up of two working nodes. Keep in mind that this project is intended for cluster development and testing.

 

Figure 1 – Docker Compose Liferay 7.1 CE GA1 Cluster

 

This Docker Compose project (shown in Figure 1) contains the following services:
  • lb-haproxy: HA Proxy as Load Balancer
  • liferay-portal-node-1: Liferay 7.1 GA1 (with cluster support) node 1
  • liferay-portal-node-2: Liferay 7.1 GA1 (with cluster support) node 2
  • postgres: PostgreSQL 10 database
  • es-node-1 and es-node-2: Elasticsearch 6.1.4 Cluster nodes
As for the shared directory for the Liferay document library, I decided to use a shared Docker volume instead of NFS; a sketch of how this fits into the docker-compose.yml is shown below.
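As an illustration of how the services listed above fit together, a minimal docker-compose.yml could look like the sketch below; image names, build context, port mappings, container paths and the volume name are assumptions and may differ from the actual project.

    version: '3'

    services:
      lb-haproxy:
        image: haproxy:1.8                     # assumption: any recent HAProxy image
        ports:
          - "80:80"                            # balanced HTTP endpoint
          - "8181:8181"                        # statistics report
        depends_on:
          - liferay-portal-node-1
          - liferay-portal-node-2

      liferay-portal-node-1:
        build: ./liferay                       # assumption: image with the cluster bundles and configs
        ports:
          - "6080:8080"
        volumes:
          - liferay-dl:/data/document_library  # shared document library
        depends_on:
          - postgres
          - es-node-1

      liferay-portal-node-2:
        build: ./liferay
        ports:
          - "7080:8080"
        volumes:
          - liferay-dl:/data/document_library
        depends_on:
          - postgres
          - es-node-1

      postgres:
        image: postgres:10
        environment:
          POSTGRES_DB: lportal
          POSTGRES_USER: liferay
          POSTGRES_PASSWORD: liferay

      es-node-1:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4
        # cluster discovery settings omitted in this sketch

      es-node-2:
        image: docker.elastic.co/elasticsearch/elasticsearch:6.1.4

    volumes:
      liferay-dl:                              # shared Docker volume for the document library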

 

For more information about Liferay clustering and about configuring Liferay Portal to connect to your Elasticsearch cluster (6.1.4), you can read Liferay Portal Clustering and Connect to your Elasticsearch Cluster on the Liferay Developer Network.

 

The liferay directory contains the following items:
  • Cluster OSGi bundles (inside the deploy directory)
    • com.liferay.portal.cache.ehcache.multiple.jar (version: 2.0.3)
    • com.liferay.portal.cluster.multiple.jar (version: 2.0.1)
    • com.liferay.portal.scheduler.multiple.jar (version: 2.0.2)
  • OSGi configs (inside configs directory)
    • BundleBlacklistConfiguration.config: contains the list of bundles that must not be installed
    • ElasticsearchConfiguration.config: contains the Elasticsearch cluster configuration
    • AdvancedFileSystemStoreConfiguration.cfg: contains the configuration of the document library
  • Portal properties (inside configs directory)
    • portal-ext.properties: contains common configurations for Liferay, such as the database connection, cluster enablement, the document library, etc. (a minimal sketch is shown after this list)
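As a reference only, the cluster-related part of such a portal-ext.properties could look like the sketch below; the JDBC values are assumptions that must match your own environment.

    # JDBC connection to the postgres service of the Compose project (values are assumptions)
    jdbc.default.driverClassName=org.postgresql.Driver
    jdbc.default.url=jdbc:postgresql://postgres:5432/lportal
    jdbc.default.username=liferay
    jdbc.default.password=liferay

    # Enable ClusterLink so that cache and scheduler replication work across the nodes
    cluster.link.enabled=true

    # Use the Advanced File System Store for the shared document library
    dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore

    # Show the serving node in the page footer, useful to verify the load balancing
    web.server.display.node=true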
The haproxy directory contains the following items:
  • HA Proxy
    • haproxy.cfg: contains the configuration that exposes an HTTP endpoint balancing the two Liferay nodes (a minimal sketch is shown below).
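A minimal sketch of such an haproxy.cfg is shown below; timeouts, the cookie-based stickiness and the stats section are illustrative assumptions, while the backend host names follow the Compose service names listed earlier.

    global
        maxconn 4096

    defaults
        mode http
        timeout connect 5s
        timeout client  50s
        timeout server  50s

    # Statistics report exposed on port 8181 (liferay/liferay)
    listen stats
        bind *:8181
        stats enable
        stats uri /
        stats auth liferay:liferay

    # HTTP endpoint that balances the two Liferay nodes
    frontend liferay_http
        bind *:80
        default_backend liferay_nodes

    backend liferay_nodes
        balance roundrobin
        cookie JSESSIONID prefix nocache
        server liferay-portal-node-1 liferay-portal-node-1:8080 check cookie node1
        server liferay-portal-node-2 liferay-portal-node-2:8080 check cookie node2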
The elastic directory contains optional configuration files for customizing the Elasticsearch cluster configuration.

 

1. How to start Liferay 7.1 Cluster

To start the services of this Docker Compose project, run the following docker-compose command, which will start Liferay Portal 7.1 GA1 with cluster support running on Tomcat 9.0.6:
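In its simplest form this means bringing up all the services in detached mode (for the first start, though, the step-by-step procedure below is preferable):

    docker-compose up -d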

 

For the first start, proceed as follows:
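For example, assuming the service names listed earlier, first bring up the database, the Elasticsearch nodes and the first Liferay node:

    docker-compose up -d postgres es-node-1 es-node-2
    docker-compose up -d liferay-portal-node-1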

You can view the output from the containers with docker-compose logs, or follow the log output with docker-compose logs -f.

After the first Liferay node (liferay-portal-node-1) is up, then run:
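Again using the assumed service name:

    docker-compose up -d liferay-portal-node-2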
After the two Liferay nodes are up, start the HA Proxy:
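Same assumption on the service name:

    docker-compose up -d lb-haproxy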
For subsequent starts, you only need to run one command:
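That is, the usual single command that brings up everything:

    docker-compose up -d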

If you regularly encounter the error ERROR: An HTTP request took too long to complete because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (the default is 60 seconds).
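For example, you can raise the timeout for the current shell session before running docker-compose (the value is only an example):

    export COMPOSE_HTTP_TIMEOUT=180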

 

1.1 Check Liferay and Elasticsearch services

To check the successful installation of the cluster bundles, you can connect to the Gogo shell of each node (via telnet) and list the installed bundles with the lb command. The telnet ports of the Gogo shell exposed by the two nodes are 21311 and 31311 respectively. In the output, make sure that the three cluster bundles are in the Active state.
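For example, for node 1 (the grep filter assumes the Gogo grep command is available, as it normally is in Liferay's Gogo shell, and relies on the three bundle names containing the word Multiple; a plain lb also works):

    telnet localhost 21311
    g! lb | grep Multiple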

In the logs of each Liferay instance you should see messages reporting the join between the two Liferay cluster nodes (liferay-portal-node-1 and liferay-portal-node-2).

To check the correct installation of the Elasticsearch cluster, just check the status of the cluster and the presence of the Liferay indices. We can use the Elasticsearch REST API to get this information.
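For example, to get the cluster health (assuming the HTTP port 9200 of the Elasticsearch nodes is published on the host; otherwise run the same request from inside one of the es-node containers):

    curl -XGET 'http://localhost:9200/_cluster/health?pretty'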

In the output you should see the cluster status in green and the presence of the two nodes.
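To list the indices (same assumption about the published port):

    curl -XGET 'http://localhost:9200/_cat/indices?v'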

In the output you should also see the Liferay indices.


After all the services are up, you can reach Liferay this way:

  • Via HA Proxy or Load Balancer at URL http://localhost
  • Accessing the nodes directly:
    • Liferay Node 1: http://localhost:6080
    • Liferay Node 2: http://localhost:7080
You can access the HA Proxy statistics report at http://localhost:8181 (username/password: liferay/liferay).
Figure 2 – HA Proxy Report

 

In my case, I added the following entries to my /etc/hosts file:
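At a minimum you need the load balancer hostname used in the next step (any additional hostnames for direct node access can be mapped in the same way):

    127.0.0.1    liferay-lb.local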

To access Liferay through HA Proxy, point your browser at http://liferay-lb.local.

Figure 3 – Liferay Node 1

 

Figure 4 – Liferay Node 2

 

2. JGroups cluster support

This cluster support is limited to EhCache RMI replication. RMI is known not to scale well as the number of cluster nodes grows: it creates more threads as nodes are added to the cluster, which can degrade the performance of the server nodes and even cause them to crash.
Juan Gonzalez made the necessary changes to the Liferay 7 CE GA5 sources to switch from RMI to JGroups.
For more detailed information, I suggest you read Liferay Portal 7 CE GA5 with JGroups cluster support.

