Liferay 7.1: Docker Compose Cluster
- lb-haproxy: HAProxy as load balancer
- liferay-portal-node-1: Liferay 7.1 GA1 (with cluster support) node 1
- liferay-portal-node-2: Liferay 7.1 GA1 (with cluster support) node 2
- postgres: PostgreSQL 10 database
- es-node-1 and es-node-2: Elasticsearch 6.1.4 Cluster nodes
- Cluster OSGi Bundle (inside the deploy directory)
- OSGi configs (inside the configs directory)
  - BundleBlacklistConfiguration.config: lists the bundles that must not be installed
  - ElasticsearchConfiguration.config: contains the Elasticsearch cluster configuration
  - AdvancedFileSystemStoreConfiguration.cfg: contains the document library store configuration
- Portal properties (inside the configs directory)
  - portal-ext.properties: contains the common Liferay configuration, such as the database connection, cluster enabling, and document library (see the sketch after this list)
- HAProxy
  - haproxy.cfg: contains the configuration that exposes an HTTP endpoint and balances traffic across the two Liferay nodes (see the sketch after this list)
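As a reference, here is a minimal sketch of what portal-ext.properties may look like for this setup. The host names, database name, and credentials below are assumptions based on the service list above; the file shipped in the configs directory is authoritative.

# Sketch of portal-ext.properties; values are assumptions, see configs/.

# PostgreSQL connection; "postgres" is the Compose service name.
jdbc.default.driverClassName=org.postgresql.Driver
jdbc.default.url=jdbc:postgresql://postgres:5432/lportal
jdbc.default.username=liferay
jdbc.default.password=liferay

# Enable ClusterLink so the two nodes replicate caches over JGroups.
cluster.link.enabled=true

# Use the Advanced File System Store for the document library
# (configured in detail by AdvancedFileSystemStoreConfiguration.cfg).
dl.store.impl=com.liferay.portal.store.file.system.AdvancedFileSystemStore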
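Likewise, a minimal sketch of the frontend/backend pair that such a haproxy.cfg typically contains; the backend name, the internal Liferay port 8080, and the sticky-session cookie are assumptions, not necessarily what the bundled file uses.

# Sketch of haproxy.cfg (assumptions; see the bundled file for the real config).
frontend http-in
    bind *:80
    mode http
    default_backend liferay

backend liferay
    mode http
    balance roundrobin
    # Sticky sessions: pin each client to one Liferay node via a cookie.
    cookie SERVERID insert indirect nocache
    server node1 liferay-portal-node-1:8080 check cookie node1
    server node2 liferay-portal-node-2:8080 check cookie node2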
1. How to start Liferay 7.1 Cluster
Start the first Liferay node:

$ docker-compose up -d liferay-portal-node-1
You can view the output of the containers with docker-compose logs, or docker-compose logs -f to follow the log output.
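For example, to follow the log of the first node only:

$ docker-compose logs -f liferay-portal-node-1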
Then start the second node and the load balancer:

$ docker-compose up -d liferay-portal-node-2
$ docker-compose up -d lb-haproxy

Finally, bring up any remaining services (running services that have not changed are left as they are):

$ docker-compose up -d
If you regularly encounter the error "ERROR: An HTTP request took too long to complete" because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (the default is 60 seconds):
$ COMPOSE_HTTP_TIMEOUT=200 docker-compose up -d
1.1 Check Liferay and Elasticsearch services
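To check that the cluster bundles are active, connect to the Gogo shell and list them. A minimal way in, assuming the default Gogo shell port 11311 is reachable from your host:

$ telnet localhost 11311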
g! lb -s multiple
START LEVEL 20
   ID|State      |Level|Symbolic name
   52|Active     |   10|com.liferay.portal.cache.ehcache.multiple (2.0.3)
   53|Active     |   10|com.liferay.portal.cluster.multiple (2.0.1)
   54|Active     |   10|com.liferay.portal.scheduler.multiple (2.0.2)
In the logs of each Liferay instance you should see entries similar to those shown below.
liferay-portal-node-1_1 | 2018-10-02 08:58:57.681 INFO [main][BundleStartStopLogger:35] STARTED com.liferay.portal.cache.ehcache.multiple_2.0.3 [52]
liferay-portal-node-1_1 | 2018-10-02 08:58:58.440 INFO [main][BundleStartStopLogger:35] STARTED com.liferay.portal.cluster.multiple_2.0.1 [53]
liferay-portal-node-1_1 | 2018-10-02 08:58:58.484 INFO [main][JGroupsClusterChannelFactory:141] Autodetecting JGroups outgoing IP address and interface for www.google.com:80
liferay-portal-node-1_1 | 2018-10-02 08:58:58.524 INFO [main][JGroupsClusterChannelFactory:180] Setting JGroups outgoing IP address to 172.19.0.5 and interface to eth0
liferay-portal-node-1_1 |
liferay-portal-node-1_1 | -------------------------------------------------------------------
liferay-portal-node-1_1 | GMS: address=liferay-portal-node-1-29833, cluster=liferay-channel-control, physical address=172.19.0.5:43578
liferay-portal-node-1_1 | -------------------------------------------------------------------
liferay-portal-node-1_1 | 2018-10-02 08:59:00.935 INFO [main][JGroupsReceiver:85] Accepted view [liferay-portal-node-1-29833|0] (1) [liferay-portal-node-1-29833]
The following log shows the two Liferay nodes (liferay-portal-node-1 and liferay-portal-node-2) joining the same cluster view:
liferay-portal-node-2_1 | -------------------------------------------------------------------
liferay-portal-node-2_1 | GMS: address=liferay-portal-node-2-3742, cluster=liferay-channel-transport-0, physical address=172.19.0.6:51427
liferay-portal-node-2_1 | -------------------------------------------------------------------
liferay-portal-node-1_1 | 2018-10-02 09:37:09.679 INFO [Incoming-1,liferay-channel-transport-0,liferay-portal-node-1-41590][JGroupsReceiver:85] Accepted view [liferay-portal-node-1-41590|1] (2) [liferay-portal-node-1-41590, liferay-portal-node-2-3742]
liferay-portal-node-2_1 | 2018-10-02 09:37:09.686 INFO [main][JGroupsReceiver:85] Accepted view [liferay-portal-node-1-41590|1] (2) [liferay-portal-node-1-41590, liferay-portal-node-2-3742]
To verify that the Elasticsearch cluster is installed correctly, check the cluster status and the presence of the Liferay indexes. You can use the Elasticsearch REST API to get this information:
$ curl http://localhost:9200/_cluster/health?pretty
{ "cluster_name":"docker-elasticsearch", "status":"green", "timed_out": false, "number_of_nodes": 2, "number_of_data_nodes": 2, "active_primary_shards": 4, "active_shards": 6, "relocating_shards": 0, "initializing_shards": 0, "unassigned_shards": 0, "delayed_unassigned_shards": 0, "number_of_pending_tasks": 0, "number_of_in_flight_fetch": 0, "task_max_waiting_in_queue_millis": 0, "active_shards_percent_as_number": 100.0 }
$ curl http://localhost:9200/_cat/indices
The output should also include the Liferay indices:
green open liferay-20099               6tfpxVt7Td6Sc_UY1TUpfA 1 0    0 0  264b  264b
green open liferay-0                   zSf1E3mEQjSGmWPjm7pczg 1 0  149 0 228kb 228kb
green open .monitoring-es-6-2018.10.02 s9jxYLtyS46Bi_okocuVIA 1 1 4772 6 7.7mb 3.7mb
green open .monitoring-es-6-2018.10.01 -gJo6qMARbWjHwo4HGmjhA 1 1 2359 4 2.5mb 1.2mb
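You can also confirm that both Elasticsearch nodes joined the cluster with the standard _cat/nodes endpoint:

$ curl http://localhost:9200/_cat/nodes?v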
After all the services are up, you can reach Liferay in the following ways:
- Via the HAProxy load balancer at http://localhost
- Accessing the nodes directly:
- Liferay Node 1: http://localhost:6080
- Liferay Node 2: http://localhost:7080
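As a quick smoke test, you can hit the balanced endpoint from the command line and check that HAProxy answers:

$ curl -I http://localhost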
If you prefer host names to localhost, add the following entries to your hosts file (for example /etc/hosts):

##
# Liferay 7.1 CE GA1 Cluster
##
127.0.0.1 liferay-portal-node-1.local
127.0.0.1 liferay-portal-node-2.local
127.0.0.1 liferay-portal.local
127.0.0.1 liferay-lb.local
To access Liferay through HAProxy, point your browser at http://liferay-lb.local