Scale Your Liferay Application by Clustering


April 28, 2022

Liferay Portal powers a wide variety of implementations, from small applications to large-scale enterprise portals. Whenever one server isn't sufficient to serve the traffic your portal receives, you can scale Liferay out by adding additional servers.

When a group of servers and other resources acts as a single system to provide high availability, it is called "clustering". Clustering is mainly used for parallel processing, fault tolerance, load balancing, and handling high traffic on the application.

With the release of Liferay Portal CE 7.0 GA5, there is good news: clustering is back in the Community Edition. In Liferay it is fairly simple to cluster multiple physical machines, multiple VMs on a single machine, or any mixture of the two.

Now let's see how to achieve this. We need two or more Liferay instances running on a single VM or on separate VMs. Once Liferay Portal is installed on more than one application server node, a few changes need to be made. Here are the basic rules:

  • All nodes must point to the same Liferay database.
  • Documents and Media repositories must have the same configuration and be accessible to all nodes.
  • Search should run on a separate search server that is optionally clustered.
  • Cluster Link must be enabled for cache replication.

Add the properties described below to portal-ext.properties in the Liferay home directory.

1)   Database configuration: Each node should be configured with a data source or JNDI connection that points to one Liferay database (or database cluster) which all the nodes share. JNDI is the recommended configuration.

jdbc.default.jndi.name=jdbc/LiferayPool
  • In ROOT.xml under liferay_home/tomcat/conf/Catalina/localhost, add the tag below with appropriate values inside <Context></Context>.
<Resource name="jdbc/LiferayPool" auth="Container"
    type="com.mchange.v2.c3p0.ComboPooledDataSource"
    factory="org.apache.naming.factory.BeanFactory"
    driverClass="net.sourceforge.jtds.jdbc.Driver"
    jdbcUrl="jdbc:jtds:sqlserver://DB.Server.IP:Port/dxpportal"
    user="root" password="***"
    maxPoolSize="100" minPoolSize="10" acquireIncrement="5" />
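  • Alternative without JNDI: the same connection can instead be defined directly in portal-ext.properties. A rough sketch, assuming the same jTDS/SQL Server database as above (host, port, and credentials are placeholders):
jdbc.default.driverClassName=net.sourceforge.jtds.jdbc.Driver
jdbc.default.url=jdbc:jtds:sqlserver://DB.Server.IP:Port/dxpportal
jdbc.default.username=root
jdbc.default.password=***
Whichever approach you choose, every node must use identical connection settings so that all nodes share the same database.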
  • Testing: Start the nodes sequentially. Once both Liferay portals are up and running, log in on node 1 and add any sample portlet to a page, then refresh the same page on node 2; the portlet should appear there as well. Repeat the test with the roles reversed.

2)   Data folder configuration: Liferay's Documents and Media library can mount several repositories; by default, users work with the built-in Liferay repository, which can be backed by one of several store implementations. In a clustered environment, all nodes must point to the same data folder for Documents and Media.

Let's take the example of a local file system store configuration for two nodes on the same machine.

  • In portal-ext.properties in the Liferay home directory, set the store implementation:
dl.store.impl=com.liferay.portal.store.file.system.FileSystemStore
  • The default root directory for the data folder is Liferay_home/data/document_library. It can be configured in two ways:
  • Control Panel: Navigate to Control Panel > Configuration > System Settings, search for the file system store configuration, and change the root directory there.
  • Configuration file (recommended): Create a file named "com.liferay.portal.store.file.system.configuration.FileSystemStoreConfiguration.cfg" in the Liferay_Home/osgi/configs directory and add the property below.
rootDir=opt/LiferayPortal_node1/data/document_library
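  • Note for clustering: both nodes' .cfg files must contain the same rootDir so that they share one physical directory. A sketch for the two-nodes-on-one-machine example (the path is a placeholder; nodes on separate machines would typically point at a shared network mount such as NFS):
rootDir=/opt/liferay-shared/data/document_library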

 

Testing: Start the nodes sequentially and execute the following steps:

  • On node 1, upload a document to Documents and Media.
  • On node 2, download the document. The download should succeed.
  • Repeat the test with the roles reversed.

3)   Search clustering: The search engine should run separately from the Liferay server. Liferay supports Elasticsearch and Solr, and either one can itself be clustered.

For more information on how to configure Elasticsearch or Solr with Liferay, refer to the official Liferay documentation. Liferay Portal ships with the Elasticsearch engine by default.

Once the Liferay Portal servers and the search engine have been configured as a cluster, switch Liferay Portal's search connection from embedded mode to remote mode. On the first connection, the two sets of clustered servers exchange the list of all IP addresses; if a node goes down, the appropriate failover protocols take over, and queries and index updates continue to be served by the remaining nodes.
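As a rough sketch, remote mode for the bundled Elasticsearch connector is set through an OSGi configuration file in Liferay_Home/osgi/configs; the exact file name depends on your Liferay and connector version (for example, com.liferay.portal.search.elasticsearch.configuration.ElasticsearchConfiguration.config on Liferay 7.0), and the host and cluster name below are placeholders:

operationMode="REMOTE"
transportAddresses="Search.Server.IP:9300"
clusterName="LiferayElasticsearchCluster"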

Now restart the Liferay servers and rebuild the index by navigating to Control Panel > Configuration > Server Administration > Resources and executing the reindex action.

Testing: Add a piece of web content on one node and search for it from both nodes.

4)   Enabling Cluster Link: Enabling Cluster Link automatically activates distributed caching: cached data is generated once and replicated to the other servers in the cluster. The cache is distributed across the Liferay Portal nodes running concurrently, and enabling Cluster Link can improve performance dramatically.

  • To enable Cluster Link, add the following property to portal-ext.properties:
cluster.link.enabled=true
  • Modifying the cache configuration: Liferay Portal uses Ehcache, which has robust distributed caching support. Under load testing you may find that the default distributed cache settings aren't optimal for your site. You can modify the Liferay Portal installation directly or use a module; either way, the settings you change are the same. The benefit of a module is that you can install it on each node and change the settings without taking down the cluster, so modifying the Ehcache settings through a module is recommended over modifying them directly.
  • Liferay provides a sample cache configuration module for overriding the cache settings; all you need to do is modify one Ehcache configuration file. Download the module and unzip it into a Liferay workspace's modules folder; the configuration file is at src/main/java/resources/ehcache/override-liferay-multi-vm-clustered.xml (a sample entry is sketched after this list).
  • Ehcache offers many different settings for caching specific objects, and you can tune them to your needs; the XML files contain the configuration settings to adjust for your requirements.
  • You can find the default configuration in the Liferay_home/osgi/marketplace folder, inside com.liferay.portal.ehcache-[version].jar; the default XML files are in the /ehcache folder within the .jar. You can replace the contents of the override-liferay-multi-vm-clustered.xml file above with the contents of the default file, or change it to suit your requirements.
  • Once you've made your changes to the cache, save the file, then build and deploy the module; your settings override the defaults. In this way you can tweak the cache so it performs optimally for the type of traffic your site generates. You don't have to restart the server to change the cache settings, which is a great benefit, but beware: since Ehcache doesn't allow changes to cache settings while a cache is alive, reconfiguring a cache on a running server flushes that cache.
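For illustration only, a single override entry in override-liferay-multi-vm-clustered.xml has the shape sketched below; the cache name and numbers are placeholders to show the structure, not recommended values:

<cache
    name="com.liferay.portal.kernel.webserver.WebServerServletToken"
    eternal="false"
    maxElementsInMemory="10000"
    timeToIdleSeconds="600"
/>

Any <cache> entry you add or change here overrides the corresponding entry from the default file once the module is deployed.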

5)   Hot deploy: Plugins and modules must be deployed to all nodes of the cluster. If you are not using a centralized server-farm deployment, each plugin or module must be placed in the Liferay deploy folder on every node. To avoid inconsistencies, the same patches must be installed across all nodes.

6) Application server configuration: Configure the Tomcat servers for cluster awareness.

  • server.xml file: Set jvmRoute="node1" and jvmRoute="node2" in the <Engine> tag of the respective nodes (see the sketch after the Cluster configuration below).
  • Configure Tomcat's worker threads for clustering by setting maxThreads="150" in the <Connector> tag.
  • Configure session replication on both nodes so that sessions are copied across the entire cluster. To do so, add the following configuration inside the <Engine> tag in server.xml (this example uses multicast membership):
<Cluster className="org.apache.catalina.ha.tcp.SimpleTcpCluster" channelSendOptions="8">
    <Manager className="org.apache.catalina.ha.session.DeltaManager"
        expireSessionsOnShutdown="false"
        notifyListenersOnReplication="true"/>
    <Channel className="org.apache.catalina.tribes.group.GroupChannel">
        <Membership className="org.apache.catalina.tribes.membership.McastService"
            address="224.0.0.5" port="45564" frequency="500" dropTime="3000"/>
        <Sender className="org.apache.catalina.tribes.transport.ReplicationTransmitter">
            <Transport className="org.apache.catalina.tribes.transport.nio.PooledParallelSender"/>
        </Sender>
        <Receiver className="org.apache.catalina.tribes.transport.nio.NioReceiver"
            address="auto" port="4000" autoBind="100" selectorTimeout="5000" maxThreads="6"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.TcpFailureDetector"/>
        <Interceptor className="org.apache.catalina.tribes.group.interceptors.MessageDispatch15Interceptor"/>
    </Channel>
    <Valve className="org.apache.catalina.ha.tcp.ReplicationValve" filter=""/>
    <Valve className="org.apache.catalina.ha.session.JvmRouteBinderValve"/>
    <ClusterListener className="org.apache.catalina.ha.session.ClusterSessionListener"/>
</Cluster>
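For reference, a minimal sketch showing where the jvmRoute and maxThreads settings from the first two bullets live in node 1's server.xml (port numbers and names are placeholders):

<Connector port="8080" protocol="HTTP/1.1" connectionTimeout="20000" redirectPort="8443" maxThreads="150" />

<Engine name="Catalina" defaultHost="localhost" jvmRoute="node1">
    <!-- the <Cluster> block shown above goes here -->
</Engine>

Node 2 uses the same layout with jvmRoute="node2".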

  • On both Tomcat servers, open web.xml and add the <distributable/> tag just above </web-app>, both in tomcat/webapps/ROOT/WEB-INF/web.xml and in tomcat/conf/web.xml.

Now restart the servers sequentially and verify cache and session replication.

 

Posted by,

Zeenesh Patel
