
Upgrading from the Previous Release

This page takes you through the steps for upgrading from BAM 2.3.0 to BAM 2.4.0. If you are upgrading from BAM 2.2.0, you must first upgrade to BAM 2.3.0 before upgrading to BAM 2.4.0.

Preparing to upgrade

Download WSO2 BAM 2.4.0.

Configuration upgrades

  1. Replace the contents of the <BAM_HOME>/repository/conf/ folder with the corresponding content of the conf folder in your BAM 2.3.0 installation. 
  2. Modify the following files in the conf folder: 
    • Enable Axis2 clustering in <BAM_HOME>/repository/conf/axis2/axis2.xml as follows:

      <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true"> 
      In the above clustering configuration, make sure the following properties are configured correctly:
      membershipScheme: Indicates the cluster membership scheme in use. Set it to "multicast".
      localMemberHost: Indicates the host name or IP address of the member. Set it to the relevant host name of the machine (for example, node1). 
    • Set the task server count to 1 in the <BAM_HOME>/repository/conf/etc/tasks-config.xml file.
    • Remove the coordination-client-config.xml and zoo.cfg files from the <BAM_HOME>/repository/conf/etc/ folder, as they are no longer required by BAM 2.4.0.
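The two file edits above can be sketched as configuration fragments. The clustering attributes are quoted from this page; the `parameter` element names and the `taskServerCount` element are assumptions based on the stock configuration files shipped with BAM, so verify them against your own copies before applying.

```xml
<!-- <BAM_HOME>/repository/conf/axis2/axis2.xml : enable Hazelcast clustering -->
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <parameter name="membershipScheme">multicast</parameter>
    <parameter name="localMemberHost">node1</parameter>
    <!-- other clustering parameters unchanged -->
</clustering>

<!-- <BAM_HOME>/repository/conf/etc/tasks-config.xml : single task server -->
<taskServerCount>1</taskServerCount>
```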

Artifact migration

  1. Copy your existing toolboxes into the <BAM_HOME>/repository/deployment/server/bam-toolbox/ folder. 

    The Activity_Monitoring toolbox is no longer supported by BAM 2.4.0.

  2. Copy any third-party libraries to the <BAM_HOME>/repository/components/lib/ folder.
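Assuming the old and new installations sit side by side, the artifact migration can be scripted roughly as below. The OLD_BAM and NEW_BAM paths are hypothetical; adjust them to your environment. This is a sketch, not an official migration script.

```shell
#!/bin/sh
# Sketch only: copy deployable artifacts from a BAM 2.3.0 installation
# into a BAM 2.4.0 installation.
migrate_artifacts() {
  OLD_BAM="$1"; NEW_BAM="$2"

  # 1. Toolboxes, skipping Activity_Monitoring (no longer supported in 2.4.0).
  for tbox in "$OLD_BAM"/repository/deployment/server/bam-toolbox/*.tbox; do
    [ -e "$tbox" ] || continue
    case "$(basename "$tbox")" in
      Activity_Monitoring*) echo "skipping $(basename "$tbox")" ;;
      *) cp "$tbox" "$NEW_BAM/repository/deployment/server/bam-toolbox/" ;;
    esac
  done

  # 2. Third-party libraries.
  for jar in "$OLD_BAM"/repository/components/lib/*.jar; do
    [ -e "$jar" ] || continue
    cp "$jar" "$NEW_BAM/repository/components/lib/"
  done
}
```

Invoke it as, for example, `migrate_artifacts /opt/wso2bam-2.3.0 /opt/wso2bam-2.4.0`.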

Database migration

  1. Upgrade the User Management (UM) database schema to its latest version. 

    Find the migration scripts for your database under migration-scripts.

  2. No migration is required for the Registry database schema, since it has not changed since BAM 2.3.0. No upgrade is required for the Hive metastore database either. 
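For example, if the UM database runs on MySQL, applying the schema upgrade is typically a single command. Every angle-bracketed value below is a placeholder; the actual script to use depends on your database type and version, per the migration-scripts reference above.

```shell
# Hypothetical example: apply the UM schema migration script to a
# MySQL user-management database. Substitute your own credentials,
# database name, and the script matching your database.
mysql -u <username> -p <um_database_name> < <path-to-migration-script>.sql
```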

Cassandra data migration

No Cassandra-related configurations have changed since BAM 2.3.0, so if you use an existing external Cassandra cluster you can proceed without any updates. However, if you use the internal Cassandra cluster, take one node at a time and perform the following steps for the Cassandra data migration:

Removing the node temporarily from the active cluster

  1. Run disablegossip and disablethrift using nodetool, so that the node stops accepting further requests from external clients and other nodes.

  2. Flush/drain the memtables to write the data held in memory to disk.

  3. Run compaction to merge SSTables.

  4. Take snapshots and enable incremental backups.

  5. This stops all other nodes and clients from writing to this node, and since the memtables are flushed to disk, startup is fast because the commit logs do not need to be replayed.

  6. Stop Cassandra. (Although this node is down, the cluster remains available for reads and writes, so the downtime is zero.)
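The steps above map onto `nodetool` subcommands roughly as follows. This is a sketch: the snapshot tag is an arbitrary example, incremental backups are controlled from cassandra.yaml in this Cassandra line, and the exact stop command depends on how Cassandra was installed.

```shell
nodetool disablegossip           # step 1: stop gossiping with other nodes
nodetool disablethrift           # step 1: stop accepting Thrift client requests
nodetool drain                   # step 2: flush memtables to disk, stop accepting writes
nodetool compact                 # step 3: merge SSTables
nodetool snapshot -t pre-2.4.0   # step 4: snapshot the flushed SSTables
# step 4 (cont.): enable incremental backups by setting
#   incremental_backups: true
# in cassandra.yaml
# step 6: stop the Cassandra process, e.g.
sudo service cassandra stop      # command depends on your installation
```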

Upgrading SSTables

  1. Install Cassandra 1.2.13 in a new location.

  2. Upgrade the SSTables to the new storage format using the sstableupgrade tool.

  3. Copy the upgraded SSTable files to the location where they are typically stored (CARBON_HOME/repository/database/cassandra/data/[keyspace_name]/).
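Assuming a tarball installation of Cassandra 1.2.13, steps 2 and 3 look roughly like this. The installation path, keyspace name, and column family are placeholders; check the tool's usage output for the exact arguments in your version.

```shell
# Run the sstableupgrade tool shipped with Cassandra 1.2.13 against
# each keyspace (and column family) holding BAM data.
/opt/apache-cassandra-1.2.13/bin/sstableupgrade <keyspace_name> <column_family>

# Copy the upgraded SSTable files back to the location BAM reads them from.
cp /opt/apache-cassandra-1.2.13/data/<keyspace_name>/* \
   "$CARBON_HOME"/repository/database/cassandra/data/<keyspace_name>/
```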

Merging cassandra.yaml related configurations

Compare the cassandra.yaml files shipped with the two Cassandra versions and apply your parameter values appropriately. This is required because, when moving from one major version to another, certain parameters may have been dropped (or retained only for backward compatibility).

Rebooting Cassandra

  1. Start Cassandra.

  2. Check whether the nodes have properly joined the cluster using the nodetool ring command.

Cleaning Hive metastore database

The Hive metastore database stores meta information about the tables created in Hive, such as the actual data source name, column family name, key mappings, etc. Since there are database schema changes in the Hive metastore between BAM 2.3.0 and BAM 2.4.0, you need to clean the BAM 2.3.0 database entries and provide a fresh database for BAM 2.4.0. When you clean the database, the necessary tables and entries are created during the execution of the Hive scripts, and the data is populated again.

The following properties in the <BAM_HOME>/repository/conf/advanced/hive-site.xml file relate to the metastore database.

<property>
  <name>javax.jdo.option.ConnectionURL</name>
  <value>jdbc:h2://${CARBON_HOME}/repository/database/metastore_db</value>
  <description>JDBC connect string for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionDriverName</name>
  <value>org.h2.Driver</value>
  <description>Driver class name for a JDBC metastore</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionUserName</name>
  <value>wso2carbon</value>
  <description>username to use against metastore database</description>
</property>

<property>
  <name>javax.jdo.option.ConnectionPassword</name>
  <value>wso2carbon</value>
  <description>password to use against metastore database</description>
</property>

Steps to clean the metastore database

  1. Shut down all BAM analyzer nodes.
  2. Check the metastore database pointed to by the javax.jdo.option.ConnectionURL property in the <BAM_HOME>/repository/conf/advanced/hive-site.xml file.
  3. Back up the database identified in the previous step, and then drop all the tables in it. (Example: metastore_db)
  4. Restart all the BAM analyzer nodes.
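If the metastore is the embedded H2 database from the default configuration above, the backup-and-drop steps can be sketched with the H2 Shell tool that ships inside the product. The jar location is an assumption, and DROP ALL OBJECTS removes every table in that database, so run this only against the backed-up metastore database.

```shell
# Back up the on-disk H2 database file first (path is an assumption).
cp "$BAM_HOME"/repository/database/metastore_db.h2.db \
   "$BAM_HOME"/repository/database/metastore_db.h2.db.bak

# Drop every table so BAM 2.4.0 recreates the schema when the Hive
# scripts next run. The jar path below is hypothetical; locate the H2
# bundle inside your BAM distribution.
java -cp "$BAM_HOME"/repository/components/plugins/h2_*.jar \
  org.h2.tools.Shell \
  -url "jdbc:h2:$BAM_HOME/repository/database/metastore_db" \
  -user wso2carbon -password wso2carbon \
  -sql "DROP ALL OBJECTS"
```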

Hadoop cluster configuration

Hadoop cluster configuration settings have not changed since BAM 2.3.0, so you can proceed with your existing Hadoop installation.

Going into production

The following artifacts are no longer shipped with BAM 2.4.0:

  • The activity monitoring sample and the activity monitoring toolboxes. 
  • The activity monitoring data agent, which has so far been available under Service Data Publishing.

The new activity search component has its own Jaggery app. It can be used to query data directly from Cassandra using indices, rather than summarizing data with Hive scripts. It is also shipped with the BAM distribution by default, removing the need to install a dedicated toolbox.

The message tracer replaces the activity data publisher for dumping SOAP payloads to BAM. It also supports correlating messages based on their IDs.
