Upgrading from the Previous Release
This page takes you through the steps for upgrading from BAM 2.3.0 to BAM 2.4.0. If you are upgrading from BAM 2.2.0, you must first upgrade to BAM 2.3.0 before upgrading to BAM 2.4.0.
Preparing to upgrade
Configuration upgrades
- Replace the contents of the <BAM_HOME>/repository/conf/ folder with the corresponding content of the conf folder in BAM 2.3.0.
- Modify the following files in the conf folder:
  - Enable Axis2 clustering in <BAM_HOME>/repository/conf/axis2/axis2.xml as follows:
    <clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent" enable="true">
    In the above clustering configuration, make sure to configure the following properties correctly:
    - membershipScheme: Indicates the cluster membership scheme being used. Set it to "multicast".
    - localMemberHost: Indicates the host name or IP address of the member. Set it to the relevant host name of the machine (for example, node1).
  - Set the task server count to 1 in the <BAM_HOME>/repository/conf/etc/tasks-config.xml file.
  - Remove the coordination-client-config.xml and zoo.cfg files from the <BAM_HOME>/repository/conf/etc/ folder, as they are no longer required by BAM 2.4.0.
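Put together, the clustering section of axis2.xml ends up looking roughly like the sketch below. Only the enable attribute, membershipScheme, and localMemberHost values come from the steps above; the surrounding parameter layout follows the usual axis2.xml convention and the node1 value is illustrative.

```xml
<clustering class="org.wso2.carbon.core.clustering.hazelcast.HazelcastClusteringAgent"
            enable="true">
    <!-- Cluster membership scheme; set to multicast as described above. -->
    <parameter name="membershipScheme">multicast</parameter>
    <!-- Host name or IP address of this member (example value). -->
    <parameter name="localMemberHost">node1</parameter>
</clustering>
```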
Artifact migration
- Copy your existing toolboxes into the <BAM_HOME>/repository/deployment/server/bam-toolbox/ folder. Note that the Activity_Monitoring toolbox is no longer supported by BAM 2.4.0.
- Copy any third-party libraries to the <BAM_HOME>/repository/components/lib/ folder.
Database migration
- Upgrade the User Management (UM) database schema to its latest version. Find the migration script for your database in the migration-scripts location.
- No migration is required for the Registry database schema, since it has not changed since BAM 2.3.0. Also, no upgrade is required for the Hive metastore database.
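Applying the UM migration script is a single command against your UM database. The sketch below assumes a MySQL-backed UM database; the database name, user, and script path are placeholders, so substitute the script matching your DBMS from the migration-scripts location.

```shell
# Placeholder names throughout -- adjust database, user, and script path
# to your environment and DBMS before running.
mysql -u wso2carbon -p WSO2UM_DB < migration-scripts/mysql/um-migration.sql
```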
Cassandra data migration
No Cassandra-related configurations have changed since BAM 2.3.0 if you use your existing Cassandra cluster without any updates. However, if you use the internal Cassandra cluster, take one node at a time and perform the following steps for the Cassandra data migration:
Removing the node temporarily from the active cluster
- Run disablegossip and disablethrift using the NodeTool, to make the node stop accepting further requests from external clients and other nodes.
- Flush/drain the memtables in order to flush the data written to memory to disk.
- Run a compaction to merge the SSTables.
- Take snapshots and enable incremental backups. This stops all the other nodes and clients from writing to this node, and since the memtables are flushed to disk, startup times are fast as the node need not walk through the commit logs.
- Stop Cassandra. (Though this node is down, the cluster is still available for reads and writes, so the downtime is zero.)
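The steps above map onto NodeTool commands roughly as follows. This is a sketch: the localhost host and the snapshot tag are assumptions, and on Cassandra 1.x incremental backups are typically enabled by setting incremental_backups: true in cassandra.yaml rather than via NodeTool.

```shell
# Stop accepting gossip and Thrift client traffic on this node.
nodetool -h localhost disablegossip
nodetool -h localhost disablethrift
# Flush memtables to disk and stop accepting further writes.
nodetool -h localhost drain
# Merge SSTables.
nodetool -h localhost compact
# Take a snapshot (tag name is illustrative).
nodetool -h localhost snapshot -t pre-2.4.0-upgrade
```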
Upgrading SSTables
- Install Cassandra 1.2.13 in the new location.
- Upgrade the SSTables to the new storage format using sstableupgrade.
- Copy the upgraded SSTable files to the location where they are typically stored (CARBON_HOME/repository/database/cassandra/data/[keyspace_name]/).
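As a sketch, the upgrade-and-copy steps look like the following; the keyspace and column family names are placeholders, and sstableupgrade should be run from the new Cassandra 1.2.13 installation.

```shell
# Rewrite the SSTables of one column family into the new storage format.
# <keyspace_name> and <column_family> are placeholders.
bin/sstableupgrade <keyspace_name> <column_family>
# Copy the upgraded SSTable files back to where BAM expects them.
cp -r /path/to/upgraded/data/<keyspace_name>/ \
      $CARBON_HOME/repository/database/cassandra/data/<keyspace_name>/
```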
Merging cassandra.yaml related configurations
Compare the cassandra.yaml files shipped with the two Cassandra versions and apply the parameter values appropriately. This is required because certain parameters may have been dropped, or retained only to preserve backward compatibility, when moving from one major version to another.
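A plain diff is enough to spot parameters that changed between the two versions. The snippet below creates two stand-in files to illustrate the comparison (index_interval is just an example parameter); in practice you would diff the cassandra.yaml files under each distribution's conf/ directory.

```shell
# Stand-in files for the cassandra.yaml shipped with each version.
printf 'cluster_name: BAM\nindex_interval: 128\n' > old-cassandra.yaml
printf 'cluster_name: BAM\n' > new-cassandra.yaml
# Lines prefixed with '-' exist only in the old file and may have been
# dropped in the new version; review each before carrying it over.
diff -u old-cassandra.yaml new-cassandra.yaml || true
```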
Rebooting Cassandra
- Start Cassandra.
- Check whether the nodes have properly joined the cluster via the NodeTool ring command.
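The ring check is a single NodeTool invocation; the localhost host below is an assumption. Each restarted node should be listed with Up/Normal status before you move on to the next node.

```shell
# Verify that every node shows as Up/Normal in the ring.
nodetool -h localhost ring
```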
Cleaning Hive metastore database
The Hive metastore database is used to store meta information about the tables created in Hive, such as the actual data source name, column family name, key mappings, etc. Since there are some database schema changes in the Hive metastore from BAM 2.3.0 to BAM 2.4.0, you need to clean the database entries from BAM 2.3.0 and provide a fresh database for BAM 2.4.0. When you clean the database, the necessary tables and entries are created during the execution of the Hive scripts, and the data is populated again.
The following properties in the <BAM_HOME>/repository/conf/advanced/hive-site.xml file are related to the metastore database.
<property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:h2://${CARBON_HOME}/repository/database/metastore_db</value>
    <description>JDBC connect string for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>org.h2.Driver</value>
    <description>Driver class name for a JDBC metastore</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>wso2carbon</value>
    <description>username to use against metastore database</description>
</property>
<property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>wso2carbon</value>
    <description>password to use against metastore database</description>
</property>
Steps to clean the metastore database
- Shut down all BAM analyzer nodes.
- Check which metastore database is pointed to by the javax.jdo.option.ConnectionURL property in <BAM_HOME>/repository/conf/advanced/hive-site.xml.
- Back up the database checked in the above step, and then drop all the tables in it (for example, metastore_db).
- Restart all the BAM analyzer nodes.
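With the default embedded H2 metastore from hive-site.xml, dropping the tables can be done with H2's command-line Shell tool. This is a sketch under that assumption: the H2 jar path is illustrative, and the credentials mirror the hive-site.xml values shown above.

```shell
# Sketch, assuming the default embedded H2 metastore; adjust the H2 jar
# path to your installation. DROP ALL OBJECTS clears every table in the
# database -- make sure the backup from the previous step exists first.
java -cp repository/components/plugins/h2-*.jar org.h2.tools.Shell \
  -url "jdbc:h2://$CARBON_HOME/repository/database/metastore_db" \
  -user wso2carbon -password wso2carbon \
  -sql "DROP ALL OBJECTS"
```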
Hadoop cluster configuration
Hadoop cluster configuration settings have not changed since BAM 2.3.0. Therefore, proceed with your existing Hadoop installation.
Going into production
The following artifacts are no longer shipped with BAM 2.4.0:
- The activity monitoring sample and the activity monitoring toolboxes.
- The activity monitoring data agent, which has so far been available under Service Data Publishing.
The new activity search component has its own Jaggery app. This can be used to query data directly from Cassandra using indices, rather than using Hive scripts for summarizing data. It will also be shipped with the BAM distribution by default, thereby negating the need for installing a dedicated toolbox.
The message tracer replaces the activity data publisher for dumping SOAP payloads to BAM. It also serves to correlate messages based on ID.