...

Sample error log
event for endpoint group [ ( Receiver URL : tcp://das-1.-prod.local:7611, Authentication URL : ssl://das-1.-prod.local:7711),( Receiver URL : tcp://das-2.-prod.local:7611, Authentication URL : ssl://das-2.-prod.local:7711) ], 139882 events dropped so far. {org.wso2.carbon.databridge.agent.DataPublisher}
TID: [-1] [] [2017-05-23 00:05:53,708] ERROR {org.wso2.carbon.databridge.agent.endpoint.DataEndpoint} - Unable to send events to the endpoint. {org.wso2.carbon.databridge.agent.endpoint.DataEndpoint}
org.wso2.carbon.databridge.agent.exception.DataEndpointException: Cannot send Events
Occurrence

This error occurs on the client side when events are published to a WSO2Event event receiver in DAS via the databridge agent.
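The following is a minimal sketch of such a client, written against the databridge agent's DataPublisher API. The receiver and authentication URLs, credentials, stream name, and payload are placeholders and must match your own DAS deployment and stream definition; the point is that the publish() call is where an unreachable endpoint or an overloaded server eventually surfaces as the error shown above.

import org.wso2.carbon.databridge.agent.DataPublisher;
import org.wso2.carbon.databridge.commons.Event;
import org.wso2.carbon.databridge.commons.utils.DataBridgeCommonsUtils;

public class SampleEventPublisher {
    public static void main(String[] args) throws Exception {
        // Placeholder endpoint and credentials; use the receiver URL, authentication URL,
        // username, and password of your own DAS setup.
        DataPublisher dataPublisher = new DataPublisher(
                "tcp://localhost:7611",   // Thrift data receiver endpoint
                "ssl://localhost:7711",   // Thrift SSL authentication endpoint
                "admin", "admin");

        // Placeholder stream; the stream must already be defined in DAS, and the
        // payload below must match that stream definition.
        String streamId = DataBridgeCommonsUtils.generateStreamId("org.example.stream", "1.0.0");

        // Events are queued and sent asynchronously by the agent. If the endpoint is
        // unreachable or the server cannot keep up, the agent logs
        // "Unable to send events to the endpoint" and starts dropping events.
        dataPublisher.publish(new Event(streamId, System.currentTimeMillis(),
                null, null, new Object[]{"sample-payload-value"}));

        dataPublisher.shutdown();
    }
}
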
Possible reasons
  • The TCP connection between the databridge client and the DAS server may not be established.
  • The DAS server may be unresponsive because it cannot handle the event load, either because it is not tuned properly or because the allocated resources (CPU, memory, disk space, network bandwidth, database capacity, etc.) are insufficient.
  • The database may not be able to receive data fast enough, making it a bottleneck. This can happen when the underlying store is an RDBMS such as MySQL with limited capacity.
Troubleshooting options

To check the TCP connection, enable event tracing and event logs, and make sure that one or more events are received by WSO2 DAS.
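Before enabling tracing, it can also help to verify plain network reachability from the client host to the DAS receiver ports reported in the error log. A minimal check (the hostname below is a placeholder; the ports are the Thrift data and SSL authentication ports seen in the sample log):

nc -zv das-1.example.local 7611   # Thrift data receiver port
nc -zv das-1.example.local 7711   # Thrift SSL authentication port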

To log all the events received via WSO2Event event receivers deployed in DAS, add the following logger to the <DAS_HOME>/repository/conf/log4j.properties file.
log4j.logger.org.wso2.carbon.event.input.adapter.wso2event.internal.ds.WSO2EventAdapterServiceDS=DEBUG

Recommended action
  • To check the receiver rate and the database persistence rate, set the following system properties when starting the DAS server (e.g., ./bin/wso2server.sh -DprofileReceiver=true). A combined startup example is given after this list.
    • -DprofileReceiver=true : This enables you to check the throughput per receiver. It creates the receiver-perf.txt file in <DAS_HOME>, where the throughput is calculated and recorded for every 100000 events by default.
    • -DreceiverStatsCutoff=2000 : This specifies the number of events after which the receiver-perf.txt file generated via the -DprofileReceiver property is updated. For example, if 2000 is specified, new statistics are appended to the receiver-perf.txt file for every batch of 2000 events received via the databridge agent. The default value is 100000 events.
    • -DprofilePersistence=true : This allows you to check the throughput at the persistence (i.e., Data Access Layer) level. It creates the persistence-perf.txt file in <DAS_HOME>.
    • -DpersistenceStatsCutoff=1000 : This specifies the number of events after which the persistence-perf.txt file generated via the -DprofilePersistence property is updated. If this property is not set, the default of 100000 events is used.
  • NoSQL databases such as HBase are recommended for high-throughput environments in order to avoid this exception.
  • If the volume of persisted events increases, schedule data purging to run more frequently so that the database can keep up with the load. For more information, see Purging Data.
  • To troubleshoot issues relating to Thrift/Binary transport that may result in this exception, see Understanding the Thrift Transport Thread Model.
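For example, to profile both the receiver and the persistence layer with more frequent updates, the server could be started as follows (a sketch; adjust the cutoff values to suit your event rate):

cd <DAS_HOME>
./bin/wso2server.sh -DprofileReceiver=true -DreceiverStatsCutoff=2000 -DprofilePersistence=true -DpersistenceStatsCutoff=1000

# Both files are created in <DAS_HOME>; tail them to watch the throughput figures.
tail -f receiver-perf.txt persistence-perf.txt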

...