Saturday

Code review with Sonar

Last week Sonar announced version 2.8, with a few new features and bug fixes. The main new feature is support for code review. Crucible and Review Board are alternative code review systems; Sonar's appeal is that it combines code analysis, coverage and review in one system that is easy to maintain with little effort.
As usual, the Sonar administrator must create users in order to assign tasks and collaborate. Sonar provides an LDAP plugin which delegates Sonar authentication to an external system; currently the plugin supports LDAP and Active Directory. In our corporation we use Active Directory, so first of all I tried to configure the LDAP plugin. The LDAP plugin wiki describes the installation against a plain LDAP system in detail, but covers AD poorly. With some effort my boss and I managed to configure the plugin against our AD system. Here is the configuration we ended up with:
#-------------------
# Sonar LDAP Plugin
#-------------------

# IMPORTANT : before activation, make sure that one Sonar administrator is defined in the external system
# Activates the plugin. Leave blank or comment out to use default sonar authentication.
sonar.authenticator.class: org.sonar.plugins.ldap.LdapAuthenticator

# Ignore failure at startup if the connection to external system is refused.
# Users can browse sonar but not log in as long as the connection fails.
# When set to true, Sonar will not start if connection to external system fails.
# Default is false.
#sonar.authenticator.ignoreStartupFailure: true

# Automatically create users (available since Sonar 2.0).
# When set to true, users will be created after successful authentication if they don't already exist.
# The default group affected to new users can be defined online, in Sonar general settings. The default value is "sonar-users".
# Default is false.
#sonar.authenticator.createUsers: true

# (omit if you use autodiscovery) URL of the LDAP server.
# If you are using ldaps, then you should install server certificate into java truststore.
# eg. ldap://localhost:10389
ldap.url: ldap://mycompany.com

# (optional) Distinguished Name (DN) of the root node in LDAP from which to search for users,
# eg. "ou=users,o=mycompany"
ldap.baseDn: dc=mycompany,dc=com

# (optional) Bind DN is the username of an LDAP user to connect (or bind) with.
# This is a Distinguished Name of a user who has administrative rights,
# eg. "cn=sonar,ou=users,o=mycompany". Leave blank for anonymous access to the LDAP directory.
ldap.bindDn: ADADMIN

# (optional) Bind Password is the password of the user to connect with.
# Leave blank for anonymous access to the LDAP directory.
ldap.bindPassword: ADADMIN_PASSWORD

# Login Attribute is the attribute in LDAP holding the user's login.
# Default is 'uid'. Set 'sAMAccountName' for Microsoft Active Directory.
ldap.loginAttribute: sAMAccountName

# Object class of LDAP users.
# Default is 'inetOrgPerson'. Set 'user' for Microsoft Active Directory.
ldap.userObjectClass: user

# (advanced option) See http://java.sun.com/products/jndi/tutorial/ldap/security/auth.html
# Default is 'simple'. Possible values: 'simple', 'CRAM-MD5', 'DIGEST-MD5', 'GSSAPI'.
ldap.authentication: simple

# (advanced option)
# See
# http://java.sun.com/products/jndi/tutorial/ldap/security/digest.html
# http://java.sun.com/products/jndi/tutorial/ldap/security/crammd5.html
# eg. example.org
#ldap.realm:

# (advanced option) Context factory class.
# Default is 'com.sun.jndi.ldap.LdapCtxFactory'.
#ldap.contextFactoryClass: com.sun.jndi.ldap.LdapCtxFactory
The configuration may vary depending on your AD setup; your system administrator can most likely help with this.
From now on, at authentication time Sonar ignores the password stored in its own system and delegates the username and password to Active Directory for authentication. Note that the Sonar administrator still has to configure the roles for each user individually.
After installing the plugin we are ready for code review. On the violations tab we should see the review link as follows:
Now we can add comments on violations; by default the task is assigned to the author of the comment:
After creating the task we can also reassign it to another user as follows:
All the reviews are available from the dashboard.
For more screenshots, visit this link (sonar-2-8-in-screenshots).
One shortcoming of Sonar's code review is the lack of notifications when a comment or task is assigned to a user. I believe a future Sonar release will add notification support to code review.

Wednesday

Apache Maven incremental build

Apache Maven is one of the most popular tools for building and managing Java projects. Based on the concept of a project object model (POM), Maven can manage a project's build, reporting and documentation from a central piece of information.
However, when you have a project with multiple modules, a few issues show up when you compile it. One of them is the lack of incremental builds: when you update your project from version control, you have to rebuild the entire system with the command mvn clean install. Consider the following Maven project structure:
IncrementalBuild
|_ _ test-api
|_ _ test-api-impl
|_ _ test-donothing
where module test-api-impl depends on module test-api. Whenever we make a change in module test-api, we have to recompile and rebuild module test-api-impl.
If we just run mvn install, module test-api-impl will not pick up the updated version of module test-api. You have to run mvn clean install, which rebuilds the entire project. Sometimes this is time consuming and simply unnecessary. You can download the project from here and check for yourself.
Apache Maven currently doesn't support incremental builds, not even in version 3.0.3.
But there is a plugin called the Maven Incremental Build plugin, which can build a project incrementally.
Just add the following plugin to the root POM file and you are ready to go:
<plugin>
 <groupId>net.java.maven-incremental-build</groupId>
 <artifactId>incremental-build-plugin</artifactId>
 <version>1.4</version>
 <executions>
  <execution>
   <goals>
    <goal>incremental-build</goal>
   </goals>
  </execution>
 </executions>
</plugin>
Now you can run mvn install without the clean goal, and the plugin will detect the updated code and recompile modules if needed.
UPD: Note that the page http://maven-incremental-build.java.net/site/usage.html contains an incorrect groupId in its example ("net.java.incremental-build-plugin"), which is not the artifact uploaded to the central Maven repository.
Resource:
1) Apache Maven Incremental Build support for WSO2 Carbon
2) Apache maven incremental plugin mojo

Saturday

Analyse with ANT - a Sonar way

After the JavaOne conference in Moscow, I found a few free hours to play with Sonar. Here are the quick steps to start analyzing ANT projects. Sonar provides an "Analyze with ANT" document to play around with ANT; I have just modified some parts.
Here it is.
1) Download the Sonar Ant Task and put it in your ${ANT_HOME}/lib directory
2) Modify your ANT build.xml as follows:
<?xml version = '1.0' encoding = 'windows-1251'?>

<project name="abc" default="build" basedir=".">
 <!-- Define the Sonar task if this hasn't been done in a common script -->
 <taskdef uri="antlib:org.sonar.ant" resource="org/sonar/ant/antlib.xml">
  <classpath path="E:\java\ant\1.8\apache-ant-1.8.0\lib" />
 </taskdef>
 <!-- Out-of-the-box those parameters are optional -->
 <property name="sonar.jdbc.url" value="jdbc:oracle:thin:@xyz/sirius.xyz" />
 <property name="sonar.jdbc.driverClassName" value="oracle.jdbc.driver.OracleDriver" />
 <property name="sonar.jdbc.username" value="sonar" />
 <property name="sonar.jdbc.password" value="sonar" />
 <!-- Additional Sonar configuration (PMD need 1.5 when using annotations)-->
 <property name="sonar.java.source" value="1.5"/>
 <property name="sonar.java.target" value="1.5"/>


 <!-- SERVER ON A REMOTE HOST -->
 <property name="sonar.host.url" value="http://sunny.fors.ru/sonar" />


 <property name="ear.file" value="konfiskat.ear"/>

 <property file="build.properties"/>

 <property name="build.dir"    value="build"/>
 <property name="classes.dir"  value="${build.dir}/classes"/>
 <property name="classes2.dir" value="classes"/>
 <property name="deploy.dir"   value="deploy"/>
 <property name="doc.dir"      value="docs"/>
 <property name="jar.dir"      value="${build.dir}/jar"/>
 <property name="lib.dir.1"    value="lib"/>
 <property name="lib.dir.2"    value="${lib.common.dir}"/>
 <property name="lib.dir.3"    value="common_lib"/>
 <property name="lib.dir.4"    value="${jdev.libs.dir}"/>
 <property name="src.dir"      value="src"/>
 <property name="config.dir"   value="${src.dir}/META-INF"/>
 <property name="temp.dir"   value="${src.dir}/temp"/>

 <path id="classpath">
  <fileset dir="${lib.dir.3}" includes="**/*.jar"/>
  <fileset dir="${lib.dir.2}" includes="**/com.ibm.mq.jar, **/jboss-j2ee.jar"/>
  <fileset dir="${lib.dir.4}" includes="**/*.jar"/>
 </path>

 <path id="srcpath">
  <pathelement location="${src.dir}"/>
 </path>
 <!-- Add the target -->
 <target name="sonar">
  <!-- The workDir directory is used by Sonar to store temporary files -->
  <sonar:sonar workDir="${temp.dir}" key="org.example:example" version="0.1-SNAPSHOT" xmlns:sonar="antlib:org.sonar.ant">

   <!-- source directories (required) -->
   <sources>
    <path location="${src.dir}" />
   </sources>

   <!-- binaries directories, which contain for example the compiled Java bytecode (optional) -->
   <binaries>
    <path location="${classes.dir}" />
   </binaries>

   <!-- path to libraries (optional). These libraries are for example used by the Java Findbugs plugin -->
   <libraries>
    <path refid="classpath"/>
   </libraries>
  </sonar:sonar>
 </target>

 <target name="clean">
 </target>

 <target name="init" depends="clean"/>

 <target name="compile" depends="init">
  <mkdir dir="${classes.dir}"/>
  <javac destdir="${classes.dir}"
    classpathref="classpath"
    debug="on">
   <src refid="srcpath"/>
  </javac>
 </target>

 <target name="doc" depends="compile">

 </target>

 <target name="build" depends="doc">
  <mkdir dir="${jar.dir}"/>
  <jar destfile="${jar.dir}/toOAS.jar">
   <manifest>
    <attribute name="Class-Path" value="com.ibm.mq.jar xercesImpl.jar"/>
   </manifest>
   <metainf dir="${config.dir}">
    <include name="ejb-jar.xml"/>
    <include name="orion-ejb-jar.xml"/>
   </metainf>
   <fileset dir="${classes.dir}">
    <include name="**/*.*"/>
   </fileset>
   <fileset dir="${src.dir}">
    <include name="*.properties"/>
   </fileset>
  </jar>

  <mkdir dir="${deploy.dir}"/>

  <echo file="${jar.dir}/application.xml"><![CDATA[<?xml version="1.0" encoding="UTF-8"?> <application> <display-name>ReadMQ</display-name> <module> <ejb>toOAS.jar</ejb> </module> </application>]]>
  </echo>

  <ear destfile="${deploy.dir}/${ear.file}"
    appxml="${jar.dir}/application.xml">
   <fileset dir="${jar.dir}" includes="*.jar"/>
   <fileset dir="${lib.dir.2}">
    <include name="com.ibm.mq.jar"/>
    <include name="xercesImpl.jar"/>
   </fileset>
  </ear>

 </target>

 <target name="all" depends="build"/>
</project>

Wednesday

Oracle JavaOne presentation on continuous integration

Tuesday

A quick fix of WS-I BP2703 assertion

A few days ago we received a WS-I conformance report from a third-party client saying that our web service is not compliant with the WS-I guidelines. The report summary failed with the following assertion error:
Assertion: BP2703
A team member quickly checked WS-I compliance in JDeveloper and couldn't reproduce the problem, but soapUI reported the assertion as failed. Here is the WSDL of the web service.
<?xml version="1.0" encoding="utf-8" ?>
<wsdl:definitions 
    xmlns:tns="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/RetailerService.wsdl" 
    targetNamespace="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/RetailerService.wsdl" 
    xmlns:retailer="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/Retailer.wsdl" 
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" 
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns="http://schemas.xmlsoap.org/wsdl/" >
 
  <wsdl:import namespace="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/Retailer.wsdl" 
      location="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/Retailer.wsdl"/>


  <wsdl:service name="RetailerService">
    <wsdl:port name="LocalRetailerPort" binding="retailer:RetailerSoapBinding">
      <soap:address location="http://localhost:9080/Retailer/services/Retailer"/>
    </wsdl:port>
  
  </wsdl:service>
<wsp:Policy wsu:Id="UsernameToken" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy/ws-policy.xsd" xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
   <wsp:All>
    <sp:TransportBinding/>
    <sp:SupportingTokens>
     <wsp:Policy>
      <sp:UsernameToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"/>
     </wsp:Policy>
    </sp:SupportingTokens>
   </wsp:All>
  </wsp:ExactlyOne>
 </wsp:Policy>
</wsdl:definitions>
If we look at the error carefully, we see that the WSDL parser encounters the wsp:Policy element after the service element and throws an exception. If we check the WS-Policy specification, we find that the Policy element may appear anywhere in a WSDL document.
However, the WS-I Basic Profile does not allow the Policy element after the service element, because of the way the WSDL schema is defined. If we put the Policy element before the types or import element, the assertion error goes away, because according to the WSDL schema the definitions element may contain any number of extensibility elements before the types element; that is the simple fix. The valid WSDL document is as follows:
<?xml version="1.0" encoding="utf-8" ?>

<wsdl:definitions 
    xmlns:tns="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/RetailerService.wsdl" 
    targetNamespace="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/RetailerService.wsdl" 
    xmlns:retailer="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/Retailer.wsdl" 
    xmlns:wsdl="http://schemas.xmlsoap.org/wsdl/" 
    xmlns:soap="http://schemas.xmlsoap.org/wsdl/soap/"
    xmlns="http://schemas.xmlsoap.org/wsdl/" >
  
     <wsp:Policy wsu:Id="UsernameToken" xmlns:wsu="http://docs.oasis-open.org/wss/2004/01/oasis-200401-wss-wssecurity-utility-1.0.xsd" xmlns:wsp="http://schemas.xmlsoap.org/ws/2004/09/policy/ws-policy.xsd" xmlns:sp="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702">
  <wsp:ExactlyOne>
   <wsp:All>
   
    <sp:TransportBinding/>
    <sp:SupportingTokens>
     <wsp:Policy>
      <sp:UsernameToken sp:IncludeToken="http://docs.oasis-open.org/ws-sx/ws-securitypolicy/200702/IncludeToken/AlwaysToRecipient"/>
     </wsp:Policy>
    </sp:SupportingTokens>
   </wsp:All>
  </wsp:ExactlyOne>
 </wsp:Policy> 
  <wsdl:import namespace="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/Retailer.wsdl" 
      location="http://www.ws-i.org/SampleApplications/SupplyChainManagement/2002-08/Retailer.wsdl"/>


  <wsdl:service name="RetailerService">
    <wsdl:port name="LocalRetailerPort" binding="retailer:RetailerSoapBinding">
      <soap:address location="http://localhost:9080/Retailer/services/Retailer"/>
    </wsdl:port>
  </wsdl:service>

</wsdl:definitions>
The rest of this blog post describes how to configure the WS-I testing tools on Mac OS. For some unknown reason soapUI doesn't display the conformance report in its window.
First download the Java WS-I testing tool from the following link. Unzip the archive and add the following properties to your .bash_profile:
export WSI_HOME=/Users/samim/Development/WS-i/TestTool/wsi-test-tools
export PATH=/opt/subversion/bin:$WSI_HOME/java/bin:$PATH
Then reload the profile: . .bash_profile
If we now run the Analyzer.sh tool, we get the following error:
/bin/sh^M: bad interpreter: No such file or directory
Unfortunately, all the executable files in the wsi-test-tools/java/bin/ directory have the wrong line endings.
Do the following for the files setenv.sh and Analyzer.sh:
cp -p setenv.sh setenv.sh.orig
cat setenv.sh | tr -d '\r' > setenv.sh.new
mv setenv.sh.new setenv.sh
Now change the permissions of the files:
chmod +x setenv.sh (do the same for Analyzer.sh)

Now we are ready to run the analyzer. From the wsi-test-tools/java/samples/ directory, run the following command:
./Analyzer.sh -config ./analyzerConfig.xml
This analyzes the WSDL and creates a report in the current directory. You can examine and edit analyzerConfig.xml for your own purposes. The resulting report.xml can be opened in any web browser to view the report.

Saturday

Clearing Hazelcast data grid cache with Oracle Database change notification

UPD: if you are interested in in-memory computing, I recommend the book "High performance in-memory computing with Apache Ignite".

A few days ago we decided to use a 2nd level cache for better Java scalability in our legacy system. Everything went fine with Hazelcast as the 2nd level cache, until a few of our 3rd party applications started uploading data directly into the Oracle schema. Generally, a middle-tier data cache duplicates some data from the back-end database server. Its goal is to avoid redundant queries to the database. However, this is efficient only when the data rarely changes in the database; the cache has to be updated or invalidated when the data changes in the database. If the application performs all DML operations through the cache, life is simple, but in our case some of our 3rd party applications can't use the Hazelcast data grid, so we had to find a proper way to update or clear our caches whenever an entity in those tables is updated. In this post I will show a simple way to clear the Hazelcast cache (a Hibernate region) whenever a database change event occurs in the Oracle database.
It's very easy to plug Hazelcast in as a 2nd level cache in a Hibernate project.
To enable the 2nd level cache in Hibernate, do the following:

      AnnotationConfiguration aconf = new AnnotationConfiguration();
      ....
      aconf.setProperty("hibernate.cache.use_query_cache", "true");
      aconf.setProperty("hibernate.cache.use_second_level_cache", "true");
      aconf.setProperty("hibernate.cache.use_minimal_puts", "true");
      aconf.setProperty("hibernate.cache.provider_class", "com.hazelcast.hibernate.provider.HazelcastCacheProvider");
As usual, you can also set the properties above in hibernate.cfg.xml.
Now put the Hibernate @Cache annotation on your entities and collections:

import org.hibernate.annotations.Cache;
import org.hibernate.annotations.CacheConcurrencyStrategy;
@Cache(usage = CacheConcurrencyStrategy.READ_WRITE)
@SuppressWarnings("serial")
public class FdcVt extends FdcDocBase implements java.io.Serializable, IReportable, ITrXmlGenerationable {
}
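For illustration, here is a minimal sketch of how a query can take advantage of the cache once this configuration is in place. It continues from the aconf object above (org.hibernate imports omitted, matching the style of the fragment above); the query cache is opt-in per query, so setCacheable(true) has to be called explicitly, and the HQL string is just an example for the FdcVt entity:

      // build the session factory from the configuration shown above
      SessionFactory sessionFactory = aconf.buildSessionFactory();
      Session session = sessionFactory.openSession();

      // a cacheable query: the result set goes to the query cache,
      // the returned entities go to the FdcVt region in Hazelcast
      List<?> vts = session.createQuery("from FdcVt")
                           .setCacheable(true)
                           .list();

      session.close();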
The application is now ready to use the 2nd level cache (assuming you also have the hazelcast-hibernate-*.jar in your classpath). If you run the application, you should see the following logs in your console or log file:

22.01.2011 15:04:50 com.hazelcast.hibernate.provider.HazelcastCacheProvider
INFO: Starting up HazelcastCacheProvider...
22.01.2011 15:04:50 com.hazelcast.config.XmlConfigBuilder
INFO: Looking for hazelcast.xml config file in classpath.
22.01.2011 15:04:50 com.hazelcast.config.XmlConfigBuilder
WARNING: Could not find hazelcast.xml in classpath.
Hazelcast will use hazelcast-default.xml config file in jar.
22.01.2011 15:04:50 com.hazelcast.config.XmlConfigBuilder
INFO: Using configuration file /hazelcast-default.xml in the classpath.
22.01.2011 15:04:51 com.hazelcast.system
INFO: [dev] Hazelcast 1.9.1 (20110103) starting at Address[192.168.157.1:5701]
22.01.2011 15:04:51 com.hazelcast.system
INFO: [dev] Copyright (C) 2008-2010 Hazelcast.com
22.01.2011 15:04:51 com.hazelcast.impl.LifecycleServiceImpl
INFO: [dev] Address[192.168.157.1:5701] is STARTING
22.01.2011 15:04:53 com.hazelcast.impl.Node
INFO: [dev] 


Members [1] {
    Member [192.168.157.1:5701] this
}

22.01.2011 15:04:53 com.hazelcast.impl.LifecycleServiceImpl
INFO: [dev] Address[192.168.157.1:5701] is STARTED
22.01.2011 15:04:54 com.hazelcast.hibernate.provider.HazelcastCache
INFO: Creating new HazelcastCache with region name: ru.fors.lsadb.datamodel.FdcVt
22.01.2011 15:04:54 com.hazelcast.hibernate.provider.HazelcastCache
INFO: Creating new HazelcastCache with region name: org.hibernate.cache.UpdateTimestampsCache
22.01.2011 15:04:54 com.hazelcast.hibernate.provider.HazelcastCache
INFO: Creating new HazelcastCache with region name: org.hibernate.cache.StandardQueryCache 
Hazelcast creates a Hibernate region for every entity; in our case it's ru.fors.lsadb.datamodel.FdcVt. You can change the region name through the @Cache annotation and also in the Hazelcast configuration file.
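For example, the region can be renamed on the annotation roughly like this (the region name "fdcVtRegion" is just an illustration, not something from the original project):

@Cache(usage = CacheConcurrencyStrategy.READ_WRITE, region = "fdcVtRegion") // placed on the FdcVt entity as before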
If you run any query against the FdcVt entity, the first time the query runs against the database table, and the second time it should be served from the Hazelcast cache. If you enable Hibernate statistics through JMX, you can also see the cache hits for the query results.
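If you prefer to check this from code rather than through JMX, here is a minimal sketch using Hibernate's Statistics API (org.hibernate.stat.Statistics). It assumes the sessionFactory built from the configuration above and that statistics generation is enabled, e.g. with hibernate.generate_statistics=true:

      // Hibernate statistics: the counters grow as the 2nd level and query caches are hit
      Statistics stats = sessionFactory.getStatistics();
      stats.setStatisticsEnabled(true);

      System.out.println("2nd level cache hits  : " + stats.getSecondLevelCacheHitCount());
      System.out.println("2nd level cache misses: " + stats.getSecondLevelCacheMissCount());
      System.out.println("query cache hits      : " + stats.getQueryCacheHitCount());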
Now it's time to meet Oracle Database Change Notification: since Oracle 10g (10.2) it has been possible to get a notification whenever a database object changes. Oracle exposes this feature through the 11g JDBC driver, so you just download the driver from the Oracle site and you are ready to code. Oracle even provides a fine-grained example of using the feature.
The following paragraphs are copied and pasted from the Oracle documentation :-)

To use Oracle JDBC driver support for Database Change Notification, perform the following:
  1. Registration: You first need to create a registration.
  2. Query association: After you have created a registration, you can associate SQL queries with it. These queries are part of the registration.
  3. Notification: Notifications are created in response to changes in tables or result set. Oracle database communicates these notifications to the JDBC drivers through a dedicated network connection and JDBC drivers convert these notifications to Java events.
Also, you need to grant the CHANGE NOTIFICATION privilege to the user. For example, if you connect to the database using the SCOTT user name, then you need to run the following command in the database:
grant change notification to scott;
For detailed information you should visit the Database Change Notification page.
Here is a complete quick-start example which clears the Hazelcast cache whenever the entity is updated:

package com.blu.misc;

import oracle.jdbc.driver.OracleConnection;
import oracle.jdbc.driver.OracleDriver;
import oracle.jdbc.dcn.DatabaseChangeRegistration;
import oracle.jdbc.dcn.DatabaseChangeListener;
import oracle.jdbc.dcn.DatabaseChangeEvent;
import oracle.jdbc.dcn.TableChangeDescription;
import oracle.jdbc.OracleStatement;

import java.util.Properties;
import java.util.Map;
import java.sql.SQLException;
import java.sql.Statement;
import java.sql.ResultSet;

import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.Transaction;
import com.hazelcast.client.HazelcastClient;

public class GetNotify {
    private static final String USERNAME="xyz";
    private static final String PASSWORD = "w";
    private static final String URL="jdbc:oracle:thin:@xyz:1521:orcl";

    public static void main(String[] args) {
        System.out.println("Notify start");
        GetNotify notif = new GetNotify();
        OracleConnection con= null;
        DatabaseChangeRegistration dcr = null;
        try{
            con =  notif.getConnection();
            Properties prop = new Properties();
            // set the registration properties
            prop.setProperty(OracleConnection.DCN_NOTIFY_ROWIDS,"true");
            //prop.setProperty(OracleConnection.DCN_QUERY_CHANGE_NOTIFICATION,"true");

            dcr = con.registerDatabaseChangeNotification(prop);
            // add the listener
            dcr.addListener(new DatabaseChangeListener(){
                public void onDatabaseChangeNotification(DatabaseChangeEvent e) {
                    Thread t = Thread.currentThread();
                    System.out.println("QCNDemoListener: got an event ("+this+" running on thread "+t+")");
                    System.out.println("====================================");
                    System.out.println(e.toString());
                    System.out.println("====================================");
                    TableChangeDescription[] tchanges =  e.getTableChangeDescription();
                    for(TableChangeDescription tdesc : tchanges){
                        System.out.println("Changed Object:"+ tdesc.getTableName());
                    }
                    // clear cache
                    HazelcastInstance instance = HazelcastClient.newHazelcastClient("dev", "dev-pass", "192.168.157.1", "192.168.157.1:5702");
                    Map<String, org.hibernate.cache.ReadWriteCache.Item> vtMaps = instance.getMap("ru.fors.lsadb.datamodel.FdcVt");
                    System.out.println("Cache size by region:"+ vtMaps.size());
                    
                    Transaction transaction = instance.getTransaction();
                    transaction.begin();
                    vtMaps.clear();
                    transaction.commit();
                    System.out.println("Object allocated after clear cache.."+ vtMaps.size());
                }
            });

            String query = "select * from dbf_kbk";
            Statement stm = con.createStatement();
            ((OracleStatement) stm).setDatabaseChangeRegistration(dcr);

            ResultSet rs = stm.executeQuery(query);
            stm.executeQuery("select 1 from fdc_vt where 1!=2");
            stm.executeQuery("select 1 from fdc_pt where 1!=2");
            while(rs.next()){
            }
            // get tables from dcr
            String[] tables = dcr.getTables();
            for(String str : tables){
                System.out.println("Tables:"+ str);
            }
            rs.close();
            stm.close();
        }catch(SQLException e){
            System.out.println("SQLException:"+ e.getMessage());
            try{
            if(con != null && !con.isClosed()){
                con.unregisterDatabaseChangeNotification(dcr);
                con.close();

            }
            }catch(SQLException e1){
                System.out.println("e1"+ e1.getMessage());
            }
        }finally{
            try{
            if(con != null && !con.isClosed()){
                con.close();
            }
            }catch(SQLException e){
                System.out.println("e2:"+e.getMessage());
            }

        }
    }
    private OracleConnection getConnection() throws SQLException {
        OracleDriver driver = new OracleDriver();
        Properties prop = new Properties();
        prop.setProperty("user",this.USERNAME);
        prop.setProperty("password",this.PASSWORD);
        return (OracleConnection)driver.connect(this.URL,prop);
    }

}
The snippet above is fairly self-explanatory: first we create the registration and associate the entity tables with it through SQL queries, then we add a DatabaseChangeListener and implement the Hazelcast client operations inside it. Through the Hazelcast client we get the cache map of the Hibernate region, start a transaction and clear the cache. When the transaction is committed, all members of the Hazelcast cluster see the change. Hibernate then has to run the query against the database again and refill the 2nd level cache, which keeps the query results on the client side in sync with the database.
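One more note: in the snippet above, main() falls through to the finally block and closes the connection almost immediately after registering, so in a real application you will want to keep the process (and the registration) alive for as long as you want to receive events, and only unregister on shutdown. A minimal, purely illustrative sketch of such a wait:

        // keep the application alive so the JDBC driver's notification thread can deliver events
        Object lock = new Object();
        synchronized (lock) {
            try {
                lock.wait(); // block until the process is interrupted or killed
            } catch (InterruptedException ie) {
                Thread.currentThread().interrupt();
            }
        }
        // on shutdown: con.unregisterDatabaseChangeNotification(dcr); con.close();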
Thanks for reading.

J2EE application profiling with YourKit on WebLogic

One of our providers uses HP-UX on a production machine, and a few J2EE applications run on it. After a while we started getting complaints from our clients that the portal often goes out of memory, and we decided to investigate the application with YourKit. YourKit is an industry-leading Java profiler these days; it can work with standalone Java applications as well as do remote profiling. YourKit supports SQL, JNDI and runtime memory profiling; see the following link for more information.

1) First download the version you need; in my case I downloaded the Windows and HP-UX versions and got an evaluation key.
2) Unzip YJP-9.5.3*.zip and run the following command on HP-UX to set up the profiling agent for WebLogic:
 cd yjp-9.5.3/lib && java -jar yjp.jar -integrate

This brings up a new command console to configure the agent with the WebLogic server. Go through the prompts and locate your WebLogic startup script as follows:

Now you get a new startup script named startWebLogic_with_yjp.sh that starts the WebLogic server with the YourKit agent. Most of the time you can use this script to start WebLogic with the agent; the script only adds the following JVM argument:

JAVA_OPTIONS="-agentpath:/var/oracle/app/yourkit/yjp-9.5.3/bin/hpux-ia64-64/libyjpagent.so=disablestacktelemetry,disableexceptiontelemetry,builtinprobes=none,delay=10000,sessionname=WebLogicMona
 $JAVA_OPTIONS"
export JAVA_OPTIONS 


On HP-UX the script above couldn't start WebLogic with the agent; for some reason the JAVA_OPTIONS environment variable was not picked up in the startup terminal. As a quick fix I copied the JAVA_OPTIONS into the bin/startWebLogic.sh script
and started the server with nohup ./startWebLogic.sh &
Now in the nohup.out file we find the following log:
[YourKit Java Profiler 9.5.3] Loaded. Log file: /home/oracle/.yjp/log/8907.log

From 8907.log we found the port that the agent listens on:
[YourKit Java Profiler 9.5.3] [12.860]: Profiler agent is listening on port 10001


Now from our host operating system we can connect remotely and profile our applications.
You can watch the demos here to get a quick start with YourKit.