Batch-Insertion in Hibernate

This post shows how to enable batch processing to insert a large number of records into your database using Hibernate.

If you are undertaking batch processing you will need to enable the use of JDBC batching. This is absolutely essential if you want to achieve optimal performance. Set the JDBC batch size to a reasonable number (10-50, for example):

hibernate.jdbc.batch_size=20

The code snippet looks like this:

Session session = sessionFactory.openSession();
Transaction txInstance = session.beginTransaction();
for ( int i = 0; i < 100000; i++ ) {
    Student student = new Student(.....);
    session.save(student);
    if ( i % 20 == 0 ) { // same as the JDBC batch size
        // flush the batch of inserts and free session memory
        session.flush();
        session.clear();
    }
}
txInstance.commit();
session.close();

When making new objects persistent, flush() and then clear() the session regularly to control the size of the first-level cache. By default, Hibernate caches every persisted object in the session-level cache, and without this your application would eventually fail with an OutOfMemoryError.

A JDBC batch can target one table only, so every new DML statement that targets a different table ends the current batch and starts a new one. Mixing statements for different tables is therefore undesirable when using JDBC batch processing.
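If a single unit of work has to persist entities that map to several tables, Hibernate can regroup the SQL statements per table before executing them, so the batches stay intact. A minimal configuration sketch (these are standard Hibernate property names; the values are illustrative):

```properties
hibernate.jdbc.batch_size=20
hibernate.order_inserts=true
hibernate.order_updates=true
```

With ordered inserts and updates, statements for the same table are grouped together before execution, so fewer batches are broken up by table switches.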




How to use system commands in LUA

Let's assume we have a requirement to run Unix commands on the click of a button, using Lua.

To achieve this, follow these steps:
1. First, check whether your Lua page is working.
> Run your Lua page to ensure there are no errors in it.

2. Next, configure a template file for the Lua page. (You can also do this with Lua code alone, but that offers only limited CSS control.)
> For this we first need to create the template page used by the Lua file.

Ex: let's assume you have a Lua file named details.lua.
First you need to create a Map in your Lua page, then configure your template (getData.htm) in the Lua file:

<input type="button" value="Set" style="width:71px;" onclick="<%-=pcdata(self:getData(section))-%>" />

Now, in your Lua page (details.lua), you need to write this function:

-- readData is the CBI section; test1 is the option rendered by the template
test1 = readData:option(Value, "_custom", translate("Get Data "), "help text ")
test1.nocreate = true
test1.widget = "checkbox"
test1.template = "cbi/getData"
function test1.getData(self, section)
    local ptr1 = "/usr/bin/runScript start"
    os.execute(ptr1)  -- run the command on the underlying OS
end

In this way we can run any system command as the value of ptr1, for example "ls -ltrh /usr/bin >> /tmp/fileDetails.txt", and execute it on the OS indirectly via the Lua page.

Pushpraj Kumar.


JAVA POLICY FILE



The Java™ 2 Platform, Enterprise Edition (J2EE) Version 1.3 and later specifications have a well-defined programming model of responsibilities between the container providers and the application code.

The java.policy file is a global default policy file that is shared by all of the Java programs that run in the Java virtual machine (JVM) on the node. A change to the java.policy file is local for the node.

The java.policy file is not a configuration file managed by the repository and the file replication service. Changes to this file are local and are not replicated to other machines.
Using this file you can control what an application is allowed to do at run time: configure the permissions so that Java does not grant the global defaults, and then specify only the permissions you want granted to your app. The key is to run your app with the options below (myapp.policy is an example file name):

java -Djava.security.manager -Djava.security.policy==myapp.policy MyClass

Note the double equals: == means use only the permissions in the named file, whereas a single = means use these permissions in addition to the inherited global permissions.

Then create a policy file excluding the permissions you want to deny:

grant codeBase "file:/C:/abc.jar" {
    // List of permissions, minus the ones you want to deny.
    // For example, the following grants the application ONLY
    // AudioPermission and AWTPermission; other permissions
    // would be denied.
    permission javax.sound.sampled.AudioPermission;
    permission java.awt.AWTPermission;
};

NOTE: The default policy file is {app_server_root}/java/jre/lib/security/java.policy. In it, default permissions are granted to all classes, and its policy applies to all processes launched by the Application Server.
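To see the effect of such a policy from code, you can query the installed security manager. A minimal sketch (the class name PolicyDemo and the permission being checked are illustrative, not part of the original post; run it with and without -Djava.security.manager to compare the results):

```java
import java.util.PropertyPermission;

public class PolicyDemo {
    // Returns a status string instead of printing, so the result is easy to check.
    static String checkHomeRead() {
        SecurityManager sm = System.getSecurityManager();
        if (sm == null) {
            return "no security manager installed"; // default: everything is allowed
        }
        try {
            // Throws SecurityException unless the policy grants this permission.
            sm.checkPermission(new PropertyPermission("user.home", "read"));
            return "permission granted";
        } catch (SecurityException e) {
            return "permission denied";
        }
    }

    public static void main(String[] args) {
        System.out.println(checkHomeRead());
    }
}
```

Launched plainly (java PolicyDemo) it reports that no security manager is installed; launched with the options above, the answer depends entirely on what your policy file grants.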



Pushpraj Kumar


Data Vault Model

Data Vault Modeling is a database modeling method that is designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that, apart from the modeling aspect, deals with issues such as auditing, tracing of data, loading speed and resilience to change.

Data Vault Modeling focuses on several things.
First, it emphasizes the need to trace where all the data in the database came from. This means that every row in a Data Vault must be accompanied by record source and load date attributes, enabling an auditor to trace values back to the source.
Second, it makes no distinction between good and bad data (“bad” meaning not conforming to business rules). This is summarized in the statement that a Data Vault stores “a single version of the facts”, as opposed to the practice in other data warehouse methods of storing “a single version of the truth”, where data that does not conform to the definitions is removed or “cleansed”.
Third, the modeling method is designed to be resilient to change in the business environment where the data being stored is coming from, by explicitly separating structural information from descriptive attributes.
Finally, Data Vault is designed to enable parallel loading as much as possible, so that very large implementations can scale out without the need for major redesign.

Data Vault’s philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data is not conforming to these definitions and rules then that is a problem for the business, not the data warehouse. The determination of data being “wrong” is an interpretation of the data that stems from a particular point of view that may not be valid for everyone or at every point in time. Therefore the Data Vault must capture all data and only when reporting or extracting data from the Data Vault is the data being interpreted.

Data Vault attempts to solve the problem of dealing with change in the environment by separating the business keys (that do not mutate as often, because they uniquely identify a business entity) and the associations between those business keys, from the descriptive attributes of those keys.

The business keys and their associations are structural attributes, forming the skeleton of the data model. The Data Vault method has as one of its main axioms that real business keys only change when the business changes and are therefore the most stable elements from which to derive the structure of a historical database. If you use these keys as the backbone of a Data Warehouse, you can organize the rest of the data around them. This means that choosing the correct keys for the Hubs is of prime importance for the stability of your model. The keys are stored in tables with a few constraints on the structure. These key-tables are called Hubs.

The Data Vault modelled layer is normally used to store data. It is not optimized for query performance, nor is it easy to query by the well-known query-tools such as Cognos, SAP Business Objects, Pentaho et al. Since these end-user computing tools expect or prefer their data to be contained in a dimensional model, a conversion is usually necessary.
For performance reasons the dimensional model will usually be implemented in relational tables, after approval.
Note that while it is relatively straightforward to move data from a Data Vault model to a (cleansed) dimensional model, the reverse is not as easy.



PUSHPRAJ (BI-Developer)