
Batch Update in Hibernate

JDBC has long offered support for DML statement batching. By default, all statements are sent one after the other, each one in a separate network round-trip. Batching allows us to send multiple statements in one shot, saving unnecessary socket stream flushing.
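For context, this is roughly what batching looks like in plain JDBC; the connection object, the students collection, and the STUDENT table columns below are assumed purely for illustration:

// Minimal plain-JDBC batching sketch (connection, students and column names are illustrative only)
PreparedStatement ps = connection.prepareStatement(
        "UPDATE STUDENT SET REG_NO = ? WHERE STUDENT_ID = ?");
for (Student student : students) {
    ps.setString(1, student.getRegNo());
    ps.setLong(2, student.getId());
    ps.addBatch();        // queue the statement instead of sending it right away
}
ps.executeBatch();        // all queued statements go to the server in a single round-trip
ps.close();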

Hibernate hides the database statements behind a transactional write-behind abstraction layer. An intermediate layer allows us to hide the JDBC batching semantics from the persistence layer logic. This way, we can change the JDBC batching strategy without altering the data access code.

The update code snippet looks like this:

Session session = sessionFactory.openSession();
Transaction txInstance = session.beginTransaction();

// HQL refers to the entity name (Student), not the table name
ScrollableResults studentInstance = session.createQuery("FROM Student").scroll();
int count = 0;

while (studentInstance.next())
{
    Student student = (Student) studentInstance.get(0); // first (and only) column of the scrolled row
    student.setRegNo(regNo);                            // regNo - the new registration number to apply
    session.update(student);

    // 50 - size of the batch which you set earlier (hibernate.jdbc.batch_size).
    // For details: "http://helicaltech.com/batch-insertion-in-hibernate/"
    if (++count % 50 == 0)
    {
        session.flush();  // push the batched updates to the database
        session.clear();  // detach processed entities so the session does not grow unbounded
    }
}

txInstance.commit();
session.close();
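The batch size referenced in the comment above is controlled by the hibernate.jdbc.batch_size property. A minimal sketch of setting it programmatically, assuming a standard hibernate.cfg.xml on the classpath (it can equally be set in the configuration file itself):

Configuration configuration = new Configuration().configure(); // reads hibernate.cfg.xml
configuration.setProperty("hibernate.jdbc.batch_size", "50");  // 50 matches the flush interval above
SessionFactory sessionFactory = configuration.buildSessionFactory();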

----------------------------------

 

PUSHPRAJ KUMAR (BI Developer)

INNODB Backup and Recovery using mysqldump

InnoDB is a storage engine for MySQL. MySQL 5.5 and later use it by default. It provides the standard ACID-compliant transaction features, along with foreign key support (Declarative Referential Integrity). It is included as standard in most binaries distributed by “MySQL AB”, the exception being some “OEM” versions.

The key to safe database management is making regular backups. Depending on your data volume, number of MySQL servers, and database workload, you can use these techniques, alone or in combination: hot backup with MySQL Enterprise Backup; cold backup by copying files while the MySQL server is shut down; physical backup for fast operation (especially for restore); and logical backup with mysqldump for smaller data volumes or to record the structure of schema objects.

Issue:

C:\Calpont\bin>mysqldump.exe -u root -p --databases TestDB > C:\Pushpraj\infinidbPushpraj.sql

Enter password: *******

mysqldump: Got error: 138: IDB-2029: Lock table command is currently not supported in InfiniDB. when using LOCK TABLES

Solution: use --skip-lock-tables

LOGICAL BACKUP

Windows:

To take a dump of your InfiniDB database from MySQL:

C:\Calpont\bin>mysqldump.exe --skip-lock-tables -u root -p --databases TestDB > C:\Pushpraj\infinidbPushpraj.sql

Enter password: *******

This will create your dump file.

To restore the dump into your MySQL:

First create an empty database on your destination MySQL server.

C:\Calpont\bin>mysql.exe -u root -p  TestDB < C:\Pushpraj\infinidbPushpraj.sql 

Enter password: *******

 

🙂   Have fun   🙂

 

PUSHPRAJ KUMAR


MySQL C API Programming

MySQL is a leading open-source database management system. It is a multi-user, multithreaded system and is especially popular on the web.

Basic structure of C programs that use the MySQL C API:

1. All programs must include <mysql/mysql.h> as the last include.
2. Define a MYSQL type variable. NOTE: there should be only one MYSQL variable per connection.
3. Initialize the MYSQL variable with mysql_init().
4. Load any options, if required, using mysql_options(). If you don't need options, don't call it. You can call this function multiple times if you need to, but always after mysql_init() and before mysql_real_connect().
5. Connect by calling mysql_real_connect().
6. Call your business logic and the MySQL APIs.
7. Close the MYSQL variable with mysql_close().

 

Template of a program:

#include <mysql/mysql.h>

<return_type> <function_name>(parameters)
{
    MYSQL mysql;
    mysql_init(&mysql);

    mysql_options(&mysql, MYSQL_OPT_COMPRESS, 0);                 /* call only if required, otherwise omit */
    mysql_options(&mysql, MYSQL_READ_DEFAULT_GROUP, "Pushpraj");  /* call only if required, otherwise omit */

    mysql_real_connect(....);

    /* now call the other APIs */

    mysql_close(&mysql);
}

Compiling and Running in UNIX:

To compile:  $ gcc mysql_app.c -o mysql_app -I/usr/local/include -L/usr/local/lib/mysql -lmysqlclient

To run:  $ ./mysql_app
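If your MySQL headers and client library live elsewhere, the include and library paths above will differ. On most installations the mysql_config utility (assumed to be on your PATH) can supply the right flags, for example:

$ gcc mysql_app.c -o mysql_app $(mysql_config --cflags --libs)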

Examples:

1. Test the connection.

#include <my_global.h>
#include <mysql.h>

int main(int argc, char **argv)
{
    printf("MySQL client version: %s\n", mysql_get_client_info());
    exit(0);
}

Description: mysql_get_client_info() returns the version string of the MySQL client library.

2. Create a table and insert data into it.

#include <my_global.h>
#include <mysql.h>

void finish_with_error(MYSQL *con)
{
    fprintf(stderr, "%s\n", mysql_error(con));
    mysql_close(con);
    exit(1);
}

int main(int argc, char **argv)
{
    MYSQL *con = mysql_init(NULL);

    if (con == NULL) {
        fprintf(stderr, "mysql_init() failed\n");
        exit(1);
    }

    /* replace the host, credentials, database name and port-No with your own values */
    if (mysql_real_connect(con, "192.168.2.50", "username", "password", "db-Name", port-No, NULL, 0) == NULL) {
        finish_with_error(con);
    }

    if (mysql_query(con, "DROP TABLE IF EXISTS Dell")) {
        finish_with_error(con);
    }

    if (mysql_query(con, "CREATE TABLE Dell(Id INT, Name TEXT, Price INT)")) {
        finish_with_error(con);
    }

    if (mysql_query(con, "INSERT INTO Dell VALUES(1,'vostro14',26042)")) {
        finish_with_error(con);
    }

    if (mysql_query(con, "INSERT INTO Dell VALUES(2,'inspiron15',37000)")) {
        finish_with_error(con);
    }

    if (mysql_query(con, "INSERT INTO Dell VALUES(3,'inspiron16',40000)")) {
        finish_with_error(con);
    }

    if (mysql_query(con, "INSERT INTO Dell VALUES(4,'vostro15',29000)")) {
        finish_with_error(con);
    }

    if (mysql_query(con, "INSERT INTO Dell VALUES(5,'express',69000)")) {
        finish_with_error(con);
    }

    mysql_close(con);
    exit(0);
}

Description: mysql_init() allocates and initialises a MYSQL object suitable for mysql_real_connect(), which establishes the connection to the database. We provide the connection handler, host name, username, password, database name, port number, unix socket, and client flags.

3. Connect to the database and fetch the records.

#include <my_global.h>
#include <mysql.h>

void finish_with_error(MYSQL *con)
{
    fprintf(stderr, "%s\n", mysql_error(con));
    mysql_close(con);
    exit(1);
}

int main(int argc, char **argv)
{
    MYSQL *con = mysql_init(NULL);

    if (con == NULL) {
        fprintf(stderr, "mysql_init() failed\n");
        exit(1);
    }

    /* replace the host, credentials, database name and port-No with your own values */
    if (mysql_real_connect(con, "192.168.2.50", "username", "password", "db-Name", port-No, NULL, 0) == NULL) {
        finish_with_error(con);
    }

    if (mysql_query(con, "SELECT * FROM Dell")) {
        finish_with_error(con);
    }

    /* buffer the whole result set on the client */
    MYSQL_RES *result = mysql_store_result(con);

    if (result == NULL) {
        finish_with_error(con);
    }

    int num_fields = mysql_num_fields(result);
    MYSQL_ROW row;

    /* fetch the rows one by one and print every column */
    while ((row = mysql_fetch_row(result))) {
        for (int i = 0; i < num_fields; i++) {
            printf("%s ", row[i] ? row[i] : "NULL");
        }
        printf("\n");
    }

    mysql_free_result(result);
    mysql_close(con);
    exit(0);
}

Description:

  • Create a connection: mysql_real_connect(con, "192.168.2.50", "username", "password", "db-Name", port-No, NULL, 0)
  • Execute the query: mysql_query(con, "SELECT * FROM Dell")
  • Get the result set: MYSQL_RES *result = mysql_store_result(con)
  • Fetch the available rows one by one: row = mysql_fetch_row(result)
  • Free the result set: mysql_free_result(result)
  • Close the connection: mysql_close(con)

Thanks

Pushpraj Kumar

DATA VAULT MODELING

Data Vault Modeling is a database modeling method that is designed to provide long-term historical storage of data coming in from multiple operational systems. It is also a method of looking at historical data that, apart from the modeling aspect, deals with issues such as auditing, tracing of data, loading speed and resilience to change.

Data Vault Modeling focuses on several things:
First, it emphasizes the need to trace where all the data in the database came from. This means that every row in a Data Vault must be accompanied by record source and load date attributes, enabling an auditor to trace values back to the source.
Second, it makes no distinction between good and bad data ("bad" meaning not conforming to business rules). This is summarized in the statement that a Data Vault stores "a single version of the facts", as opposed to the practice in other data warehouse methods of storing "a single version of the truth", where data that does not conform to the definitions is removed or "cleansed".
Third, the modeling method is designed to be resilient to change in the business environment where the data being stored is coming from, by explicitly separating structural information from descriptive attributes.
Finally, Data Vault is designed to enable parallel loading as much as possible, so that very large implementations can scale out without the need for major redesign.

Data Vault’s philosophy is that all data is relevant data, even if it is not in line with established definitions and business rules. If data is not conforming to these definitions and rules then that is a problem for the business, not the data warehouse. The determination of data being “wrong” is an interpretation of the data that stems from a particular point of view that may not be valid for everyone or at every point in time. Therefore the Data Vault must capture all data and only when reporting or extracting data from the Data Vault is the data being interpreted.

BASIC NOTIONS
Data Vault attempts to solve the problem of dealing with change in the environment by separating the business keys (that do not mutate as often, because they uniquely identify a business entity) and the associations between those business keys, from the descriptive attributes of those keys.

The business keys and their associations are structural attributes, forming the skeleton of the data model. The Data Vault method has as one of its main axioms that real business keys only change when the business changes and are therefore the most stable elements from which to derive the structure of a historical database. If you use these keys as the backbone of a Data Warehouse, you can organize the rest of the data around them. This means that choosing the correct keys for the Hubs is of prime importance for the stability of your model. The keys are stored in tables with a few constraints on the structure. These key-tables are called Hubs.
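As a purely illustrative sketch (the table and column names here are hypothetical, not taken from any particular implementation), a Hub for a customer business key would carry little more than the key itself plus the load date and record source attributes mentioned earlier:

-- Hypothetical Hub: the business key, a surrogate key, and the audit attributes only.
CREATE TABLE hub_customer (
    hub_customer_key  INT          NOT NULL PRIMARY KEY,  -- surrogate key referenced by association and attribute tables
    customer_number   VARCHAR(30)  NOT NULL UNIQUE,       -- the business key
    load_date         TIMESTAMP    NOT NULL,              -- when this key was first loaded
    record_source     VARCHAR(50)  NOT NULL               -- which operational system it came from
);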

DATA VAULT AND DIMENSIONAL MODELLING
The Data Vault modelled layer is normally used to store data. It is not optimized for query performance, nor is it easy to query with the well-known query tools such as Cognos, SAP Business Objects, or Pentaho. Since these end-user computing tools expect or prefer their data to be contained in a dimensional model, a conversion is usually necessary.
For performance reasons the dimensional model will usually be implemented in relational tables, after approval.
Note that while it is relatively straightforward to move data from a Data Vault model to a (cleansed) dimensional model, the reverse is not as easy.


For more detail, refer to: http://www.slideshare.net/dlinstedt

PUSHPRAJ (BI-Developer)

How to solve the pg_tblspc problem in Postgres


Sometimes, while performing an operation on a Postgres database, we run into this error. I faced it when I was trying to read values from the database.
The error says: could not open directory "pg_tblspc/<1234…x>": No such file or directory
To solve this issue you first have to create a dump of your database. The steps are as follows:

1. Create a dump file of your database with pg_dump (the pg_dump command is shown further below).
2. Then drop the old database after dumping it (DROP DATABASE <yourDBname>).
3. Then restore it again from your dump file, for example:
   psql -h localhost -p 6666 -U postgres -d <yourDBname> -f "E:/xyz/abc.p"
   (this loads the dump file back into your database)

Or, better still, use an existing backup file while restoring.

If the problem is still there, first restore that <abc.p/abc.sql> file on your local machine, then create a fresh (.p) dump file from it, and restore your Postgres database from this new dump file. You will not come across this problem again.
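Putting these steps together, a minimal command sequence might look like the sketch below; the host, port, database name and file path are placeholders carried over from the example above, and dropdb/createdb are the standard PostgreSQL helper commands:

# 1. Dump the affected database to a plain SQL file
pg_dump -h localhost -p 6666 -U postgres -f "E:/xyz/abc.p" <yourDBname>

# 2. Drop the damaged database and recreate it empty
dropdb -h localhost -p 6666 -U postgres <yourDBname>
createdb -h localhost -p 6666 -U postgres <yourDBname>

# 3. Load the dump back into the fresh database
psql -h localhost -p 6666 -U postgres -d <yourDBname> -f "E:/xyz/abc.p"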

Another way:
1) Backup data with pg_dump

pg_dump -i -h localhost -p 5432 -U postgres -F c -b -v -f "/usr/local/backup/10.70.0.61.backup" old_db

To list all of the available options of pg_dump, please issue the following command.

 pg_dump -?

-p, --port=PORT          database server port number
-i, --ignore-version     proceed even when server version mismatches
-h, --host=HOSTNAME      database server host or socket directory
-U, --username=NAME      connect as specified database user
-W, --password           force password prompt (should happen automatically)
-d, --dbname=NAME        connect to database name
-v, --verbose            verbose mode
-F, --format=c|t|p       output file format (custom, tar, plain text)
-c, --clean              clean (drop) schema prior to create
-b, --blobs              include large objects in dump
-f, --file=FILENAME      output file name

2) Restore data with pg_restore

pg_restore -i -h localhost -p 5432 -U postgres -d old_db -v "/usr/local/backup/10.70.0.61.backup"

To list all of the available options of pg_restore, please issue the following command.

 pg_restore -?

-p, --port=PORT          database server port number
-i, --ignore-version     proceed even when server version mismatches
-h, --host=HOSTNAME      database server host or socket directory
-U, --username=NAME      connect as specified database user
-W, --password           force password prompt (should happen automatically)
-d, --dbname=NAME        connect to database name
-v, --verbose            verbose mode