Data Analysis Using Apache Hive and Apache Pig

Apache Hive, an open-source data warehouse system, is used together with Apache Pig to load and transform structured, semi-structured, or unstructured data for data analysis and better business insights. Pig, an ETL scripting platform, is used to import data into and export data out of Apache Hive and to process large datasets; it is well suited to ETL data pipelines and iterative processing.

In this blog, let's discuss loading and storing data in Hive with a Pig relation using HCatalog.

Prerequisites

Download and configure the following: Apache Hadoop, Apache Hive, Apache Pig, and HCatalog (bundled with Hive distributions). The Brickhouse JAR file used in the pivot step below is also required.

Use Case

In this blog, let's discuss the below use case: load IPL cricket data (2008 to 2016) into Hive, process and transform it with Pig, and analyze it to find the winner of each season and the top five batsmen by runs.

Data Description

Two cricket data files with Indian Premier League (IPL) data from 2008 to 2016 are used as the data source. The files are as follows: matches.csv, which contains match-level details, and deliveries.csv, which contains ball-by-ball details.

These files are extracted and loaded into Hive. The data is further processed, transformed, and analyzed to get the winner of each season and the top five batsmen with the maximum runs, both per season and overall.

Synopsis

The workflow proceeds as follows: create the database and tables in Hive, import the CSV data into the Hive tables, call the Hive SQL files from a shell script, load and store Hive data into a Pig relation using HCatalog, apply the pivot concept in Hive SQL, and view the output.

Creating Database and Database Tables in Hive

To create the database and database tables in Hive, save the below query as a SQL file (database_table_creation.sql):
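A minimal sketch of database_table_creation.sql, assuming the column layout of the public IPL dataset (the date and over columns are renamed match_date and over_no to avoid Hive reserved words):

-- database_table_creation.sql (sketch): schema assumed from the public IPL dataset.
CREATE DATABASE IF NOT EXISTS ipl_stats;

CREATE TABLE IF NOT EXISTS ipl_stats.matches (
    id INT, season INT, city STRING, match_date STRING,
    team1 STRING, team2 STRING, toss_winner STRING, toss_decision STRING,
    result STRING, dl_applied INT, winner STRING,
    win_by_runs INT, win_by_wickets INT, player_of_match STRING,
    venue STRING, umpire1 STRING, umpire2 STRING, umpire3 STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
TBLPROPERTIES ('skip.header.line.count' = '1');

CREATE TABLE IF NOT EXISTS ipl_stats.deliveries (
    match_id INT, inning INT, batting_team STRING, bowling_team STRING,
    over_no INT, ball INT, batsman STRING, non_striker STRING, bowler STRING,
    is_super_over INT, wide_runs INT, bye_runs INT, legbye_runs INT,
    noball_runs INT, penalty_runs INT, batsman_runs INT, extra_runs INT,
    total_runs INT, player_dismissed STRING, dismissal_kind STRING, fielder STRING
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
TBLPROPERTIES ('skip.header.line.count' = '1');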

Importing Data Into Hive Tables

To load data from both CSV files into Hive, save the below query as a SQL file (data_loading.sql):
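A minimal sketch of data_loading.sql, assuming the CSV files sit at a local path of your choosing:

-- data_loading.sql (sketch): local paths are assumptions.
LOAD DATA LOCAL INPATH '/path/to/matches.csv' OVERWRITE INTO TABLE ipl_stats.matches;
LOAD DATA LOCAL INPATH '/path/to/deliveries.csv' OVERWRITE INTO TABLE ipl_stats.deliveries;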

Calling Hive SQL in Shell Script

To automatically create the database and tables and to import data into Hive, call both SQL files (database_table_creation.sql and data_loading.sql) using a shell script.
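A minimal sketch of such a wrapper script:

#!/bin/bash
# Create the schema, then load the data; assumes both SQL files sit alongside this script.
hive -f database_table_creation.sql
hive -f data_loading.sql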

Viewing Database Architecture

The database schema and tables created can be viewed as follows:
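For instance, standard Hive commands such as the following can list them (a usage sketch, not the original output):

USE ipl_stats;
SHOW TABLES;
DESCRIBE matches;
DESCRIBE deliveries;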

The raw matches.csv file is loaded into the Hive table ipl_stats.matches, and the raw deliveries.csv file is loaded into ipl_stats.deliveries; both loads can be spot-checked as shown below.
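A quick spot-check (usage sketch):

SELECT * FROM ipl_stats.matches LIMIT 5;
SELECT * FROM ipl_stats.deliveries LIMIT 5;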

Loading and Storing Hive Data Into Pig Relation

To load and store data from Hive into a Pig relation and to perform data processing and transformation, save the below script as a Pig file (most_run.pig):
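A minimal sketch of most_run.pig: it loads both Hive tables through HCatalog, joins deliveries to matches to attach the season, totals each batsman's runs per season, and stores the result back into Hive. The relation names and the target table ipl_stats.most_run are assumptions:

-- most_run.pig (sketch)
deliveries = LOAD 'ipl_stats.deliveries' USING org.apache.hive.hcatalog.pig.HCatLoader();
matches    = LOAD 'ipl_stats.matches'    USING org.apache.hive.hcatalog.pig.HCatLoader();

-- Attach the season to every ball bowled via the match id.
joined = JOIN deliveries BY match_id, matches BY id;

-- Total runs per (season, batsman).
grouped = GROUP joined BY (matches::season, deliveries::batsman);
runs    = FOREACH grouped GENERATE
              FLATTEN(group) AS (season, batsman),
              (int) SUM(joined.deliveries::batsman_runs) AS runs;

-- The target Hive table must already exist (created by most_run.sql below).
STORE runs INTO 'ipl_stats.most_run' USING org.apache.hive.hcatalog.pig.HCatStorer();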

Note: Create the target Hive table before calling the Pig file. To write the processed data back into Hive, save the below script as a SQL file (most_run.sql):
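A minimal sketch of most_run.sql, matching the (season, batsman, runs) layout produced by the Pig sketch above:

-- most_run.sql (sketch): the table HCatStorer writes into.
CREATE TABLE IF NOT EXISTS ipl_stats.most_run (
    season  INT,
    batsman STRING,
    runs    INT
);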

Calling Pig Script in Shell Script

To automate the ETL process, call both files (most_run.sql and most_run.pig) using a shell script.
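A minimal sketch, running the table-creation SQL before the Pig job:

#!/bin/bash
# Create the target table, then run the Pig ETL job through HCatalog.
hive -f most_run.sql
pig -useHCatalog -f most_run.pig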

The data loaded into Hive using the Pig script can then be verified by querying the target table.

Applying Pivot Concept in Hive SQL

As the data loaded into Hive is stored as rows, the SQL pivot concept is used to convert rows into columns for more data clarity and better insights. Hive has no built-in PIVOT operator, so a user-defined aggregate function (UDAF) is used to perform the pivot. In this use case, the pivot concept is applied to the season and run fields alone.

To use the collect UDAF, add the Brickhouse JAR file to the Hive classpath.
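A sketch of the pivot query, assuming the Brickhouse JAR path and the ipl_stats.most_run layout used above; only a few of the season columns are spelled out:

-- Register the Brickhouse collect UDAF (the JAR path is a placeholder).
ADD JAR /path/to/brickhouse.jar;
CREATE TEMPORARY FUNCTION collect AS 'brickhouse.udf.collect.CollectUDAF';

-- collect(key, value) builds a season-to-runs map per batsman,
-- which is then unpacked into one column per season.
SELECT batsman,
       season_runs['2008'] AS runs_2008,
       season_runs['2009'] AS runs_2009,
       season_runs['2016'] AS runs_2016
FROM (
    SELECT batsman,
           collect(CAST(season AS STRING), runs) AS season_runs
    FROM ipl_stats.most_run
    GROUP BY batsman
) pivoted;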

The top five run-scoring batsmen for each season, before applying the pivot, appear as one row per season per batsman.

The same data after applying the pivot appears as one row per batsman, with one run column per season.

Viewing Output

Let's view the winners of each season, the top five batsmen by runs scored, and the year-wise runs of the top five batsmen.

Viewing Winners of a Season

To view the winners of each season, use the following Hive SQL query:
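One reasonable formulation, assuming each season's last match by date (ISO-formatted) is the final and its winner the champion:

-- Winner of each season = winner of that season's last (final) match.
SELECT m.season, m.winner
FROM ipl_stats.matches m
JOIN (
    SELECT season, MAX(match_date) AS final_date
    FROM ipl_stats.matches
    GROUP BY season
) f ON m.season = f.season AND m.match_date = f.final_date
ORDER BY m.season;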

Viewing Top 5 Most Run Scored Batsmen

To view the top five batsmen by runs scored, use the following Hive SQL query:
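A sketch against the most_run table built earlier:

-- Top five run scorers across all seasons.
SELECT batsman, SUM(runs) AS total_runs
FROM ipl_stats.most_run
GROUP BY batsman
ORDER BY total_runs DESC
LIMIT 5;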

The top five batsmen by runs scored are charted graphically using MS Excel.

Viewing Year-Wise Runs of Top 5 Batsmen

To view the year-wise runs of the top five batsmen, use the following Hive SQL query:
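A sketch that restricts the per-season totals to the overall top five:

-- Year-wise runs for the overall top five batsmen.
SELECT r.season, r.batsman, r.runs
FROM ipl_stats.most_run r
JOIN (
    SELECT batsman, SUM(runs) AS total_runs
    FROM ipl_stats.most_run
    GROUP BY batsman
    ORDER BY total_runs DESC
    LIMIT 5
) top5 ON r.batsman = top5.batsman
ORDER BY r.batsman, r.season;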

The year-wise runs of the top five batsmen are charted graphically using MS Excel.
