Sunday, July 3, 2016

Spark Fundamentals, Spark Core, Spark History, Spark RDD

Why do we require Spark?
1. Data that lives on a single machine is very difficult to process, and its volume increases day by day
Spark
1. Easy Readability
2. Expressiveness
3. Fast
4. Testability
5. Interactive
6. Fault Tolerant
7. Unify Big Data
Spark overview
1. Basics of Spark
2. Core API
3. Cluster Managers
4. Spark Maintenance
Libraries
1. SQL
2. Streaming
3. MLlib/GraphX
Troubleshooting/Optimization
Basics of Spark
1. Hadoop
2. History of Spark
3. Installation
4. Big Data’s Hello World

If we want to process streaming data, we need Storm. Similarly, we need different frameworks for different big data tasks, such as Hive, Scalding, HBase, Apache Drill, Flume, Mahout and Apache Giraph. Spark came into the picture to unify all of these.


1. Spark is a unified platform for Big Data
2. Everything originates from its core libraries





Abstractions FTW
Hadoop MR – 110,000 lines of code
Impala – 90,000 lines of code
Storm – 70,000 lines of code
Giraph – 60,000 lines of code
Spark, all together – 80,000 lines of code (Spark Core ~40,000 + Spark SQL ~30,000 + Streaming ~6,000 + GraphX ~4,000)


History of Spark
MapReduce – 2004
Hadoop – 2006
Spark – 2009
Spark paper, BSD open source – 2010
AMPLab – 2011
Databricks – 2013
Given to Apache – 2013
Top-level Apache project and widely downloaded – 2014
Databricks == stability
They have releases every three months.

Who is using Spark
Over 500 companies are using Spark,
including Pandora, Netflix, Ooyala, Goldman Sachs, eBay, Yahoo, Conviva, and HHMI Janelia for healthcare.
Spark Installation

Check at http://www.javaguruonline.com/2016/06/getting-started-with-spark-and-word.html

Spark Languages
We can use more than one language to write Spark applications:
1. Scala
2. Java
3. Python
4. R

Hello Big Data
Word count example in http://www.javaguruonline.com/2016/06/getting-started-with-spark-and-word.html

Big Data
1. IoT – the Internet of Things – generates a fairly large amount of data.
2. Spark is a unified data platform.
Spark Logistics
Parts of the Spark API are labelled by maturity:
Experimental
Developer API
Alpha Component
Unit testing is very easy in Spark (a minimal sketch follows).
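For example, here is a minimal sketch of a unit test that exercises RDD logic against a local SparkContext. It assumes ScalaTest is on the classpath; the suite name and the data are purely illustrative.

import org.apache.spark.{SparkConf, SparkContext}
import org.scalatest.FunSuite

// A hypothetical suite: the logic under test is an ordinary RDD transformation,
// so it can run against a SparkContext in local mode.
class WordLengthSuite extends FunSuite {

  test("map transforms words to their lengths") {
    val conf = new SparkConf().setAppName("unit-test").setMaster("local[2]")
    val sc = new SparkContext(conf)
    try {
      val words = sc.parallelize(Seq("spark", "is", "fast"))
      val lengths = words.map(_.length).collect().sorted
      assert(lengths.sameElements(Array(2, 4, 5)))
    } finally {
      sc.stop() // always stop the context so other tests can create their own
    }
  }
}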

Resources
1. AMPLab – for-big-data-moores-law-means-better-decisions
2. Chris Stucchio – Hadoop hatred
3. aadrake – command-line-tools-can-be-235x-faster-than-your-hadoop-cluster
4. Quantified – spark-unit-test
5. spark.apache.org
   Documentation
   Examples
Apache Spark YouTube channel.
Community

Spark Core

Spark Maintainers
1. Matei Zaharia
2. Reynold Xin
3. Patrick Wendell
4. Josh Rosen

Core API
1. Appify
2. RDD( Resilient Distributed Dataset)
3. Transforming the data
4. Action

Spark Mechanics
1. Driver – holds the Spark Context, which distributes work across the workers

2. Workers – each worker runs an Executor, and each Executor runs Tasks
a. Worker 1 – Executor – Tasks
b. Worker 2 – Executor – Tasks
c. Worker N – Executor – Tasks

Spark Context (see the sketch after this list)
a. Task creator
b. Scheduler
c. Data locality
d. Fault Tolerance
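Putting these pieces together, here is a minimal sketch of a driver program creating its SparkContext; the app name and master URL are placeholder assumptions, not fixed values.

import org.apache.spark.{SparkConf, SparkContext}

object DriverSketch {
  def main(args: Array[String]): Unit = {
    // The driver builds a SparkConf and a SparkContext; the context then creates
    // tasks, schedules them on executors, tries to honour data locality and
    // re-runs failed tasks (fault tolerance).
    val conf = new SparkConf()
      .setAppName("driver-sketch")   // placeholder app name
      .setMaster("local[*]")         // local mode here; use a cluster URL in production
    val sc = new SparkContext(conf)

    val rdd = sc.parallelize(1 to 10)
    println(rdd.reduce(_ + _))       // a trivial job, just to exercise the context

    sc.stop()
  }
}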

RDD
Resilient Distributed Dataset
DAG – Apache Spark has an advanced DAG (directed acyclic graph) execution engine that supports acyclic data flow and in-memory computing
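The DAG an RDD will execute can be inspected from the shell with toDebugString. A minimal sketch (the input path is a placeholder assumption):

// Run inside spark-shell, where sc is already available.
val lines  = sc.textFile("hdfs:///tmp/sample.txt")   // placeholder path
val words  = lines.flatMap(_.split(" "))
val pairs  = words.map(w => (w, 1))
val counts = pairs.reduceByKey(_ + _)

// Prints the lineage (the DAG of transformations) that Spark will execute
// lazily once an action such as collect() or count() is called.
println(counts.toDebugString)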

Transformations
a. Map
b. Filter

Actions
a. Collect
b. Count
c. Reduce
RDDs are immutable: once created, they cannot be changed.
Every action triggers a fresh job submission (see the sketch below).
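A minimal sketch of transformations versus actions in the spark-shell; the numbers are just illustrative data.

// Run inside spark-shell (sc is provided).
val numbers = sc.parallelize(1 to 10)

// Transformations: return a new RDD and are evaluated lazily.
val doubled = numbers.map(_ * 2)
val evens   = doubled.filter(_ % 4 == 0)

// Actions: trigger execution and bring results back to the driver.
println(evens.collect().mkString(","))  // 4,8,12,16,20
println(evens.count())                  // 5
println(evens.reduce(_ + _))            // 60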

Input – How to load the data (see the sketch after this list)
1. Hadoop HDFS
2. File System
3. Amazon S3
4. Databases
5. Cassandra
6. In Memory
7. Avro
8. Parquet
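A minimal sketch of loading from a few of these sources; all paths and hosts below are placeholder assumptions.

// Run inside spark-shell (sc is provided).
val fromHdfs  = sc.textFile("hdfs://namenode:8020/data/input.txt")
val fromLocal = sc.textFile("file:///home/user/data/input.txt")
val fromS3    = sc.textFile("s3n://my-bucket/data/input.txt")
val inMemory  = sc.parallelize(Seq("a", "b", "c"))   // in-memory collection

// Cassandra, Avro and Parquet need their own connectors or input formats
// (for example the spark-cassandra-connector, or sqlContext.read.parquet for Parquet).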
Lambdas – anonymous functions
Named method
def addOne(item: Int) = {
  item + 1
}
val intList = List(1, 2)
for (item <- intList) yield {
  addOne(item)
} // List(2, 3)

Using a lambda (anonymous function)
val intList = List(1, 2)
intList.map(x => {
  addOne(x)
}) // List(2, 3)

We can minimise the above code like below:
val intList = List(1, 2)
intList.map(item => item + 1) // List(2, 3)

Transformations
An RDD has 2 types of methods
1.Transformations
a.A method that takes our existing dataset, runs the provided function over it, and transforms it into another required shape.
b.If a method returns another RDD, it is a transformation.

Map – distributed across nodes (Node 1, Node 2 … Node N)
The given function executes on every node, over that node's portion of the data:

Node 1
      for (item <- items) yield {
        mapFunction(item)
      }
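Because the map function runs once per element on every node, per-element setup such as opening a database connection is wasteful; mapPartitions lets that setup run once per partition instead. A minimal sketch, where DbConnection is a hypothetical stand-in defined only to keep the example self-contained:

// Hypothetical stand-in for a real database connection.
class DbConnection {
  def lookup(id: Int): String = s"row-$id"
  def close(): Unit = ()
}

val ids = sc.parallelize(1 to 1000)

val enriched = ids.mapPartitions { items =>
  val conn = new DbConnection()                        // opened once per partition, not per element
  val rows = items.map(id => conn.lookup(id)).toList   // materialise before closing the connection
  conn.close()
  rows.iterator
}

println(enriched.take(3).mkString(", "))               // row-1, row-2, row-3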
mapFunction is the transformation function, and it is repeated across all the nodes. Rather than repeating the same expensive setup for every element on every node (for example, creating a DB connection per element), we can configure it once per partition (mapItemsFunction(items) in the pseudo-code above, mapPartitions in the Spark API), as sketched above.

RDD Combiners
Suppose we have a MongoDB RDD1 and an HDFS file-system RDD2; to combine both RDDs we can use UNION to get a combined RDD. We can also use the ++ operator to combine two RDDs.
Intersection – RDD1.intersection(RDD2) gets the distinct values present in both RDDs.
Subtract – keeps only the values of one RDD that do not appear in the other RDD.
Cartesian – pairs each element of one RDD with every element of the other RDD, giving all possible pairs.
Zip – both RDDs must have the same number of elements and the same number of partitions.

2. Actions
a. Transformations are lazy and keep the data as distributed as possible.
b. Actions typically send results back to the Driver.

Associative Property
2+4+4+7 can be added in one go or as (2+4) + (4+7). However we perform the action, the result should be the same.

Acting on Data
Data is distributed across the cluster. If we collect all of it and send it to the driver, there may be out-of-memory exceptions. Instead we can use take(5): a few records at a time are moved to the Driver, which keeps them in an Array for the final computation.

Persistence
Saved data does not need to go through the Driver; it can be stored directly into any store, such as
1. Cassandra
2. MongoDB
3. Hadoop HDFS
4. Amazon Redshift
5. MySQL
To save the data we can use different methods
1. saveAsObjectFile(path)
2. saveAsTextFile(path)
3. External connectors
4. foreach(T => Unit) and foreachPartition(Iterator[T] => Unit)

Thank you very much for viewing this post.

Friday, July 1, 2016

Getting started with Apache Pig, Pig UDF, and how to write and execute Pig scripts, Grunt shell


What is the need for Pig?
1.People who don't know Java can learn and write Pig scripts.
2.10 lines of Pig = 200 lines of Java
3.It has built-in operations like Join, Group, Filter, Sort and more.
Why should we go for Pig when we have MapReduce?
1.Because its performance is on par with raw Hadoop
2.20 lines of Hadoop code = 1 line of Pig
3.16 minutes of Hadoop development time = 1 minute of Pig
Map-Reduce
1.Powerful model for parallelism
2.Based on a rigid procedural structure
3.Provides a good opportunity to parallelize algorithm
Pig
1.It is desirable to have a higher level declarative language.
2.Similar to SQL query where the user specifies “what” and leaves the “how” to the underlying process engine.

Why Pig
1.Java Not Required
2.It can take any type of data, structured or semi-structured.
3.Easy to learn, write and read, because it is similar to SQL and reads like a series of steps.
4.It is extensible through UDFs written in Java, Python, Ruby and JavaScript.
5.It provides common data operations (filters, joins, ordering, etc.) and nested data types (tuples, bags and maps), which are missing in MapReduce.
6.An ad-hoc way of creating and executing map-reduce jobs on very large data sets
7.Open source and actively supported by a community of developers.
Where should we use Pig?
1.Pig is a data flow language
2.It sits on top of Hadoop and makes it possible to create complex jobs to process large volumes of data quickly and efficiently
3.It is used in Time Sensitive Data Loads
4.Processing Many Data Sources
5.Analytic Insight Through Sampling.


Where not to use Pig?
1.Really nasty data formats or completely unstructured data(video, audio, raw human-readable text)
2.Perfectly implemented MapReduce code can sometimes execute jobs slightly faster than equally well written Pig code.
3.When we would like more power to optimize our code.
What is Pig?
1.Pig is an open-source, high-level data flow system
2.It provides a simple language for queries and data manipulation, Pig Latin, which is compiled into MapReduce jobs that run on Hadoop
Why Is it Important?
1.Companies like Yahoo, Google and Microsoft are collecting enormous data sets in the form of click streams, search logs and web crawls.
2.Some form of ad-hoc processing and analysis of all this information is required.
Where we will use Pig?
1.Processing of Web Logs
2.Data processing for search platforms
3.Support for Ad hoc queries across large datasets.
4.Quick Prototyping of algorithms for processing large data sets.

Conceptual Data flow for Analysis task


How Yahoo uses Pig?
1.Pig is best suited for the data factory

Data Factory contains
Pipelines:
1.Pipelines bring logs from Yahoo’s web servers
2.These logs undergo a cleaning step where bots, company-internal views and clicks are removed.
Research:
1.Researchers want to quickly write a script to test a theory
2.Pig integration with streaming makes it easy for researchers to take a Perl or Python script and run it against a huge dataset.
Use Case in Health care

1.Take DB Dump in csv format and ingest into HDFS
2.Read CSV file from HDFS using Pig Script
3.De-identify columns based on configurations and store the data back in csv file using Pig script.
4.Store the de-identified CSV file into HDFS.
Pig – Basic Program structure.
Script:
1.Pig can run a script file that contains Pig commands
Ex: pig script.pig runs the commands in the file script.pig.
Grunt:
1.Grunt is an interactive shell for running the Pig commands.
2.It is also possible to run Pig scripts from within Grunt using run and exec(execute)
Embedded:
1.Embedded can run Pig programs from Java , much like we can use JDBC to run SQL programs from Java.



Pig Running modes:
1.Local Mode -> pig –x local
2.MapReduce or HDFS mode -> pig
Pig is made up of Two components
1.Pig
a.Pig Latin is used to express Data Flows
2.Execution Environments
a.Distributed execution on a Hadoop Cluster
b.Local execution in a single JVM
Pig Execution
1.Pig resides on User machine
2.Job executes on Hadoop Cluster
3.We no need to install any extra on Hadoop cluster.
Pig Latin Program
1.It is made up of series of operations or transformations that are applied to the input data to produce output.
2.Pig turns the transformations into a series of MapReduce Jobs.

Basic Types of Data Models in Pig

1.Atom
2.Tuple
3.Bag
4.Map
a)Bag is a collection of tuples
b)Tuple is a collection of fields
c)A field is a piece of data
d)A Data Map is a map from keys that are string literals to values that can be any data type.
Example: t = (1,{(2,3),(4,6),(5,7)},['apache'#'search'])

How to install Pig and start the pig
1.Download Pig from the Apache site
2.Untar it and place it wherever you want.
3.To start Pig, type pig in the terminal and it will give you the Pig Grunt shell
Demo:

1.Create directory pig_demo under /home/usr/demo/: mkdir pig_demo
2.Go to /home/usr/demo/pig_demo
3.Create 2 text files, A.txt and B.txt
4.gedit A.txt
add the below type of data in A.txt file.
0,1,2
1,7,8
5.gedit B.txt add the below type of data in B.txt file
0,5,2
1,7,8
6.Now we need to move these 2 files into hdfs
7.Create a directory in HDFS
     hadoop fs -mkdir /user/pig_demo
8.Copy A.txt and B.txt files into HDFS
     hadoop fs -put *.txt  /user/pig_demo/
9.Start Pig
    pig -x local

            or 
     pig
 
10.We will get the Grunt shell; using it we will load the data and perform operations on it.
        grunt> a = LOAD '/user/pig_demo/A.txt' using PigStorage(',');
        grunt> b = LOAD '/user/pig_demo/B.txt' using PigStorage(',');
        grunt> dump a;
        -- loads the data and displays it in Pig
        grunt> dump b;
        grunt> c = UNION a, b;
        grunt> dump c;
 
11.We can change the A.txt file data and place it into HDFS again using
hadoop fs -put A.txt /user/pig_demo/
Then load the data again from the Grunt shell, combine the two relations with UNION, and dump c. This is how we can reload the data instantly.
//If we want to split the data by column: rows whose column $0 is 0 go into d, and rows whose column $0 is 1 go into e
        grunt> SPLIT c INTO d IF $0 == 0, e IF $0 == 1;
        grunt> dump d;
        grunt> dump e;
        grunt> lmt = LIMIT c 3;
        grunt> dump lmt;
        grunt> dump c;
        grunt> f= FILTER  c BY  $1 > 3;
        grunt> dump f;
        grunt> g = group c by  $2;
        grunt> dump g;

We can also load the data with a schema:
grunt> a = LOAD '/user/pig_demo/A.txt' using PigStorage(',') as (a1:int, a2:int, a3:int);
grunt> b = LOAD '/user/pig_demo/B.txt' using PigStorage(',') as (b1:int, b2:int, b3:int);
grunt> c = UNION a, b;
grunt> g = group c by $2;
grunt> f = FILTER c BY $1 > 3;
grunt> describe c;
grunt> h= GROUP c ALL;
grunt> i= FOREACH h GENERATE COUNT($1);
grunt> dump h;
grunt> dump i;
grunt> j = COGROUP a BY $2 , b BY $2;
grunt> dump j;
grunt> j = join a by $2 , b by $2;
grunt> dump j;

Pig Latin Relational Operators
1.Loading and storing
a.LOAD- Loads data from the file system or other storage into a relation
b.STORE-Saves a relation to the file system or other storage
c.DUMP-Prints a relation to the console
2.Filtering
a.FILTER- Removes unwanted rows from a relation
b.DISTINCT – Removes duplicate rows from a relation
c.FOREACH .. GENERATE – Adds or removes fields from a relation.
d.STREAM – Transforms a relation using an external program.
3.Grouping and Joining
a.JOIN – Join two or more relations.
b.COGROUP – Groups the data in two or more relations
c.GROUP – Group the data in a single relation
d.CROSS – Creates a Cross product of two or more relations
4.Sorting
a.ORDER – Sorts a relation by one or more fields
b.LIMIT – Limits the size of a relation to the maximum number of tuples.
5.Combining and Splitting
a.UNION – Combines two or more relations into one
b.SPLIT – Splits a relation into two or more relations.

Pig Latins – Nulls
1.In Pig , when a data element is NULL , it means the value is unknown.
2.Pig includes the concept of a data element being NULL.(Data of any type can be NULL)
Pig Latin –File Loaders
1.BinStorage – “binary” storage
2.PigStorage – Loads and stores data that is delimited by something
3.TextLoader – Loads data line by line (delimited by newline character)
4.CSVLoader – Loads CSV files.
5.XMLLoader – Loads XML files.

Joins and COGROUP
1.JOIN and COGROUP operators perform similar functions
2.JOIN creates a flat set of output records while COGROUP creates a nested set of output records.
Diagnostic Operators and UDF Statements
1.Types of Pig Latin Diagnostic Operators
a.DESCRIBE – Prints a relation's schema.
b.EXPLAIN – Prints the logical and physical plans
c.ILLUSTRATE – Shows a sample execution of the logical plan, using a generated subset of the input.
2.Types of Pig Latin UDF Statements
a.REGISTER – Registers a JAR file with the Pig runtime
b.DEFINE-Creates an alias for a UDF, streaming script , or a command specification

    grunt>describe c;
    grunt> explain c;
     grunt> illustrate c;

Pig –UDF(User Defined Function)
1.Pig allows other users to combine existing operators with their own or other’s code via UDF’s
2.Pig itself comes with some UDFs: a large number of standard string-processing, math and complex-type UDFs.
Pig word count example
1.sudo gedit pig_wordcount.txt

data will be as like this 

hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju
hive mapreduce pig Hbase siva raju

// create a directory inside HDFS and place this file
hadoop fs -mkdir /user/pig_demo/pig-wordcount_input
//put the pig_wordcount.txt file
hadoop fs -put pig_wordcount.txt /user/pig_demo/pig-wordcount_input/
//list the files inside the mentioned directory
hadoop fs -ls /user/pig_demo/pig-wordcount_input/

//load the data into Pig
grunt> A = LOAD '/user/pig_demo/pig-wordcount_input/pig_wordcount.txt';
grunt> dump A;
grunt> B = FOREACH A generate flatten(TOKENIZE((chararray)$0)) as word;
grunt> dump B;
grunt> C = group B by word;
grunt> dump C;
grunt> D = foreach C generate group, COUNT(B);
grunt> dump D;
grunt> dump D;
How to write a pig script and execute the same
sudo gedit pig_wordcount_script.pig
//We need to add the below commands to the script file.
A = LOAD '/user/pig_demo/pig-wordcount_input/pig_wordcount.txt';
B = FOREACH A generate flatten(TOKENIZE((chararray)$0)) as word;
C = group B by word;
D = foreach C generate group, COUNT(B);
STORE D into '/user/pig_demo/pig-wordcount_input/pig_wordcount_output.txt';
dump D;
//Once completed, we need to execute the Pig script
pig pig_wordcount_script.pig

Create a user-defined function using Java.
1. Open Eclipse – create a new project – new class:
import java.io.IOException;
import org.apache.pig.EvalFunc;
import org.apache.pig.data.Tuple;

public class ConvertUpperCase extends EvalFunc<String> {
    public String exec(Tuple input) throws IOException {
        if (input == null || input.size() == 0) {
            return null;
        }
        try {
            String str = (String) input.get(0);
            return str.toUpperCase();
        } catch (Exception ex) {
            throw new IOException("Caught exception processing row", ex);
        }
    }
}
Once the coding is done, export it as a jar and place it wherever you want.

How to run the jar through a Pig script.
1.Create one udf_input.txt file - > sudo gedit udf_input.txt
Siva    raju    1234    bangalore   Hadoop
Sachin    raju    345345    bangalore   data
Sneha    raju    9775    bangalore   Hbase
Navya    raju    6458    bangalore   Hive
2.Create pig_udf_script.pig script - sudo gedit pig_udf_script.pig
3.Create one directory called pig-udf_input
  hadoop fs -mkdir /user/pig_demo/pig-udf_input
  //put the udf_input.txt file
  hadoop fs -put udf_input.txt /user/pig_demo/pig-udf_input/
4.Open the pig_udf_script.pig file
REGISTER /home/usr/pig_demo/ConvertUpperCase.jar;
A = LOAD '/user/pig_demo/pig-udf_input/udf_input.txt' using PigStorage('\t') as (FName:chararray, LName:chararray, MobileNo:chararray, City:chararray, Profession:chararray);
B = FOREACH A generate ConvertUpperCase($0), MobileNo, ConvertUpperCase(Profession), ConvertUpperCase(City);
STORE B INTO '/user/pig_demo/pig-udf_input/udf_output.txt';
DUMP B;
Run the UDF script using the following command.
pig pig_udf_script.pig

Friday, June 3, 2016

Getting Started with Spark and word count example using sparkcontext


Step 1: Download Spark DownLoad Spark From Here
1.Choose a Spark release <whichever version you want to work with>
2.Choose a package type <any version of Hadoop>
3.Choose Download type
4.Click on the Download Spark.


Step 2: After successful Download, we need to run the spark.
For that , we need to follow few steps.

1.Install Java 7 and set the PATH and JAVA_HOME in environment variables.
2.Download Hadoop version< Here I have downloaded hadoop 2.4>
3.Untar the tar file, set HADOOP_HOME and update the PATH in environment variables.
4.If Hadoop is not installed, download the winutils.exe file and save it on your local system.
(This is to work with windows environment)
5.After downloading set the HADOOP_HOME in environment variables where our winutils.exe file resides.



Step 3: Once everything has been done, we need to check whether Spark is working or not.
1.Go to command Prompt
C:/>spark-shell

Spark will start with a lot of logs; to avoid INFO logs we need to change the log level.

Step 4: Go to conf inside spark
1.Copy log4j.properties.template, paste it in the same location and edit it.
2.Change the INFO level to ERROR level and rename the file to log4j.properties
log4j.rootCategory=INFO, console   change to
log4j.rootCategory=ERROR, console

Step 5: After changing the log level, if we run spark-shell again from the command prompt, you can see the difference.
1.This is how we can install Spark in a Windows environment.
2.If you are facing any issues while starting Spark:
3.First check the Hadoop home path by using the following command
C:> echo %HADOOP_HOME% 
4.It should print Hadoop home path where our winutils.exe file is available
5.Set the permissions for the hadoop temp folder, provide the permissions
           C:> %HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
           C:> %HADOOP_HOME%\bin\winutils.exe  chmod 777  \tmp\hive
       


Step 6: Now we will look at a word count example using Spark.
This is how we usually count the words in a given file with Hadoop MapReduce, now done with Spark.

1.After spark-shell starts we get 2 contexts: the Spark Context as sc and the SQL Context as sqlContext.
2.Using the Spark context sc, we will read the file, do the manipulation and write the output to a file.

          val textFile = sc.textFile("file:///C:/spark/spark-1.5.0-bin-hadoop2.4/README.md")

          //to read the first line of the file
          textFile.first

          //Split each line using space as the delimiter
         val tokenizedFileData = textFile.flatMap(line => line.split(" "))

         //Prepare counts using map
         val countPrep = tokenizedFileData.map(word => (word, 1))

         //Compute the counts using reduceByKey
         val counts = countPrep.reduceByKey((accumValue, newValue) => accumValue + newValue)

         //Sort by the count of each key-value pair, descending
         val sortedCounts = counts.sortBy(kvPair => kvPair._2, false)

         //Save the sorted counts into an output directory called ReadMeWordCount
         sortedCounts.saveAsTextFile("file:///C:/spark/ReadMeWordCount")

        //If we only want the counts, countByValue (a built-in map-reduce style action) also works
        tokenizedFileData.countByValue
     




Step 7: Few more commands to save the output file into local system
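For reference, a minimal sketch of such commands in the spark-shell; coalescing to a single partition and the target path are assumptions, not taken from the original screenshots.

// Coalesce to one partition so the result is written as a single part file,
// then save to the local file system (the path is a placeholder).
sortedCounts.coalesce(1).saveAsTextFile("file:///C:/spark/ReadMeWordCountSingle")

// Alternatively, bring a few results back to the driver and print them.
sortedCounts.take(10).foreach(println)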


Step 8: Output file will be stored as parts as mentioned below


Thank you very much for viewing this post.

Friday, April 29, 2016

Print numbers 1 to 100, replacing with 'A' numbers divisible by 3, with 'B' numbers divisible by 5, and with 'AB' numbers divisible by both 3 and 5

This post explains how to print numbers from 1 to 100 with the following conditions:
1. Replace with 'A' any number divisible by 3
2. Replace with 'B' any number divisible by 5
3. Replace with 'AB' any number divisible by both 3 and 5

import java.util.ArrayList;
import java.util.List;


public class TestPrintNumbers {

 public static void main(String[] args) {
  List<Object> list = new ArrayList<Object>();
  String finalResult = new String();
  for (Integer i = 1; i <= 100; i++) {
   if (i % 3 == 0 && i % 5 == 0) {
    list.add("AB");
   } else if (i % 3 == 0) {
    list.add("A");
   } else if (i % 5 == 0) {
    list.add("B");
   } else {
    list.add(i);
   }
  }
  for (int j = 0; j < list.size(); j++) {
   finalResult = finalResult.concat(list.get(j) + ",");
  }
  // drop the trailing comma before printing
  System.out.print(finalResult.substring(0, finalResult.length() - 1));
 }

}

Output
1,2,A,4,B,A,7,8,A,B,11,A,13,14,AB,16,17,A,19,B,A,22,23,A,B,26,A,28,29,AB,31,32,A,34,B,A,37,38,A,B,41,A,43,44,AB,46,47,A,49,B,A,52,53,A,B,56,A,58,59,AB,61,62,A,64,B,A,67,68,A,B,71,A,73,74,AB,76,77,A,79,B,A,82,83,A,B,86,A,88,89,AB,91,92,A,94,B,A,97,98,A,B
