Friday, June 3, 2016

Getting started with Spark and a word count example using SparkContext


Step 1: Download Spark (Download Spark From Here)
1. Choose a Spark release (whichever version you want to work with).
2. Choose a package type (any version of Hadoop).
3. Choose a download type.
4. Click on Download Spark.


Step 2: After a successful download, we need to run Spark.
For that, we need to follow a few steps.

1. Install Java 7 and set PATH and JAVA_HOME in the environment variables (see the example commands after this list).
2. Download a Hadoop version (here I have downloaded Hadoop 2.4).
3. Untar the tar file, set HADOOP_HOME, and update PATH in the environment variables.
4. If Hadoop is not installed, then download the winutils.exe file and save it on your local system.
(This is required to work in a Windows environment.)
5. After downloading, set HADOOP_HOME in the environment variables to the folder where the winutils.exe file resides.
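
For reference, the variables can also be set from a command prompt for the current session (a minimal sketch; the install paths below are assumptions, so adjust them to wherever Java, Hadoop, and winutils.exe actually live on your machine; for a permanent setting use the Environment Variables dialog as described above):

          C:\> set JAVA_HOME=C:\Program Files\Java\jdk1.7.0_79
          C:\> set HADOOP_HOME=C:\hadoop
          C:\> set PATH=%PATH%;%JAVA_HOME%\bin;%HADOOP_HOME%\bin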



Step 3: Once everything has been done, we need to check whether Spark is working or not.
1. Go to the command prompt and run:
C:\> spark-shell

Spark will start with a lot of logs; to avoid the INFO logs we need to change the log level.
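
Once the shell is up, a quick sanity check is to ask the SparkContext for its version (sc is created automatically by spark-shell):

          scala> sc.version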

Step 4: Go to the conf folder inside the Spark directory.
1. Copy log4j.properties.template, paste it in the same location, and edit it.
2. Change the INFO level to ERROR and rename the file to log4j.properties:
log4j.rootCategory=INFO, console    change to
log4j.rootCategory=ERROR, console

Step 5: After changing the log level, if we run spark-shell again from the command prompt, you can see the difference.
1. This is how we can install Spark in a Windows environment.
2. If you are facing any issues while starting Spark:
3. First check the Hadoop home path by using the following command
C:\> echo %HADOOP_HOME%
4. It should print the Hadoop home path where the winutils.exe file is available.
5. Set the permissions for the Hadoop temp folder:
           C:\> %HADOOP_HOME%\bin\winutils.exe ls \tmp\hive
           C:\> %HADOOP_HOME%\bin\winutils.exe chmod 777 \tmp\hive
       


Step 6: Now we will walk through a word count example using Spark, similar to what we usually do in Hadoop MapReduce to count the words in a given file.

1. After spark-shell starts we get two contexts: a SparkContext as sc and a SQLContext as sqlContext.
2. Using the SparkContext sc, we will read the file, do the manipulation, and write the output to a file.

          val textFile = sc.textFile("file:///C:/spark/spark-1.5.0-bin-hadoop2.4/README.md")

          //Read the first line of the file
          textFile.first

          //Split each line of data using space as the delimiter
          val tokenizedFileData = textFile.flatMap(line => line.split(" "))

          //Prepare counts using map
          val countPrep = tokenizedFileData.map(word => (word, 1))

          //Compute the counts using reduceByKey
          val counts = countPrep.reduceByKey((accumValue, newValue) => accumValue + newValue)

          //Sort the (word, count) pairs by count in descending order
          val sortedCounts = counts.sortBy(kvPair => kvPair._2, false)

          //Save the sorted counts into an output folder called ReadMeWordCount
          sortedCounts.saveAsTextFile("file:///C:/spark/ReadMeWordCount")

          //If we want the counts via countByValue (a built-in map-reduce) instead
          tokenizedFileData.countByValue
     




Step 7: A few more commands to save the output file to the local system.
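
For example, continuing from the word count above (a minimal sketch; the output paths are assumptions, so point them anywhere convenient on your local disk):

          //Coalesce to a single partition so the result is written as one part file
          sortedCounts.coalesce(1).saveAsTextFile("file:///C:/spark/ReadMeWordCountSingle")

          //Or bring a small result back to the driver and print the top 10 words
          sortedCounts.take(10).foreach(println)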


Step 8: The output will be stored as part files, as mentioned below.
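
A typical output directory looks something like this (the exact number of part files depends on the number of partitions):

          C:\spark\ReadMeWordCount\_SUCCESS
          C:\spark\ReadMeWordCount\part-00000
          C:\spark\ReadMeWordCount\part-00001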


Thank you very much for viewing this post.

Friday, April 29, 2016

Print numbers 1 to 100, replacing numbers divisible by 3 with 'A', numbers divisible by 5 with 'B', and numbers divisible by both 3 and 5 with 'AB'

This post will explain how to print the numbers from 1 to 100 with the following conditions.
1. Replace numbers divisible by 3 with 'A'
2. Replace numbers divisible by 5 with 'B'
3. Replace numbers divisible by both 3 and 5 with 'AB'

import java.util.ArrayList;
import java.util.List;


public class TestPrintNumbers {

    public static void main(String[] args) {
        List<Object> list = new ArrayList<>();
        String finalResult = new String();
        for (int i = 1; i <= 100; i++) {
            if (i % 3 == 0 && i % 5 == 0) {
                list.add("AB");   // divisible by both 3 and 5
            } else if (i % 3 == 0) {
                list.add("A");    // divisible by 3 only
            } else if (i % 5 == 0) {
                list.add("B");    // divisible by 5 only
            } else {
                list.add(i);      // not divisible by 3 or 5
            }
        }
        for (int j = 0; j < list.size(); j++) {
            finalResult = finalResult.concat(list.get(j) + ",");
        }
        // Drop the trailing comma before printing
        System.out.print(finalResult.substring(0, finalResult.length() - 1));
    }

}

Output
1,2,A,4,B,A,7,8,A,B,11,A,13,14,AB,16,17,A,19,B,A,22,23,A,B,26,A,28,29,AB,31,32,A,34,B,A,37,38,A,B,41,A,43,44,AB,46,47,A,49,B,A,52,53,A,B,56,A,58,59,AB,61,62,A,64,B,A,67,68,A,B,71,A,73,74,AB,76,77,A,79,B,A,82,83,A,B,86,A,88,89,AB,91,92,A,94,B,A,97,98,A,B

Sunday, March 27, 2016

Getting started with Apache Flume: retrieving Twitter tweets into HDFS using Flume


This post will explain Flume installation, retrieving tweets into HDFS, and creating a Twitter app for development.

1. Download the latest Flume.
2. Untar the downloaded tar file in whichever location you want.
sudo tar -xvzf apache-flume-1.6.0-bin.tar.gz
3. Once you have done the above steps, start ssh localhost if you are not already connected to the SSH server.
4. Start DFS: ./start-dfs.sh
5. Start YARN: ./start-yarn.sh
6. Go to the bin folder where Flume has been extracted.
7. Here I have extracted Flume under /usr/local/flume-ng/apache-flume-1.6.0-bin
8. First download flume-sources-1.0-SNAPSHOT.jar and move that jar into /usr/local/flume-ng/apache-flume-1.6.0-bin/lib/

9. Once we have done this, we need to set the Java path and the flume-sources jar path in flume-env.sh (see the sketch after these commands):
/usr/local/flume-ng/apache-flume-1.6.0-bin/conf> sudo cp flume-env.sh.template flume-env.sh
/usr/local/flume-ng/apache-flume-1.6.0-bin/conf> sudo gedit flume-env.sh
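
A minimal sketch of the relevant flume-env.sh entries (the JAVA_HOME path is an assumption, so point it at your own JDK, and adjust the jar path if Flume is extracted elsewhere):

export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64
FLUME_CLASSPATH="/usr/local/flume-ng/apache-flume-1.6.0-bin/lib/flume-sources-1.0-SNAPSHOT.jar"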

10. Now we need to register our application with Twitter's developer site.
11. Open Twitter; if you do not have sign-in details, please sign up first. Once you have signed up, go to Twitter Apps.

12. Click on Create New App and enter the required details.

13. Check the "I Agree" checkbox.

14. Once the application has been created, the Twitter page will look like this.


15. Click on the Keys and Access Tokens tab, copy the consumer key and consumer secret, and paste them into a notepad.

16. Click Create my access token.

17. It will generate an access token and access token secret; copy these two values into the notepad as well.

18. Create a flume.conf file under /usr/local/flume-ng/apache-flume-1.6.0-bin/conf and paste the details below, replacing the consumer key/secret and access token/secret with the values from your own Twitter app.
# Licensed to the Apache Software Foundation (ASF) under one
# or more contributor license agreements.  See the NOTICE file
# distributed with this work for additional information
# regarding copyright ownership.  The ASF licenses this file
# to you under the Apache License, Version 2.0 (the
# "License"); you may not use this file except in compliance
# with the License.  You may obtain a copy of the License at
#
#  http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing,
# software distributed under the License is distributed on an
# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
# KIND, either express or implied.  See the License for the
# specific language governing permissions and limitations
# under the License.


# The configuration file needs to define the sources, 
# the channels and the sinks.
# Sources, channels and sinks are defined per agent, 
# in this case called 'TwitterAgent'

TwitterAgent.sources = Twitter
TwitterAgent.channels = MemChannel
TwitterAgent.sinks = HDFS

TwitterAgent.sources.Twitter.type = com.cloudera.flume.source.TwitterSource
TwitterAgent.sources.Twitter.channels = MemChannel
TwitterAgent.sources.Twitter.consumerKey = joeTPv3pjfc471vfMH0lmP
TwitterAgent.sources.Twitter.consumerSecret = PydW6v8aYoiHOm1gOe0qdQUboHua9HaTYzo1Vg3muu4xJhF
TwitterAgent.sources.Twitter.accessToken = 714023179098857474-4ZaCUhAxbcZCKdnvijGvyuWQteEv
TwitterAgent.sources.Twitter.accessTokenSecret = yhMgQrmrUZht2nMn6Ts1NbclmzuBda2xvtIIvVoneQ 
TwitterAgent.sources.Twitter.keywords = hadoop, big data, analytics, bigdata, cloudera, data science, data scientiest, business intelligence, mapreduce, data warehouse, data warehousing, mahout, hbase, nosql, newsql, businessintelligence, cloudcomputing

TwitterAgent.sinks.HDFS.channel = MemChannel
TwitterAgent.sinks.HDFS.type = hdfs
TwitterAgent.sinks.HDFS.hdfs.path = hdfs://localhost:9000/user/flume/tweets/%Y/%m/%d/%H/
TwitterAgent.sinks.HDFS.hdfs.fileType = DataStream
TwitterAgent.sinks.HDFS.hdfs.writeFormat = Text
TwitterAgent.sinks.HDFS.hdfs.batchSize = 1000
TwitterAgent.sinks.HDFS.hdfs.rollSize = 0
TwitterAgent.sinks.HDFS.hdfs.rollCount = 10000

TwitterAgent.channels.MemChannel.type = memory
TwitterAgent.channels.MemChannel.capacity = 10000
TwitterAgent.channels.MemChannel.transactionCapacity = 100

19. Once that is done, we need to run the Twitter agent in Flume.
/usr/local/flume-ng/apache-flume-1.6.0-bin> ./bin/flume-ng agent -n TwitterAgent -c conf -f /usr/local/flume-ng/apache-flume-1.6.0-bin/conf/flume.conf

20. Once it has started, wait for some time and then press Ctrl+C; now it's time to see the tweets in the HDFS files.
21. Open the browser on the Linux machine, browse to /user/flume/tweets, and see the tweets:
http://localhost:50075
22. If we can see data similar to what is shown on Twitter, then the unstructured data has been streamed from Twitter onto HDFS successfully. Now we can do analytics on this Twitter data using Hive.
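
We can also verify the files from the command line (the path matches the hdfs.path configured for the HDFS sink above; -R lists the nested year/month/day/hour directories):

hadoop fs -ls -R /user/flume/tweets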

This is how we can bring live tweet data into HDFS and do analytics on it using Hive.
Thank you very much for viewing this post.





ZooKeeper Basics and HBase ZooKeeper


This post will explain the basics of ZooKeeper.
Why ZooKeeper?
ZooKeeper is the coordination mechanism for HBase.

It is mainly used in a cluster environment.
Target market for Zookeeper


ZooKeeper Data Model
1. Hierarchical namespace (like a file system)
2. Each znode has data and children
3. Data is read and written in its entirety
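
A quick way to explore this data model is the ZooKeeper command-line client (a minimal sketch; /mynode is just an illustrative znode name, and on an HBase cluster ls /hbase shows the znodes HBase maintains):

zkCli.sh -server localhost:2181
ls /
create /mynode "hello"
get /mynode
ls /hbase

Here ls lists the children of a znode, create adds a znode with data, and get reads that data back in its entirety.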

ZooKeeper provides services such as: if any one server fails, another server becomes accessible without any delay.
ZooKeeper provides:
1. Wait-free operation
2. Simplicity, robustness, good performance
3. Tuning for read-dominant workloads
4. Familiar models and interfaces
5. The ability to wait efficiently
ZooKeeper and HBase
Master failover
Region server and master discovery via ZooKeeper
1. HBase clients connect to ZooKeeper to find configuration details (see the hbase-site.xml sketch after this list).
2. Region server and master failure detection.
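
For example, HBase clients typically locate the ZooKeeper quorum through hbase-site.xml (the hostnames here are placeholders):

<configuration>
  <property>
    <name>hbase.zookeeper.quorum</name>
    <value>zk1.example.com,zk2.example.com,zk3.example.com</value>
  </property>
  <property>
    <name>hbase.zookeeper.property.clientPort</name>
    <value>2181</value>
  </property>
</configuration>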
How do HBase and ZooKeeper work together?


Master
If more than one master is started, they compete, and ZooKeeper decides which one becomes the active master.
Root Region Server
1. This znode holds the location of the server hosting the root of all the tables in HBase.
2. There is a directory in which there is a znode per HBase region server.
3. Region servers register themselves with ZooKeeper when they come online.
On region server failure (detected via ephemeral znodes and notification via ZooKeeper), the master splits the edits out per region.

These are the basic details about zookeeper.
Thank you very much for viewing this post.

