
Big Data and Hadoop

Managing Big Data using Hadoop tools like MapReduce, Hive, Pig, HBase, and more

Instructed by Saheb Singh

Access all courses with Premium Subscription


Monthly

$29/mo
Billed Monthly
  • All Courses Access
  • New Courses Instant Access
  • Learning paths Access
  • Course completion certificates
  • Skills Assessment
  • Instructor Support
  • Exercise files & Quizzes
  • Resume & Play
  • Mobile and TV apps
  • Offline viewing
  • Cancel Anytime
Subscribe Now

Yearly

$299/yr
Billed Annually
  • One Year Unlimited Access
  • New Courses Instant Access
  • Learning paths Access
  • Course completion certificates
  • Skills Assessment
  • Instructor Support
  • Exercise files & Quizzes
  • Resume & Play
  • Mobile and TV apps
  • Offline viewing
  • Cancel Anytime
Subscribe Now
  • In the late 1990s, developers and programmers generated data through their code. In the late 2000s, everyone on social media began generating data on Facebook, Twitter, Instagram, and so on. These days, machines themselves are generating data. Together this creates a huge volume of data that cannot be handled easily with traditional databases. After completing this course, you will be able to handle such data using Hadoop as your platform, and you will also be prepared to attempt the Cloudera CCA 175 certification.

This is an interactive lecture series from one of my Big Data and Hadoop classes, where everything is covered from scratch. You will also see students asking questions, so you can clear up those concepts here as well.

Students will be able to pass the Cloudera CCA 175 certification after successful completion of the course and a little practice.

Tools covered:

1. Sqoop

2. Flume

3. MapReduce

4. Hive

5. Impala

6. Beeline

7. Apache Pig

8. HBase

9. Oozie

10. Project on a real data set.

  • A laptop with at least 6 GB of RAM and 100 GB of free HDD space
  • Students who want to step into Big Data and learn how to analyse, work with, and manage it

Section 1: All about Big Data

  • Lecture 1: Big Data and Hadoop Introduction (Preview)
  • Lecture 2: Hadoop Framework
  • Lecture 3: Hadoop Ecosystem
  • Lecture 4: HDFS
  • Lecture 5: Magic Boxes, Sqoop and Flume
  • Lecture 6: NameNode, DataNode, JournalNode
  • Lecture 7: Input/output operations, RAM and HDD, pros and cons
  • Lecture 8: MapReduce Theory 1.1
  • Lecture 9: MapReduce Theory 1.2
  • Lecture 10: MapReduce Theory 1.3
  • Lecture 11: Combiner Approach in MapReduce
  • Lecture 12: Coding: Sqoop with SQL
  • Lecture 13: Visit to the Cloudera Machine
  • Lecture 14: Sqoop commands, with an introduction to Linux commands
  • Lecture 15: Sqoop commands
  • Lecture 16: Basics of core Java, introduction to Eclipse, MapReduce coding
  • Lecture 17: Coding: MapReduce
  • Lecture 18: Hive Theory
  • Lecture 19: Hive: connecting, loading, defining delimiters
  • Lecture 20: Coding: Hive
  • Lecture 21: Hive to Impala and Beeline
  • Lecture 22: Hive: Partitioning
  • Lecture 23: Hive: Bucketing
  • Lecture 24: YARN, HBase, Oozie
  • Lecture 25: Final Project on a Real Data Set
  • Lecture 26: Assignment on DataNodes
Assignment introduction: Let's assume that you have 100 TB of data to store and process with Hadoop. The configuration of each available DataNode is as follows:

  • 8 GB RAM
  • 10 TB HDD
  • 100 MB/s read-write speed

You have a Hadoop cluster with replication factor = 3 and block size = 64 MB. In this case, the number of DataNodes required to store the data would be:

  • Total amount of data * replication factor / disk space available on each DataNode
  • 100 * 3 / 10
  • 30 DataNodes

Now, let's assume you need to process this 100 TB of data using MapReduce. Reading 100 TB of data at a speed of 100 MB/s using only 1 node would take:

  • Total data / read-write speed
  • 100 * 1024 * 1024 / 100
  • 1,048,576 seconds
  • 291.27 hours

So, with 30 DataNodes you would be able to finish this MapReduce job in:

  • 291.27 / 30
  • 9.70 hours

Problem statement: How many such DataNodes would you need to read 100 TB of data in 5 minutes in your Hadoop cluster?
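For readers who want to check the arithmetic, here is a minimal Python sketch (not part of the original course materials) that reproduces the sizing calculation above; the same constants can be adjusted to work out the problem statement.

```python
# Minimal sketch of the DataNode sizing arithmetic, assuming the numbers
# stated in the assignment: 100 TB of data, replication factor 3,
# 10 TB of disk per DataNode, and 100 MB/s read-write speed per node.

DATA_TB = 100              # total data to store and process, in TB
REPLICATION_FACTOR = 3     # HDFS replication factor
DISK_PER_NODE_TB = 10      # disk available on each DataNode, in TB
READ_SPEED_MB_S = 100      # read-write speed per node, in MB/s

# Storage: DataNodes needed to hold the replicated data.
nodes_for_storage = DATA_TB * REPLICATION_FACTOR / DISK_PER_NODE_TB
print(nodes_for_storage)                               # 30.0 DataNodes

# Processing: time for a single node to read 100 TB at 100 MB/s.
single_node_seconds = DATA_TB * 1024 * 1024 / READ_SPEED_MB_S
print(single_node_seconds)                             # 1048576.0 seconds
print(single_node_seconds / 3600)                      # ~291.27 hours

# With 30 DataNodes reading in parallel:
print(single_node_seconds / 3600 / nodes_for_storage)  # ~9.7 hours
```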

Saheb Singh,

Hello guys, how are you all doing? A quick introduction: I am a Big Data expert and Data Scientist with 3 years of experience in these domains, both in industry and as a trainer. My teaching method is quite different from others: I am not a slide reader; I rely heavily on analogies and examples to explain things, and I try to stay practical, as you'll see in the course. Hope to see you all in the lectures. Good luck!

