Tuesday 13 August 2013
Big data training and placement
Apache Hadoop is an open source software project that enables the distributed processing of large data sets across clusters of commodity servers. It is designed to scale up from a single server to thousands of machines, with a very high degree of fault tolerance. Rather than relying on high-end hardware, the resiliency of these clusters comes from the software’s ability to detect and handle failures at the application layer.
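Hadoop's core processing model, MapReduce, splits a job into a map phase, a shuffle that regroups intermediate results by key, and a reduce phase. As a rough illustration of that flow (plain Python only, no Hadoop cluster; the function names and sample "splits" are our own):

```python
from collections import defaultdict

def map_phase(document):
    """Map: emit a (word, 1) pair for every word in one input split."""
    for word in document.lower().split():
        yield (word, 1)

def shuffle(pairs):
    """Shuffle: group all values by key, as the framework does between nodes."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(groups):
    """Reduce: sum the counts for each word."""
    return {word: sum(counts) for word, counts in groups.items()}

# Each "document" stands in for an input split processed on a different node.
splits = ["big data training", "big data and Hadoop"]
pairs = [pair for doc in splits for pair in map_phase(doc)]
counts = reduce_phase(shuffle(pairs))
print(counts["big"])  # 2
```

On a real cluster the map and reduce tasks run on many machines, and the fault tolerance described above comes from re-running failed tasks on healthy nodes.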
* Resume preparation and Interview assistance will be provided.
Tuesday 6 August 2013
Big data Job training and placement
Big data and Hadoop job training and placement assistance at Magnific Training.
HBase background
HBase Architecture
HBase core concepts
HBase vs. RDBMS
HBase Master and Region Servers
Data Modeling
Column Families and Regions
Bloom Filters and Block Indexes
Write Pipeline / Read Pipeline
Compactions
Performance Tuning
HBase GeoRedundancy, DR and Snapshots
LAB #4: Use HBase CLI to create databases and tune them.
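Conceptually, an HBase table is a sorted, multi-dimensional map: row key → column family → qualifier → timestamped versions of a value. A minimal sketch of that logical data model in plain Python (no HBase involved; the class and sample data are purely illustrative):

```python
# Toy model of HBase's logical layout:
#   row key -> "family:qualifier" -> list of (timestamp, value),
# with the newest version first, as HBase returns by default.
class ToyTable:
    def __init__(self):
        self.rows = {}

    def put(self, row_key, column, value, timestamp):
        cell = self.rows.setdefault(row_key, {}).setdefault(column, [])
        cell.append((timestamp, value))
        cell.sort(reverse=True)  # keep the newest version first

    def get(self, row_key, column):
        """Return the latest version of a cell, like a default HBase get."""
        versions = self.rows.get(row_key, {}).get(column, [])
        return versions[0][1] if versions else None

users = ToyTable()
users.put("user1", "info:email", "old@example.com", timestamp=1)
users.put("user1", "info:email", "new@example.com", timestamp=2)
print(users.get("user1", "info:email"))  # new@example.com
```

Column families ("info" above) are declared when the table is created, while qualifiers ("email") can vary per row, which is what makes HBase's schema looser than an RDBMS schema.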
Data Analytics via Hive
Hive philosophy and architecture
Hive vs. RDBMS
HiveQL and Hive Shell
Managing tables
Data types and schemas
Querying data
LAB #5: Analyzing data using Hive
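HiveQL is deliberately close to standard SQL, which is why analysts with RDBMS experience pick it up quickly. As a hedged illustration of the kind of aggregate query written in a Hive lab (run here against in-memory SQLite purely so the example is self-contained; in Hive the table would live in HDFS, and the table name and data are invented):

```python
import sqlite3

# SQLite stands in for the query engine; the GROUP BY query itself
# is equally valid HiveQL.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE page_views (user TEXT, page TEXT, views INTEGER)")
conn.executemany(
    "INSERT INTO page_views VALUES (?, ?, ?)",
    [("alice", "home", 3), ("bob", "home", 1), ("alice", "docs", 2)],
)

# Aggregate views per page, the bread-and-butter Hive workload.
rows = conn.execute(
    "SELECT page, SUM(views) FROM page_views GROUP BY page ORDER BY page"
).fetchall()
print(rows)  # [('docs', 2), ('home', 4)]
```

The key difference in practice is execution: Hive compiles such a query into distributed jobs over data in HDFS rather than running it on a single node.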
Sqoop – Moving data between RDBMS and Hadoop
Data Processing through Sqoop
Understand Sqoop connectivity model with RDBMS
Using Sqoop example with real time data applications
LAB #6: Sqoop lab exercise.
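Sqoop's connectivity model is JDBC-based: it connects to the RDBMS with a JDBC URL, splits the source table across parallel map tasks, and writes the result into HDFS. A typical `sqoop import` invocation looks like the command assembled below (plain Python building the argument list so the sketch stays self-contained and testable; the host, database, and paths are made up):

```python
# Build a typical "sqoop import" command line. Sqoop reads the table over
# JDBC and imports it into the given HDFS directory using parallel mappers.
def sqoop_import(jdbc_url, table, username, target_dir, num_mappers=4):
    return [
        "sqoop", "import",
        "--connect", jdbc_url,              # JDBC connection string to the RDBMS
        "--table", table,                   # source table to import
        "--username", username,
        "--target-dir", target_dir,         # destination directory in HDFS
        "--num-mappers", str(num_mappers),  # number of parallel map tasks
    ]

cmd = sqoop_import(
    jdbc_url="jdbc:mysql://dbhost/sales",
    table="orders",
    username="etl",
    target_dir="/user/etl/orders",
)
print(" ".join(cmd))
```

In a real lab the command would be run on a cluster node (with a password prompt or `--password-file`), and the imported files would then be queryable from Hive.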
You can attend the first two classes (3 hours) free of charge; if you like them, you can then register.
For full course details, please visit our website www.hadooponlinetraining.net.
The course runs for 30 days (45 hours) as one-to-one training with hands-on experience, and special care will be taken.
* Resume preparation and Interview assistance will be provided.
For any further details please contact
INDIA: +91-9052666559
USA: +1-6786933994, 6786933475
Visit www.hadooponlinetraining.net
Please email all queries to info@magnifictraining.com