Don’t Miss These 5 Crash Courses at the DataWorks / Hadoop Summit Sydney

By: Robert Hryniewicz - 11 Aug 2017


At each DataWorks / Hadoop Summit, Hortonworks sponsors a series of Crash Courses that offer a quick, hands-on introduction to key Apache projects. Each Crash Course starts with a short technical introduction and then shifts to a hands-on portion where attendees experiment on their own machines, ask questions, and leave with a working environment that lets them continue their journey.

Here are 5 Crash Courses that you should definitely check out at the DataWorks / Hadoop Summit Sydney (Sept. 20-21):

1. Data Science

A hands-on introduction to basic Machine Learning techniques with the Apache Spark MLlib module and Apache Zeppelin in the Hortonworks Data Cloud (HDCloud).

Objective: To provide a quick and short hands-on introduction to Machine Learning with Spark MLlib. In the lab, you will use the following components: Apache Zeppelin (a “Modern Data Science Toolbox”) and Apache Spark. You will learn how to analyze the data, structure the data, train Machine Learning models and apply them to answer real-world questions.

Check out our short video on Basic Machine Learning Algorithms.

2. Streaming Analytics

A hands-on introduction to stream processing using the Hortonworks DataFlow (HDF) Sandbox.

Objective: To provide a quick and short hands-on introduction to Stream Processing without coding. In the lab you will use Streaming Analytics Manager (SAM) to connect, aggregate and process real-time events. You will learn how to connect and consume streaming sensor data, filter and transform the data, and persist it to multiple data stores.

3. Apache Spark

A hands-on introduction to Apache Spark and Apache Zeppelin in the Hortonworks Data Cloud (HDCloud).

Objective: To provide a quick and short hands-on introduction to Apache Spark. This lab will use the following Spark and Apache Hadoop components: Spark, Spark SQL, Apache Hadoop HDFS, Apache Hadoop YARN, Apache ORC, and Apache Ambari User Views. You will learn how to move data into HDFS using Spark APIs, create Apache Hive tables, explore the data with Spark and Spark SQL, transform the data and then issue some SQL queries.

Check out our short video on Apache Spark Basics.

4. Apache NiFi 

A hands-on introduction to simple event data processing and data flow processing in Apache NiFi using the Hortonworks DataFlow (HDF) Sandbox.

Objective: To provide a quick and short hands-on introduction to Apache NiFi. In the lab, you will install Apache NiFi and use it to collect, conduct and curate data-in-motion and data-at-rest. You will learn how to connect and consume streaming sensor data, filter and transform the data, and persist it to multiple data stores.

5. Apache Hadoop

A hands-on introduction to Apache Hadoop in the Hortonworks Data Platform (HDP) Sandbox.

Objective: To provide a quick and short hands-on introduction to Hadoop. This lab will use the following Hadoop components: HDFS, YARN, Apache Pig, Apache Hive, Apache Spark, and Apache Ambari User Views. You will learn how to move data into HDFS, explore the data, clean the data, issue SQL queries and then build a report with Apache Zeppelin.


If you plan to attend one of these Crash Courses, we highly recommend showing up early, since they are very popular and seating is first-come, first-served.


Still need to register for DataWorks Summit Sydney? Enter code SOCIAL to save 25%!