What you will learn
The academy’s big data certification program focuses on these areas:
Critical SAS programming skills.
Accessing, transforming and manipulating data.
Improving data quality for reporting and analytics.
Fundamentals of statistics and analytics.
Working with Hadoop, Hive, Pig and SAS.
Exploring and visualizing data.
The six-week big data certification program curriculum includes the following courses:
This introductory course provides an overview of the challenges associated with big data and data-driven analysis.
Reading external data files.
Storing and processing data.
Combining Hadoop and SAS.
Recognizing and overcoming big data challenges.
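As a sketch of the kind of program this first course covers — reading an external data file with the DATA step and storing it in a SAS library (the file path and variable names here are hypothetical):

```sas
/* Minimal sketch: read a comma-delimited external file into a SAS data set.
   The path, columns and layout are illustrative, not from a real course exercise. */
data work.orders;
    infile '/data/orders.csv' dsd firstobs=2;        /* DSD handles commas; skip header row */
    input order_id customer $ amount order_date :date9.;
    format order_date date9.;
run;

proc print data=work.orders (obs=5);                 /* quick check of the first rows */
run;
```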
SAS Fundamentals: Programming, SQL and Macro Language
This course focuses on data manipulation techniques using the DATA step and SQL procedure to access, transform, join and summarize SAS data sets. You’ll learn how to use components of the SAS macro facility to make text substitutions in SAS code and to write simple macro programs.
Summarizing and presenting data.
Querying and subsetting data.
Transforming character, numeric and date variables.
Combining SAS data sets, including complex joins and merges.
Performing DO loop and SAS array processing.
Restructuring or transposing SAS data sets.
Performing text substitution in SAS code.
Using macro variables.
Creating simple macro definitions.
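The macro topics above can be sketched in a few lines — a macro variable for text substitution and a simple macro definition (the data set and variable names are hypothetical):

```sas
/* Sketch of SAS macro text substitution; all names are illustrative. */
%let year = 2023;                    /* macro variable substituted into code below */

%macro summarize(dsn, var);          /* simple macro definition with two parameters */
    proc means data=&dsn mean sum;
        var &var;
        where year(order_date) = &year;   /* &year resolves to 2023 */
    run;
%mend summarize;

%summarize(work.orders, amount)      /* generates and runs the PROC MEANS step */
```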
Introduction to SAS and Hadoop: Essentials
This course teaches you how to use SAS programming methods to read, write and manipulate Hadoop data. You’ll learn how to use Base SAS methods to read and write raw data with the DATA step, manage the Hadoop Distributed File System (HDFS) and execute MapReduce and Pig code from SAS via the HADOOP procedure. You’ll also learn how to use the SAS/ACCESS® Interface to Hadoop, which provides LIBNAME access and SQL pass-through techniques for reading and writing Hive or Impala table structures.
Accessing Hadoop distributions using the LIBNAME statement and the SQL pass-through facility.
Creating and using SQL procedure pass-through queries.
Using options and efficiency techniques for optimizing data access performance.
Joining data using the SQL procedure and the DATA step.
Reading and writing Hadoop files with the FILENAME statement.
Executing and using Hadoop commands with PROC HADOOP.
Using Base SAS procedures with Hadoop.
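The two Hive access patterns listed above — LIBNAME access and explicit SQL pass-through — can be sketched as follows (the server, schema and table names are placeholders):

```sas
/* Sketch of SAS/ACCESS Interface to Hadoop access patterns;
   connection details are hypothetical. */
libname hdp hadoop server='hive.example.com' schema=sales;   /* LIBNAME access to Hive */

proc sql;
    /* implicit pass-through: SAS translates this query for Hive via the libref */
    create table work.top_cust as
    select customer, sum(amount) as total
        from hdp.orders
        group by customer;

    /* explicit SQL pass-through: the inner HiveQL is sent to the server as-is */
    connect to hadoop (server='hive.example.com');
    select * from connection to hadoop
        (select customer, count(*) as n from sales.orders group by customer);
    disconnect from hadoop;
quit;
```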
DS2 Programming Essentials With Hadoop
This course focuses on DS2, a SAS proprietary programming language for advanced data manipulation that enables parallel processing of large data and promotes code reuse through methods and packages.
Identifying the similarities and differences between the SAS DATA step and the DS2 DATA step.
Converting a Base SAS DATA step to DS2.
Creating DS2 variable declarations, expressions and methods for data conversion, manipulation and conditional processing.
Creating user-defined and predefined packages to store, share and execute DS2 methods.
Creating and executing DS2 threads for parallel processing.
Leveraging the SAS In-Database Code Accelerator to execute DS2 code outside of a SAS session.
Executing DS2 code in the SAS High-Performance Analytics grid using the HPDS2 procedure.
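A minimal DS2 DATA step, useful for comparing against the Base SAS DATA step described above (the input table and computation are hypothetical):

```sas
/* Sketch of a DS2 DATA step; the input table work.orders is illustrative. */
proc ds2;
    data work.scored (overwrite=yes);
        dcl double total;               /* explicit variable declaration, a DS2 requirement */

        method run();                   /* the run() method executes once per input row */
            set work.orders;
            total = amount * 1.07;      /* illustrative computation */
        end;
    enddata;
    run;
quit;
```

Unlike the Base SAS DATA step, DS2 requires declared variables and wraps per-row logic in methods — the structure that makes its thread-based parallel execution possible.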
Big Data Analysis With Hive and Pig
In this hands-on course, you’ll use processing and analysis to find insights in structured and unstructured big data. You’ll learn how to organize structured data in tabular format using Apache Hive and how to analyze the data using the Hive query language (HiveQL). You’ll use the Apache Pig scripting language to perform batch processing tasks, such as extract, transform, load (ETL), data preparation and analytics.
Moving data into the Hadoop ecosystem.
Using Hive to design a data warehouse in Hadoop.
Performing data analysis using HiveQL.
Joining data sources.
Organizing data in Hadoop by usage.
Performing analysis on unstructured data using Pig.
Joining massive data sets using Pig.
Using user-defined functions (UDFs).
Analyzing big data in Hadoop using Hive and Pig.
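From the SAS side, Pig batch jobs like those described above can be submitted with PROC HADOOP — a sketch, with hypothetical configuration and script paths:

```sas
/* Sketch of running a Pig ETL script from SAS via PROC HADOOP;
   all paths, the config file and the username are placeholders. */
filename cfg '/etc/hadoop/conf.xml';              /* Hadoop cluster configuration */
filename pigscr '/scripts/clean_orders.pig';      /* Pig script performing ETL */

proc hadoop cfg=cfg username='student' verbose;
    hdfs copyfromlocal='/local/orders.csv'
         out='/user/student/orders.csv';          /* stage raw data into HDFS */
    pig code=pigscr;                              /* execute the Pig script */
run;
```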
Getting Started With SAS In-Memory Statistics
This course focuses on accessing data on the SAS LASR Analytic Server and performing exploratory analysis and data preparation. Topics include starting the server, loading data into memory and manipulating in-memory data with the IMSTAT procedure, including deriving new temporary and permanent tables and columns, calculating summary statistics (e.g., mean, frequency and percentile), and creating filters and joins on in-memory data.
Starting up a SAS LASR Analytic Server.
Loading tables into memory on the SAS LASR Analytic Server.
Processing in-memory tables with PROC LASR and PROC IMSTAT.
Accessing data more efficiently via intelligent partitioning.
Deriving new temporary and permanent tables and variables.
Creating filters and joins on in-memory data.
Exporting ODS result tables for client-side graphic development.
Producing descriptive statistics including counts, percentiles and means.
Creating multidimensional summaries including cross-tabulations and contingency tables.
Deriving kernel density estimates using normal kernel functions.
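The server startup, table load and summary steps above can be sketched as follows (the host, port, paths and table are placeholders):

```sas
/* Sketch of starting a LASR Analytic Server, loading a table into memory,
   and computing summaries with PROC IMSTAT; all connection details are hypothetical. */
proc lasr create port=10010 path='/tmp';
    performance host='grid.example.com' nodes=all;
run;

libname lasrlib sasiola host='grid.example.com' port=10010 tag='hps';

data lasrlib.orders;              /* load a table into server memory */
    set work.orders;
run;

proc imstat;
    table lasrlib.orders;
    summary amount;               /* descriptive statistics: mean, min, max, ... */
    percentile amount;            /* percentile statistics */
quit;
```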