Learning Spark
Lightning-Fast Big Data Analysis
Publisher: O'Reilly Media
Release Date: February 2015
Pages: 276
Data in all domains is getting bigger. How can you work with it efficiently? Recently updated for Spark 1.3, this book introduces Apache Spark, the open source cluster computing system that makes data analytics fast to write and fast to run. With Spark, you can tackle big datasets quickly through simple APIs in Python, Java, and Scala. This edition includes new information on Spark SQL, Spark Streaming, setup, and Maven coordinates.
Written by the developers of Spark, this book will have data scientists and engineers up and running in no time. You’ll learn how to express parallel jobs with just a few lines of code, covering applications from simple batch jobs to stream processing and machine learning.
- Quickly dive into Spark capabilities such as distributed datasets, in-memory caching, and the interactive shell
- Leverage Spark’s powerful built-in libraries, including Spark SQL, Spark Streaming, and MLlib
- Use one programming paradigm instead of mixing and matching tools like Hive, Hadoop, Mahout, and Storm
- Learn how to deploy interactive, batch, and streaming applications
- Connect to data sources including HDFS, Hive, JSON, and S3
- Master advanced topics like data partitioning and shared variables
Table of Contents
- Chapter 1 Introduction to Data Analysis with Spark
  - What Is Apache Spark?
  - A Unified Stack
  - Who Uses Spark, and for What?
  - A Brief History of Spark
  - Spark Versions and Releases
  - Storage Layers for Spark
- Chapter 2 Downloading Spark and Getting Started
  - Downloading Spark
  - Introduction to Spark’s Python and Scala Shells
  - Introduction to Core Spark Concepts
  - Standalone Applications
  - Conclusion
- Chapter 3 Programming with RDDs
  - RDD Basics
  - Creating RDDs
  - RDD Operations
  - Passing Functions to Spark
  - Common Transformations and Actions
  - Persistence (Caching)
  - Conclusion
- Chapter 4 Working with Key/Value Pairs
  - Motivation
  - Creating Pair RDDs
  - Transformations on Pair RDDs
  - Actions Available on Pair RDDs
  - Data Partitioning (Advanced)
  - Conclusion
- Chapter 5 Loading and Saving Your Data
  - Motivation
  - File Formats
  - Filesystems
  - Structured Data with Spark SQL
  - Databases
  - Conclusion
- Chapter 6 Advanced Spark Programming
  - Introduction
  - Accumulators
  - Broadcast Variables
  - Working on a Per-Partition Basis
  - Piping to External Programs
  - Numeric RDD Operations
  - Conclusion
- Chapter 7 Running on a Cluster
  - Introduction
  - Spark Runtime Architecture
  - Deploying Applications with spark-submit
  - Packaging Your Code and Dependencies
  - Scheduling Within and Between Spark Applications
  - Cluster Managers
  - Which Cluster Manager to Use?
  - Conclusion
- Chapter 8 Tuning and Debugging Spark
  - Configuring Spark with SparkConf
  - Components of Execution: Jobs, Tasks, and Stages
  - Finding Information
  - Key Performance Considerations
  - Conclusion
- Chapter 9 Spark SQL
  - Linking with Spark SQL
  - Using Spark SQL in Applications
  - Loading and Saving Data
  - JDBC/ODBC Server
  - User-Defined Functions
  - Spark SQL Performance
  - Conclusion
- Chapter 10 Spark Streaming
  - A Simple Example
  - Architecture and Abstraction
  - Transformations
  - Output Operations
  - Input Sources
  - 24/7 Operation
  - Streaming UI
  - Performance Considerations
  - Conclusion
- Chapter 11 Machine Learning with MLlib
  - Overview
  - System Requirements
  - Machine Learning Basics
  - Data Types
  - Algorithms
  - Tips and Performance Considerations
  - Pipeline API
  - Conclusion