Take your first steps in developing large-scale distributed data processing applications using Apache Spark 2
About This Video
Get introduced to the recently released Apache Spark 2 framework
Leverage the capabilities of various Spark components to perform efficient data processing, machine learning, and graph processing
A practical tutorial aimed at absolute beginners to get them up and running with Apache Spark
Spark is one of the most widely used large-scale data processing engines and runs extremely fast. It is a framework whose tools are equally useful for application developers and data scientists. This video course starts with the fundamentals of Spark 2 and covers the core data processing framework and API, installation, and application development setup. The Spark programming model is then introduced through real-world examples, followed by Spark SQL programming with DataFrames. An introduction to SparkR is covered next. Later, we cover the charting and plotting features of Python in conjunction with Spark data processing. After that, we take a look at Spark's stream processing, machine learning, and graph processing libraries. The last section combines all the skills you have learned in the preceding sections to develop a real-world Spark application. By the end of this video course, you will be able to consolidate data processing, stream processing, machine learning, and graph processing into one unified and highly interoperable framework with a uniform API using Scala or Python.
Implementing Lambda Architecture and Working with Spark Applications 08m 19s
Coding Style, Setting Up the Source Code, and Understanding Data Ingestion 09m 09s
Generating Purposed Views and Queries 05m 53s
Understanding Custom Data Processes 06m 12s
Apache Spark 2 for Beginners
5 hours 38 minutes
Rajanarayanan Thottuvaikkatumana, Raj, is a seasoned technologist with more than 23 years of software development experience at various multinational companies. He has lived and worked in India, Singapore, and the USA, and is presently based in the UK. His experience includes architecting, designing, and developing software applications. He has worked on various technologies, including major databases, application development platforms, web technologies, and big data technologies. Since 2000, he has worked mainly with Java-related technologies, doing heavy-duty server-side programming in Java and Scala. He has worked on highly concurrent, highly distributed, high-transaction-volume systems. Currently, he is building a next-generation Hadoop YARN-based data processing platform and an application suite built with Spark using Scala.
Raj holds a master's degree in Mathematics and a master's degree in Computer Information Systems, and has many certifications in ITIL and cloud computing to his credit. Raj is the author of Cassandra Design Patterns - Second Edition, published by Packt. When not working on the assignments his day job demands, Raj is an avid listener of classical music and watches a lot of tennis.