Agile Data Science

Book description

Mining big data requires a deep investment in people and time. How can you be sure you’re building the right models? With this hands-on book, you’ll learn a flexible toolset and methodology for building effective analytics applications with Hadoop.

Using lightweight tools such as Python, Apache Pig, and the D3.js library, your team will create an agile environment for exploring data, starting with an example application to mine your own email inboxes. You’ll learn an iterative approach that enables you to quickly change the kind of analysis you’re doing, depending on what the data is telling you. All example code in this book is available as working Heroku apps.

  • Create analytics applications by using the agile big data development methodology
  • Build value from your data in a series of agile sprints, using the data-value stack
  • Gain insight by using several data structures to extract multiple features from a single dataset
  • Visualize data with charts, and expose different aspects through interactive reports
  • Use historical data to predict the future, and translate predictions into action
  • Get feedback from users after each sprint to keep your project on track
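
To give a flavor of the lightweight, exploratory style the book teaches with its email-mining example, here is a minimal sketch that counts messages per sender in a local mbox file using only the Python standard library. It is illustrative only, not code from the book, and the mbox path is a placeholder.

    import mailbox
    from collections import Counter

    # Placeholder path to a local mbox export of your inbox.
    MBOX_PATH = "my_inbox.mbox"

    # Tally how many messages each sender address appears on.
    sent_counts = Counter()
    for message in mailbox.mbox(MBOX_PATH):
        sender = message.get("From")
        if sender:
            sent_counts[sender] += 1

    # Print the ten most frequent senders.
    for sender, count in sent_counts.most_common(10):
        print(f"{count:6d}  {sender}")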

Table of contents

  1. Preface
    1. Who This Book Is For
    2. How This Book Is Organized
    3. Conventions Used in This Book
    4. Using Code Examples
    5. Safari® Books Online
    6. How to Contact Us
  2. I. Setup
    1. 1. Theory
      1. Agile Big Data
      2. Big Words Defined
      3. Agile Big Data Teams
        1. Recognizing the Opportunity and Problem
        2. Adapting to Change
          1. Harnessing the power of generalists
          2. Leveraging agile platforms
          3. Sharing intermediate results
      4. Agile Big Data Process
      5. Code Review and Pair Programming
      6. Agile Environments: Engineering Productivity
        1. Collaboration Space
        2. Private Space
        3. Personal Space
      7. Realizing Ideas with Large-Format Printing
    2. 2. Data
      1. Email
      2. Working with Raw Data
        1. Raw Email
        2. Structured Versus Semistructured Data
      3. SQL
      4. NoSQL
        1. Serialization
        2. Extracting and Exposing Features in Evolving Schemas
        3. Data Pipelines
      5. Data Perspectives
        1. Networks
        2. Time Series
        3. Natural Language
        4. Probability
        5. Conclusion
    3. 3. Agile Tools
      1. Scalability = Simplicity
      2. Agile Big Data Processing
      3. Setting Up a Virtual Environment for Python
      4. Serializing Events with Avro
        1. Avro for Python
          1. Installation
          2. Testing
      5. Collecting Data
      6. Data Processing with Pig
        1. Installing Pig
      7. Publishing Data with MongoDB
        1. Installing MongoDB
        2. Installing MongoDB’s Java Driver
        3. Installing mongo-hadoop
        4. Pushing Data to MongoDB from Pig
      8. Searching Data with ElasticSearch
        1. Installation
        2. ElasticSearch and Pig with Wonderdog
          1. Installing Wonderdog
          2. Wonderdog and Pig
          3. Searching our data
          4. Python and ElasticSearch with pyelasticsearch
      9. Reflecting on Our Workflow
      10. Lightweight Web Applications
        1. Python and Flask
          1. Flask Echo (ch03/python/flask_echo.py)
          2. Python and Mongo with pymongo
          3. Displaying sent_counts in Flask
      11. Presenting Our Data
        1. Installing Bootstrap
        2. Booting Bootstrap
        3. Visualizing Data with D3.js and nvd3.js
      12. Conclusion
    4. 4. To the Cloud!
      1. Introduction
      2. GitHub
      3. dotCloud
        1. Echo on dotCloud
        2. Python Workers
      4. Amazon Web Services
        1. Simple Storage Service
        2. Elastic MapReduce
        3. MongoDB as a Service
          1. Pushing data from Pig to MongoDB at dotCloud
      5. Instrumentation
        1. Google Analytics
        2. Mortar Data
  3. II. Climbing the Pyramid
    1. 5. Collecting and Displaying Records
      1. Putting It All Together
      2. Collect and Serialize Our Inbox
      3. Process and Publish Our Emails
      4. Presenting Emails in a Browser
        1. Serving Emails with Flask and pymongo
        2. Rendering HTML5 with Jinja2
      5. Agile Checkpoint
      6. Listing Emails
        1. Listing Emails with MongoDB
        2. Anatomy of a Presentation
          1. Reinventing the wheel?
          2. Prototyping back from HTML
      7. Searching Our Email
        1. Indexing Our Email with Pig, ElasticSearch, and Wonderdog
        2. Searching Our Email on the Web
      8. Conclusion
    2. 6. Visualizing Data with Charts
      1. Good Charts
      2. Extracting Entities: Email Addresses
        1. Extracting Emails
      3. Visualizing Time
      4. Conclusion
    3. 7. Exploring Data with Reports
      1. Building Reports with Multiple Charts
      2. Linking Records
      3. Extracting Keywords from Emails with TF-IDF
      4. Conclusion
    4. 8. Making Predictions
      1. Predicting Response Rates to Emails
      2. Personalization
      3. Conclusion
    5. 9. Driving Actions
      1. Properties of Successful Emails
      2. Better Predictions with Naive Bayes
      3. P(Reply | From & To)
      4. P(Reply | Token)
      5. Making Predictions in Real Time
      6. Logging Events
      7. Conclusion
  4. Index
  5. About the Author
  6. Colophon
  7. Copyright

Product information

  • Title: Agile Data Science
  • Author(s): Russell Jurney
  • Release date: October 2013
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781449326265