Deep Learning Cookbook

Book description

Deep learning doesn’t have to be intimidating. Until recently, this machine-learning method required years of study, but with frameworks such as Keras and TensorFlow, software engineers without a background in machine learning can quickly enter the field. With the recipes in this cookbook, you’ll learn how to solve deep-learning problems for classifying and generating text, images, and music.

Each chapter consists of several recipes needed to complete a single project, such as training a music recommender system. Author Douwe Osinga also provides a chapter with half a dozen techniques to help you if you’re stuck. Examples are written in Python with code available on GitHub as a set of Python notebooks.

You’ll learn how to:

  • Create applications that will serve real users
  • Use word embeddings to calculate text similarity (see the sketch after this list)
  • Build a movie recommender system based on Wikipedia links
  • Explore how AIs see the world by visualizing their internal state
  • Build a model to suggest emojis for pieces of text
  • Reuse pretrained networks to build an inverse image search service
  • Compare how GANs, autoencoders, and LSTMs generate icons
  • Detect music styles and index song collections
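
For a flavor of the recipes, here is a minimal sketch of the idea behind the word-embedding bullet (covered in recipe 3.1): loading a pretrained embedding with gensim and querying it for similar words. The library and model name are illustrative choices, not the book’s exact code, which lives in the accompanying notebooks.

    # Illustrative sketch: word similarity with pretrained embeddings.
    import gensim.downloader as api

    # Download a small pretrained GloVe model on first use.
    model = api.load("glove-wiki-gigaword-50")

    # Words closest to "espresso" in the embedding space.
    print(model.most_similar("espresso", topn=3))

    # Cosine similarity between two word vectors.
    print(model.similarity("cat", "dog"))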

Table of contents

  1. Preface
    1. A Brief History of Deep Learning
    2. Why Now?
    3. What Do You Need to Know?
    4. How This Book Is Structured
    5. Conventions Used in This Book
    6. Accompanying Code
    7. O’Reilly Safari
    8. How to Contact Us
    9. Acknowledgments
  2. Tools and Techniques
    1. 1.1. Types of Neural Networks
    2. 1.2. Acquiring Data
    3. 1.3. Preprocessing Data
  3. Getting Unstuck
    1. 2.1. Determining That You Are Stuck
    2. 2.2. Solving Runtime Errors
    3. 2.3. Checking Intermediate Results
    4. 2.4. Picking the Right Activation Function (for Your Final Layer)
    5. 2.5. Regularization and Dropout
    6. 2.6. Network Structure, Batch Size, and Learning Rate
  4. Calculating Text Similarity Using Word Embeddings
    1. 3.1. Using Pretrained Word Embeddings to Find Word Similarity
    2. 3.2. Word2vec Math
    3. 3.3. Visualizing Word Embeddings
    4. 3.4. Finding Entity Classes in Embeddings
    5. 3.5. Calculating Semantic Distances Inside a Class
    6. 3.6. Visualizing Country Data on a Map
  5. Building a Recommender System Based on Outgoing Wikipedia Links
    1. 4.1. Collecting the Data
    2. 4.2. Training Movie Embeddings
    3. 4.3. Building a Movie Recommender
    4. 4.4. Predicting Simple Movie Properties
  6. Generating Text in the Style of an Example Text
    1. 5.1. Acquiring the Text of Public Domain Books
    2. 5.2. Generating Shakespeare-Like Texts
    3. 5.3. Writing Code Using RNNs
    4. 5.4. Controlling the Temperature of the Output
    5. 5.5. Visualizing Recurrent Network Activations
  7. Question Matching
    1. 6.1. Acquiring Data from Stack Exchange
    2. 6.2. Exploring Data Using Pandas
    3. 6.3. Using Keras to Featurize Text
    4. 6.4. Building a Question/Answer Model
    5. 6.5. Training a Model with Pandas
    6. 6.6. Checking Similarities
  8. Suggesting Emojis
    1. 7.1. Building a Simple Sentiment Classifier
    2. 7.2. Inspecting a Simple Classifier
    3. 7.3. Using a Convolutional Network for Sentiment Analysis
    4. 7.4. Collecting Twitter Data
    5. 7.5. A Simple Emoji Predictor
    6. 7.6. Dropout and Multiple Windows
    7. 7.7. Building a Word-Level Model
    8. 7.8. Constructing Your Own Embeddings
    9. 7.9. Using a Recurrent Neural Network for Classification
    10. 7.10. Visualizing (Dis)Agreement
    11. 7.11. Combining Models
  9. Sequence-to-Sequence Mapping
    1. 8.1. Training a Simple Sequence-to-Sequence Model
    2. 8.2. Extracting Dialogue from Texts
    3. 8.3. Handling an Open Vocabulary
    4. 8.4. Training a seq2seq Chatbot
  10. Reusing a Pretrained Image Recognition Network
    1. 9.1. Loading a Pretrained Network
    2. 9.2. Preprocessing Images
    3. 9.3. Running Inference on Images
    4. 9.4. Using the Flickr API to Collect a Set of Labeled Images
    5. 9.5. Building a Classifier That Can Tell Cats from Dogs
    6. 9.6. Improving Search Results
    7. 9.7. Retraining Image Recognition Networks
  11. Building an Inverse Image Search Service
    1. 10.1. Acquiring Images from Wikipedia
    2. 10.2. Projecting Images into an N-Dimensional Space
    3. 10.3. Finding Nearest Neighbors in High-Dimensional Spaces
    4. 10.4. Exploring Local Neighborhoods in Embeddings
  12. Detecting Multiple Images
    1. 11.1. Detecting Multiple Images Using a Pretrained Classifier
    2. 11.2. Using Faster RCNN for Object Detection
    3. 11.3. Running Faster RCNN over Our Own Images
  13. Image Style
    1. 12.1. Visualizing CNN Activations
    2. 12.2. Octaves and Scaling
    3. 12.3. Visualizing What a Neural Network Almost Sees
    4. 12.4. Capturing the Style of an Image
    5. 12.5. Improving the Loss Function to Increase Image Coherence
    6. 12.6. Transferring the Style to a Different Image
    7. 12.7. Style Interpolation
  14. Generating Images with Autoencoders
    1. 13.1. Importing Drawings from Google Quick Draw
    2. 13.2. Creating an Autoencoder for Images
    3. 13.3. Visualizing Autoencoder Results
    4. 13.4. Sampling Images from a Correct Distribution
    5. 13.5. Visualizing a Variational Autoencoder Space
    6. 13.6. Conditional Variational Autoencoders
  15. Generating Icons Using Deep Nets
    1. 14.1. Acquiring Icons for Training
    2. 14.2. Converting the Icons to a Tensor Representation
    3. 14.3. Using a Variational Autoencoder to Generate Icons
    4. 14.4. Using Data Augmentation to Improve the Autoencoder’s Performance
    5. 14.5. Building a Generative Adversarial Network
    6. 14.6. Training Generative Adversarial Networks
    7. 14.7. Showing the Icons the GAN Produces
    8. 14.8. Encoding Icons as Drawing Instructions
    9. 14.9. Training an RNN to Draw Icons
    10. 14.10. Generating Icons Using an RNN
  16. Music and Deep Learning
    1. 15.1. Creating a Training Set for Music Classification
    2. 15.2. Training a Music Genre Detector
    3. 15.3. Visualizing Confusion
    4. 15.4. Indexing Existing Music
    5. 15.5. Setting Up Spotify API Access
    6. 15.6. Collecting Playlists and Songs from Spotify
    7. 15.7. Training a Music Recommender
    8. 15.8. Recommending Songs Using a Word2vec Model
  17. Productionizing Machine Learning Systems
    1. 16.1. Using Scikit-Learn’s Nearest Neighbors for Embeddings
    2. 16.2. Use Postgres to Store Embeddings
    3. 16.3. Populating and Querying Embeddings Stored in Postgres
    4. 16.4. Storing High-Dimensional Models in Postgres
    5. 16.5. Writing Microservices in Python
    6. 16.6. Deploying a Keras Model Using a Microservice
    7. 16.7. Calling a Microservice from a Web Framework
    8. 16.8. TensorFlow seq2seq Models
    9. 16.9. Running Deep Learning Models in the Browser
    10. 16.10. Running a Keras Model Using TensorFlow Serving
    11. 16.11. Using a Keras Model from iOS
  18. Index

Product information

  • Title: Deep Learning Cookbook
  • Author(s): Douwe Osinga
  • Release date: June 2018
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781491995792