Python: Deeper Insights into Machine Learning

Book description

Leverage the benefits of machine learning techniques using Python.

About This Book

  • Improve and optimize machine learning systems using effective strategies.
  • Develop strategies for dealing with large amounts of data.
  • Use Python code to implement a range of machine learning algorithms and techniques.

Who This Book Is For

This title is for data scientists and researchers who are already working in the field of data science and want to see machine learning in action and explore its real-world applications. Prior knowledge of Python programming and mathematics is a must, along with basic knowledge of machine learning concepts.

What You Will Learn

  • Learn to write clean and elegant Python code that will optimize the performance of your algorithms
  • Uncover hidden patterns and structures in data with clustering
  • Improve accuracy and consistency of results using powerful feature engineering techniques
  • Gain a practical and theoretical understanding of cutting-edge deep learning algorithms
  • Solve unique tasks by building models
  • Get to grips with the machine learning design process

In Detail

Machine learning and predictive analytics have become key strategies for unlocking growth in a challenging contemporary marketplace. Machine learning is one of the fastest-growing fields in modern computing, and everyone wants to get into it. To gain recognition in this field, you must be able to understand and design machine learning systems that serve the needs of a project.

This learning path will help you tackle the real-world complexities of modern machine learning with innovative, cutting-edge techniques. It will also give you a solid foundation in the machine learning design process and enable you to build customized machine learning models to solve unique problems.

The course begins by nailing down your Python fundamentals, focusing on asking the right questions and covering a wide range of powerful Python libraries, including scikit-learn, Theano, and Keras. After getting familiar with Python's core concepts, you will dive into the field of data science, gain a solid foundation in machine learning design, and learn to customize models to solve unique problems.
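
To give a flavor of the workflow the course builds toward (the table of contents below includes training a perceptron on the Iris dataset via scikit-learn), here is a minimal sketch of that pipeline. It assumes only scikit-learn and its bundled Iris data; it is an illustration in the book's spirit, not code reproduced from it:

    # Minimal scikit-learn workflow: load, split, scale, fit, score
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import Perceptron

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=1, stratify=y)

    scaler = StandardScaler().fit(X_train)   # standardize to zero mean, unit variance
    clf = Perceptron(random_state=1)
    clf.fit(scaler.transform(X_train), y_train)
    print(clf.score(scaler.transform(X_test), y_test))  # accuracy on held-out data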

At a later stage, you will get to grips with more advanced techniques and acquire a broad set of powerful skills in feature selection and feature engineering.
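
As a taste of those techniques, the table of contents below covers sparse solutions with L1 regularization as a feature selection tool. The following hedged sketch, assuming scikit-learn and its bundled Wine dataset, keeps only the features to which an L1-penalized logistic regression assigns nonzero weights:

    # Feature selection via L1 regularization: sparse weights mark informative features
    from sklearn.datasets import load_wine
    from sklearn.preprocessing import StandardScaler
    from sklearn.linear_model import LogisticRegression
    from sklearn.feature_selection import SelectFromModel

    X, y = load_wine(return_X_y=True)
    X_std = StandardScaler().fit_transform(X)  # L1 penalties are scale-sensitive

    # A strong L1 penalty (small C) drives uninformative weights to zero
    l1_model = LogisticRegression(penalty='l1', solver='liblinear', C=0.1)
    selector = SelectFromModel(l1_model).fit(X_std, y)
    print(selector.get_support())  # boolean mask of the retained features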

Style and approach

This course includes all the resources that will help you jump into the data science field with Python. It walks through the elements of Python and the powerful machine learning libraries built on it, explaining important machine learning models step by step. Each topic is illustrated with real-world applications and detailed guidance. Through this comprehensive guide, you will be able to explore machine learning techniques in depth.

Table of contents

  1. Python: Deeper Insights into Machine Learning
    1. Table of Contents
    2. Python: Deeper Insights into Machine Learning
    3. Python: Deeper Insights into Machine Learning
    4. Credits
    5. Preface
      1. What this learning path covers
      2. What you need for this learning path
      3. Who this learning path is for
      4. Reader feedback
      5. Customer support
        1. Downloading the example code
        2. Errata
        3. Piracy
        4. Questions
    6. 1. Module 1
      1. 1. Giving Computers the Ability to Learn from Data
        1. Building intelligent machines to transform data into knowledge
        2. The three different types of machine learning
          1. Making predictions about the future with supervised learning
            1. Classification for predicting class labels
            2. Regression for predicting continuous outcomes
          2. Solving interactive problems with reinforcement learning
          3. Discovering hidden structures with unsupervised learning
            1. Finding subgroups with clustering
            2. Dimensionality reduction for data compression
        3. An introduction to the basic terminology and notations
        4. A roadmap for building machine learning systems
          1. Preprocessing – getting data into shape
          2. Training and selecting a predictive model
          3. Evaluating models and predicting unseen data instances
        5. Using Python for machine learning
          1. Installing Python packages
        6. Summary
      2. 2. Training Machine Learning Algorithms for Classification
        1. Artificial neurons – a brief glimpse into the early history of machine learning
        2. Implementing a perceptron learning algorithm in Python
          1. Training a perceptron model on the Iris dataset
        3. Adaptive linear neurons and the convergence of learning
          1. Minimizing cost functions with gradient descent
          2. Implementing an Adaptive Linear Neuron in Python
          3. Large scale machine learning and stochastic gradient descent
        4. Summary
      3. 3. A Tour of Machine Learning Classifiers Using Scikit-learn
        1. Choosing a classification algorithm
        2. First steps with scikit-learn
          1. Training a perceptron via scikit-learn
        3. Modeling class probabilities via logistic regression
          1. Logistic regression intuition and conditional probabilities
          2. Learning the weights of the logistic cost function
          3. Training a logistic regression model with scikit-learn
          4. Tackling overfitting via regularization
        4. Maximum margin classification with support vector machines
          1. Maximum margin intuition
          2. Dealing with the nonlinearly separable case using slack variables
          3. Alternative implementations in scikit-learn
        5. Solving nonlinear problems using a kernel SVM
          1. Using the kernel trick to find separating hyperplanes in higher dimensional space
        6. Decision tree learning
          1. Maximizing information gain – getting the most bang for the buck
          2. Building a decision tree
          3. Combining weak to strong learners via random forests
        7. K-nearest neighbors – a lazy learning algorithm
        8. Summary
      4. 4. Building Good Training Sets – Data Preprocessing
        1. Dealing with missing data
          1. Eliminating samples or features with missing values
          2. Imputing missing values
          3. Understanding the scikit-learn estimator API
        2. Handling categorical data
          1. Mapping ordinal features
          2. Encoding class labels
          3. Performing one-hot encoding on nominal features
        3. Partitioning a dataset into training and test sets
        4. Bringing features onto the same scale
        5. Selecting meaningful features
          1. Sparse solutions with L1 regularization
          2. Sequential feature selection algorithms
        6. Assessing feature importance with random forests
        7. Summary
      5. 5. Compressing Data via Dimensionality Reduction
        1. Unsupervised dimensionality reduction via principal component analysis
          1. Total and explained variance
          2. Feature transformation
          3. Principal component analysis in scikit-learn
        2. Supervised data compression via linear discriminant analysis
          1. Computing the scatter matrices
          2. Selecting linear discriminants for the new feature subspace
          3. Projecting samples onto the new feature space
          4. LDA via scikit-learn
        3. Using kernel principal component analysis for nonlinear mappings
          1. Kernel functions and the kernel trick
          2. Implementing a kernel principal component analysis in Python
            1. Example 1 – separating half-moon shapes
            2. Example 2 – separating concentric circles
          3. Projecting new data points
          4. Kernel principal component analysis in scikit-learn
        4. Summary
      6. 6. Learning Best Practices for Model Evaluation and Hyperparameter Tuning
        1. Streamlining workflows with pipelines
          1. Loading the Breast Cancer Wisconsin dataset
          2. Combining transformers and estimators in a pipeline
        2. Using k-fold cross-validation to assess model performance
          1. The holdout method
          2. K-fold cross-validation
        3. Debugging algorithms with learning and validation curves
          1. Diagnosing bias and variance problems with learning curves
          2. Addressing overfitting and underfitting with validation curves
        4. Fine-tuning machine learning models via grid search
          1. Tuning hyperparameters via grid search
          2. Algorithm selection with nested cross-validation
        5. Looking at different performance evaluation metrics
          1. Reading a confusion matrix
          2. Optimizing the precision and recall of a classification model
          3. Plotting a receiver operating characteristic
          4. The scoring metrics for multiclass classification
        6. Summary
      7. 7. Combining Different Models for Ensemble Learning
        1. Learning with ensembles
        2. Implementing a simple majority vote classifier
          1. Combining different algorithms for classification with majority vote
        3. Evaluating and tuning the ensemble classifier
        4. Bagging – building an ensemble of classifiers from bootstrap samples
        5. Leveraging weak learners via adaptive boosting
        6. Summary
      8. 8. Applying Machine Learning to Sentiment Analysis
        1. Obtaining the IMDb movie review dataset
        2. Introducing the bag-of-words model
          1. Transforming words into feature vectors
          2. Assessing word relevancy via term frequency-inverse document frequency
          3. Cleaning text data
          4. Processing documents into tokens
        3. Training a logistic regression model for document classification
        4. Working with bigger data – online algorithms and out-of-core learning
        5. Summary
      9. 9. Embedding a Machine Learning Model into a Web Application
        1. Serializing fitted scikit-learn estimators
        2. Setting up a SQLite database for data storage
        3. Developing a web application with Flask
          1. Our first Flask web application
          2. Form validation and rendering
        4. Turning the movie classifier into a web application
        5. Deploying the web application to a public server
          1. Updating the movie review classifier
        6. Summary
      10. 10. Predicting Continuous Target Variables with Regression Analysis
        1. Introducing a simple linear regression model
        2. Exploring the Housing Dataset
          1. Visualizing the important characteristics of a dataset
        3. Implementing an ordinary least squares linear regression model
          1. Solving regression for regression parameters with gradient descent
          2. Estimating the coefficient of a regression model via scikit-learn
        4. Fitting a robust regression model using RANSAC
        5. Evaluating the performance of linear regression models
        6. Using regularized methods for regression
        7. Turning a linear regression model into a curve – polynomial regression
          1. Modeling nonlinear relationships in the Housing Dataset
          2. Dealing with nonlinear relationships using random forests
            1. Decision tree regression
            2. Random forest regression
        8. Summary
      11. 11. Working with Unlabeled Data – Clustering Analysis
        1. Grouping objects by similarity using k-means
          1. K-means++
          2. Hard versus soft clustering
          3. Using the elbow method to find the optimal number of clusters
          4. Quantifying the quality of clustering via silhouette plots
        2. Organizing clusters as a hierarchical tree
          1. Performing hierarchical clustering on a distance matrix
          2. Attaching dendrograms to a heat map
          3. Applying agglomerative clustering via scikit-learn
        3. Locating regions of high density via DBSCAN
        4. Summary
      12. 12. Training Artificial Neural Networks for Image Recognition
        1. Modeling complex functions with artificial neural networks
          1. Single-layer neural network recap
          2. Introducing the multi-layer neural network architecture
          3. Activating a neural network via forward propagation
        2. Classifying handwritten digits
          1. Obtaining the MNIST dataset
          2. Implementing a multi-layer perceptron
        3. Training an artificial neural network
          1. Computing the logistic cost function
          2. Training neural networks via backpropagation
        4. Developing your intuition for backpropagation
        5. Debugging neural networks with gradient checking
        6. Convergence in neural networks
        7. Other neural network architectures
          1. Convolutional Neural Networks
          2. Recurrent Neural Networks
        8. A few last words about neural network implementation
        9. Summary
      13. 13. Parallelizing Neural Network Training with Theano
        1. Building, compiling, and running expressions with Theano
          1. What is Theano?
          2. First steps with Theano
          3. Configuring Theano
          4. Working with array structures
          5. Wrapping things up – a linear regression example
        2. Choosing activation functions for feedforward neural networks
          1. Logistic function recap
          2. Estimating probabilities in multi-class classification via the softmax function
          3. Broadening the output spectrum by using a hyperbolic tangent
        3. Training neural networks efficiently using Keras
        4. Summary
    7. 2. Module 2
      1. 1. Thinking in Machine Learning
        1. The human interface
        2. Design principles
          1. Types of questions
          2. Are you asking the right question?
          3. Tasks
            1. Classification
            2. Regression
            3. Clustering
            4. Dimensionality reduction
            5. Errors
            6. Optimization
            7. Linear programming
            8. Models
              1. Geometric models
              2. Probabilistic models
              3. Logical models
            9. Features
          4. Unified modeling language
            1. Class diagrams
            2. Object diagrams
            3. Activity diagrams
            4. State diagrams
        3. Summary
      2. 2. Tools and Techniques
        1. Python for machine learning
        2. IPython console
        3. Installing the SciPy stack
        4. NumPy
          1. Constructing and transforming arrays
          2. Mathematical operations
        5. Matplotlib
        6. Pandas
        7. SciPy
        8. Scikit-learn
        9. Summary
      3. 3. Turning Data into Information
        1. What is data?
        2. Big data
          1. Challenges of big data
            1. Data volume
            2. Data velocity
            3. Data variety
          2. Data models
          3. Data distributions
          4. Data from databases
          5. Data from the Web
          6. Data from natural language
          7. Data from images
          8. Data from application programming interfaces
        3. Signals
          1. Data from sound
        4. Cleaning data
        5. Visualizing data
        6. Summary
      4. 4. Models – Learning from Information
        1. Logical models
          1. Generality ordering
          2. Version space
          3. Coverage space
          4. PAC learning and computational complexity
        2. Tree models
          1. Purity
        3. Rule models
          1. The ordered list approach
          2. Set-based rule models
        4. Summary
      5. 5. Linear Models
        1. Introducing least squares
          1. Gradient descent
          2. The normal equation
        2. Logistic regression
          1. The cost function for logistic regression
        3. Multiclass classification
        4. Regularization
        5. Summary
      6. 6. Neural Networks
        1. Getting started with neural networks
        2. Logistic units
        3. Cost function
          1. Minimizing the cost function
        4. Implementing a neural network
        5. Gradient checking
        6. Other neural net architectures
        7. Summary
      7. 7. Features – How Algorithms See the World
        1. Feature types
          1. Quantitative features
          2. Ordinal features
          3. Categorical features
        2. Operations and statistics
        3. Structured features
        4. Transforming features
          1. Discretization
          2. Normalization
          3. Calibration
        5. Principal component analysis
        6. Summary
      8. 8. Learning with Ensembles
        1. Ensemble types
        2. Bagging
          1. Random forests
          2. Extra trees
        3. Boosting
          1. AdaBoost
          2. Gradient boosting
        4. Ensemble strategies
          1. Other methods
        5. Summary
      9. 9. Design Strategies and Case Studies
        1. Evaluating model performance
        2. Model selection
          1. Gridsearch
        3. Learning curves
        4. Real-world case studies
          1. Building a recommender system
            1. Content-based filtering
            2. Collaborative filtering
            3. Reviewing the case study
          2. Insect detection in greenhouses
            1. Reviewing the case study
        5. Machine learning at a glance
        6. Summary
    8. 3. Module 3
      1. 1. Unsupervised Machine Learning
        1. Principal component analysis
          1. PCA – a primer
          2. Employing PCA
        2. Introducing k-means clustering
          1. Clustering – a primer
          2. Kick-starting clustering analysis
          3. Tuning your clustering configurations
        3. Self-organizing maps
          1. SOM – a primer
          2. Employing SOM
        4. Further reading
        5. Summary
      2. 2. Deep Belief Networks
        1. Neural networks – a primer
          1. The composition of a neural network
          2. Network topologies
        2. Restricted Boltzmann Machine
          1. Introducing the RBM
            1. Topology
            2. Training
          2. Applications of the RBM
          3. Further applications of the RBM
        3. Deep belief networks
          1. Training a DBN
          2. Applying the DBN
          3. Validating the DBN
        4. Further reading
        5. Summary
      3. 3. Stacked Denoising Autoencoders
        1. Autoencoders
          1. Introducing the autoencoder
            1. Topology
            2. Training
          2. Denoising autoencoders
          3. Applying a dA
        2. Stacked Denoising Autoencoders
          1. Applying the SdA
          2. Assessing SdA performance
        3. Further reading
        4. Summary
      4. 4. Convolutional Neural Networks
        1. Introducing the CNN
          1. Understanding the convnet topology
            1. Understanding convolution layers
            2. Understanding pooling layers
            3. Training a convnet
            4. Putting it all together
          2. Applying a CNN
        2. Further reading
        3. Summary
      5. 5. Semi-Supervised Learning
        1. Introduction
        2. Understanding semi-supervised learning
        3. Semi-supervised algorithms in action
          1. Self-training
            1. Implementing self-training
            2. Finessing your self-training implementation
              1. Improving the selection process
          2. Contrastive Pessimistic Likelihood Estimation
        4. Further reading
        5. Summary
      6. 6. Text Feature Engineering
        1. Introduction
        2. Text feature engineering
          1. Cleaning text data
            1. Text cleaning with BeautifulSoup
            2. Managing punctuation and tokenizing
            3. Tagging and categorizing words
              1. Tagging with NLTK
              2. Sequential tagging
              3. Backoff tagging
          2. Creating features from text data
            1. Stemming
            2. Bagging and random forests
          3. Testing our prepared data
        3. Further reading
        4. Summary
      7. 7. Feature Engineering Part II
        1. Introduction
        2. Creating a feature set
          1. Engineering features for ML applications
            1. Using rescaling techniques to improve the learnability of features
            2. Creating effective derived variables
            3. Reinterpreting non-numeric features
          2. Using feature selection techniques
            1. Performing feature selection
              1. Correlation
              2. LASSO
              3. Recursive Feature Elimination
              4. Genetic models
        3. Feature engineering in practice
          1. Acquiring data via RESTful APIs
            1. Testing the performance of our model
            2. Twitter
              1. Translink Twitter
              2. Consumer comments
              3. The Bing Traffic API
            3. Deriving and selecting variables using feature engineering techniques
              1. The weather API
        4. Further reading
        5. Summary
      8. 8. Ensemble Methods
        1. Introducing ensembles
          1. Understanding averaging ensembles
            1. Using bagging algorithms
            2. Using random forests
          2. Applying boosting methods
            1. Using XGBoost
          3. Using stacking ensembles
            1. Applying ensembles in practice
        2. Using models in dynamic applications
          1. Understanding model robustness
            1. Identifying modeling risk factors
          2. Strategies for managing model robustness
        3. Further reading
        4. Summary
      9. 9. Additional Python Machine Learning Tools
        1. Alternative development tools
          1. Introduction to Lasagne
            1. Getting to know Lasagne
          2. Introduction to TensorFlow
            1. Getting to know TensorFlow
            2. Using TensorFlow to iteratively improve our models
          3. Knowing when to use these libraries
        2. Further reading
        3. Summary
      10. 10. Chapter Code Requirements
    9. A. Bibliography
    10. Index

Product information

  • Title: Python: Deeper Insights into Machine Learning
  • Author(s): Sebastian Raschka, David Julian, John Hearty
  • Release date: August 2016
  • Publisher(s): Packt Publishing
  • ISBN: 9781787128576