Programming Collective Intelligence

Book description

Want to tap the power behind search rankings, product recommendations, social bookmarking, and online matchmaking? This fascinating book demonstrates how you can build Web 2.0 applications to mine the enormous amount of data created by people on the Internet. With the sophisticated algorithms in this book, you can write smart programs to access interesting datasets from other web sites, collect data from users of your own applications, and analyze and understand the data once you've found it.

Programming Collective Intelligence takes you into the world of machine learning and statistics, and explains how to draw conclusions about user experience, marketing, personal tastes, and human behavior in general -- all from information that you and others collect every day. Each algorithm is described clearly and concisely with code that can immediately be used on your web site, blog, wiki, or specialized application. This book explains:

  • Collaborative filtering techniques that enable online retailers to recommend products or media
  • Methods of clustering to detect groups of similar items in a large dataset
  • Search engine features -- crawlers, indexers, query engines, and the PageRank algorithm
  • Optimization algorithms that search millions of possible solutions to a problem and choose the best one
  • Bayesian filtering, used in spam filters for classifying documents based on word types and other features
  • Using decision trees not only to make predictions, but to model the way decisions are made
  • Predicting numerical values rather than classifications to build price models
  • Support vector machines to match people in online dating sites
  • Non-negative matrix factorization to find the independent features in a dataset
  • Evolving intelligence for problem solving -- how a computer develops its skill by improving its own code the more it plays a game

Each chapter includes exercises for extending the algorithms to make them more powerful. Go beyond simple database-backed applications and put the wealth of Internet data to work for you.
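To give a flavor of the book's style, here is a minimal sketch in the spirit of the Pearson correlation score from the recommendations chapter: a small preferences dictionary mapping people to their item ratings, and a similarity function over the items two people have both rated. The critics, movies, and scores below are invented for illustration, not taken from the book.

```python
from math import sqrt

# Hypothetical preferences in the book's usual shape: {person: {item: rating}}
prefs = {
    'Lisa': {'Snakes on a Plane': 4.5, 'Superman Returns': 4.0,
             'You, Me and Dupree': 2.5},
    'Gene': {'Snakes on a Plane': 3.5, 'Superman Returns': 5.0,
             'You, Me and Dupree': 3.5},
}

def sim_pearson(prefs, p1, p2):
    """Pearson correlation between two people's ratings of shared items."""
    shared = [item for item in prefs[p1] if item in prefs[p2]]
    n = len(shared)
    if n == 0:
        return 0.0  # no items in common

    # Sums, sums of squares, and sum of products over the shared items
    sum1 = sum(prefs[p1][it] for it in shared)
    sum2 = sum(prefs[p2][it] for it in shared)
    sum1sq = sum(prefs[p1][it] ** 2 for it in shared)
    sum2sq = sum(prefs[p2][it] ** 2 for it in shared)
    psum = sum(prefs[p1][it] * prefs[p2][it] for it in shared)

    # Pearson formula: covariance over the product of standard deviations
    num = psum - (sum1 * sum2 / n)
    den = sqrt((sum1sq - sum1 ** 2 / n) * (sum2sq - sum2 ** 2 / n))
    return 0.0 if den == 0 else num / den

print(sim_pearson(prefs, 'Lisa', 'Gene'))
```

The score ranges from -1 (opposite tastes) through 0 (no correlation) to 1 (identical tastes); because it correlates rather than measures distance, it corrects for "grade inflation" when one critic consistently rates everything higher than another.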

"Bravo! I cannot think of a better way for a developer to first learn these algorithms and methods, nor can I think of a better way for me (an old AI dog) to reinvigorate my knowledge of the details."

-- Dan Russell, Google

"Toby's book does a great job of breaking down the complex subject matter of machine-learning algorithms into practical, easy-to-understand examples that can be directly applied to analysis of social interaction across the Web today. If I had this book two years ago, it would have saved precious time going down some fruitless paths."

-- Tim Wolters, CTO, Collective Intellect

Table of contents

  1. Programming Collective Intelligence
  2. A Note Regarding Supplemental Files
  3. Praise for Programming Collective Intelligence
  4. Preface
    1. Prerequisites
    2. Style of Examples
    3. Why Python?
      1. Python Tips
        1. List and dictionary constructors
        2. Significant whitespace
        3. List comprehensions
    4. Open APIs
    5. Overview of the Chapters
    6. Conventions
    7. Using Code Examples
    8. How to Contact Us
    9. Safari® Books Online
    10. Acknowledgments
  5. 1. Introduction to Collective Intelligence
    1. What Is Collective Intelligence?
    2. What Is Machine Learning?
    3. Limits of Machine Learning
    4. Real-Life Examples
    5. Other Uses for Learning Algorithms
  6. 2. Making Recommendations
    1. Collaborative Filtering
    2. Collecting Preferences
    3. Finding Similar Users
      1. Euclidean Distance Score
      2. Pearson Correlation Score
      3. Which Similarity Metric Should You Use?
      4. Ranking the Critics
    4. Recommending Items
    5. Matching Products
    6. Building a del.icio.us Link Recommender
      1. The del.icio.us API
      2. Building the Dataset
      3. Recommending Neighbors and Links
    7. Item-Based Filtering
      1. Building the Item Comparison Dataset
      2. Getting Recommendations
    8. Using the MovieLens Dataset
    9. User-Based or Item-Based Filtering?
    10. Exercises
  7. 3. Discovering Groups
    1. Supervised versus Unsupervised Learning
    2. Word Vectors
      1. Pigeonholing the Bloggers
      2. Counting the Words in a Feed
    3. Hierarchical Clustering
    4. Drawing the Dendrogram
    5. Column Clustering
    6. K-Means Clustering
    7. Clusters of Preferences
      1. Getting and Preparing the Data
      2. Beautiful Soup
      3. Scraping the Zebo Results
      4. Defining a Distance Metric
      5. Clustering Results
    8. Viewing Data in Two Dimensions
    9. Other Things to Cluster
    10. Exercises
  8. 4. Searching and Ranking
    1. What’s in a Search Engine?
    2. A Simple Crawler
      1. Using urllib2
      2. Crawler Code
    3. Building the Index
      1. Setting Up the Schema
      2. Finding the Words on a Page
      3. Adding to the Index
    4. Querying
    5. Content-Based Ranking
      1. Normalization Function
      2. Word Frequency
      3. Document Location
      4. Word Distance
    6. Using Inbound Links
      1. Simple Count
      2. The PageRank Algorithm
      3. Using the Link Text
    7. Learning from Clicks
      1. Design of a Click-Tracking Network
      2. Setting Up the Database
      3. Feeding Forward
      4. Training with Backpropagation
      5. Training Test
      6. Connecting to the Search Engine
    8. Exercises
  9. 5. Optimization
    1. Group Travel
    2. Representing Solutions
    3. The Cost Function
    4. Random Searching
    5. Hill Climbing
    6. Simulated Annealing
    7. Genetic Algorithms
    8. Real Flight Searches
      1. The Kayak API
      2. The minidom Package
      3. Flight Searches
    9. Optimizing for Preferences
      1. Student Dorm Optimization
      2. The Cost Function
      3. Running the Optimization
    10. Network Visualization
      1. The Layout Problem
      2. Counting Crossed Lines
      3. Drawing the Network
    11. Other Possibilities
    12. Exercises
  10. 6. Document Filtering
    1. Filtering Spam
    2. Documents and Words
    3. Training the Classifier
    4. Calculating Probabilities
      1. Starting with a Reasonable Guess
    5. A Naïve Classifier
      1. Probability of a Whole Document
      2. A Quick Introduction to Bayes’ Theorem
      3. Choosing a Category
    6. The Fisher Method
      1. Category Probabilities for Features
      2. Combining the Probabilities
      3. Classifying Items
    7. Persisting the Trained Classifiers
      1. Using SQLite
    8. Filtering Blog Feeds
    9. Improving Feature Detection
    10. Using Akismet
    11. Alternative Methods
    12. Exercises
  11. 7. Modeling with Decision Trees
    1. Predicting Signups
    2. Introducing Decision Trees
    3. Training the Tree
    4. Choosing the Best Split
      1. Gini Impurity
      2. Entropy
    5. Recursive Tree Building
    6. Displaying the Tree
      1. Graphical Display
    7. Classifying New Observations
    8. Pruning the Tree
    9. Dealing with Missing Data
    10. Dealing with Numerical Outcomes
    11. Modeling Home Prices
      1. The Zillow API
    12. Modeling “Hotness”
    13. When to Use Decision Trees
    14. Exercises
  12. 8. Building Price Models
    1. Building a Sample Dataset
    2. k-Nearest Neighbors
      1. Number of Neighbors
      2. Defining Similarity
      3. Code for k-Nearest Neighbors
    3. Weighted Neighbors
      1. Inverse Function
      2. Subtraction Function
      3. Gaussian Function
      4. Weighted kNN
    4. Cross-Validation
    5. Heterogeneous Variables
      1. Adding to the Dataset
      2. Scaling Dimensions
    6. Optimizing the Scale
    7. Uneven Distributions
      1. Estimating the Probability Density
      2. Graphing the Probabilities
    8. Using Real Data—the eBay API
      1. Getting a Developer Key
      2. Setting Up a Connection
      3. Performing a Search
      4. Getting Details for an Item
      5. Building a Price Predictor
    9. When to Use k-Nearest Neighbors
    10. Exercises
  13. 9. Advanced Classification: Kernel Methods and SVMs
    1. Matchmaker Dataset
    2. Difficulties with the Data
      1. Decision Tree Classifier
    3. Basic Linear Classification
    4. Categorical Features
      1. Yes/No Questions
      2. Lists of Interests
      3. Determining Distances Using Yahoo! Maps
        1. Getting a Yahoo! Application Key
        2. Using the Geocoding API
        3. Calculating the Distance
      4. Creating the New Dataset
    5. Scaling the Data
    6. Understanding Kernel Methods
      1. The Kernel Trick
    7. Support-Vector Machines
    8. Using LIBSVM
      1. Getting LIBSVM
      2. A Sample Session
      3. Applying SVM to the Matchmaker Dataset
    9. Matching on Facebook
      1. Getting a Developer Key
      2. Creating a Session
      3. Downloading Friend Data
      4. Building a Match Dataset
      5. Creating an SVM Model
    10. Exercises
  14. 10. Finding Independent Features
    1. A Corpus of News
      1. Selecting Sources
      2. Downloading Sources
      3. Converting to a Matrix
    2. Previous Approaches
      1. Bayesian Classification
      2. Clustering
    3. Non-Negative Matrix Factorization
      1. A Quick Introduction to Matrix Math
      2. What Does This Have to Do with the Articles Matrix?
      3. Using NumPy
      4. The Algorithm
    4. Displaying the Results
      1. Displaying by Article
    5. Using Stock Market Data
      1. What Is Trading Volume?
      2. Downloading Data from Yahoo! Finance
      3. Preparing a Matrix
      4. Running NMF
      5. Displaying the Results
    6. Exercises
  15. 11. Evolving Intelligence
    1. What Is Genetic Programming?
      1. Genetic Programming Versus Genetic Algorithms
    2. Programs As Trees
      1. Representing Trees in Python
      2. Building and Evaluating Trees
      3. Displaying the Program
    3. Creating the Initial Population
    4. Testing a Solution
      1. A Simple Mathematical Test
      2. Measuring Success
    5. Mutating Programs
    6. Crossover
    7. Building the Environment
      1. The Importance of Diversity
    8. A Simple Game
      1. A Round-Robin Tournament
      2. Playing Against Real People
    9. Further Possibilities
      1. More Numerical Functions
      2. Memory
      3. Different Datatypes
    10. Exercises
  16. 12. Algorithm Summary
    1. Bayesian Classifier
      1. Training
      2. Classifying
      3. Using Your Code
      4. Strengths and Weaknesses
    2. Decision Tree Classifier
      1. Training
      2. Using Your Decision Tree Classifier
      3. Strengths and Weaknesses
    3. Neural Networks
      1. Training a Neural Network
      2. Using Your Neural Network Code
      3. Strengths and Weaknesses
    4. Support-Vector Machines
      1. The Kernel Trick
      2. Using LIBSVM
      3. Strengths and Weaknesses
    5. k-Nearest Neighbors
      1. Scaling and Superfluous Variables
      2. Using Your kNN Code
      3. Strengths and Weaknesses
    6. Clustering
      1. Hierarchical Clustering
      2. K-Means Clustering
      3. Using Your Clustering Code
    7. Multidimensional Scaling
      1. Using Your Multidimensional Scaling Code
    8. Non-Negative Matrix Factorization
      1. Using Your NMF Code
    9. Optimization
      1. The Cost Function
      2. Simulated Annealing
      3. Genetic Algorithms
      4. Using Your Optimization Code
  17. A. Third-Party Libraries
    1. Universal Feed Parser
      1. Installation for All Platforms
    2. Python Imaging Library
      1. Installation on Windows
      2. Installation on Other Platforms
      3. Simple Usage Example
    3. Beautiful Soup
      1. Installation on All Platforms
      2. Simple Usage Example
    4. pysqlite
      1. Installation on Windows
      2. Installation on Other Platforms
      3. Simple Usage Example
    5. NumPy
      1. Installation on Windows
      2. Installation on Other Platforms
      3. Simple Usage Example
    6. matplotlib
      1. Installation
      2. Simple Usage Example
    7. pydelicious
      1. Installation for All Platforms
      2. Simple Usage Example
  18. B. Mathematical Formulas
    1. Euclidean Distance
    2. Pearson Correlation Coefficient
    3. Weighted Mean
    4. Tanimoto Coefficient
    5. Conditional Probability
    6. Gini Impurity
    7. Entropy
    8. Variance
    9. Gaussian Function
    10. Dot-Products
  19. Index
  20. About the Author
  21. Colophon
  22. Copyright

Product information

  • Title: Programming Collective Intelligence
  • Author(s): Toby Segaran
  • Release date: August 2007
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9780596550684