Webbots, Spiders, and Screen Scrapers

Book description

The Internet is bigger and better than what a mere browser allows. Webbots, Spiders, and Screen Scrapers is for programmers and businesspeople who want to take full advantage of the vast resources available on the Web. There's no reason to let browsers limit your online experience, especially when you can easily automate online tasks to suit your individual needs.

Learn how to write webbots and spiders that do all this and more:

  • Programmatically download entire websites

  • Effectively parse data from web pages

  • Manage cookies

  • Decode encrypted files

  • Automate form submissions

  • Send and receive email

  • Send SMS alerts to your cell phone

  • Unlock password-protected websites

  • Automatically bid in online auctions

  • Exchange data with FTP and NNTP servers

Sample projects using standard code libraries reinforce these new skills. You'll learn how to create your own webbots and spiders that track online prices, aggregate data from multiple sources into a single web page, and archive the online data you just can't live without. An experienced webbot developer shares inside information on how and when to write stealthy webbots that mimic human behavior, tips for developing fault-tolerant designs, and various methods for launching and scheduling webbots. You'll also get advice on writing webbots and spiders that respect website owners' property rights, plus techniques for shielding your own websites from unwanted robots.
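
For a taste of the book's approach, here is a minimal page-fetching sketch in the spirit of the PHP/CURL techniques covered in Chapter 3 and Appendix A. This is not code from the book; the target URL and agent name are placeholders:

    <?php
    // A minimal webbot sketch: download one web page with PHP's cURL extension.
    // Hypothetical example; http://www.example.com/ and "Test Webbot" are placeholders.
    $session = curl_init("http://www.example.com/");
    curl_setopt($session, CURLOPT_RETURNTRANSFER, true);     // return the page as a string
    curl_setopt($session, CURLOPT_FOLLOWLOCATION, true);     // follow redirects...
    curl_setopt($session, CURLOPT_MAXREDIRS, 4);             // ...but only a few
    curl_setopt($session, CURLOPT_USERAGENT, "Test Webbot"); // identify the agent
    $page = curl_exec($session);                             // fetch the page

    if ($page === false) {
        echo "Download error: " . curl_error($session) . "\n";
    } else {
        echo "Downloaded " . strlen($page) . " bytes\n";
    }
    curl_close($session);
    ?>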

As a bonus, visit the author's website to test your webbots on sample target pages and to download the scripts and code libraries used in the book.

Some tasks are just too tedious (or too important!) to leave to humans. Once you've automated your online life, you'll never let a browser limit the way you use the Internet again.

Table of contents

  Webbots, Spiders, and Screen Scrapers
    ACKNOWLEDGMENTS
    Introduction
      1. Old-School Client-Server Technology
      2. The Problem with Browsers
      3. What to Expect from This Book
        1. Learn from My Mistakes
        2. Master Webbot Techniques
        3. Leverage Existing Scripts
      4. About the Website
      5. About the Code
      6. Requirements
        1. Hardware
        2. Software
        3. Internet Access
      7. A Disclaimer (This Is Important)
    I. FUNDAMENTAL CONCEPTS AND TECHNIQUES
      1. WHAT'S IN IT FOR YOU?
        1. Uncovering the Internet's True Potential
        2. What's in It for Developers?
          1. Webbot Developers Are in Demand
          2. Webbots Are Fun to Write
          3. Webbots Facilitate "Constructive Hacking"
        3. What's in It for Business Leaders?
          1. Customize the Internet for Your Business
          2. Capitalize on the Public's Inexperience with Webbots
          3. Accomplish a Lot with a Small Investment
        4. Final Thoughts
      2. IDEAS FOR WEBBOT PROJECTS
        1. Inspiration from Browser Limitations
          1. Webbots That Aggregate and Filter Information for Relevance
          2. Webbots That Interpret What They Find Online
          3. Webbots That Act on Your Behalf
        2. A Few Crazy Ideas to Get You Started
          1. Help Out a Busy Executive
          2. Save Money by Automating Tasks
          3. Protect Intellectual Property
          4. Monitor Opportunities
          5. Verify Access Rights on a Website
          6. Create an Online Clipping Service
          7. Plot Unauthorized Wi-Fi Networks
          8. Track Web Technologies
          9. Allow Incompatible Systems to Communicate
        3. Final Thoughts
      3. DOWNLOADING WEB PAGES
        1. Think About Files, Not Web Pages
        2. Downloading Files with PHP's Built-in Functions
          1. Downloading Files with fopen() and fgets()
            1. Creating Your First Webbot Script
            2. Executing Webbots in Command Shells
            3. Executing Webbots in Browsers
          2. Downloading Files with file()
        3. Introducing PHP/CURL
          1. Multiple Transfer Protocols
          2. Form Submission
          3. Basic Authentication
          4. Cookies
          5. Redirection
          6. Agent Name Spoofing
          7. Referer Management
          8. Socket Management
        4. Installing PHP/CURL
        5. LIB_http
          1. Familiarizing Yourself with the Default Values
          2. Using LIB_http
            1. http_get()
            2. http_get_withheader()
          3. Learning More About HTTP Headers
          4. Examining LIB_http's Source Code
            1. LIB_http Defaults
            2. LIB_http Functions
        6. Final Thoughts
      4. PARSING TECHNIQUES
        1. Parsing Poorly Written HTML
        2. Standard Parse Routines
        3. Using LIB_parse
          1. Splitting a String at a Delimiter: split_string()
          2. Parsing Text Between Delimiters: return_between()
          3. Parsing a Data Set into an Array: parse_array()
          4. Parsing Attribute Values: get_attribute()
          5. Removing Unwanted Text: remove()
        4. Useful PHP Functions
          1. Detecting Whether a String Is Within Another String
          2. Replacing a Portion of a String with Another String
          3. Parsing Unformatted Text
          4. Measuring the Similarity of Strings
        5. Final Thoughts
          1. Don't Trust a Poorly Coded Web Page
          2. Parse in Small Steps
          3. Don't Render Parsed Text While Debugging
          4. Use Regular Expressions Sparingly
      5. AUTOMATING FORM SUBMISSION
        1. Reverse Engineering Form Interfaces
        2. Form Handlers, Data Fields, Methods, and Event Triggers
          1. Form Handlers
          2. Data Fields
          3. Methods
            1. The GET Method
            2. The POST Method
          4. Event Triggers
        3. Unpredictable Forms
          1. JavaScript Can Change a Form Just Before Submission
          2. Form HTML Is Often Unreadable by Humans
          3. Cookies Aren't Included in the Form, but Can Affect Operation
        4. Analyzing a Form
        5. Final Thoughts
          1. Don't Blow Your Cover
          2. Correctly Emulate Browsers
          3. Avoid Form Errors
      6. MANAGING LARGE AMOUNTS OF DATA
        1. Organizing Data
          1. Naming Conventions
          2. Storing Data in Structured Files
          3. Storing Text in a Database
            1. LIB_mysql
            2. The insert() Function
            3. The update() Function
            4. The exe_sql() Function
          4. Storing Images in a Database
          5. Database or File?
        2. Making Data Smaller
          1. Storing References to Image Files
          2. Compressing Data
            1. Compressing Inbound Files
            2. Compressing Files on Your Hard Drive
          3. Removing Formatting
        3. Thumbnailing Images
        4. Final Thoughts
    II. PROJECTS
      7. PRICE-MONITORING WEBBOTS
        1. The Target
        2. Designing the Parsing Script
        3. Initialization and Downloading the Target
        4. Further Exploration
      8. IMAGE-CAPTURING WEBBOTS
        1. Example Image-Capturing Webbot
        2. Creating the Image-Capturing Webbot
          1. Binary-Safe Download Routine
          2. Directory Structure
          3. The Main Script
            1. Initialization and Target Validation
            2. Defining the Page Base
            3. Creating a Root Directory for Imported File Structure
            4. Parsing Image Tags from the Downloaded Web Page
            5. The Image-Processing Loop
            6. Creating the Local Directory Structure
            7. Downloading and Saving the File
        3. Further Exploration
        4. Final Thoughts
      9. LINK-VERIFICATION WEBBOTS
        1. Creating the Link-Verification Webbot
          1. Initializing the Webbot and Downloading the Target
          2. Setting the Page Base
          3. Parsing the Links
          4. Running a Verification Loop
          5. Generating Fully Resolved URLs
          6. Downloading the Linked Page
          7. Displaying the Page Status
        2. Running the Webbot
          1. LIB_http_codes
          2. LIB_resolve_addresses
        3. Further Exploration
      10. ANONYMOUS BROWSING WEBBOTS
        1. Anonymity with Proxies
          1. Non-proxied Environments
          2. Your Online Exposure
          3. Proxied Environments
        2. The Anonymizer Project
          1. Writing the Anonymizer
            1. Downloading and Preparing the Target Web Page
            2. Modifying the <base> Tag
            3. Parsing the Links
            4. Substituting the Links
            5. Displaying the Proxied Web Page
        3. Final Thoughts
      11. SEARCH-RANKING WEBBOTS
        1. Description of a Search Result Page
        2. What the Search-Ranking Webbot Does
        3. Running the Search-Ranking Webbot
        4. How the Search-Ranking Webbot Works
        5. The Search-Ranking Webbot Script
          1. Initializing Variables
          2. Starting the Loop
          3. Fetching the Search Results
          4. Parsing the Search Results
        6. Final Thoughts
          1. Be Kind to Your Sources
          2. Search Sites May Treat Webbots Differently Than Browsers
          3. Spidering Search Engines Is a Bad Idea
          4. Familiarize Yourself with the Google API
        7. Further Exploration
      12. AGGREGATION WEBBOTS
        1. Choosing Data Sources for Webbots
        2. Example Aggregation Webbot
          1. Familiarizing Yourself with RSS Feeds
          2. Writing the Aggregation Webbot
            1. Downloading and Parsing the Target
            2. Dealing with CDATA
        3. Adding Filtering to Your Aggregation Webbot
        4. Further Exploration
      13. FTP WEBBOTS
        1. Example FTP Webbot
        2. PHP and FTP
        3. Further Exploration
      14. NNTP NEWS WEBBOTS
        1. NNTP Use and History
        2. Webbots and Newsgroups
          1. Identifying News Servers
          2. Identifying Newsgroups
          3. Finding Articles in Newsgroups
          4. Reading an Article from a Newsgroup
        3. Further Exploration
      15. WEBBOTS THAT READ EMAIL
        1. The POP3 Protocol
          1. Logging into a POP3 Mail Server
          2. Reading Mail from a POP3 Mail Server
            1. The POP3 LIST Command
            2. The POP3 RETR Command
            3. Other Useful POP3 Commands
        2. Executing POP3 Commands with a Webbot
        3. Further Exploration
          1. Email-Controlled Webbots
          2. Email Interfaces
      16. WEBBOTS THAT SEND EMAIL
        1. Email, Webbots, and Spam
        2. Sending Mail with SMTP and PHP
          1. Configuring PHP to Send Mail
          2. Sending an Email with mail()
        3. Writing a Webbot That Sends Email Notifications
          1. Keeping Legitimate Mail out of Spam Filters
          2. Sending HTML-Formatted Email
        4. Further Exploration
          1. Using Returned Emails to Prune Access Lists
          2. Using Email as Notification That Your Webbot Ran
          3. Leveraging Wireless Technologies
          4. Writing Webbots That Send Text Messages
      17. CONVERTING A WEBSITE INTO A FUNCTION
        1. Writing a Function Interface
          1. Defining the Interface
          2. Analyzing the Target Web Page
          3. Using describe_zipcode()
            1. Getting the Session Value
            2. Submitting the Form
            3. Parsing and Returning the Result
        2. Final Thoughts
          1. Distributing Resources
          2. Using Standard Interfaces
          3. Designing a Custom Lightweight "Web Service"
    III. ADVANCED TECHNICAL CONSIDERATIONS
      18. SPIDERS
        1. How Spiders Work
        2. Example Spider
        3. LIB_simple_spider
          1. harvest_links()
          2. archive_links()
          3. get_domain()
          4. exclude_link()
        4. Experimenting with the Spider
        5. Adding the Payload
        6. Further Exploration
          1. Save Links in a Database
          2. Separate the Harvest and Payload
          3. Distribute Tasks Across Multiple Computers
          4. Regulate Page Requests
      19. PROCUREMENT WEBBOTS AND SNIPERS
        1. Procurement Webbot Theory
          1. Get Purchase Criteria
          2. Authenticate Buyer
          3. Verify Item
          4. Evaluate Purchase Triggers
          5. Make Purchase
          6. Evaluate Results
        2. Sniper Theory
          1. Get Purchase Criteria
          2. Authenticate Buyer
          3. Verify Item
          4. Synchronize Clocks
          5. Time to Bid?
          6. Submit Bid
          7. Evaluate Results
        3. Testing Your Own Webbots and Snipers
        4. Further Exploration
        5. Final Thoughts
      20. WEBBOTS AND CRYPTOGRAPHY
        1. Designing Webbots That Use Encryption
          1. SSL and PHP Built-in Functions
          2. Encryption and PHP/CURL
        2. A Quick Overview of Web Encryption
        3. Local Certificates
        4. Final Thoughts
      21. AUTHENTICATION
        1. What Is Authentication?
          1. Types of Online Authentication
          2. Strengthening Authentication by Combining Techniques
          3. Authentication and Webbots
        2. Example Scripts and Practice Pages
        3. Basic Authentication
        4. Session Authentication
          1. Authentication with Cookie Sessions
            1. How Cookies Work
            2. Cookie Session Example
          2. Authentication with Query Sessions
        5. Final Thoughts
      22. ADVANCED COOKIE MANAGEMENT
        1. How Cookies Work
        2. PHP/CURL and Cookies
        3. How Cookies Challenge Webbot Design
          1. Purging Temporary Cookies
          2. Managing Multiple Users' Cookies
        4. Further Exploration
      23. SCHEDULING WEBBOTS AND SPIDERS
        1. The Windows Task Scheduler
          1. Preparing Your Webbots to Run as Scheduled Tasks
          2. Scheduling a Webbot to Run Daily
        2. Complex Schedules
        3. Non-Calendar-Based Triggers
        4. Final Thoughts
          1. Determine the Webbot's Best Periodicity
          2. Avoid Single Points of Failure
          3. Add Variety to Your Schedule
    IV. LARGER CONSIDERATIONS
      24. DESIGNING STEALTHY WEBBOTS AND SPIDERS
        1. Why Design a Stealthy Webbot?
          1. Log Files
            1. Access Logs
            2. Error Logs
            3. Custom Logs
          2. Log-Monitoring Software
        2. Stealth Means Simulating Human Patterns
          1. Be Kind to Your Resources
          2. Run Your Webbot During Busy Hours
          3. Don't Run Your Webbot at the Same Time Each Day
          4. Don't Run Your Webbot on Holidays and Weekends
          5. Use Random, Intra-fetch Delays
        3. Final Thoughts
      25. WRITING FAULT-TOLERANT WEBBOTS
        1. Types of Webbot Fault Tolerance
          1. Adapting to Changes in URLs
            1. Avoid Making Requests for Pages That Don't Exist
            2. Follow Page Redirections
            3. Maintain the Accuracy of Referer Values
          2. Adapting to Changes in Page Content
            1. Avoid Position Parsing
            2. Use Relative Parsing
            3. Look for Landmarks That Are Least Likely to Change
          3. Adapting to Changes in Forms
          4. Adapting to Changes in Cookie Management
          5. Adapting to Network Outages and Network Congestion
        2. Error Handlers
      26. DESIGNING WEBBOT-FRIENDLY WEBSITES
        1. Optimizing Web Pages for Search Engine Spiders
          1. Well-Defined Links
          2. Google Bombs and Spam Indexing
          3. Title Tags
          4. Meta Tags
          5. Header Tags
          6. Image alt Attributes
        2. Web Design Techniques That Hinder Search Engine Spiders
          1. JavaScript
          2. Non-ASCII Content
        3. Designing Data-Only Interfaces
          1. XML
          2. Lightweight Data Exchange
            1. How Not to Design a Lightweight Interface
            2. A Safer Method of Passing Variables to Webbots
          3. SOAP
            1. Advantages of SOAP
            2. Disadvantages of SOAP
      27. KILLING SPIDERS
        1. Asking Nicely
          1. Create a Terms of Service Agreement
          2. Use the robots.txt File
          3. Use the Robots Meta Tag
        2. Building Speed Bumps
          1. Selectively Allow Access to Specific Web Agents
          2. Use Obfuscation
          3. Use Cookies, Encryption, JavaScript, and Redirection
          4. Authenticate Users
          5. Update Your Site Often
          6. Embed Text in Other Media
        3. Setting Traps
          1. Create a Spider Trap
          2. Fun Things to Do with Unwanted Spiders
        4. Final Thoughts
      28. KEEPING WEBBOTS OUT OF TROUBLE
        1. It's All About Respect
        2. Copyright
          1. Do Consult Resources
          2. Don't Be an Armchair Lawyer
            1. Copyrights Do Not Have to Be Registered
            2. Assume "All Rights Reserved"
            3. You Cannot Copyright a Fact
            4. You Can Copyright a Collection of Facts if Presented Creatively
            5. You Can Use Some Material Under Fair Use Laws
        3. Trespass to Chattels
        4. Internet Law
        5. Final Thoughts
      A. PHP/CURL REFERENCE
        1. Creating a Minimal PHP/CURL Session
        2. Initiating PHP/CURL Sessions
        3. Setting PHP/CURL Options
          1. CURLOPT_URL
          2. CURLOPT_RETURNTRANSFER
          3. CURLOPT_REFERER
          4. CURLOPT_FOLLOWLOCATION and CURLOPT_MAXREDIRS
          5. CURLOPT_USERAGENT
          6. CURLOPT_NOBODY and CURLOPT_HEADER
          7. CURLOPT_TIMEOUT
          8. CURLOPT_COOKIEFILE and CURLOPT_COOKIEJAR
          9. CURLOPT_HTTPHEADER
          10. CURLOPT_SSL_VERIFYPEER
          11. CURLOPT_USERPWD and CURLOPT_UNRESTRICTED_AUTH
          12. CURLOPT_POST and CURLOPT_POSTFIELDS
          13. CURLOPT_VERBOSE
          14. CURLOPT_PORT
        4. Executing the PHP/CURL Command
          1. Retrieving PHP/CURL Session Information
          2. Viewing PHP/CURL Errors
        5. Closing PHP/CURL Sessions
      B. STATUS CODES
        1. HTTP Codes
        2. NNTP Codes
      C. SMS EMAIL ADDRESSES
    About the Author
    Colophon

Product information

  • Title: Webbots, Spiders, and Screen Scrapers
  • Author(s): Michael Schrenk
  • Release date: March 2007
  • Publisher(s): No Starch Press
  • ISBN: 9781593271206