System Performance Tuning, 2nd Edition

Book description

System Performance Tuning answers one of the most fundamental questions you can ask about your computer: How can I get it to do more work without buying more hardware? In the current economic downturn, performance tuning takes on new importance. It allows system administrators to make the best use of existing systems and minimize the purchase of new equipment. Well-tuned systems save money and time that would otherwise be wasted dealing with slowdowns and errors. Performance tuning always involves compromises; unless system administrators know what the compromises are, they can't make intelligent decisions.

Tuning is an essential skill for system administrators who face the problem of adapting the speed of a computer system to the speed requirements imposed by the real world. It requires a detailed understanding of the inner workings of the computer and its architecture. System Performance Tuning covers two distinct areas: performance tuning, or the art of increasing performance for a specific application, and capacity planning, or deciding what hardware best fulfills a given role. Underpinning both subjects is the science of computer architecture. This book focuses on the operating system, the underlying hardware, and their interactions. Topics covered include:

  • Real and perceived performance problems, introducing capacity planning and performance monitoring (highlighting their strengths and weaknesses).
  • An integrated description of all the major tools at a system administrator's disposal for tracking down system performance problems.
  • Background on modern memory handling techniques, including the memory-caching filesystem implementations in Solaris and AIX. Updated sections on memory conservation and computing memory requirements.
  • In-depth discussion of disk interfaces, bandwidth capacity considerations, and RAID systems.
  • Comprehensive discussion of NFS and greatly expanded discussion of networking.
  • Workload management and code tuning.
  • Special topics such as tuning Web servers for various types of content delivery, and developments in cross-machine parallel computing.
For system administrators who want a hands-on introduction to system performance, this is the book to recommend.

Table of contents

  1. Preface
    1. Who Should Buy This Book?
    2. A Note on Coverage
    3. How to Read This Book
      1. This Book as a Story
      2. This Book as a Reference
    4. Organization
    5. Typographic Conventions
    6. Comments and Questions
    7. Personal Comments and Acknowledgments
      1. Acknowledgments from Mike Loukides
  2. 1. An Introduction to Performance Tuning
    1. 1.1. An Introduction to Computer Architecture
      1. 1.1.1. Levels of Transformation
        1. 1.1.1.1. Software: algorithms and languages
        2. 1.1.1.2. The Instruction Set Architecture
        3. 1.1.1.3. Hardware: microarchitecture, circuits, and devices
      2. 1.1.2. The von Neumann Model
      3. 1.1.3. Caches and the Memory Hierarchy
      4. 1.1.4. The Benefits of a 64-Bit Architecture
        1. 1.1.4.1. What does it mean to be 64-bit?
        2. 1.1.4.2. Performance ramifications
    2. 1.2. Principles of Performance Tuning
      1. 1.2.1. Principle 0: Understand Your Environment
      2. 1.2.2. Principle 1: TANSTAAFL!
      3. 1.2.3. Principle 2: Throughput Versus Latency
      4. 1.2.4. Principle 3: Do Not Overutilize a Resource
      5. 1.2.5. Principle 4: Design Tests Carefully
    3. 1.3. Static Performance Tuning
      1. 1.3.1. Other Miscellaneous Things to Check
    4. 1.4. Concluding Thoughts
  3. 2. Workflow Management
    1. 2.1. Workflow Characterization
      1. 2.1.1. Simple Commands
      2. 2.1.2. Process Accounting
        1. 2.1.2.1. Enabling process accounting
        2. 2.1.2.2. Reviewing accounting records
      3. 2.1.3. Automating sar
        1. 2.1.3.1. Enabling sar
        2. 2.1.3.2. Retrieving data
      4. 2.1.4. Virtual Adrian
      5. 2.1.5. Network Pattern Analysis
        1. 2.1.5.1. Pattern 1: request-response
        2. 2.1.5.2. Pattern 1B: inverse request-response
        3. 2.1.5.3. Pattern 2: data transfer
        4. 2.1.5.4. Pattern 3: message passing
        5. 2.1.5.5. Packet size distributions
    2. 2.2. Workload Control
      1. 2.2.1. Education
        1. 2.2.1.1. Usage and performance agreements
      2. 2.2.2. The maxusers and the pt_cnt Parameters
      3. 2.2.3. Limiting Users
        1. 2.2.3.1. Quotas
        2. 2.2.3.2. Environmental limits
      4. 2.2.4. Complex Environments
    3. 2.3. Benchmarking
      1. 2.3.1. MIPS and Megaflops
        1. 2.3.1.1. MIPS
        2. 2.3.1.2. Megaflops
      2. 2.3.2. Component-Specific Benchmarks
        1. 2.3.2.1. Linpack
        2. 2.3.2.2. SPECint and SPECfp
      3. 2.3.3. Commercial Workload Benchmarks
        1. 2.3.3.1. TPC
        2. 2.3.3.2. SPECweb99
      4. 2.3.4. User Benchmarks
        1. 2.3.4.1. Choose your problem set
        2. 2.3.4.2. Choose your runtime
        3. 2.3.4.3. Automate heavily
        4. 2.3.4.4. Set benchmark runtime rules
    4. 2.4. Concluding Thoughts
  4. 3. Processors
    1. 3.1. Microprocessor Architecture
      1. 3.1.1. Clock Rates
      2. 3.1.2. Pipelining
        1. 3.1.2.1. Variable-length instructions
        2. 3.1.2.2. Branches
      3. 3.1.3. The Second Generation of RISC Processor Design
    2. 3.2. Caching
      1. 3.2.1. The Cache Hierarchy
      2. 3.2.2. Cache Organization and Operation
      3. 3.2.3. Associativity
      4. 3.2.4. Locality and “Cache-Busters”
        1. 3.2.4.1. Unit stride
        2. 3.2.4.2. Linked lists
        3. 3.2.4.3. Cache-aligned block copy problems
      5. 3.2.5. The Cache Size Anomaly
    3. 3.3. Process Scheduling
      1. 3.3.1. The System V Model: The Linux Model
        1. 3.3.1.1. Finding a process’s priority
        2. 3.3.1.2. Adjusting a process’s effective priority
        3. 3.3.1.3. Modifications for SMP systems
      2. 3.3.2. Multilayered Scheduling Classes: The Solaris Model
        1. 3.3.2.1. The Solaris threading model
        2. 3.3.2.2. Scheduling classes
        3. 3.3.2.3. The dispatcher
        4. 3.3.2.4. Checking a process’s priority
        5. 3.3.2.5. Tuning the dispatch tables
        6. 3.3.2.6. Adjusting priorities
    4. 3.4. Multiprocessing
      1. 3.4.1. Processor Communication
        1. 3.4.1.1. Buses
        2. 3.4.1.2. Crossbars
        3. 3.4.1.3. UltraSPARC-III systems: Fireplane
        4. 3.4.1.4. “Interconnectionless” architectures
      2. 3.4.2. Operating System Multiprocessing
      3. 3.4.3. Threads
      4. 3.4.4. Locking
      5. 3.4.5. Cache Influences on Multiprocessor Performance
    5. 3.5. Peripheral Interconnects
      1. 3.5.1. SBus
        1. 3.5.1.1. Clock speed
        2. 3.5.1.2. Burst transfer size
        3. 3.5.1.3. Transfer mode
        4. 3.5.1.4. Summary of SBus implementations
        5. 3.5.1.5. SBus card utilization
      2. 3.5.2. PCI
        1. 3.5.2.1. PCI bus transactions
        2. 3.5.2.2. CompactPCI
      3. 3.5.3. A Summary of Peripheral Interconnects
      4. 3.5.4. Interrupts in Linux
      5. 3.5.5. Interrupts in Solaris
    6. 3.6. Processor Performance Tools
      1. 3.6.1. The Load Average
      2. 3.6.2. Process Queues
      3. 3.6.3. Specific Breakdowns
      4. 3.6.4. Multiprocessor Systems
      5. 3.6.5. top and prstat
      6. 3.6.6. Lock Statistics
      7. 3.6.7. Controlling Processors in Solaris
        1. 3.6.7.1. psrinfo
        2. 3.6.7.2. psradm
        3. 3.6.7.3. psrset
      8. 3.6.8. Peripheral Interconnect Performance Tools
      9. 3.6.9. Advanced Processor Performance Statistics
    7. 3.7. Concluding Thoughts
  5. 4. Memory
    1. 4.1. Implementations of Physical Memory
    2. 4.2. Virtual Memory Architecture
      1. 4.2.1. Pages
      2. 4.2.2. Segments
      3. 4.2.3. Estimating Memory Requirements
      4. 4.2.4. Address Space Layout
      5. 4.2.5. The Free List
        1. 4.2.5.1. Virtual memory management in Linux
      6. 4.2.6. Page Coloring
      7. 4.2.7. Translation Lookaside Buffers (TLB)
    3. 4.3. Paging and Swapping
      1. 4.3.1. The Decline and Fall of Interactive Performance
      2. 4.3.2. Swap Space
        1. 4.3.2.1. Anonymous memory
        2. 4.3.2.2. Sizing swap space
        3. 4.3.2.3. Organizing swap space
        4. 4.3.2.4. Swapfiles
    4. 4.4. Consumers of Memory
      1. 4.4.1. Filesystem Caching
      2. 4.4.2. Filesystem Cache Writes: fsflush and bdflush
        1. 4.4.2.1. Solaris: fsflush
        2. 4.4.2.2. Linux: bdflush
      3. 4.4.3. Interactions Between the Filesystem Cache and Memory
        1. 4.4.3.1. Priority paging
        2. 4.4.3.2. Cyclic caching
      4. 4.4.4. Interactions Between the Filesystem Cache and Disk
    5. 4.5. Tools for Memory Performance Analysis
      1. 4.5.1. Memory Benchmarking
        1. 4.5.1.1. STREAM
        2. 4.5.1.2. lmbench
      2. 4.5.2. Examining Memory Usage System-Wide
        1. 4.5.2.1. vmstat
        2. 4.5.2.2. sar
        3. 4.5.2.3. memstat
      3. 4.5.3. Examining Memory Usage of Processes
        1. 4.5.3.1. Solaris tools
      4. 4.5.4. Linux Tools
    6. 4.6. Concluding Thoughts
  6. 5. Disks
    1. 5.1. Disk Architecture
      1. 5.1.1. Zoned Bit Rate (ZBR) Recording
        1. 5.1.1.1. Disk caches
      2. 5.1.2. Access Patterns
      3. 5.1.3. Reads
      4. 5.1.4. Writes
        1. 5.1.4.1. UFS write throttling
      5. 5.1.5. Performance Specifications
        1. 5.1.5.1. One million bytes is a megabyte?
        2. 5.1.5.2. Burst speed versus internal transfer speed
        3. 5.1.5.3. Internal transfer speed versus actual speed
        4. 5.1.5.4. Average seek time
        5. 5.1.5.5. Storage capacity and access capacity
    2. 5.2. Interfaces
      1. 5.2.1. IDE
        1. 5.2.1.1. Improving IDE performance in Linux
        2. 5.2.1.2. Limitations of IDE drives
      2. 5.2.2. IPI
      3. 5.2.3. SCSI
        1. 5.2.3.1. Multi-initiator SCSI
        2. 5.2.3.2. Bus transactions
        3. 5.2.3.3. Synchronous versus asynchronous transfers
        4. 5.2.3.4. Termination
        5. 5.2.3.5. Command queuing
        6. 5.2.3.6. Differential signaling
        7. 5.2.3.7. Bus utilization
        8. 5.2.3.8. Mixing different speed SCSI devices
        9. 5.2.3.9. SCSI implementations
      4. 5.2.4. Fibre Channel
      5. 5.2.5. IEEE 1394 (FireWire)
      6. 5.2.6. Universal Serial Bus (USB)
    3. 5.3. Common Performance Problems
      1. 5.3.1. High I/O Skew
      2. 5.3.2. Memory-Disk Interactions
      3. 5.3.3. High Service Times
    4. 5.4. Filesystems
      1. 5.4.1. vnodes, inodes, and rnodes
        1. 5.4.1.1. The directory name lookup cache (DNLC)
      2. 5.4.2. The Unix Filesystem (UFS)
        1. 5.4.2.1. inode density
        2. 5.4.2.2. Filesystem cluster size
        3. 5.4.2.3. Minimum free space
        4. 5.4.2.4. Rotational delay
        5. 5.4.2.5. fstyp and tunefs
        6. 5.4.2.6. Bypassing memory caching
        7. 5.4.2.7. The inode cache
        8. 5.4.2.8. The buffer cache
      3. 5.4.3. Logging Filesystems
        1. 5.4.3.1. Solstice DiskSuite
        2. 5.4.3.2. Solaris
      4. 5.4.4. The Second Extended Filesystem (EXT2)
      5. 5.4.5. The Third Extended Filesystem (EXT3)
        1. 5.4.5.1. Tuning the elevator algorithm
        2. 5.4.5.2. Choosing a journaling mode
        3. 5.4.5.3. Transitioning from ext2 to ext3
      6. 5.4.6. The Reiser Filesystem (ReiserFS)
        1. 5.4.6.1. Tail packing
      7. 5.4.7. The Journaled Filesystem (JFS)
      8. 5.4.8. The Temporary Filesystem (tmpfs)
      9. 5.4.9. Veritas VxFS
      10. 5.4.10. Caching Filesystems (CacheFS)
        1. 5.4.10.1. Minimizing seek times by filesystem layout
    5. 5.5. Tools for Analysis
      1. 5.5.1. Enabling Disk Caches
      2. 5.5.2. Disk Performance Benchmarking
        1. 5.5.2.1. hdparm
        2. 5.5.2.2. tiobench
        3. 5.5.2.3. iozone
      3. 5.5.3. Second-Time-Through Improvements?
      4. 5.5.4. Using iostat
        1. 5.5.4.1. Historical limitations: iostat and queuing terminology
      5. 5.5.5. Using sar
      6. 5.5.6. I/O Tracing
        1. 5.5.6.1. Using the kernel probes
        2. 5.5.6.2. Using process filtering
        3. 5.5.6.3. Restarting prex
    6. 5.6. Concluding Thoughts
  7. 6. Disk Arrays
    1. 6.1. Terminology
    2. 6.2. RAID Levels
      1. 6.2.1. RAID 0: Striping
      2. 6.2.2. RAID 1: Mirroring
      3. 6.2.3. RAID 2: Hamming Code Arrays
      4. 6.2.4. RAID 3: Parity-Protected Striping
      5. 6.2.5. RAID 4: Parity-Protected Striping with Independent Disks
      6. 6.2.6. RAID 5: Distributed, Parity-Protected Striping
      7. 6.2.7. RAID 10: Mirrored Striping
    3. 6.3. Software Versus Hardware
      1. 6.3.1. Software
      2. 6.3.2. Hardware
        1. 6.3.2.1. RAID overlap
    4. 6.4. A Summary of Disk Array Design
      1. 6.4.1. Choosing a RAID Level
    5. 6.5. Software RAID Implementations
      1. 6.5.1. Solaris: Solstice DiskSuite
        1. 6.5.1.1. State databases
        2. 6.5.1.2. RAID 0: stripes
        3. 6.5.1.3. RAID 1: mirrors
        4. 6.5.1.4. RAID 5 arrays
        5. 6.5.1.5. Hot spare pools
      2. 6.5.2. Linux: md
        1. 6.5.2.1. Persistent superblocks
        2. 6.5.2.2. Chunk size
        3. 6.5.2.3. Linear mode
        4. 6.5.2.4. RAID 0: stripes
        5. 6.5.2.5. RAID 1: mirrors
        6. 6.5.2.6. RAID 5 arrays
        7. 6.5.2.7. Creating the array
        8. 6.5.2.8. Autodetection
        9. 6.5.2.9. Booting from an array device
    6. 6.6. RAID Recipes
      1. 6.6.1. Attribute-Intensive Home Directories
      2. 6.6.2. Data-Intensive Home Directories
      3. 6.6.3. High Performance Computing
      4. 6.6.4. Databases
      5. 6.6.5. Case Study: Applications Doing Large I/O
    7. 6.7. Concluding Thoughts
  8. 7. Networks
    1. 7.1. Network Principles
      1. 7.1.1. The OSI Model
    2. 7.2. Physical Media
      1. 7.2.1. UTP
        1. 7.2.1.1. A note on terminology: plugs and jacks
      2. 7.2.2. Fiber
    3. 7.3. Network Interfaces
      1. 7.3.1. Ethernet
        1. 7.3.1.1. Fundamentals of Ethernet signaling
        2. 7.3.1.2. Topologies
        3. 7.3.1.3. 10BASE-T
        4. 7.3.1.4. 100BASE-T4
        5. 7.3.1.5. 100BASE-TX
        6. 7.3.1.6. Gigabit Ethernet topologies
        7. 7.3.1.7. The 5-4-3 rule
        8. 7.3.1.8. Collisions
        9. 7.3.1.9. Autonegotiation
        10. 7.3.1.10. Displaying and setting modes
      2. 7.3.2. FDDI
      3. 7.3.3. ATM
      4. 7.3.4. Ethernet Versus ATM/FDDI
    4. 7.4. Network Protocols
      1. 7.4.1. IP
        1. 7.4.1.1. Fragmentation
        2. 7.4.1.2. Time-to-live
        3. 7.4.1.3. Protocols
        4. 7.4.1.4. IP addresses
        5. 7.4.1.5. Classful addressing
        6. 7.4.1.6. Subnetting classful networks
        7. 7.4.1.7. Moving to a classless world
        8. 7.4.1.8. Routing
      2. 7.4.2. TCP
        1. 7.4.2.1. Connection initiation and SYN flooding
        2. 7.4.2.2. Path MTU discovery and the maximum segment size
        3. 7.4.2.3. Buffers, watermarks, and windows
        4. 7.4.2.4. Retransmissions
        5. 7.4.2.5. Deferring acknowledgments
        6. 7.4.2.6. Window congestion and the slow start algorithm
        7. 7.4.2.7. TCP timers and intervals
        8. 7.4.2.8. The Nagle algorithm
      3. 7.4.3. UDP
      4. 7.4.4. TCP Versus UDP for Network Transport
    5. 7.5. NFS
      1. 7.5.1. Characterizing NFS Activity
      2. 7.5.2. Tuning Clients
        1. 7.5.2.1. Obtaining statistics for an NFS-mounted filesystem
        2. 7.5.2.2. The rnode cache
        3. 7.5.2.3. Tuning NFS clients for bursty transfers
        4. 7.5.2.4. Tuning NFS clients for sequential transfer
      3. 7.5.3. Tuning Servers
        1. 7.5.3.1. Designing disk subsystems for NFS servers
        2. 7.5.3.2. NVRAM caching
        3. 7.5.3.3. Memory requirements
        4. 7.5.3.4. The two basic types of NFS servers
        5. 7.5.3.5. Tuning the number of NFS threads
        6. 7.5.3.6. Adjusting the buffer cache
        7. 7.5.3.7. The maxusers parameter
        8. 7.5.3.8. The directory name lookup cache (DNLC)
        9. 7.5.3.9. The inode cache
        10. 7.5.3.10. Observing NFS server performance with nfsstat
      4. 7.5.4. Wide Area Networks and NFS
    6. 7.6. CIFS via Unix
    7. 7.7. Concluding Thoughts
  9. 8. Code Tuning
    1. 8.1. The Two Critical Approaches
      1. 8.1.1. String Searching Algorithms
        1. 8.1.1.1. Algorithm 1: naive searching
        2. 8.1.1.2. Algorithm 2: Knuth-Morris-Pratt searching
        3. 8.1.1.3. Algorithm 3: Boyer-Moore searching
      2. 8.1.2. Caveats in Optimization
    2. 8.2. Techniques for Code Analysis
      1. 8.2.1. Application Timing: time, timex, and ptime
        1. 8.2.1.1. time
        2. 8.2.1.2. timex
        3. 8.2.1.3. ptime
        4. 8.2.1.4. Mechanisms of timing
      2. 8.2.2. Timing-Specific Code Sections
        1. 8.2.2.1. Timing via gethrtime
        2. 8.2.2.2. Timing via the TICK register
      3. 8.2.3. Probe-Based Analysis: Solaris TNF
        1. 8.2.3.1. Inserting probes
        2. 8.2.3.2. Caveats
      4. 8.2.4. Profiler-Based Analysis: gprof
        1. 8.2.4.1. Implementing profiling
        2. 8.2.4.2. Compiling with profiling
        3. 8.2.4.3. Execution with profiling
        4. 8.2.4.4. Profile analysis
        5. 8.2.4.5. Caveats
    3. 8.3. Optimization Patterns
      1. 8.3.1. Arithmetic
      2. 8.3.2. Loops
      3. 8.3.3. Strings
    4. 8.4. Interacting with Compilers
      1. 8.4.1. Typical Optimizations: -fast
      2. 8.4.2. Optimization Level: -xO
      3. 8.4.3. Specifying Instruction Set Architecture: -xarch
      4. 8.4.4. Specifying Processor Architecture: -xchip
      5. 8.4.5. Function Inlining: -xinline and -xcrossfile
      6. 8.4.6. Data Dependency Analysis: -xdepend
      7. 8.4.7. Vector Operations: -xvector
      8. 8.4.8. Default Floating Point Constant Size: -xsfpconst
      9. 8.4.9. Data Prefetching: -xprefetch
      10. 8.4.10. Quick and Dirty Compiler Flags
      11. 8.4.11. Profiling Feedback
    5. 8.5. Concluding Thoughts
  10. 9. Instant Tuning
    1. 9.1. Top Five Tuning Tips
      1. 9.1.1. Where Is the Disk Bottleneck?
      2. 9.1.2. Do You Have Enough Memory?
      3. 9.1.3. Are the Processors Overloaded?
      4. 9.1.4. Are Processes Blocked on Disk I/O?
      5. 9.1.5. Does System Time Heavily Dominate User Time?
    2. 9.2. Instant Tuning Recipes
      1. 9.2.1. Single-User Development Workstations
        1. 9.2.1.1. Filesystems
        2. 9.2.1.2. Swap space
        3. 9.2.1.3. Kernel tuning
      2. 9.2.2. Workgroup Servers
        1. 9.2.2.1. Memory
        2. 9.2.2.2. Disks
        3. 9.2.2.3. Filesystems
        4. 9.2.2.4. Swap space
        5. 9.2.2.5. Optimizing NFS
        6. 9.2.2.6. Kernel tuning
      3. 9.2.3. Web Servers
        1. 9.2.3.1. Memory
        2. 9.2.3.2. Disks
        3. 9.2.3.3. Filesystems
        4. 9.2.3.4. Swap space
        5. 9.2.3.5. Networks
        6. 9.2.3.6. Kernel tuning
        7. 9.2.3.7. Special case: proxy servers
  11. About the Authors
  12. Colophon
  13. Copyright

Product information

  • Title: System Performance Tuning, 2nd Edition
  • Author(s): Gian-Paolo D. Musumeci, Mike Loukides
  • Release date: February 2002
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9780596002848