Write Great Code

Book description

If you've ever asked someone for the secret to writing efficient, well-crafted software, the answer you've probably gotten is "learn assembly language programming." By learning assembly language, you learn how the machine really operates, and that knowledge will help you write better high-level language code. A dirty little secret assembly language programmers rarely admit, however, is that what you really need to learn is machine organization, not assembly language programming. Write Great Code, Volume 1, the first in a series from assembly language expert Randall Hyde, dives right into machine organization without the extra overhead of learning assembly language programming at the same time. And because Write Great Code, Volume 1 concentrates on machine organization rather than assembly language, readers learn in greater depth those subjects that are language-independent and of concern to the high-level language programmer. Write Great Code, Volume 1 will help programmers make wiser choices about programming statements and data types when writing software, no matter which language they use.

Table of contents

  1. Write Great Code, Volume 1
  2. Acknowledgments
  3. 1. What You Need to Know to Write Great Code
    1. 1.1 The Write Great Code Series
    2. 1.2 What This Volume Covers
    3. 1.3 Assumptions This Volume Makes
    4. 1.4 Characteristics of Great Code
    5. 1.5 The Environment for This Volume
    6. 1.6 For More Information
  4. 2. Numeric Representation
    1. 2.1 What Is a Number?
    2. 2.2 Numbering Systems
      1. 2.2.1 The Decimal Positional Numbering System
      2. 2.2.2 Radix (Base)
      3. 2.2.3 The Binary Numbering System
        1. 2.2.3.1 Converting Between Decimal and Binary Representation
        2. 2.2.3.2 Making Binary Numbers Easier to Read
        3. 2.2.3.3 Binary Representation in Programming Languages
      4. 2.2.4 The Hexadecimal Numbering System
        1. 2.2.4.1 Hexadecimal Representation in Programming Languages
        2. 2.2.4.2 Converting Between Hexadecimal and Binary Representations
      5. 2.2.5 The Octal (Base-8) Numbering System
        1. 2.2.5.1 Octal Representation in Programming Languages
        2. 2.2.5.2 Converting Between Octal and Binary Representation
    3. 2.3 Numeric/String Conversions
    4. 2.4 Internal Numeric Representation
      1. 2.4.1 Bits
      2. 2.4.2 Bit Strings
    5. 2.5 Signed and Unsigned Numbers
    6. 2.6 Some Useful Properties of Binary Numbers
    7. 2.7 Sign Extension, Zero Extension, and Contraction
    8. 2.8 Saturation
    9. 2.9 Binary-Coded Decimal (BCD) Representation
    10. 2.10 Fixed-Point Representation
    11. 2.11 Scaled Numeric Formats
    12. 2.12 Rational Representation
    13. 2.13 For More Information
  5. 3. Binary Arithmetic and Bit Operations
    1. 3.1 Arithmetic Operations on Binary and Hexadecimal Numbers
      1. 3.1.1 Adding Binary Values
      2. 3.1.2 Subtracting Binary Values
      3. 3.1.3 Multiplying Binary Values
      4. 3.1.4 Dividing Binary Values
    2. 3.2 Logical Operations on Bits
    3. 3.3 Logical Operations on Binary Numbers and Bit Strings
    4. 3.4 Useful Bit Operations
      1. 3.4.1 Testing Bits in a Bit String Using AND
      2. 3.4.2 Testing a Set of Bits for Zero/Not Zero Using AND
      3. 3.4.3 Comparing a Set of Bits Within a Binary String
      4. 3.4.4 Creating Modulo-n Counters Using AND
    5. 3.5 Shifts and Rotates
    6. 3.6 Bit Fields and Packed Data
    7. 3.7 Packing and Unpacking Data
    8. 3.8 For More Information
  6. 4. Floating-Point Representation
    1. 4.1 Introduction to Floating-Point Arithmetic
    2. 4.2 IEEE Floating-Point Formats
      1. 4.2.1 Single-Precision Floating-Point Format
      2. 4.2.2 Double-Precision Floating-Point Format
      3. 4.2.3 Extended-Precision Floating-Point Format
    3. 4.3 Normalization and Denormalized Values
    4. 4.4 Rounding
    5. 4.5 Special Floating-Point Values
    6. 4.6 Floating-Point Exceptions
    7. 4.7 Floating-Point Operations
      1. 4.7.1 Floating-Point Representation
      2. 4.7.2 Floating-Point Addition and Subtraction
      3. 4.7.3 Floating-Point Multiplication and Division
        1. 4.7.3.1 Floating-Point Multiplication
        2. 4.7.3.2 Floating-Point Division
    8. 4.8 For More Information
  7. 5. Character Representation
    1. 5.1 Character Data
      1. 5.1.1 The ASCII Character Set
      2. 5.1.2 The EBCDIC Character Set
      3. 5.1.3 Double-Byte Character Sets
      4. 5.1.4 The Unicode Character Set
    2. 5.2 Character Strings
      1. 5.2.1 Character String Formats
        1. 5.2.1.1 Zero-Terminated Strings
        2. 5.2.1.2 Length-Prefixed Strings
        3. 5.2.1.3 Seven-Bit Strings
        4. 5.2.1.4 HLA Strings
        5. 5.2.1.5 Descriptor-Based Strings
      2. 5.2.2 Types of Strings: Static, Pseudo-Dynamic, and Dynamic
        1. 5.2.2.1 Static Strings
        2. 5.2.2.2 Pseudo-Dynamic Strings
        3. 5.2.2.3 Dynamic Strings
      3. 5.2.3 Reference Counting for Strings
      4. 5.2.4 Delphi/Kylix Strings
      5. 5.2.5 Creating Your Own String Formats
    3. 5.3 Character Sets
      1. 5.3.1 Powerset Representation of Character Sets
      2. 5.3.2 List Representation of Character Sets
    4. 5.4 Designing Your Own Character Set
      1. 5.4.1 Designing an Efficient Character Set
      2. 5.4.2 Grouping the Character Codes for Numeric Digits
      3. 5.4.3 Grouping Alphabetic Characters
      4. 5.4.4 Comparing Alphabetic Characters
      5. 5.4.5 Other Character Groupings
    5. 5.5 For More Information
  8. 6. Memory Organization and Access
    1. 6.1 The Basic System Components
      1. 6.1.1 The System Bus
        1. 6.1.1.1 The Data Bus
      2. 6.1.2 The Address Bus
      3. 6.1.3 The Control Bus
    2. 6.2 Physical Organization of Memory
      1. 6.2.1 8-Bit Data Buses
      2. 6.2.2 16-Bit Data Buses
      3. 6.2.3 32-Bit Data Buses
      4. 6.2.4 64-Bit Buses
      5. 6.2.5 Small Accesses on Non-80x86 Processors
    3. 6.3 Big Endian Versus Little Endian Organization
    4. 6.4 The System Clock
      1. 6.4.1 Memory Access and the System Clock
      2. 6.4.2 Wait States
      3. 6.4.3 Cache Memory
    5. 6.5 CPU Memory Access
      1. 6.5.1 The Direct Memory Addressing Mode
      2. 6.5.2 The Indirect Addressing Mode
      3. 6.5.3 The Indexed Addressing Mode
      4. 6.5.4 The Scaled Indexed Addressing Modes
    6. 6.6 For More Information
  9. 7. Composite Data Types and Memory Objects
    1. 7.1 Pointer Types
      1. 7.1.1 Pointer Implementation
      2. 7.1.2 Pointers and Dynamic Memory Allocation
      3. 7.1.3 Pointer Operations and Pointer Arithmetic
        1. 7.1.3.1 Adding an Integer to a Pointer
        2. 7.1.3.2 Subtracting an Integer from a Pointer
        3. 7.1.3.3 Subtracting a Pointer from a Pointer
        4. 7.1.3.4 Comparing Pointers
    2. 7.2 Arrays
      1. 7.2.1 Array Declarations
      2. 7.2.2 Array Representation in Memory
      3. 7.2.3 Accessing Elements of an Array
      4. 7.2.4 Multidimensional Arrays
        1. 7.2.4.1 Row-Major Ordering
        2. 7.2.4.2 Column-Major Ordering
        3. 7.2.4.3 Declaring Multidimensional Arrays
        4. 7.2.4.4 Accessing Elements of a Multidimensional Array
    3. 7.3 Records/Structures
      1. 7.3.1 Records in Pascal/Delphi
      2. 7.3.2 Records in C/C++
      3. 7.3.3 Records in HLA
      4. 7.3.4 Memory Storage of Records
    4. 7.4 Discriminant Unions
      1. 7.4.1 Unions in C/C++
      2. 7.4.2 Unions in Pascal/Delphi/Kylix
      3. 7.4.3 Unions in HLA
      4. 7.4.4 Memory Storage of Unions
      5. 7.4.5 Other Uses of Unions
    5. 7.5 For More Information
  10. 8. Boolean Logic and Digital Design
    1. 8.1 Boolean Algebra
      1. 8.1.1 The Boolean Operators
      2. 8.1.2 Boolean Postulates
      3. 8.1.3 Boolean Operator Precedence
    2. 8.2 Boolean Functions and Truth Tables
    3. 8.3 Function Numbers
    4. 8.4 Algebraic Manipulation of Boolean Expressions
    5. 8.5 Canonical Forms
      1. 8.5.1 Sum of Minterms Canonical Form and Truth Tables
      2. 8.5.2 Deriving the Sum of Minterms Canonical Form Algebraically
      3. 8.5.3 Product of Maxterms Canonical Form
    6. 8.6 Simplification of Boolean Functions
    7. 8.7 What Does This Have to Do with Computers, Anyway?
      1. 8.7.1 Correspondence Between Electronic Circuits and Boolean Functions
      2. 8.7.2 Combinatorial Circuits
        1. 8.7.2.1 Addition Circuits
        2. 8.7.2.2 Seven-Segment LED Decoders
        3. 8.7.2.3 Decoding Memory Addresses
        4. 8.7.2.4 Decoding Machine Instructions
      3. 8.7.3 Sequential and Clocked Logic
        1. 8.7.3.1 The Set/Reset Flip-Flop
        2. 8.7.3.2 The D Flip-Flop
    8. 8.8 For More Information
  11. 9. CPU Architecture
    1. 9.1 Basic CPU Design
    2. 9.2 Decoding and Executing Instructions: Random Logic Versus Microcode
    3. 9.3 Executing Instructions, Step by Step
      1. 9.3.1 The mov Instruction
      2. 9.3.2 The add Instruction
      3. 9.3.3 The jnz Instruction
      4. 9.3.4 The loop Instruction
    4. 9.4 Parallelism — The Key to Faster Processing
      1. 9.4.1 The Prefetch Queue
        1. 9.4.1.1 Saving Fetched Bytes
        2. 9.4.1.2 Using Unused Bus Cycles
        3. 9.4.1.3 Overlapping Instructions
        4. 9.4.1.4 Summary of Background Prefetch Events
      2. 9.4.2 Conditions That Hinder the Performance of the Prefetch Queue
      3. 9.4.3 Pipelining — Overlapping the Execution of Multiple Instructions
        1. 9.4.3.1 A Typical Pipeline
        2. 9.4.3.2 Stalls in a Pipeline
      4. 9.4.4 Instruction Caches — Providing Multiple Paths to Memory
      5. 9.4.5 Pipeline Hazards
      6. 9.4.6 Superscalar Operation — Executing Instructions in Parallel
      7. 9.4.7 Out-of-Order Execution
      8. 9.4.8 Register Renaming
      9. 9.4.9 Very Long Instruction Word (VLIW) Architecture
      10. 9.4.10 Parallel Processing
      11. 9.4.11 Multiprocessing
    5. 9.5 For More Information
  12. 10. Instruction Set Architecture
    1. 10.1 The Importance of the Design of the Instruction Set
    2. 10.2 Basic Instruction Design Goals
      1. 10.2.1 Choosing Opcode Length
      2. 10.2.2 Planning for the Future
      3. 10.2.3 Choosing Instructions
      4. 10.2.4 Assigning Opcodes to Instructions
    3. 10.3 The Y86 Hypothetical Processor
      1. 10.3.1 Y86 Limitations
      2. 10.3.2 Y86 Instructions
        1. 10.3.2.1 The mov Instruction
        2. 10.3.2.2 Arithmetic and Logical Instructions
        3. 10.3.2.3 Control Transfer Instructions
        4. 10.3.2.4 Miscellaneous Instructions
      3. 10.3.3 Addressing Modes on the Y86
      4. 10.3.4 Encoding Y86 Instructions
        1. 10.3.4.1 Eight Generic Y86 Instructions
        2. 10.3.4.2 Using the Special Expansion Opcode
      5. 10.3.5 Examples of Encoding Y86 Instructions
        1. 10.3.5.1 The add Instruction
        2. 10.3.5.2 The mov Instruction
        3. 10.3.5.3 The not Instruction
        4. 10.3.5.4 The Jump Instructions
        5. 10.3.5.5 The Zero-Operand Instructions
      6. 10.3.6 Extending the Y86 Instruction Set
    4. 10.4 Encoding 80x86 Instructions
      1. 10.4.1 Encoding Instruction Operands
      2. 10.4.2 Encoding the add Instruction — Some Examples
      3. 10.4.3 Encoding Immediate Operands
      4. 10.4.4 Encoding 8-, 16-, and 32-Bit Operands
      5. 10.4.5 Alternate Encodings for Instructions
    5. 10.5 Implications of Instruction Set Design to the Programmer
    6. 10.6 For More Information
  13. 11. Memory Architecture and Organization
    1. 11.1 The Memory Hierarchy
    2. 11.2 How the Memory Hierarchy Operates
    3. 11.3 Relative Performance of Memory Subsystems
    4. 11.4 Cache Architecture
      1. 11.4.1 Direct-Mapped Cache
      2. 11.4.2 Fully Associative Cache
      3. 11.4.3 n-Way Set Associative Cache
      4. 11.4.4 Matching the Caching Scheme to the Type of Data Access
      5. 11.4.5 Cache Line Replacement Policies
      6. 11.4.6 Writing Data to Memory
      7. 11.4.7 Cache Use and Software
    5. 11.5 Virtual Memory, Protection, and Paging
    6. 11.6 Thrashing
    7. 11.7 NUMA and Peripheral Devices
    8. 11.8 Writing Software That Is Cognizant of the Memory Hierarchy
    9. 11.9 Run-Time Memory Organization
      1. 11.9.1 Static and Dynamic Objects, Binding, and Lifetime
      2. 11.9.2 The Code, Read-Only, and Constant Sections
      3. 11.9.3 The Static Variables Section
      4. 11.9.4 The Uninitialized Storage (BSS) Section
      5. 11.9.5 The Stack Section
      6. 11.9.6 The Heap Section and Dynamic Memory Allocation
        1. 11.9.6.1 Memory Allocation
        2. 11.9.6.2 Garbage Collection
        3. 11.9.6.3 The OS and Memory Allocation
        4. 11.9.6.4 Heap Memory Overhead
    10. 11.10 For More Information
  14. 12. Input and Output (I/O)
    1. 12.1 Connecting a CPU to the Outside World
    2. 12.2 Other Ways to Connect Ports to the System
    3. 12.3 I/O Mechanisms
      1. 12.3.1 Memory-Mapped I/O
      2. 12.3.2 I/O and the Cache
      3. 12.3.3 I/O-Mapped Input/Output
      4. 12.3.4 Direct Memory Access (DMA)
    4. 12.4 I/O Speed Hierarchy
    5. 12.5 System Buses and Data Transfer Rates
      1. 12.5.1 Performance of the PCI Bus
      2. 12.5.2 Performance of the ISA Bus
      3. 12.5.3 The AGP Bus
    6. 12.6 Buffering
    7. 12.7 Handshaking
    8. 12.8 Time-outs on an I/O Port
    9. 12.9 Interrupts and Polled I/O
    10. 12.10 Protected Mode Operation and Device Drivers
      1. 12.10.1 Device Drivers
      2. 12.10.2 Communicating with Device Drivers and “Files”
    11. 12.11 Exploring Specific PC Peripheral Devices
    12. 12.12 The Keyboard
    13. 12.13 The Standard PC Parallel Port
    14. 12.14 Serial Ports
    15. 12.15 Disk Drives
      1. 12.15.1 Floppy Drives
      2. 12.15.2 Hard Drives
      3. 12.15.3 RAID Systems
      4. 12.15.4 Zip and Other Floptical Drives
      5. 12.15.5 Optical Drives
        6. 12.15.6 CD-ROM, CD-R, CD-R/W, DVD, DVD-R, DVD-RAM, and DVD-R/W Drives
    16. 12.16 Tape Drives
    17. 12.17 Flash Storage
    18. 12.18 RAM Disks and Semiconductor Disks
    19. 12.19 SCSI Devices and Controllers
    20. 12.20 The IDE/ATA Interface
    21. 12.21 File Systems on Mass Storage Devices
      1. 12.21.1 Maintaining Files Using a Free-Space Bitmap
      2. 12.21.2 File Allocation Tables
      3. 12.21.3 List-of-Blocks File Organization
    22. 12.22 Writing Software That Manipulates Data on a Mass Storage Device
      1. 12.22.1 File Access Performance
      2. 12.22.2 Synchronous and Asynchronous I/O
      3. 12.22.3 The Implications of I/O Type
      4. 12.22.4 Memory-Mapped Files
    23. 12.23 The Universal Serial Bus (USB)
      1. 12.23.1 USB Design
      2. 12.23.2 USB Performance
      3. 12.23.3 Types of USB Transmissions
      4. 12.23.4 USB Device Drivers
    24. 12.24 Mice, Trackpads, and Other Pointing Devices
    25. 12.25 Joysticks and Game Controllers
    26. 12.26 Sound Cards
      1. 12.26.1 How Audio Interface Peripherals Produce Sound
      2. 12.26.2 The Audio and MIDI File Formats
      3. 12.26.3 Programming Audio Devices
    27. 12.27 For More Information
  15. 13. Thinking Low-Level, Writing High-Level
  16. A. ASCII Character Set
  17. Index
  18. Copyright

Product information

  • Title: Write Great Code
  • Author(s): Randall Hyde
  • Release date: November 2004
  • Publisher(s): No Starch Press
  • ISBN: 9781593270032