Making Software

Book description

Many claims are made about how certain tools, technologies, and practices improve software development. But which claims are verifiable, and which are merely wishful thinking? In this book, leading thinkers such as Steve McConnell, Barry Boehm, and Barbara Kitchenham offer essays that uncover the truth and unmask myths commonly held in the software development community. Their insights may surprise you.

  • Are some programmers really ten times more productive than others?
  • Does writing tests first help you develop better code faster?
  • Can code metrics predict the number of bugs in a piece of software?
  • Do design patterns actually make better software?
  • What effect does personality have on pair programming?
  • What matters more: how far apart people are geographically, or how far apart they are in the org chart?

Contributors include:

Jorge Aranda, Tom Ball, Victor R. Basili, Andrew Begel, Christian Bird, Barry Boehm, Marcelo Cataldo, Steven Clarke, Jason Cohen, Robert DeLine, Madeline Diep, Hakan Erdogmus, Michael Godfrey, Mark Guzdial, Jo E. Hannay, Ahmed E. Hassan, Israel Herraiz, Kim Sebastian Herzig, Cory Kapser, Barbara Kitchenham, Andrew Ko, Lucas Layman, Steve McConnell, Tim Menzies, Gail Murphy, Nachi Nagappan, Thomas J. Ostrand, Dewayne Perry, Marian Petre, Lutz Prechelt, Rahul Premraj, Forrest Shull, Beth Simon, Diomidis Spinellis, Neil Thomas, Walter Tichy, Burak Turhan, Elaine J. Weyuker, Michele A. Whitecraft, Laurie Williams, Wendy M. Williams, Andreas Zeller, and Thomas Zimmermann.

Table of contents

  1. Making Software
  2. Preface
    1. Organization of This Book
    2. Conventions Used in This Book
    3. Safari® Books Online
    4. Using Code Examples
    5. How to Contact Us
  3. I. General Principles of Searching For and Using Evidence
    1. 1. The Quest for Convincing Evidence
      1. In the Beginning
      2. The State of Evidence Today
        1. Challenges to the Elegance of Studies
        2. Challenges to Statistical Strength
        3. Challenges to Replicability of Results
      3. Change We Can Believe In
      4. The Effect of Context
      5. Looking Toward the Future
      6. References
    2. 2. Credibility, or Why Should I Insist on Being Convinced?
      1. How Evidence Turns Up in Software Engineering
      2. Credibility and Relevance
        1. Fitness for Purpose, or Why What Convinces You Might Not Convince Me
        2. Quantitative Versus Qualitative Evidence: A False Dichotomy
      3. Aggregating Evidence
        1. Limitations and Bias
      4. Types of Evidence and Their Strengths and Weaknesses
        1. Controlled Experiments and Quasi-Experiments
          1. Credibility
          2. Relevance
        2. Surveys
          1. Credibility
          2. Relevance
        3. Experience Reports and Case Studies
          1. Credibility
          2. Relevance
        4. Other Methods
        5. Indications of Credibility (or Lack Thereof) in Reporting
          1. General characteristics
          2. A clear research question
          3. An informative description of the study setup
          4. A meaningful and graspable data presentation
          5. A transparent statistical analysis (if any)
          6. An honest discussion of limitations
          7. Conclusions that are solid yet relevant
      5. Society, Culture, Software Engineering, and You
      6. Acknowledgments
      7. References
    3. 3. What We Can Learn from Systematic Reviews
      1. An Overview of Systematic Reviews
      2. The Strengths and Weaknesses of Systematic Reviews
        1. The Systematic Review Process
          1. Planning the review
          2. Conducting the review
          3. Reporting the review
        2. Problems Associated with Conducting a Review
      3. Systematic Reviews in Software Engineering
        1. Cost Estimation Studies
          1. The accuracy of cost estimation models
          2. The accuracy of cost estimates in industry
        2. Agile Methods
          1. Dybå and Dingsøyr
          2. Hannay, Dybå, Arisholm, and Sjøberg
        3. Inspection Methods
      4. Conclusion
      5. References
    4. 4. Understanding Software Engineering Through Qualitative Methods
      1. What Are Qualitative Methods?
      2. Reading Qualitative Research
      3. Using Qualitative Methods in Practice
      4. Generalizing from Qualitative Results
      5. Qualitative Methods Are Systematic
      6. References
    5. 5. Learning Through Application: The Maturing of the QIP in the SEL
      1. What Makes Software Engineering Uniquely Hard to Research
      2. A Realistic Approach to Empirical Research
      3. The NASA Software Engineering Laboratory: A Vibrant Testbed for Empirical Research
      4. The Quality Improvement Paradigm
        1. Characterize
        2. Set Goals
        3. Select Process
        4. Execute Process
        5. Analyze
        6. Package
      5. Conclusion
      6. References
    6. 6. Personality, Intelligence, and Expertise: Impacts on Software Development
      1. How to Recognize Good Programmers
        1. Individual Differences: Fixed or Malleable
        2. Personality
        3. Intelligence
        4. The Task of Programming
        5. Programming Performance
        6. Expertise
        7. Software Effort Estimation
      2. Individual or Environment
        1. Skill or Safety in Software Engineering
        2. Collaboration
        3. Personality Again
        4. A Broader View of Intelligence
      3. Concluding Remarks
      4. References
    7. 7. Why Is It So Hard to Learn to Program?
      1. Do Students Have Difficulty Learning to Program?
        1. The 2001 McCracken Working Group
        2. The Lister Working Group
      2. What Do People Understand Naturally About Programming?
      3. Making the Tools Better by Shifting to Visual Programming
      4. Contextualizing for Motivation
      5. Conclusion: A Fledgling Field
      6. References
    8. 8. Beyond Lines of Code: Do We Need More Complexity Metrics?
      1. Surveying Software
      2. Measuring the Source Code
      3. A Sample Measurement
        1. Source Lines of Code (SLOC)
        2. Lines of Code (LOC)
        3. Number of C Functions
        4. McCabe’s Cyclomatic Complexity
        5. Halstead’s Software Science Metrics
      4. Statistical Analysis
        1. Overall Analysis
        2. Differences Between Header and Nonheader Files
        3. The Confounding Effect: Influence of File Size in the Intensity of Correlation
          1. Effects of size on correlations for header files
          2. Effects of size on correlations for nonheader files
          3. Effect on Halstead’s Software Science metrics
          4. Summary of the confounding effect of file size
      5. Some Comments on the Statistical Methodology
      6. So Do We Need More Complexity Metrics?
      7. References
        1. Bibliography
  4. II. Specific Topics in Software Engineering
    1. 9. An Automated Fault Prediction System
      1. Fault Distribution
      2. Characteristics of Faulty Files
      3. Overview of the Prediction Model
      4. Replication and Variations of the Prediction Model
        1. The Role of Developers
        2. Predicting Faults with Other Types of Models
      5. Building a Tool
      6. The Warning Label
      7. References
    2. 10. Architecting: How Much and When?
      1. Does the Cost of Fixing Software Increase over the Project Life Cycle?
      2. How Much Architecting Is Enough?
        1. Cost-to-Fix Growth Evidence
      3. Using What We Can Learn from Cost-to-Fix Data About the Value of Architecting
        1. The Foundations of the COCOMO II Architecture and Risk Resolution (RESL) Factor
          1. Economies and diseconomies of scale
          2. Reducing software rework via architecture and risk resolution
          3. A successful example: CCPDS-R
        2. The Architecture and Risk Resolution Factor in Ada COCOMO and COCOMO II
          1. How the Ada Process Model promoted risk-driven concurrent engineering software processes
          2. Architecture and risk resolution (RESL) factor in COCOMO II
          3. Improvement shown by incorporating architecture and risk resolution
        3. ROI for Software Systems Engineering Improvement Investments
      4. So How Much Architecting Is Enough?
      5. Does the Architecting Need to Be Done Up Front?
      6. Conclusions
      7. References
    3. 11. Conway’s Corollary
      1. Conway’s Law
      2. Coordination, Congruence, and Productivity
        1. Implications
      3. Organizational Complexity Within Microsoft
        1. Implications
      4. Chapels in the Bazaar of Open Source Software
      5. Conclusions
      6. References
        1. Bibliography
    4. 12. How Effective Is Test-Driven Development?
      1. The TDD Pill—What Is It?
      2. Summary of Clinical TDD Trials
      3. The Effectiveness of TDD
        1. Internal Quality
        2. External Quality
        3. Productivity
        4. Test Quality
      4. Enforcing Correct TDD Dosage in Trials
      5. Cautions and Side Effects
      6. Conclusions
      7. Acknowledgments
      8. General References
      9. Clinical TDD Trial References
        1. Bibliography
    5. 13. Why Aren’t More Women in Computer Science?
      1. Why So Few Women?
        1. Ability Deficits, Preferences, and Cultural Biases
          1. Evidence for deficits in female mathematical-spatial abilities
          2. The role of preferences and lifestyle choices
        2. Biases, Stereotypes, and the Role of Male Computer-Science Culture
      2. Should We Care?
        1. What Can Society Do to Reverse the Trend?
        2. Implications of Cross-National Data
      3. Conclusion
      4. References
    6. 14. Two Comparisons of Programming Languages
      1. A Language Shoot-Out over a Peculiar Search Algorithm
        1. The Programming Task: Phonecode
        2. Comparing Execution Speed
        3. Comparing Memory Consumption
        4. Comparing Productivity and Program Length
        5. Comparing Reliability
        6. Comparing Program Structure
        7. Should I Believe This?
      2. Plat_Forms: Web Development Technologies and Cultures
        1. The Development Task: People-by-Temperament
        2. Lay Your Bets
        3. Comparing Productivity
        4. Comparing Artifact Size
        5. Comparing Modifiability
        6. Comparing Robustness and Security
        7. Hey, What About <Insert-Your-Favorite-Topic>?
      3. So What?
      4. References
        1. Bibliography
    7. 15. Quality Wars: Open Source Versus Proprietary Software
      1. Past Skirmishes
      2. The Battlefield
      3. Into the Battle
        1. File Organization
        2. Code Structure
        3. Code Style
        4. Preprocessing
        5. Data Organization
      4. Outcome and Aftermath
      5. Acknowledgments and Disclosure of Interest
      6. References
        1. Bibliography
    8. 16. Code Talkers
      1. A Day in the Life of a Programmer
        1. Diary Study
        2. Observational Study
        3. Were the Programmers on Their Best Behavior?
      2. What Is All This Talk About?
        1. Getting Answers to Questions
        2. The Search for Rationale
        3. Interruptions and Multitasking
        4. What Questions Do Programmers Ask?
        5. Are Agile Methods Better for Communication?
      3. A Model for Thinking About Communication
      4. References
        1. Bibliography
    9. 17. Pair Programming
      1. A History of Pair Programming
      2. Pair Programming in an Industrial Setting
        1. Industry Practices in Pair Programming
        2. Results of Using Pair Programming in Industry
      3. Pair Programming in an Educational Setting
        1. Practices Specific to Education
        2. Results of Using Pair Programming in Education
      4. Distributed Pair Programming
      5. Challenges
      6. Lessons Learned
      7. Acknowledgments
      8. References
    10. 18. Modern Code Review
      1. Common Sense
      2. A Developer Does a Little Code Review
        1. Focus Fatigue
        2. Speed Kills
        3. Size Kills
        4. The Importance of Context
      3. Group Dynamics
        1. Are Meetings Required?
        2. False-Positives
        3. Are External Reviewers Required At All?
      4. Conclusion
      5. References
        1. Bibliography
    11. 19. A Communal Workshop or Doors That Close?
      1. Doors That Close
      2. A Communal Workshop
      3. Work Patterns
      4. One More Thing…
      5. References
        1. Bibliography
    12. 20. Identifying and Managing Dependencies in Global Software Development
      1. Why Is Coordination a Challenge in GSD?
      2. Dependencies and Their Socio-Technical Duality
        1. The Technical Dimension
          1. Syntactic dependencies and their impact on productivity and quality
          2. Logical dependencies and their impact on productivity and quality
        2. The Socio-Organizational Dimension
          1. Different types of work dependencies and their impacts on productivity and quality
        3. The Socio-Technical Dimension
      3. From Research to Practice
        1. Leveraging the Data in Software Repositories
        2. The Role of Team Leads and Managers in Supporting the Management of Dependencies
        3. Developers, Work Items, and Distributed Development
      4. Future Directions
        1. Software Architectures Suitable for Global Software Development
        2. Collaborative Software Engineering Tools
        3. Balancing Standardization and Flexibility
      5. References
    13. 21. How Effective Is Modularization?
      1. The Systems
      2. What Is a Change?
      3. What Is a Module?
      4. The Results
        1. Change Locality
        2. Examined Modules
        3. Emergent Modularity
      5. Threats to Validity
      6. Summary
      7. References
    14. 22. The Evidence for Design Patterns
      1. Design Pattern Examples
      2. Why Might Design Patterns Work?
      3. The First Experiment: Testing Pattern Documentation
        1. Design of the Experiment
        2. Results
      4. The Second Experiment: Comparing Pattern Solutions to Simpler Ones
      5. The Third Experiment: Patterns in Team Communication
      6. Lessons Learned
      7. Conclusions
      8. Acknowledgments
      9. References
    15. 23. Evidence-Based Failure Prediction
      1. Introduction
      2. Code Coverage
      3. Code Churn
      4. Code Complexity
      5. Code Dependencies
      6. People and Organizational Measures
      7. Integrated Approach for Prediction of Failures
      8. Summary
      9. Acknowledgments
      10. References
    16. 24. The Art of Collecting Bug Reports
      1. Good and Bad Bug Reports
      2. What Makes a Good Bug Report?
      3. Survey Results
        1. Contents of Bug Reports (Developers)
        2. Contents of Bug Reports (Reporters)
      4. Evidence for an Information Mismatch
      5. Problems with Bug Reports
      6. The Value of Duplicate Bug Reports
      7. Not All Bug Reports Get Fixed
      8. Conclusions
      9. Acknowledgments
      10. References
        1. Bibliography
    17. 25. Where Do Most Software Flaws Come From?
      1. Studying Software Flaws
      2. Context of the Study
      3. Phase 1: Overall Survey
        1. Summary of Questionnaire
        2. Summary of the Data
        3. Summary of the Phase 1 Study
      4. Phase 2: Design/Code Fault Survey
        1. The Questionnaire
        2. Statistical Analysis
          1. Finding and fixing faults
          2. Faults
          3. Fault Frequency Adjusted by Effort
          4. Underlying causes
          5. Means of prevention
          6. Underlying causes and means of prevention
        3. Interface Faults Versus Implementation Faults
      5. What Should You Believe About These Results?
        1. Are We Measuring the Right Things?
        2. Did We Do It Right?
        3. What Can You Do with the Results?
      6. What Have We Learned?
      7. Acknowledgments
      8. References
    18. 26. Novice Professionals: Recent Graduates in a First Software Engineering Job
      1. Study Methodology
        1. Subjects
        2. Task Analysis
        3. Task Sample
        4. Reflection Methodology
        5. Threats to Validity
      2. Software Development Task
        1. Task Breakdown
          1. Communication
          2. Documentation
          3. Working on bugs
          4. Programming
          5. Project management and tools
          6. Design specifications and testing
      3. Strengths and Weaknesses of Novice Software Developers
        1. Strengths
        2. Weaknesses
      4. Reflections
        1. Managing Getting Engaged
        2. Persistence, Uncertainty, and Noviceness
        3. Large-Scale Software Team Setting
      5. Misconceptions That Hinder Learning
      6. Reflecting on Pedagogy
        1. Pair Programming
        2. Legitimate Peripheral Participation
        3. Mentoring
      7. Implications for Change
        1. New Developer Onboarding
        2. Educational Curricula
      8. References
    19. 27. Mining Your Own Evidence
      1. What Is There to Mine?
      2. Designing a Study
      3. A Mining Primer
        1. Step 1: Determining Which Data to Use
        2. Step 2: Data Retrieval
        3. Step 3: Data Conversion (Optional)
        4. Step 4: Data Extraction
        5. Step 5: Parsing the Bug Reports
        6. Step 6: Linking Data Sets
          1. Linking code changes to bug reports
          2. Linking bug reports to code changes (optional)
        7. Step 7: Checking for Missing Links
        8. Step 8: Mapping Bugs to Files
      4. Where to Go from Here
      5. Acknowledgments
      6. References
    20. 28. Copy-Paste as a Principled Engineering Tool
      1. An Example of Code Cloning
      2. Detecting Clones in Software
      3. Investigating the Practice of Code Cloning
        1. Forking
        2. Templating
        3. Customizing
      4. Our Study
      5. Conclusions
      6. References
    21. 29. How Usable Are Your APIs?
      1. Why Is It Important to Study API Usability?
      2. First Attempts at Studying API Usability
        1. Study Design
        2. Summary of Findings from the First Study
      3. If At First You Don’t Succeed...
        1. Design of the Second Study
        2. Summary of Findings from the Second Study
        3. Cognitive Dimensions
      4. Adapting to Different Work Styles
        1. Scenario-Based Design
      5. Conclusion
      6. References
    22. 30. What Does 10x Mean? Measuring Variations in Programmer Productivity
      1. Individual Productivity Variation in Software Development
        1. Extremes in Individual Variation on the Bad Side
        2. What Makes a Real 10x Programmer
      2. Issues in Measuring Productivity of Individual Programmers
        1. Productivity in Lines of Code per Staff Month
        2. Productivity in Function Points
        3. What About Complexity?
        4. Is There Any Way to Measure Individual Productivity?
      3. Team Productivity Variation in Software Development
      4. References
  5. A. Contributors
  6. Index
  7. About the Authors
  8. Colophon
  9. Copyright

Product information

  • Title: Making Software
  • Author(s): Andy Oram, Greg Wilson
  • Release date: October 2010
  • Publisher(s): O'Reilly Media, Inc.
  • ISBN: 9781449397760