Semiconductor Development in Ireland

Semiconductor Development in Ireland: Reducing ‘Development Stress’ Caused by Digital Hardware and Embedded Software Team Interaction

Ivan J. Griffin
B.Eng., The University of Limerick, 1995
M.Eng., The University of Limerick, 1997

A thesis submitted in fulfilment of the requirements for the degree of Doctor of Philosophy

Supervised by Dr. Ita Richardson
Submitted to the University of Limerick, June 2010

Declaration

Title:

Semiconductor Development in Ireland: Reducing ‘Development Stress’ caused by Digital Hardware and Embedded Software Team Interaction.

Author:

Ivan J. Griffin.

Award:

Doctor of Philosophy in Computer Science.

Supervisor:

Dr. Ita Richardson.

I hereby declare that this thesis is entirely my own work, and does not contain material previously published by any other author, except where due reference or acknowledgement has been made. Furthermore, I declare that it has not previously been submitted for any other academic award.

24th May 2010


Except where excerpts from the contributions of others are explicitly noted, this text is copyright © 2004–2010 Ivan J. Griffin.

Abstract

Semiconductor Development in Ireland: Reducing ‘Development Stress’ caused by Digital Hardware and Embedded Software Team Interaction.

Ivan J. Griffin

Today’s consumer electronic products are complex, multi-discipline systems, far beyond just physical gates on a semiconductor chip. Their development involves a delicate mix of engineering disciplines and technologies such as analog hardware, digital hardware, software, board design, semiconductor physics and chemical engineering. The research presented in this dissertation examines the specific relationship between the integration of digital hardware and software development activities and the successful creation of modern consumer electronics devices and embedded computing platforms, to determine which aspects of inter-discipline development caused the most difficulty to a semiconductor device for the consumer electronics market as it progressed through its life-cycle from design to tape-out/mass production.

As of 2010, semiconductor fabrication techniques continue to (broadly) follow Gordon Moore’s Law (Moore, 1965), which states that the number of transistors on a chip doubles approximately every 18 months. In addition, incremental improvements in fabrication techniques have resulted in successively smaller chip geometries—greater functionality requiring less silicon area and consuming lower power than the previous generation. Embedded computing devices are omnipresent in consumer electronics in modern society. Consumer desire and expectations of product improvement have led to rampant innovation in the consumer electronics market, with terrific pressures to be first to market with a new innovative feature or design. Along with this increase in capability comes an unfortunate but necessary increase in complexity. Consequently, the semiconductor devices (Application Specific Integrated Circuit (ASIC), System-on-Chip (SoC), System-in-Package (SiP), modules) that power consumer electronic products require significant resource investments in both hardware and software design, implementation and test—investments that are increasing in direct relationship with the device complexity.

The research presented in this thesis is a grounded theory which shows various categories of interactions that occur between Irish digital hardware and software development teams who work together in Small to Medium-sized Enterprise (SME) organisations on such products. The contribution of this research is in six parts:


(a) it identifies that digital IC hardware engineering is very similar in many respects to software engineering;

(b) it illustrates that the business models that survive in the ecosystem rely on engineering that is sufficient to meet market needs;

(c) it shows that the social and geographical degrees of separation play a more significant role in adversely affecting and impeding performance than technical or techno-cultural issues between the two groups;

(d) it acknowledges that there is a growing separation of technical mindset between software and digital IC hardware;

(e) it provides strong evidence for the applicability of Agile methods in the development of consumer electronics SoC devices;

(f) it presents a list of patterns of organisation and workflow for semiconductor projects that may help in the development process.

Comparison against existing literature was used to validate the results. Scope for potential future work in the area of digital hardware and software team interaction is also presented and discussed.

Supervisor: Dr. Ita Richardson.
Keywords: ASIC, Semiconductor, SoC, Digital IC, Hardware, Software, Grounded Theory


Acknowledgements

I would like first and foremost to thank my supervisor, Dr. Ita Richardson, for all her support, advice and encouragement during my research. A Ph.D. is a difficult endeavour at the best of times. Dr. Richardson has a subtle yet profound knowledge and understanding of social sciences techniques which she was generously willing to share with me. My circumstances changed quite significantly during the course of my research—I got married, became a father, and changed to a position with considerably more responsibilities. . . As a result, I occasionally found myself lacking in free time, and direction, when pursuing this goal. I also on occasion experienced the self-doubt described in Miles and Huberman (1994): ‘we have encountered many students launched on qualitative dissertations or research projects who feel overwhelmed and under-trained.’ Nevertheless, Dr. Richardson kept me on the straight and narrow, academically speaking. She was a true source of inspiration and ideas when I had become lost in my research; and she was equally a source of great confidence to me when I had become concerned with the uncertainty in direction my work often experienced, particularly in comparison to my starting point.

I sincerely appreciate the contribution of Professor Emeritus Eamonn McQuade in proof-reading and providing valuable suggestions and recommendations for the improvement of this dissertation. I would also like to offer special thanks to my former lecturer, M.Eng. supervisor, and engineering manager in Parthus Technologies, Dr. John Nelson, for both professional and academic encouragement over the last decade or so. I consider him very much a mentor who helped me to shape my entire career to date. Indeed, all my colleagues and ex-colleagues of Parthus deserve credit for educating me in the real-world antics of embedded software and custom ASIC design—as well as some great fun and antics along the way:

• To the old ‘Wireless Business Unit’ of Parthus Technologies, particularly the ‘Drua’ crew, and the ‘UPF Pack-Horse Gang’—having accidentally started with a platform that we thought was 25MIPs (but in actuality was closer to 7MIPs), we ended up with a great lightweight Bluetooth Solution;

• To Alan Donnelly, for teaching me more than I ever intended, or wanted, to know about RF devices, Field Programmable Gate Array (FPGA) devices and boards—especially how to accidentally pour hot, steaming coffee over them;

• To Jerry O’Brien for showing me the light—there is no money to be made in engineering—for our drives across Route 101 in California with a GPS navigation unit that kept speaking in Japanese, and our tour of the Hollywood sign in Los Angeles;

• To Pat Lehane for his consistent brilliance, as an engineer, as a Verilog hacker, as a project manager—and for listening when the software guys insisted that there was definitely a problem with a particular hardware clock;

• To Damien Nolan for his immaculately safe pair of hands when it comes to a chip tape-out, for his meticulous attention to detail, and for, unbeknownst to him, confirming to me that TeX and LaTeX are by far the most appropriate tools to typeset a thesis—based primarily on his grumblings over the inadequacies of Microsoft Word and its master book feature when compiling the Paradiso chip functional specification;

• To David Moloney for his foresight, his unparalleled ability at systems architecture, his anticipation of market evolution, and his unbridled enthusiasm—one of the few true geniuses I know;

• To Niall Ó hEarcáin and Roger Maher for giving me the opportunity to lead an embedded software development team through a few interesting and challenging projects, and most of all for being very decent chaps to work for;

• To David Airlie, ‘our man down under’, for sticking to his guns on open source software, and for ensuring the Fedora desktop on which I pulled this document together had sufficient 3D eye candy;

• To Peter Flynn and Marc van Dongen for LaTeX help, allowing me to automatically generate CSV logs of codes versus quotations from the LaTeX typesetting run;

• To the late Martin Mellody (1975–2009), a true friend, entrepreneur and brilliant engineer—how sad it is that you didn’t live to see the fruits of your labour.

I would not be permitted to return to work without offering some recognition of my colleagues and friends in Frontier Silicon, especially the Shannon firmware team and the Dublin SoC and DSP teams. Finally, an acknowledgement is due to advice I came across when pursuing this endeavour—after a prolonged period of inertia, frustration, and procrastination:

• Firstly, Tveit (2008) had some genuinely useful tips on finishing a Ph.D., but the one that I appreciated was:

. . . ‘doers’ are more likely to finish their Ph.D. than ‘smarties’. Thinking doesn’t create your thesis, but writing might!

• Secondly, Lamott (1995) offers some sage-like wisdom and advice in suggesting: The first draft is the child’s draft, where you let it all pour out and then let it romp all over the place, knowing that no one is going to see it and that you can shape it later. You just let this childlike part of you channel whatever voices and visions come through and onto the page.

In typical engineering fashion, Young and Raymond (2001) explain this as being due to the fact that ‘you often don’t really understand the problem until after the first time you implement a solution’, and Brooks, Jr. (1995) echoes this advice in ‘plan to throw one away: you will, anyhow’.

This research is partially supported by the Science Foundation Ireland funded projects, Global Software Development in Small to Medium Sized Enterprises (GSD for SMEs), grant number , within Lero—the Irish Software Engineering Research Centre (http://www.lero.ie/). Funding for portions of this work was generously provided by Science Foundation Ireland / Enterprise Ireland through the B Step Project—‘Building a Bi-Directional Bridge Between Software Theory and Practice’, grant number .

This thesis is especially dedicated in loving memory of my late parents, John and Marie Griffin. Both my sister Yvonne and I miss you dearly. You always encouraged me to make the effort to pursue a Ph.D. so mum and dad, this is for you. . .

also to my wife Triona, for her support, love and kindness, and especially for keeping me sane. . .


Glossary of Terms

Acronyms

ASIC  Application Specific Integrated Circuit
ATS  Abstract Test Suite
BoM  Bill of Materials
CE  Consumer Electronics
CMMI  Capability Maturity Model-Integrated
CSCW  Computer Supported Collaborative Work
CSV  Comma-Separated-Variables file
CTAN  the Comprehensive TeX Archive Network
DFM  Design for Manufacturing
DFT  Design For Test
DRC  Design Rule Checks
DSP  Digital Signal Processing
EDA  Electronic Design Automation
EDIF  Electronic Design Interchange Format
FIBs  Focused Ion Beams
FPGA  Field Programmable Gate Array
GDN  Global Design Network
GDS-II  “Graphics Data System revision II”, the industry de-facto file format standard for IC layout data exchange
GSD  Global Software Development
HDL  Hardware Description Language
IC  Integrated Circuit
IP  Intellectual Property
IPO  Initial Public Offering
ISD  Information Systems Development
LT  Lower Tester
LVS  Layout versus Schematic checks
MNC  Multi-National Corporation
MPW  Multi Project Wafer
NIH  Not Invented Here
ODM  Original Design Manufacturer
OEM  Original Equipment Manufacturer
PCO  Point of Control and Observation
PD  Power Distance
QDA  Qualitative Data Analysis
RF  Radio Frequency
SDK  Software Development Kit
SEI  Software Engineering Institute
SI  Signal Integrity analysis
SiP  System-in-Package
SME  Small to Medium-sized Enterprise
SoC  System-on-Chip
SPI  Software Process Improvement
SSH  Secure Shell
TQM  Total Quality Management
TTM  Time to Market
UA  Uncertainty Avoidance
UML  Unified Modeling Language
UT  Upper Tester
VHDL  VHSIC Hardware Description Language
VHSIC  Very High Speed Integrated Circuit
XP  eXtreme Programming

Definitions

Block Test  The testing of a hardware component in isolation, equivalent to a unit test in software terminology
Blog  A web diary or commentary (contraction of ‘weblog’)
Design Flow  The series of steps that combine the use of Electronic Design Automation (EDA) tools to design an Integrated Circuit
Fab  Silicon wafer fabrication facility
Hardware Simulation  Modelling of the hardware at different levels of abstraction (functional, gate level, etc.) for the purposes of hardware verification
Mask  A tool that contains a single pattern image that is applied to an entire wafer to define the integrated circuit
Reticle  A tool that contains a pattern image that needs to be stepped and repeated across a wafer to define the integrated circuit
Shuttle Run  A periodic engineering lot of MPW silicon wafers, containing designs from numerous customers, which are transferred through the use of a reticle
System-on-Chip  A complex IC device which integrates the major functional elements of a product into a single chip—for example, processor, on-chip memory, custom acceleration logic, etc.
Tape-Out  The last phase of the design of a new chip, where the design is sent to the semiconductor foundry for manufacturing. The term dates back to when physical paper tapes were sent, whereas nowadays the transfer is electronic.
Unit Test  Testing of a software component in isolation
Wafer  A thin round slice of a single crystal (highly pure) semiconductor substrate (usually silicon) that is used in the fabrication of integrated circuits and other microelectronic devices.
Wiki  A web-based content management tool designed to enable those who access it to contribute and easily modify its content, using a very simple markup language.


Contents

1 Introduction
   1.1 Embedded Systems
      1.1.1 Electronic Devices Everywhere!
      1.1.2 Business Models and Market Interception
      1.1.3 Coping with Complexity
      1.1.4 Embedded Software Skills
      1.1.5 Architecture of an SoC device
      1.1.6 Software in IC Design
   1.2 Summary of Research
      1.2.1 Outline of Thesis Structure and Content
   1.3 My background

I Initial Literature Review

2 Semiconductor Ecosystem
   2.1 Introduction
   2.2 Semiconductor Market
      2.2.1 Intellectual Property
      2.2.2 Chip Manufacture
      2.2.3 Product Design Companies
      2.2.4 Characterisation
      2.2.5 Embedded Software
      2.2.6 The Global Marketplace—and Global Competitive Landscape
      2.2.7 Maintaining a Competitive Edge in the Marketplace
   2.3 Summary

3 Etymology of Hardware and Software
   3.1 Introduction
   3.2 Definitions
      3.2.1 Software
      3.2.2 Digital Hardware
      3.2.3 Workflow Terminology
      3.2.4 Coding as an Art
   3.3 Duality of Computing Logic
      3.3.1 The Techno-Philosophical Argument
      3.3.2 Hardware Design Language as a Variant of Software
   3.4 Cognitive Effects of Modelling and Abstraction, Linguistics and Culture
      3.4.1 Psychological Aspects of Logic Modelling
      3.4.2 Scientific Advance through Abstraction
      3.4.3 Linguistic Relativity Hypothesis
      3.4.4 Mediation of Culture
      3.4.5 Computer Programming Languages
   3.5 Summary
      3.5.1 Research Questions

4 Digital Hardware Flows vs. Software Development Processes
   4.1 Introduction
   4.2 Technical Discipline Comparison
      4.2.1 History of Hardware Development Approaches
      4.2.2 Design Domains
   4.3 Highlevel SoC Design Flow
   4.4 Computational Complexity
   4.5 Testing: Validation and Verification
   4.6 Abstraction Breakdown
      4.6.1 Trend towards Higher-Level Synthesis and Functional Programming
   4.7 Detailed Integrated SoC Flow
   4.8 Summary

II Research Design

5 Research Design and Investigation
   5.1 Introduction
   5.2 Conceptual Design
      5.2.1 Conceptual Framework
   5.3 Method of Investigation
      5.3.1 Qualitative versus Quantitative versus Mixed Methods
      5.3.2 Philosophical Perspectives of Research Paradigms
      5.3.3 Selection of Interpretation Methodology
      5.3.4 Grounded Theory
      5.3.5 Phases of Grounded Theory Research
   5.4 Data Collection
      5.4.1 Data Collection Procedures
      5.4.2 Interviewing Risks
   5.5 Data Reduction
      5.5.1 Open Coding
      5.5.2 Axial Coding
      5.5.3 Selective Coding
   5.6 Data Display
   5.7 Validation
   5.8 Summary

III Theoretical Model and Solutions

6 Emergence of Theoretical Model
   6.1 Introduction
   6.2 Theory Building
   6.3 Business Realities: Impact on Technical Flow
      6.3.1 Consumer Electronics
   6.4 Risk: Approach to it
      6.4.1 System Validation
      6.4.2 The influence of discipline specialisation on inter-team cultural differences
   6.5 Social/Geographical Factors
      6.5.1 Social Factors Independent of Location
      6.5.2 Social Factors that are Exacerbated by Geographical Separation
   6.6 Techno-cultural
   6.7 Summary

7 Emergent Toolbox of Patterns for SoC Project Organisation
   7.1 Introduction
   7.2 What are Patterns?
   7.3 The Pattern Groupings
   7.4 Patterns of Social Interaction
      7.4.1 Mitigate tacit knowledge loss through Social Networking Tools
      7.4.2 Actively Seed Social Interaction amongst Groups
      7.4.3 Provide Project-level Focal Point through Core Team Structure
      7.4.4 Drive continual progress through Daily Calls during Crunch Issues
      7.4.5 Manage IP Deliveries Efficiently
   7.5 FPGA Patterns
      7.5.1 Automate FPGA Design Traceability through Version Tracking
      7.5.2 ASIC Synthesis Scripts
      7.5.3 Automate FPGA Programming
      7.5.4 Implement Best Practises for FPGA Development
      7.5.5 Keep ASIC/SoC team involved in FPGA Development
   7.6 Development Patterns
      7.6.1 Perform Regular Builds
      7.6.2 Share Code across Test Platforms and Technical Disciplines
      7.6.3 Minimum Test Case Example
      7.6.4 Keep the Firmware Design Simple
      7.6.5 Keep the Firmware team involved in C code for simulations
      7.6.6 Consider Agile Methods, Test-Driven Development
      7.6.7 Communicate in Diagrams Early On
      7.6.8 Implement Recovery Mechanisms for Boot ROMs
      7.6.9 Provide Software Test Plans Early, Hardware Features Early
   7.7 Summary

8 Validation of Theoretical Model
   8.1 Introduction
   8.2 Business Themes in Literature
      8.2.1 Commercial Realities of Software Development
   8.3 Risk Themes in Literature
   8.4 Socio-Geographic Themes in Literature
      8.4.1 The Sociology of Computing
      8.4.2 Globally Distributed Teams
   8.5 Technical Themes in Literature
      8.5.1 Techno-Cultural Perspective Differences
      8.5.2 Technical Determinism
      8.5.3 A Language Convergence
      8.5.4 Importance of Mixed Skill Sets
   8.6 An Opportunity for Agility
      8.6.1 Agile Manifesto versus Agility
   8.7 Summary

9 Conclusions
   9.1 Overall Conclusions
   9.2 Research Contribution
      9.2.1 Implications for Practitioners
      9.2.2 Implications for Educators
   9.3 Limitations of This Work
   9.4 Scope for Future Work
   9.5 Summary

IV Appendices

A Interviewee Biographies

B Interview Guides
   B.1 Initial Interview Guide

C Historical Roots of Computational Logic
   C.1 Introduction
   C.2 Advances in Mathematical Logic, and the Entscheidungsproblem
   C.3 Computing Advances

D Digital Hardware Toplevel Design
   D.1 Introduction
      D.1.1 Digital Toplevel Design

E Interview Codes
   E.1 Introduction
   E.2 Workflow
      E.2.1 Coding through Typesetting
      E.2.2 Visualisation of the Ontology
      E.2.3 Codes
   E.3 Source Code
      E.3.1 Coding Macro
      E.3.2 GraphViz graph description input file generation

F Academic Papers
   F.1 Academic Papers
      F.1.1 Globally Distributed Development of Complex Systems for the Consumer Electronics Semiconductor Industry
      F.1.2 Other Output

Bibliography

Index

List of Figures

1.1 High Level Abstraction to Physical Layout
1.2 Generalised SoC Block Structure
1.3 Thesis Structure
2.1 Semiconductor Ecosystem
2.2 8-inch silicon wafer during manufacture
2.3 Wafer is cut into separate silicon die, which are then individually wire-bonded and packaged
2.4 65nm Digital Radio ASIC
2.5 Decapped ASIC in QFN package, showing bond wiring
2.6 65nm ASIC—bare silicon die
2.7 Embedded Software Market—2003
2.8 IC Complexity—Moore’s law for memory and microprocessors
3.1 A Perspective on Engineering Endeavour and its Relationship to Systems Analysis and Science
3.2 Boolean Logic is Instantiated as Hardware, Firmware and Software Models
3.3 Design Levels
4.1 The Cycle of Design
4.2 Naïve Digital Hardware Flow/Software Process Comparison
4.3 ASIC Abstraction
4.4 Gajski-Kuhn Y-chart
4.5 Gajski-Kuhn Y-chart showing levels of abstraction
4.6 Gajski-Kuhn Y-chart showing Design Methodologies
4.7 High-level ASIC Design Flow Overview
4.8 FPGA-based Verification Flow Overview
4.9 Detailed SoC Flow Overview
5.1 Initial Conceptual Framework for the Study
5.2 Reasoning
5.3 Components of Data Analysis: Iterative Model
5.4 Hermeneutic Circle
5.5 Representative Memo
5.6 Grounded Theory Method employed in this research
5.7 Model of Symbolic Interactionist View of Question-Answer Behaviour
5.8 Visualisation of Open Codes as a Tag Cloud
5.9 Early Selective Coding Data Analysis
6.1 Theoretical Model of Influence on Hardware / Software Interworking
6.2 Axial Coding of Business Reality Concepts
6.3 Emergence of Business Realities Theme
6.4 Competing Influences of Time, Quality and Cost
6.5 Radar Plot of Market Pressures
6.6 Perceived Consequence Effect of Influencing Factor contribution to Total Project Risk
6.7 Axial Coding of Risk Theme
6.8 Emergence of Risk Theme
6.9 Axial Coding of Social Theme
6.10 Emergence of Social Theme
6.11 Axial Coding of Techno-cultural Theme
8.1 Influence of Business Themes in Theoretical Model
8.2 Gartner’s ‘New Technology Hype Cycle’
8.3 Components of the Hype Cycle
8.4 Influence of Risk Themes in Theoretical Model
8.5 Influence of Social Themes in Theoretical Model
8.6 Domain Bounds of Technology
8.7 Layered Behavioural Model of Software Development
8.8 Wilson’s Concept of (Social) Interface in SoC Design
8.9 Influence of Techno-Cultural Themes in Theoretical Model
8.10 Refined Model of Trust (from Literature)
8.11 Triadic Reciprocal Causation of Social Cognitive Theory
8.12 Modified Concept of (Social) Interface in SoC Design
8.13 Influence of Technical Determinism Themes in Theoretical Model
8.14 Naur’s Symmetrical Relation between Tools, Problems and People
8.15 System Architecture: HW/SW Boundary
8.16 The Embedded Software Education Gap
8.17 Cornerstones of Agility
8.18 The C.HI.D.DL typology of Organisational Cultures
8.19 Implications of Agility for SoC Development
C.1 Turing Machine
C.2 Timeline of Fundamental Developments in Computational Logic
D.1 Digital Hardware Toplevel Design Flow
E.1 Qualitative Data Analysis (QDA) Workflow
E.2 Typesetting Workflow
E.3 Typesetting Feedback
E.4 Visualisation of Open Codes
E.5 Visualisation of Axial Codes
E.6 Axial Coding of Business Realities Theme
E.7 Axial Coding of Risk Theme
E.8 Axial Coding of Social Theme
E.9 Axial Coding of Techno-cultural Theme

List of Tables

2.1 Semiconductor Market Size
2.2 12 inch Integrated Circuit (IC) Mask Costs—2008
2.3 IC Technology Evolution 1971–2008
4.1 The Tasks of Programming
5.1 Key differences in Grounded Theory approaches
6.1 Emergent Themes
6.2 Digital Hardware vs. Software Vocabulary
7.1 Toolbox of Patterns
7.2 Example FPGA Naming Scheme
A.1 Biographies for Interviewees
E.1 QDA Open Codes for Interviews
E.2 QDA Open Codes and Interview Excerpts

List of Research Statements

Research Problem
   Statement of Problem and Definition of ‘Development Stress’

Fundamental Terms
   Definition of ‘Software’
   Definition of ‘Hardware’

Research Questions
   Question 1—‘Frame of Reference’
   Question 2—‘Effects due to Intrinsic Qualities of Discipline’
   Question 3—‘Effects due to Technical Specialisation’
   Question 4—‘Relieving Development Stress’

Research Contribution
   Research Contribution (Summary)
   Question 1 addressed
   Question 2 addressed
   Question 3 addressed
   Question 4 addressed

Chapter 1 Introduction



There is nothing more difficult to take in hand, more perilous to conduct or more uncertain in its success than to take the lead in the introduction of a new order of things. — NICCOLÒ DI BERNARDO DEI MACHIAVELLI



1469–1527, Italian Diplomat, Political Philosopher, Musician and Writer.

1.1 Embedded Systems

Consumer electronic devices are developed through the output of two main technological disciplines—electronic hardware and software. Whilst it is important in many market segments that the devices look attractive, and have form factors and skins (“plastics”) that are fit for purpose, it is primarily the feature sets enabled by advances in hardware and software technology that fuel the continuing consumer appetite and desire for such devices. Device and technology convergence is well documented in the industry (Wilson, 2004), and disruptive technologies such as the 2007 Apple iPhone (Ranger, 2008) have encouraged a renewed vigour for “featuring-up” market entry-level mobile devices.

Park (1998) mentions the need for highly integrated System-on-Chip (SoC) devices in order to remove cost and reduce the size of high volume electronic devices: ‘With the marketplace hungry for smaller, faster and cheaper products, highly integrated systems on a chip (SOCs) are a practical necessity.’

Lavagno et al. (1999) additionally noted that: ‘in the near future, most objects of common use will contain electronics to augment their functionality, performance, and safety.’

1.1.1 Electronic Devices Everywhere!

Indeed, semiconductor devices are ubiquitous in our modern-day world. They are used to enable practically every single electronic device we use—from our digital cameras, games consoles and mobile cellular phones, to the ignition and traction control systems of our cars, to the avionics in our aircraft. The computing power of a smartphone device in 2010 (for example, Apple iPhone 3G S 600MHz ARM Cortex-A8 processor / Google Nexus One 1GHz Qualcomm Snapdragon processor), or of a portable games console in 2008 (for example, Sony PSP 333MHz R4000 processor), is equivalent or even superior to that of desktop computing 10 years ago (for example, Intel® Pentium® III 1GHz in 2000). Home games console processing power has unleashed tremendous potential for scientific computing (Williams et al., 2006) and mainframe development (Ferguson, 2007).

In these devices, digital hardware and embedded software systems combine to provide the logic that enables their basic functionality, with analogue hardware components interfacing the world of digital logic to the outside environment.

1.1.2 Business Models and Market Interception

The semiconductor business that provides these highly integrated systems depends on tight margins and large volumes. Paul Otellini, Chief Executive Officer of Intel Corporation (as of 4th June 2010), describes the semiconductor business very well in the following:

‘Our business model is one of very high risk: We dig a very big hole in the ground, spend three billion dollars to build a factory in it, which takes three years, to produce technology we haven’t invented yet, to run products we haven’t designed yet, for markets which don’t exist. ‘We do that two or three times a year.’ (BBC News, 2008)

Semiconductor designers need to anticipate an early market requirement two to three years hence, estimate what will still be a competitive technical solution at that time period (in other words, an extremely aggressive and challenging solution at the current time) and aim for this—hoping to intercept a rising market with the right technology and feature set at the right time at some point in the future. As if this wasn’t a difficult enough undertaking, two eponymous adages conspire together to impart ever greater complexity to the system designs. Moore’s Law states that the number of transistors on a silicon design doubles roughly every 18 months (Moore, 1965), whereas Wirth’s Law (‘Software is getting slower more rapidly than hardware becomes faster.’ (Wirth, 1995)) deals with the increasing software complexity that Moore’s devotees enable. (Wirth, for his part, attributes the saying to Martin Reiser.)
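To give a rough sense of what these two adages imply in combination (assuming the 18-month doubling period quoted above, and taking the upper end of the two-to-three-year design horizon): a device architected against today's process capability must anticipate roughly 2^(36/18) = 4 times the current transistor budget being both available to it, and expected of it, by the time it reaches mass production three years later, with the software content, per Wirth, growing at least as fast.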


Coudert (2002) notes that ‘designs keep getting bigger and more complex’. These elaborate and intricate designs require significant investment of designer time to ensure they are correctly designed, validated and verified. Due to the severe market pressures and the rampant pace of improvement, it is not surprising, perhaps, that very often consumers end up dealing with slightly under-cooked products—the rise of the so-called “Beta culture” (Boran, 2009), where product is rushed to market and the initial early adopters form part of the extended beta test phase.

1.1.3 Coping with Complexity

Coveney and Highfield (1995) describe the exploration of manifestations of complex processes and systems: ‘Within science, complexity is a watchword for a new way of thinking about the collective behaviour of many basic but interacting units, be they atoms, molecules, neurons, or bits within a computer. To be more precise, our definition is that complexity is the study of the behaviour of macroscopic collections of such units that are endowed with the potential to evolve in time.’

Semiconductor projects, with multiple millions of lines of HDL and software code, and even larger volumes of test bench code, definitely qualify as complex systems. Those tasked with developing, integrating and debugging these systems into existence as devices fit for the end consumer can certainly attest to the difficulty in comprehending complex systems: ‘Their interactions lead to coherent collective phenomena, so-called emergent properties that can be described only at higher levels than those of the individual units. In this sense, the whole is more than the sum of its components, just as a van Gogh painting is so much more than a collection of bold brushstrokes’. (Coveney and Highfield, 1995)

As Michael Fister, President and CEO of Cadence Design Systems, put it: ‘Development is not only about having a billion transistors on a chip but also about having them all work.’ (Banks and Fister, 2008)

In order to deal with this increasing complexity, it was necessary for engineers and scientists to raise the level of abstraction to cope. This precipitated the eventual specialisation into the disciplines of hardware engineering and software engineering:

• Initially, analogue electrical circuits were used to model calculus, trigonometry and complex number theory;

• Analogue circuits became digital with the use of valves, enabling the use of Discrete Logical Algebra to raise the abstraction to the new digital level;

• Valves were replaced with transistors over time;


• Software changed from punch-tape instruction sequences to bit patterns stored in computer memory;

• Assembler mnemonics were abstracted from the binary machine code to program the machine;

• Higher level languages such as C and Pascal allowed greater productivity (Brooks, Jr., 1995; Davis et al., 1998), simplifying common tasks to language constructs (if...else etc.) and allowing greater portability;

• Early computer hardware had a significant influence on software, with high-level languages ‘strongly influenced by the von Neumann architecture: they were sequential and imperative at the beginning. Writing such a program was guided by the capabilities the machine offered, instead of by the ideas on how to solve the particular problem’ (Kloos, 1987);

• Hardware engineers use the logical circuit elements of the digital metaphor to raise the abstraction bar beyond the electrical physics of the transistor or semiconductor level: digital hardware design evolved from transistors to Boolean logic, from logic cells to HDLs capable of describing logic blocks and of algorithmic expression, and on to system-level design—Figure 1.1 illustrates the abstraction of working in an HDL (Figure 1.1(a)), and the ultimate schematics that end up being generated for fabrication (Figure 1.1(b));

• Languages evolved into object-oriented and dynamically typed forms, virtual machines running byte code, and higher-level domain-specific languages.

Current (~2010) 45nm fabrication technology is actually below the wavelength of the laser light used, and so immersion lithography is now employed—purified water or another liquid replacing air as the medium, as it reduces the effective wavelength (Owa and Nagasaka, 2003). For the purposes of comparison, 45nm transistors are 2,000 times smaller than a human hair (Intel Corporation, 2007a), and 400 or so would fit on a human red blood cell (Intel Corporation, 2007b); red blood cells have a mean diameter of approximately 8µm (Hillman et al., 2005). Vannevar Bush noted that the benefits of specialisation are not without consequences:

‘. . . there is increased evidence that we are being bogged down today as specialization extends. The investigator is staggered by the findings and conclusions of thousands of other workers—conclusions which he cannot find time to grasp, much less to remember, as they appear. Yet specialization becomes increasingly necessary for progress, and the effort to bridge between disciplines is correspondingly superficial.’ (Bush, 1945)



Figure 1.1: High Level Abstraction to Physical Layout. (a) Source VHDL Fragment; (b) Layout of Complex Digital Media ASIC, 65nm low power process (without metals).

1.1.4 Embedded Software Skills

There are differences in skill sets and differences in mindset and approach required in the development of software for embedded systems versus desktop software. For instance, Anderson (2008) remarks that: ‘Embedded systems are frequently resource-limited. These systems have low-power processors, possible battery operation, and limited memory and storage. And yet, there are increasing demands from consumers for high-end features. . . virtually all embedded system developers need an intimate knowledge of the hardware and how the software will interact with it. This knowledge is often times down to the hardware-register level of the device.’
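To make concrete what ‘knowledge down to the hardware-register level’ typically involves, the following minimal C sketch shows how an embedded driver might poll and write the memory-mapped registers of a peripheral. The base address, register offsets and status bit used here are invented purely for illustration; in practice they come from the device datasheet or the digital hardware team's register specification.

    #include <stdint.h>

    /* Hypothetical memory-mapped UART registers (address and bit values are
     * illustrative only, not those of any real device). */
    #define UART0_BASE            0x40001000u
    #define UART0_STATUS          (*(volatile uint32_t *)(UART0_BASE + 0x00u))
    #define UART0_TXDATA          (*(volatile uint32_t *)(UART0_BASE + 0x04u))
    #define UART0_STATUS_TX_READY (1u << 0)  /* transmitter can accept a byte */

    /* Busy-wait until the transmitter is free, then write one byte to the data
     * register; the volatile accesses stop the compiler optimising away reads
     * of a register that hardware changes underneath the software. */
    static void uart_putc(char c)
    {
        while ((UART0_STATUS & UART0_STATUS_TX_READY) == 0u) {
            /* spin until hardware sets the ready bit */
        }
        UART0_TXDATA = (uint32_t)(unsigned char)c;
    }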

Wolf (2006) describes the growth of the embedded computing profession, including some of its growing pains. Wolf states that: ‘Many consumer devices now run millions of lines of software code to support the range of services and connectivity that consumers demand. An increasing number of devices also allow loading and running of third-party software. In addition to creating a demand for new application-level software, this also makes the foundational software in the device more complex.’

Keating and Bricaud (1998) concur with this viewpoint, stating: ‘System-on-a-Chip designs have a significant software component. . . Software plays an essential role in the design, integration and test of SoC systems, as well as in the final product itself.’

The International Technology Roadmap for Semiconductors (ITRS) is a fifteen-year assessment of the semiconductor industry’s future technology requirements.


Figure 1.2: Generalised SoC Block Structure. (The block diagram shows: a micro-processor (possibly multi-core), a function-specific core, TCM RAM/System RAM, ROM, NAND Flash, E2PROM, PLL, clock and power management, a crypto accelerator, transport peripherals (e.g. USB, I2C, SPI, UART, SDIO, WiFi, Bluetooth, . . . ), ADC/DAC, JTAG, PDM/PWM, GPIO, a 2D/3D accelerator with TFT control, and a DMAC.)

It is sponsored by the semiconductor industry associations in the five leading chip manufacturing regions in the world: Europe, Japan, Korea, Taiwan and the United States. The ITRS Update (2008) recognises the trend of increasing software importance in the industry, confirming ‘software as an integral part of semiconductor products, and software design productivity as a key driver of overall design productivity’. Additionally, it notes that ‘embedded software. . . has emerged as the most critical challenge to SOC productivity’. There are a number of reasons why this is the case: for example, Goldman (2009) mentions platform-based designs (one chip, many product customisations in software), bug risk avoidance, extending the hardware lifecycle, and general product differentiation.
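As a small illustration of the ‘one chip, many products’ point above, production firmware frequently selects its feature set at run time from a product identifier programmed into on-chip fuses or read from board straps. The identifiers and function names in this C sketch are hypothetical, not taken from any particular device.

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical product variants built on the same piece of silicon. */
    typedef enum {
        PRODUCT_RADIO_BASIC = 1,   /* entry-level product                */
        PRODUCT_RADIO_PLUS  = 2    /* premium product: adds Bluetooth    */
    } product_id_t;

    /* Stand-in for reading an OTP fuse bank or board strap at boot. */
    static product_id_t product_id_read(void)
    {
        return PRODUCT_RADIO_PLUS;
    }

    /* The same chip and firmware image ship in several products; software
     * switches blocks and features on or off per product variant. */
    bool bluetooth_feature_enabled(void)
    {
        return product_id_read() == PRODUCT_RADIO_PLUS;
    }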

1.1.5 Architecture of an SoC device

Figure 1.2 illustrates the blocks that might be found in a contemporary (~2010) SoC device for the consumer electronics markets (e.g. cellular, digital home multimedia, etc.). This includes a boot ROM (and possibly ROM libraries with a patch table in RAM), various types of RAM, a microprocessor, function-specific core logic, general-purpose I/O, a variety of transport peripherals for talking to other chips and devices, and a variety of hardware accelerators.
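A rough sketch of the ‘ROM libraries with a patch table in RAM’ mentioned above: ROM routines are reached indirectly through a table of function pointers that boot code copies into RAM, so that firmware loaded later can redirect a faulty ROM routine to a corrected replacement. The service names and table layout below are hypothetical; every device defines its own scheme.

    #include <stdint.h>
    #include <string.h>

    /* Stand-ins for routines burned into mask ROM (illustrative only). */
    static int rom_read_otp(uint32_t addr) { (void)addr; return 0; }
    static int rom_crc32(uint32_t addr)    { (void)addr; return 0; }

    typedef int (*rom_service_t)(uint32_t arg);

    enum { SVC_READ_OTP, SVC_CRC32, NUM_SERVICES };

    /* Default entries live in ROM; the working table lives in RAM so that
     * individual entries can be replaced after tape-out. */
    static const rom_service_t rom_defaults[NUM_SERVICES] = { rom_read_otp, rom_crc32 };
    static rom_service_t patch_table[NUM_SERVICES];

    void patch_table_init(void)          /* run early by the boot ROM */
    {
        memcpy(patch_table, rom_defaults, sizeof patch_table);
    }

    void patch_table_install(unsigned idx, rom_service_t replacement)
    {                                    /* run by firmware loaded from flash */
        if (idx < NUM_SERVICES) {
            patch_table[idx] = replacement;
        }
    }

    int read_otp(uint32_t addr)          /* callers always go through the table */
    {
        return patch_table[SVC_READ_OTP](addr);
    }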


All of these blocks require individual verification, and the entire SoC design needs validation to ensure the blocks can be used in tandem to implement the product’s market use cases—to ensure, for instance, that there is sufficient bus bandwidth, that there are no resource conflicts in the system, and that all interlocking and signal handshaking is correct. Semiconductor programmes, and particularly SoC designs, typically require large quantities of deliverables to be managed in parallel with very tight timescales, with cross-functional teams, on multiple sites, and with limited project management resource.

1.1.6 Software in IC Design

The software aspects of semiconductor Intellectual Property (IP) design are often overlooked, and yet end up contributing disproportionately to the perceived quality of the end product. Many of the design flaws and oversights from the earlier hardware phase are left for resolution by software workaround. Additionally, tight market opportunities mean that whilst hardware teams possibly run late, software teams must deliver on time and to changing customer specifications. To some degree, semiconductor IP design is, by virtue of the very marketplace demands, an extreme example of an agile system development industry (Beedle et al., 2001).

Meanwhile, system architects and hardware design teams face the significant problem of trying to predict the pace of technology 2–3 years out, and to achieve what is called the market intercept point—making sure the technology is competitive and relevant at the time it becomes available in mass production.
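One common shape for the software workaround mentioned above is an erratum fix gated on silicon revision, so that a single driver binary behaves correctly on both flawed and corrected steppings of the chip. The register addresses, revision value and double-write behaviour in this C sketch are invented for illustration only.

    #include <stdint.h>

    /* Hypothetical chip-ID register carrying the silicon revision in its low byte. */
    #define CHIP_ID_REG  (*(volatile uint32_t *)0x4000F000u)
    #define DMA_ENABLE   (*(volatile uint32_t *)0x40020000u)

    static uint32_t chip_revision(void)
    {
        return CHIP_ID_REG & 0xFFu;
    }

    /* Invented erratum: on rev-A silicon a single write to the DMA enable
     * register may be dropped; later revisions behave correctly. */
    void dma_enable(void)
    {
        DMA_ENABLE = 1u;
        if (chip_revision() == 0xA0u) {
            DMA_ENABLE = 1u;   /* rev-A workaround: repeat the write */
        }
    }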

1.2 Summary of Research

The research presented in this thesis is concerned with investigating the environment and factors which influence system-on-chip development in this industry, with a specific focus on the inter-discipline team interaction that occurs between digital hardware and embedded software, and with the development of process patterns that address the particular problems faced. The aim of this research is the development of a grounded theory showing various categories of interactions that occur between the digital hardware and software development teams that work together on complex semiconductor products. The contribution of this research is in six parts:

(a) it identifies that digital IC hardware engineering is very similar in many respects to software engineering;


(b) it illustrates that the business models that survive in the consumer electronics semiconductor ecosystem rely on a tailoring of engineering work that is sufficient to meet market intercept windows;

(c) in contradiction to the original premise of the research, it shows that the social and geographical degrees of separation play a more significant role in adversely affecting and impeding performance than technical or techno-cultural issues between the two groups—specifically, it indicates that geographical separation of design teams is a considerable detraction from productivity potential in complex consumer electronics semiconductor projects;

(d) it acknowledges that, despite the shared ancestry, there is a growing separation of technical mindset between software and digital IC hardware;

(e) it provides strong evidence for the applicability of (situationally/context appropriate) Agile methods in the development of embedded devices, specifically for consumer electronics semiconductor devices;

(f) it presents a list of patterns of organisation and workflow for semiconductor projects that may help in the development process.

1.2.1 Outline of Thesis Structure and Content

In presenting an outline of the structure of this dissertation, it is helpful to first identify the separate phases that constitute empirical research—research in which the knowledge or theory derived from it is the result of observations or experiments (Creswell, 2003):

• Identification of a research problem;
• Review of the existing literature;
• Specification of a purpose;
• Collection of data;
• Analysis and interpretation of data;
• Reporting on and evaluating data.

This introduction chapter provides for the identification of the specific research problem. With a view to matching the remaining phases presented op. cit., the remainder of this dissertation is divided into five parts, as illustrated in Figure 1.3:



• Part I (“Initial Literature Review”) presents a literature review to describe the semiconductor market place, compare the disciplines of digital hardware and embedded software, and formulate research questions to tackle the identified research problem;

• Part II (“Research Design”) describes the steps employed to plan and conduct the research;

• Part III (“Theoretical Model and Solutions”) describes the outcomes of the research, presents my validation of the research—through post-mortem pre-existing literature review—and identifies scope for future work;

• Part IV (“Appendices”) provides some ancillary information, in the form of a set of appendices.

Chapter 1: Introduction
Part I: Initial Literature Review — Chapter 2: Semiconductor Ecosystem (identifies and introduces Research Problem); Chapter 3: Etymology of Hardware and Software (formulates Research Questions); Chapter 4: Digital Hardware Flows vs. Software Development Processes
Part II: Research Design — Chapter 5: Research Design and Investigation
Part III: Theoretical Model and Solutions — Chapter 6: Emergence of Theoretical Model; Chapter 7: Emergent Toolbox of Patterns for SoC Project Organisation; Chapter 8: Validation of Theoretical Model; Chapter 9: Conclusions

Figure 1.3: Thesis Structure.


Part I—Initial Literature Review Literature in a qualitative study is commonly employed in a deductive manner, as a basis for advancing and framing the research problem, and for validating the results. Bearing this in mind, the use of scholarly literature serves three distinct aims within this thesis:
• Firstly, as this research is an exploratory qualitative analysis of digital hardware and software team interaction, literature is used at the outset to set the scene (as opposed to any direct hypothesis construction) and to establish the ecosystem of the semiconductor market. In this regard, Chapter 2 (“Semiconductor Ecosystem”) provides some background information on the ecosystem of semiconductor development, including detail on the various business models that are present. It describes the semiconductor marketplace, especially its market dynamics.
• Secondly, literature is used to provide a basis for the comparison of digital hardware and software activities as very closely related manifestations of computational logic. Chapter 3 (“Etymology of Hardware and Software”) presents an etymology of embedded systems to clarify what constitutes digital hardware and software. It traces the historical roots of computing science in order to further highlight the common ancestry between these disciplines. Chapter 4 (“Digital Hardware Flows vs. Software Development Processes”) compares and contrasts software and hardware development flows, and presents some conceptual similarities between the various process stages.
• Thirdly, literature is used to validate the outcomes from the research. This is presented later, in Part III (“Theoretical Model and Solutions”), Chapter 8 (“Validation of Theoretical Model”).

Part II—Research Design Chapter 5 (“Research Design and Investigation”) discusses the conceptual framework used in designing the research. This framework is used to choose and develop a suitable research strategy, based on the inputs of what is known about the topic, what the intent of the study is, and the specific scientific methods applicable. It presents a discussion of the strategy used in the research. The methods employed for data collection and analysis are examined in detail. It describes the selection of candidates, the interview process, and the coding of data collected during interviewing. Twenty semi-structured interviews were held between 2006 and 2008 with field experts in various organisations across Ireland engaged in semiconductor development. In this research, the unit of analysis (the major entity being analysed) is primarily the development team, whereas the unit of observation (on which data is collected) is that of the individual developer.

Part III—Theoretical Model and Solutions Chapter 6 (“Emergence of Theoretical Model”) describes how the codes obtained from data analysis were subsequently used in the construction of a grounded theory illustrating the areas of development stress and impediment in digital hardware and software team interaction. Chapter 7 (“Emergent Toolbox of Patterns for SoC Project Organisation”) presents a toolbox of semiconductor project design patterns, which emerged through analysis of the data collected. It is my contention that these patterns of management and project organisation, evolved during the course of the research, are viable techniques for empowering team and project productivity. Chapter 8 (“Validation of Theoretical Model”) validates the results of the research, with reference to pre-existing literature—mainly sourced from other areas of study within software research—for example, the phenomenon of Global Software Development (GSD). Finally, Chapter 9 (“Conclusions”) summarises this work, provides conclusions to my identified research questions, implications for practitioners and educators, and also proposes potential opportunities for follow-on research.

Part IV—Appendices Appendix A (“Interviewee Biographies”) presents biographies for the main individuals interviewed in the collection of data for this thesis. Appendix B (“Interview Guides”) presents the initial interview guides, used to frame and semi-structure discussions. Appendix C (“Historical Roots of Computational Logic”) describes the evolution and development of computing, placing the (relatively) recent specialisations into the disciplines of Hardware and Software Engineering in an historical context, and is useful background reading for Chapter 3 (“Etymology of Hardware and Software”). Appendix D (“Digital Hardware Toplevel Design”) looks at the steps involved in digital hardware toplevel design, for the purpose of better understanding the tasks involved—in particular, the additional considerations that have an impact upon digital hardware design. Appendix E (“Interview Codes”) describes how the coding of collected data was converted through subsequent analysis into various facts.


Appendix F (“Academic Papers”) presents some of the work contained in this thesis, as published in other forums during the course of the work.

1.3 My background The role of the researcher as the primary data collection instrument within a study requires careful treatment, in order to transparently convey any personal biases or opinions at the outset of this work. Locke et al. (2000) suggest that the researcher’s individual viewpoints and effect on the research environment can contribute positively to the research. As my role in this dissertation is that of the active researcher, I will now present my background in order to address any concerns of ‘strategic, ethical, and personal issues’ (Creswell, 2003; Locke et al., 2000). I graduated from the University of Limerick with a Bachelor of Engineering in Computer Engineering in 1995, and a Master of Engineering in Computer Engineering by Research and Thesis (Griffin, 1997). During these studies, I was exposed to a mixture of analog and digital hardware electronics theory, although I chose to specialise in software. From 1997, I worked as a research officer in the University on a variety of telecommunications-related projects. From 1999 through to 2005, I worked in an IP creation company, developing wireless communication stack software for mobile devices. I was particularly involved at the hardware/software interface: working on such tasks as hardware abstraction layer development, low-level real-time state machines, and device drivers. More recently, from 2005 to 2010, I have worked as IC Firmware Manager in a fabless semiconductor company, where we have been concerned with the integration of third-party intellectual property, the development of firmware for Mobile Digital TV, Digital Radio and Internet Radio/Connected Audio receiver chipsets, and the verification of complex ASIC designs. Bolker (1998) suggests that ‘Some people seem always to have known what they want to write their dissertations about . . . ’ Whilst not entirely true in my case, my métier is semiconductor firmware, and I have a very deep professional interest in better understanding how to get digital hardware and embedded software teams working effectively towards a common goal. As such, Bolker (op. cit) might term me one of: ‘. . . the lucky ones who have a burning question that they want to spend time answering . . . You follow your curiosity, and, if you’re lucky, your passion.’


Part I

Initial Literature Review


Chapter 2 Semiconductor Ecosystem



. . . when they were first invented, nobody could have imagined the possible applications of shrinking transistors down until they were almost invisible—in slabs of processed sand!—and then wiring millions of them together into a circuit. Turing, von Neumann, Shannon and Shockley couldn’t possibly have anticipated many of the modern-day applications of their work—WiFi and the web, iPods and the Internet.
— MARTYN AMOS, taken from ‘Genesis Machines: The New Science of Biocomputing’, Atlantic Books, London, 2006.

2.1 Introduction

This chapter discusses the ecosystem of the consumer electronics marketplace for semiconductor devices. The reason for doing this is to establish a view of the competitive landscape within this industry, and to clearly identify the distinct business models involved and their related commercial pressures and requirements. First, I describe the various functional roles in the ecosystem, and how they correspond to business models. Then, I present an insight into semiconductor manufacture, and the proliferation of embedded semiconductor software. Finally, I discuss the global landscape for semiconductor development in which Irish semiconductor companies compete.

2.2 Semiconductor Market Semiconductors are tiny devices made from silicon. The semiconductor industry produces microchips, or integrated circuits (“ICs”), that are required to build most common electronic products.


These devices have enabled tremendous changes to modern society—improving the safety and comfort of our standards of living, and providing us with new forms of interactive entertainment. Writing in Byte magazine on the 25th anniversary of the microprocessor, Wayner (1996) describes how, after the arrival of devices ‘small enough and cheap enough to fit inside business machines, toys, appliances, tools, and entertainment devices’, the ‘world hasn’t been the same since’. The same article predicts that its analysis of the microchip’s ‘impact on society is only a snapshot in time. The revolution continues’. The market for semiconductor products is endemic in the value chain of the electronics industry. The virus-like connotations associated with the word endemic are deliberate—as silicon capacity has increased and expectations of functionality have swollen, developers have relied on ever-more sophisticated components of software logic for reasons of reducing time to market, reducing risk and increasing flexibility. Kumar (2008) describes how: ‘The semiconductor industry has been a fertile ground for the nurturing electronics industry. Over the last 40 years, various different markets have driven the growth of the industry and have provided the impetus for innovations.’

The market for semiconductors has expanded by 14% per annum on average between 1960 and 2002, while the cost per function has historically fallen by approximately 25% (Wojahn, 2002). Sales of semiconductors amounted to US $143 billion in 2001, and US $272 billion in 2007—as shown in Table 2.1¹.
Table 2.1: Semiconductor Market Size

Year    % Growth    Amount (US Dollars)
2007    -           $272.0 billion
2008    -2.0%       $266.6 billion
2009    -9.4%       $241.5 billion∗
2010    6.4%        $257.9 billion∗
2011    10.8%       $284.8 billion∗

∗ Figures for years 2009 through 2011 are forecasted estimates. Taken from LaPedus (2008a,b).

The semiconductor market growth rate, although impacted in recent years by the global economic downturn, is roughly twice that of the electronics industry in general. The reason for this faster growth rate is the increase in semiconductor content in end electronic products.

1 For up to date figures, the Semiconductor Industry Association (http://www.sia-online.org/) publishes its Global Sales Report, a three-month moving average of sales activity.


[Figure 2.1: Semiconductor Ecosystem. Diagram of the functional roles in the ecosystem: IP Creation, Design Services, Intellectual Property Provider, ASIC Design, Chip Manufacture, Foundry Partner, Packaging House, Product Design, Design House, ODM, OEM, Brand and Sales Channel.]

Semiconductor manufacturing processes are very sophisticated and there is a significant capital investment in plant and equipment. The semiconductor fabrication plants (“fabs”) require larger and larger quantities of production to break even. In order to achieve this, they need the new product designs that will excite and entice the end markets. As these designs become increasingly intricate, the investment in doing the design work internally quite often becomes prohibitive. To address this, a very sophisticated ecosystem has emerged around the development of semiconductor systems—my own interpretation of which is illustrated in Figure 2.1. In the ecosystem, each oval represents a functional aspect/role, and not necessarily a discrete organisation. Depending on their business models and technical competencies, companies may take on many roles, or specialise in as few as one. Wolf (2006) states that:

‘The embedded computing application space is growing and will continue to do so for quite some time as we figure out better ways to design new microprocessors and better ways to design embedded software.’


2.2.1 Intellectual Property Intellectual Property (IP) creation is the function of generating hardware and software designs to address market requirements. With more discerning consumer markets comes the demand for more sophisticated, featureful products, in tandem with shortening market windows. This results in potential competitive advantages to being first to market2. To address this demand for accelerated design cycles, the chip manufacturer has a few choices: either develop the intellectual property for the design internally, out-source it to an intellectual property provider (paying a license fee and potentially a royalty per unit volume), or underwrite its development via a bespoke design services company. Many companies provide semiconductor IP—essentially reference designs for what are anticipated to be large-volume consumer devices and peripherals. Intellectual property can consist of analog hardware, digital hardware, and software—delivered variously as source code, obfuscated source code, or libraries.

2.2.2 Chip Manufacture The chip manufacturing function has a number of distinct aspects to it. With modern sophisticated System-on-Chip products, IP from a variety of sources (internal, from IP providers, open source, etc.) is integrated into a single design. This design is validated and verified through various techniques. It is floor-planned in terms of the placement of various functionality in silicon, and a final layout created. The layout is in the form of a vector graphics file format (GDS-II) that the foundry uses to create the silicon masks and wafers. Once the design has reached sign-off as being fit for purpose, it is “taped out”. Tape-out is a historical term, with its roots from a time when the design was stored on magnetic tape. Tape-out today involves the layout vector image of the silicon being transferred electronically to the foundry. Semiconductor ASIC devices are incrementally created as a number of layers—some of which are the actual chemical dopings for the transistors, the rest of which are various layers of insulator and logic in metal. A mask set corresponds to the collection of masks required to create each of these layers through some form of ultraviolet etching / acid washing. Creating a mask set requires high-precision engineering. Current state of the art technology is 22nm (2010), in which the track widths are below the wavelength of the ultraviolet light. This requires some innovative diffraction techniques in order to successfully etch the tracks at the required tolerance levels. As a result, mask creation is a very expensive process.

2 Although serial-entrepreneur Jason Calacanis remarked that ‘the first guy up the hill gets the arrows’ when discussing early mover advantage in business (Calacanis et al., 2008).


Figure 2.2: 8-inch silicon wafer during manufacture.

Image courtesy of Taiwan Semiconductor Manufacturing Co., Ltd.

Figure 2.3: Wafer is cut into separate silicon die, which are then individually wire-bonded and packaged.


Figure 2.4: 65nm Digital Radio ASIC

Silicon die encased and wire bonded in a QFN plastic package. Image courtesy of Frontier Silicon Ltd.

Figure 2.5: Decapped ASIC in QFN package, showing bond wiring

The silicon die is approx. 18mm2 , and the plastic package is approx. 100mm2 . Image courtesy of Gavin Marrow.


The cost of a mask set varies depending on the process technology, but the figures in Table 2.2 are a rough approximation for 2008:

Table 2.2: 12 inch IC Mask Costs—2008

Technology Process    Mask Cost (US Dollars)    Wafer Cost (US Dollars)
130nm                 $500,000                  (not specified)
90nm                  $800,000                  $1,000
65nm                  $1,000,000                $4,000

SOURCE: Interviewee David, personal conversation, April 2008. The cost of a 12-inch wafer is approximately $4,000, so the main upfront cost is the mask set. Subject to anticipated volumes of chip sales, the mask set may or may not be the significant portion of the overall project cost. Depending on the size and shape of the integrated circuit being etched, each wafer will yield a number of individual silicon ICs, called die. Each die needs to be tested, as the process usually performs to a certain level of expected yield. Good die are then packaged at a Packaging House in plastic, with bond wires added. This is the visual form factor that people are most familiar with when silicon or computer chips are mentioned. Packaged parts are tested again to ensure the packaging is okay. There are various packaging format options, but the rule of thumb is that the smaller the size, the more sophisticated and costly the packaging technology. Prototype hardware may often be scheduled for a shuttle run, to lower costs, as only small quantities are required. Shuttles are a mechanism by which the fabrication plants fill temporary gaps in their production lines. Each shuttle wafer may contain silicon from a number of vendors. Due to the extra organisation overheads involved, shuttle runs cost slightly more than a full wafer, but each vendor shares proportionally in the cost, thereby reducing the individual outlay and risk. For example, one tenth of a 12-inch shuttle wafer (90nm) could cost $90,000 whereas a full mask set could cost $800,000 (SOURCE: Interviewee David, personal conversation, April 2008). Another consequence of the additional organisational overheads of shuttle runs is the inflexibility of the date the shuttle “leaves”—if a company does not have its design ready in time, the shuttle will leave without them. With such high costs in the creation of the mask, it is important that every possible preventative and anticipatory measure is taken to ensure the success of the design. The cost of the mask set is typically apportioned across the anticipated market sales volume of the chip, so without a successful product it is very difficult to recoup this significant investment expenditure.
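As a hedged, illustrative calculation (the die-per-wafer count, yield and sales volume below are assumed round numbers, not measured data; the mask and wafer prices are the 65nm figures from Table 2.2), the per-chip silicon cost can be approximated as:

\[
\text{cost per chip} \;\approx\; \underbrace{\frac{\$1{,}000{,}000}{2{,}000{,}000\ \text{units}}}_{\text{mask set amortised over volume}} \;+\; \underbrace{\frac{\$4{,}000}{2{,}500\ \text{die} \times 0.90\ \text{yield}}}_{\text{wafer cost per good die}} \;\approx\; \$0.50 + \$1.78 \;\approx\; \$2.28
\]

Even under these assumptions, the mask amortisation is acutely sensitive to volume: at 200,000 units instead of 2 million, the mask contribution rises from $0.50 to $5.00 per chip, which is one reason a missed market window can sink the economics of a design.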


2.2.3 Product Design Companies Successful packaged parts are either sold directly to product design companies (“chip-down” sales), or, depending on the complexity of the part and its required external and ancillary circuitry, sold as modules of chip components for ease of integration. The competitiveness of a particular part is determined by a number of factors:
• Its technical competitiveness—can it compete with solutions from peers within the marketplace?
• Its direct cost—which consists of silicon and packaging costs, amortised IP licensing costs and royalties, patent pool licenses, and margin;
• Its size—size affects cost (wafer cost divided by number of chips per wafer determines the silicon cost of the chip), and for certain market segments (e.g. small form-factor mobile handsets) it also affects technical competitiveness and differentiation;
• Its power consumption, particularly for mobile devices but also increasingly to address eco-aware end consumers (SOURCE: Interviewee David, personal conversation, April 2008).
These factors and others are discussed in greater detail in Subsection 2.2.7. The number of external passive and active components that a product design company must use to successfully integrate a SoC product into their design has a direct bearing on the bill-of-materials (BoM) costs of any product that incorporates it; and, with typical volumes in the millions for successful chips, even a couple of cents per component adds up pretty quickly (see the worked example below). Hardware developers (and to some degree also embedded software developers) need to be cognisant of the implications of their actions on the BoM cost through the realisation of the design. Anderson (2008) elaborates further on this: ‘Due to the desire to keep costs down, embedded developers need to squeeze every last bit of performance from the hardware. . . So, at least some embedded designers need to understand thermal loading and MIPS/watt ratios.’
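As a small worked example of the bill-of-materials point above (assumed figures only, for illustration): three additional external passive components at €0.02 each, on a design shipping 5 million units, add

\[
3 \times \text{€}0.02 \times 5{,}000{,}000 = \text{€}300{,}000
\]

to the aggregate BoM cost across the production run.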

Chip-down designs are cheaper for larger volumes, but require greater technical acumen on the part of the product manufacturer. Original Equipment Manufacturers (OEMs) take product reference designs from a Design House and manufacture product from them, including creating consumer-facing plastics enclosures, and shrink-wrapped product with manuals etc. Original Design Manufacturers (ODMs) are like OEMs, with the exception that they are capable of taking semiconductor modules or chips, and creating product themselves. In general, companies that sell mobile cellular phone devices are incredibly sophisticated and capable of handling sophisticated levels of technology integration—whereas some other commodity consumer electronics manufacturers are just about able to handle board manufacture. Sometimes the large OEMs/ODMs become well-known consumer brands, such as Samsung, Sony and LG. At other times, established brands buy devices directly from the OEMs or ODMs, and perform the branding and marketing themselves.

2.2.4 Characterisation Once silicon returns from the fab, it needs to be characterised for mass production across a variety of metrics and measures:
• Correct functional operation—despite the rigorous level of testing, verification and validation pre-tape-out, there is no guarantee that what was taped out is functionally correct;
• Power consumption in various operating use-cases (generally it is only possible to crudely estimate power consumption values through the use of spreadsheet models pre-tape-out; a sketch of such a model is given below);
• Performance of any analog components, particularly analog-to-digital converters, RF tuners, high-speed PHYs etc.;
• Operation across various temperature ranges;
• EMC emission testing.
At the same time, production testing procedures need to be established in order to quickly test packaged parts on the production line. This testing is usually charged for by the second, so the quicker a product can be tested for basic functionality and wiring, the better. With all these new electronic features and components comes, of course, the need for extra software (and complexity) to control them.
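A hedged sketch of what such a pre-tape-out spreadsheet power model typically reduces to (the block powers and duty cycles below are assumptions chosen for illustration, not figures from any real project):

\[
P_{\text{est}} \;\approx\; \sum_{i} P_{\text{active},i}\, d_i \;+\; P_{\text{leak}}
\qquad\text{e.g.}\qquad
(40 \times 0.3) + (25 \times 0.8) + (10 \times 1.0) + 5 \;=\; 47\ \text{mW}
\]

where each block i contributes its active power weighted by its duty cycle d_i in the use-case, plus static leakage. Characterisation on real silicon then replaces these estimates with measured figures per use-case.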

2.2.5 Embedded Software Embedded software is found throughout the modern world. It is used to control many different electronic products that are not normally considered “computer” products, such as televisions, DVD players, automotive systems, flight control and navigation systems, mobile phones and digital cameras. The embedded software market is a large market, estimated to be worth around $23 billion in 2003, as shown in Figure 2.7. Software has been described by the International Technology Roadmap for Semiconductors (ITRS Update, 2008) as ‘an integral part of semiconductor products’, with software design productivity recognised as ‘a key driver of overall design productivity.’ Furthermore, ‘embedded software . . . has emerged as the most critical challenge to SOC productivity’ (ITRS Update, 2008).

Aart de Geus, chairman and CEO of EDA company Synopsys (as of 2009), noted that: ‘The growth of software has been faster than hardware and today (2008), semiconductor companies are hiring more software engineers than hardware engineers. This brings great opportunities and huge challenges. Thinking that “software is great, because it’s easy to change” is a recipe for disaster when people make little changes and, before you know it, everything becomes very difficult to verify . . . Very complex systems need (an) enormous amount of prototyping and verification.’ (Ben-Artzi, 2008)

Borgstrom and Gopalan (2009) explain that whilst ‘embedded software used to be a minor or nonexistent deliverable for typical semiconductor devices’, at 45nm and beyond ‘software accounts for a full 60 percent of total chip-development cost, with major implications on how chips and systems are verified’.

Wolf (2006) suggests that between 500,000 and 800,000 people were employed in 2006 as embedded computing programmers. In addition, the increasing proliferation of electronic media within the home (e.g. digital cameras, MP3 players, home media centers, mobile digital TV) is driving a new wave of growth within the consumer electronics segment of the industry. Wolf (2006) describes how:

Embedded software usually executes on internal micro-controllers or Digital Signal Processors (DSPs) used to control other hardware components. These platforms are often difficult to develop and test during the development cycle, and equally difficult to replace and repair when deployed in the field—for reasons of cost, accessibility, etc. Quite often, the applications for embedded software necessitate that the software be extremely reliable, very efficient and compact, and deterministic and precise in its response to rapid and unpredictable sequences of external stimuli. For example, the safety requirements for systems driven by embedded software can be, for certain applications, very high. Many embedded systems perform critical functions for the safety of life (e.g. automotive vehicle control systems, flight control and navigation). As a result, it is of considerable importance that the development activities are controlled with highly tuned and effective quality processes that ensure superior quality levels. In addition, as these embedded systems include larger feature sets as standard to maintain their competitiveness (de Geus, 2008), they require more and more software. The net effect of this is that the designs become increasingly complex, and thus the role of quality processes becomes more and more significant. Embedded software for the semiconductor market generally has two significant competing trade-offs:


Figure 2.6: 65nm ASIC—bare silicon die.
Image courtesy of Interviewee Michael.

[Figure 2.7: Embedded Software Market—2003. Pie chart of market share across the telecoms/data comms/computing, consumer electronics, automotive, industrial automation, office automation and miscellaneous segments. Taken from Electronic Times 11th April (2003).]


• Low resource usage (e.g. ROM/RAM/CPU processing) is usually of paramount importance to keep production costs low, and to ensure power efficiency and battery life;
• End-users expect a high degree of reliability from their embedded devices, and there are potentially high safety requirements also, depending on the application.
An added complication is that the embedded software task is typically sandwiched between hardware stabilisation and maturity prior to tape-out, and the tape-out/release date—and quite often the software time scheduled in project planning gets squeezed to fit fixed date commitments such as tape-out. Wolf (2006) proclaims that: ‘Embedded software needs to solve problems imposed by the underlying digital hardware. It must also deal with the challenges of interfacing with real-world physics.’

Wolf (op. cit.) further proposes that embedded software is increasingly replacing digital logic where cheap microcontrollers allow reconfigurable software-based solutions at negligible incremental cost.
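As a minimal, hypothetical illustration of this point (the pin helper, timer rate and variable names are assumptions, not taken from any specific device), the fragment below implements a simple LED-dimming PWM entirely in software from a periodic timer interrupt, behaviour that might otherwise require a small dedicated hardware block:

```c
#include <stdint.h>

/* Hypothetical GPIO helper assumed to be provided by board support code. */
extern void gpio_set_led(int on);

#define PWM_PERIOD_TICKS 100u            /* 100 timer ticks per PWM period */

static volatile uint8_t pwm_duty = 25;   /* 0..100, i.e. 25% brightness */

/* Called from a periodic timer interrupt: software stands in for a PWM unit. */
void pwm_timer_tick(void)
{
    static uint8_t tick;

    gpio_set_led(tick < pwm_duty);                    /* high for duty%, low otherwise */
    tick = (uint8_t)((tick + 1u) % PWM_PERIOD_TICKS); /* wrap at the period boundary */
}
```

Because pwm_duty is just a variable, the "logic" is reconfigurable at run time, which is the negligible-incremental-cost flexibility described above; the trade-off is that it consumes CPU cycles and tightens the firmware's real-time constraints.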

2.2.6 The Global Marketplace—and Global Competitive Landscape

Table 2.3: IC Technology Evolution 1971–2008

Criterion          4004       Pentium® Pro    Core™ 2 Duo T9800 (“Wolfdale”)    Core™ i7-975 Extreme
Year               1971       1996            2008                              2009
Transistors        2,300      5,600,000       410,000,000                       731,000,000
Die Size           12mm²      196mm²          107mm²                            263mm²
Transistor Size    10µm       0.35µm          0.045µm (45nm)                    0.045µm (45nm)
Clock Speed        750kHz     200MHz          2.93GHz                           3.33GHz
Memory Capacity    4KB        64GB            64GB                              64GB
Package Size       18 pins    387 pins        775 pins                          1366 pins

Taken from Intel Corporation (2008, 2010). As of 2009, Semiconductor Fabrication techniques continue to (broadly) follow Gordon Moore’s Law (Moore, 1965), which states that the number of transistors on a chip doubles approximately every two years. In addition, incremental improvements in fabrication techniques have resulted in successively smaller chip geometries—greater functionality requiring less silicon area and consuming lower power than the previous generation.
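Stated as a rough formula (a sketch of the two-year formulation only, using the 4004's 2,300 transistors from Table 2.3 as the 1971 baseline):

\[
N(t) \;\approx\; N_{0} \cdot 2^{(t - t_{0})/2}
\qquad\Rightarrow\qquad
N(2009) \;\approx\; 2{,}300 \times 2^{(2009-1971)/2} \;=\; 2{,}300 \times 2^{19} \;\approx\; 1.2 \times 10^{9}
\]

which is within a factor of two of the 731 million transistors of the Core™ i7-975 in Table 2.3, consistent with the law holding only 'broadly'.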


[Figure 2.8: IC Complexity - Moore’s law for memory and microprocessors. Transistors per die, 1960–2010, plotted for MOS memory arrays (1K through 4G) and microprocessors (4004 through the Pentium® 4 and Itanium™), against the 1965 and 1975 projections and actual data.]

This is plotted on a semi-logarithmic scale. The correction around 1980 shows the ‘law’ is an approximation. Taken from Moore (2003). Figure 2.8 illustrates the rate of progress in semiconductor technology since Moore first postulated the law in 1965. Table 2.3 illustrates the rate using specific Intel examples, beginning when they created the first microprocessor in 1971 (Gwennap, 1996). In tandem with Moore’s Law, Kryder’s Law (Walter, 2005), named after storage pioneer Mark Kryder (CTO, Seagate Technology), is seeing computing storage double annually (Leventhal, 2009). Walter (2005) notes:

‘. . . smaller, high-capacity drives are spawning not only new products and applications but entirely new industries. . . Such devices may relegate Moore’s Law to secondary status. “Today the density of information we can get on a hard drive is much more important to enabling new applications than advances in semiconductors,” Kryder remarks.’

Not only has the pace of technological development and achievement in semiconductors been remarkable, but it has also contributed to a socio-economic invisible hand effect on global society. Toynbee (1972) predicted the: ‘“annihilation of distance” by the progress of technology applied to physical means of communication opens up the vista of a future society that will embrace the whole habitable and traversable surface of the planet, together with its air-envelope, and will unite the human race in a single comprehensive society.’

More recently, Friedman (2007) describes the: ‘cascade of technological and social shifts that effectively levelled the economic world and accidentally made Beijing, Bangalore and Bethesda next-door neighbours’.


‘Annihilation of distance’ is very obvious in the modern-day consumer electronics industry. Today, the modern semiconductor business has conceded much of its manufacturing to the Far East. Semiconductor companies continue to outsource for reasons such as cost efficiency and de-risking demand uncertainty (Tsai and Wu, 2005). The largest semiconductor fabrication plants available for use by fabless companies are located in Taiwan—TSMC and UMC. This has left western economies with the traditionally higher-value stages in the chain—the design, marketing and branding. Many Apple iPods, for instance, bear the mark “Designed by Apple in California. Assembled in China”. The Chinese government has maintained a policy of prioritising support for the semiconductor industry in order to solidify and grow China’s position in semiconductor design and production—so much so that in addition to manufacture, China is becoming a major centre for IC product research and development, and is expected to have a global IC market share of 39% by 2010 (Ko, 2007). Asian economies have been busily scurrying up the value chain of this technology. Technology-focused cultures such as Korea, Japan and Taiwan have produced semiconductor devices of increasing intricacy and sophistication. Whereas once very few Asian brands outside of Sony could command tier 1 pricing for their product, many are now establishing branding beachheads with discerning western consumers. Although the Finnish company Nokia still holds the lion’s share of sales in the mobile cellphone industry, Korean manufacturers such as Samsung and LG Electronics are snapping at their heels—bringing innovative, highly featureful products to market in quicker time-frames than ever before. Camposano (2004) describes these developments in the Asian IC design community as follows:

In comparison, the ‘deverticalization of a mature high technology’ and the ‘emergency of low-cost countries and delocalization’ has meant that the ‘European design community has to reposition itself on high value solutions’ (Pollet, 2004)3. In fact, fabless appears to be a particular ‘European weakness’, with a market that is ‘small and hardly growing’ (Pollet, op. cit).

2.2.7 Maintaining a Competitive Edge in the Marketplace De Geus, interviewed in Ben-Artzi (2008), remarks that:

3 ‘This presentation was made in 2004 in a meeting where Project Officers of FP6 (Sixth Framework Programme) projects had invited representatives from the semiconductor industry to expose their needs and recommendations for future FP in the field of design in semiconductors.’ (Jean-François Pollet, personal correspondence, June 2009)


‘it is amazing how financial stress drives the high-tech economy, both in creating challenges and new markets. It forces companies to do more with less, and creates opportunities to solve new problems.’

To achieve and maintain a competitive edge in this global market: ‘. . . a thorough understanding of the (semiconductor) company’s market is essential in making correct decisions, both technical and business related.’ (Kumar, 2008)

The challenges facing organisations in today’s ‘electronics marketplace’ include (Kumar, op. cit.):
• Short market windows and short Time to Market (TTM)—consumer electronics markets tend to have annual cycles with predictable demand spikes (e.g. Christmas, Chinese New Year, back-to-school in the US), and schedule delays can lead to missed market windows;
• Short product life cycles;
• Low Cost is King—according to Kumar (op. cit.), an annual reduction in Average Selling Price (ASP) of 15–30% is common;
• Low Power Dissipation—battery life is a significant issue for mobile devices (Catsoulis, 2002), and in addition to this the EU has identified and targeted the trend of increasing consumer electronics power consumption in the residential sector—29% in EU 27 Electricity consumption by Sector, 2005 (Bio Intelligence Service S.A.S., 2008)4;
• Form Factor—size, shape and pitch of the packaged IC all play a significant part in ensuring the thinnest, sleekest consumer electronics designs;
• Ever-increasing Chip Complexity—Kumar (op. cit.) also notes this as a market opportunity;
• Escalating Development Cost—this is in part influenced by the ever-increasing complexity, and requires careful planning for the business model;
• Software—differentiation through software is becoming more important as devices gain more computing power and become more featureful—and, almost paradoxically, the intangible nature of software integration is often more ‘sticky’ than the physical connections of hardware integration;
• Complete Solutions—silicon customers are looking for more mature “turn-key” product offerings, including reference platforms, test suites, software etc.

4 To address this, the EU introduced in 2005 the Energy-using Products directive, EuP (Council Directive 2005/32/EC), and implemented it in January 2009 mandating sub-1W power consumption in standby for home audio systems in 2010, and sub-0.5W in 2013 (Scheiber, 2008; Schischke et al., 2008).


Bearing these criteria for differentiation in mind, in order to compete with organisations in the aforementioned aggressive Eastern economies, it is necessary that Irish and European companies continue to out-innovate along a number of different dimensions:
• To optimise processes, protocols and procedures, leveraging efficiencies of development and production systems in order to offset their higher labour costs;
• To address market segments in new and interesting ways; to capture the consumer imagination with the next “killer application”.
De Geus further notes that: ‘The biggest cost for consumer goods is manufacturing, in terms of engineering time and tools. Investing in tools, methodologies and people that minimize this is actually the number one objective for cost.’ (Ben-Artzi, 2008)

This is particularly true in the early ASIC design and development phase (pre-operations and volume production)—although only a fraction of the overall timeline from inception to revenue, this earlier stage of the schedule is fraught with technical risk, and it is extremely expensive to rework hardware design flaws (functional and architectural) if not caught at this point (Kumar, 2008). At one point, the fact that Ireland had become ‘silicon capable’ (Dunphy and Walsh, 2009) was a significant feather in the cap of those in the Irish development agencies with remit for foreign direct investment. Boustani (2008) remarks on the growth of the Irish economy during the height of the mid-2000s property boom:

However, the global downturn was not kind to the Irish economy, with ‘the housing property bubble, international credit crisis, rising inflation and interest rates, create a perfect storm to challenge the confidence the country has in its future’: ‘According to a report by the Steering Group, increasing R&D may be a challenge to Ireland, even though investments tripled during the 1990s to reach C917 million in 2001. . . . The low investment in R&D to support indigenous businesses raises questions about whether Ireland aims to support them, or is simply funding education of the young to support MNC growth.’

5 Multi-National Corporations


Richardson and von Wangenheim (2007) note that SMEs6 are ‘fundamental’ to the economic growth of many nations, including Ireland. In Ireland, the demise of Digital Equipment Corporation’s operations in Galway led to a number of indigenous start-ups and SMEs in the semiconductor industry; one of the descendants, Parthus Technologies, had a successful flotation on the NASDAQ and London stock markets in mid-2000, spawning a new generation of small indigenous organisations (such as Silansys, Redmere, Movidius, Ikon Semi). SME organisations are a major source of the development of indigenous entrepreneurial skills, innovation and employment. Additionally, they play a pivotal part in the conversion of innovative research into the potential for real economic return. The challenges to SMEs are significant—they often have problems securing cash flow and funding, and thus the risks of market under- or over-shoot are considerable. Funding (and thus survival) can be heavily dependent on achieving strategic technical and business milestones to agreed schedules. Richardson and von Wangenheim (2007) identify the need of SMEs for ‘efficient, effective software engineering solutions’ to help them achieve their targets.

With this background in mind, and considering the more difficult marketplace for consumer electronics and the increasing design threat from Asia, it is important that the Irish semiconductor industry ensures its design teams of all disciplines, but especially digital hardware and embedded software, work together in achieving collective corporate goals as efficiently as possible. Without the successful execution of this phase, there is no product to manufacture and sell. This leads directly to the statement of my initial research problem.

Research Problem: The research presented in this dissertation examines a specific part of ensuring that Irish SME organisations engaged in semiconductor design can successfully compete in the global marketplace: that of improving development of complex semiconductor systems, through establishing a better understanding of the causes of inter-working difficulties that exist between embedded software and digital IC hardware teams. Given that a team is more than the sum of its individual members, perhaps the problems and frustrations experienced at an individual developer level combine into a greater exhibition of dysfunction at the inter-team level—and thus, whilst the team level is the primary unit of analysis, it is relevant to consider how individual developer issues affect team interaction. To this end, I will define the notion of development stress as encompassing:

6 SMEs are defined in European Commission (2003).


• Any cause of developer dissatisfaction or negative experience with the product or service designed; • Any aspect of inter-team relationships that causes an adverse material change to the project plans and market aspirations; • Any negative influence on developer productivity.

2.3 Summary In this chapter I introduced the consumer electronics marketplace for semiconductor devices. I described the various entities in this ecosystem, from intellectual property providers to fabless semiconductor designers through to contract manufacturing, sales channels and brands7 . I discussed the global nature of both the marketplace and of the competition, and presented the need for Irish companies to strive for innovativeness and efficiencies in order to compete internationally. From this, I identified my research problem as being the establishment of an etiology of inter-working difficulties that are prevalent between embedded software and digital IC hardware teams—an understanding of their origins and causation in order to better tackle their influences. In the next chapter, through the use of pre-existing literature, I will examine what differentiates hardware from software, and define the terms for the purposes of my work. I will review the genealogy of both domains and highlight the potential for the cognitive influences of technical specialisation on logic design. Based upon this, I will formulate and present a set of research questions to be answered in the course of this dissertation.

7 It is remarkably difficult within the scope of a single chapter to capture all the complexities of semiconductor business models. There are many phases of operational and logistical complexity post the ASIC design stage which are outside the scope of this dissertation. I refer the interested reader to Kumar (2008), an excellent practical discourse on the logistics and business planning for a fabless semiconductor start-up venture.


Chapter 3 Etymology of Hardware and Software



’Tis but thy name that is my enemy; thou art thyself, though not a Montague. What’s Montague? it is nor hand, nor foot, nor arm, nor face, nor any other part belonging to a man. O, be some other name! What’s in a name? that which we call a rose by any other name would smell as sweet;
— WILLIAM SHAKESPEARE, 1564 (baptised)–1616, English Writer and Dramatist. Taken from ‘Romeo and Juliet’, 1594.

3.1 Introduction

In attempting to deal with the research problem identified in section 2.2.7, it is important to understand hardware and software design disciplines in terms of their similarities and their

differences. This will enable us to better appreciate any dichotomies of thought amongst their disciplines and disciples. Steiner (2008) aptly describes the importance of clarity of definition of these technical disciplines as follows: ‘Computation is so pervasive in modern society that one might easily assume all of the foundations to be well understood, but while we very well understand how to use computation, we are harder pressed to explain what it is.’ The average person who uses a computer or mobile phone to browse the web, access digital media, or communicate with their friends, has so little knowledge of the underlying technology he is using that, as the late Sir Arthur C. Clarke (1917–2008) might say, ‘it may as well be magic’. Even current-day programmers, who work solving problems through the high-end abstractions of object-orientation and dynamic languages, may be more than a little vague about the nebulous activities going on inside the magic black box upon which their code is executing. In this chapter, I attempt to define an understanding of what the digital hardware and software components of a semiconductor system are from the design perspective. It is interesting to note that

The average person who uses a computer or mobile phone to browse the web, access digital media, or communicate with their friends, has so little knowledge of the underlying technology he is using that, as the late Sir Arthur C. Clarke (1917–2008) might say, ‘it may as well be magic’. Even current day programmers, who work solving problems through the high-end abstractions of objectorientation and dynamic languages, may be more than a little vague about the nebulous activities going on inside the magic black box upon on which their code is executing. In this chapter, I attempt to define an understanding of what the digital hardware and software components of a semiconductor system are from the design perspective. It is interesting to note that 33

3. ETYMOLOGY

3. 2. DEF I NI T I ONS

from this vantage point, the separation between the skills of the hardware and software disciplines is becoming more blurred over time. An etymology is a study of the history of words—where they came from, and how their form and meaning has changed over time. In personal correspondence (January 2009), Alice Gaby clarified that linguistic researchers have ‘different reasons for wanting to know about etymology—from the ‘naturalness’ of certain kinds of sound change, grammaticalization, semantic pathways, etc.’ Dr. Gaby also recommended Sweetser (1991) as a ‘fine illustration of how illuminating etymological research can be’: ‘Language is systematically grounded in human cognition, and cognitive linguistics seeks to show exactly how. The conceptual system that emerges from everyday human experience has been shown in recent research to be the basis for natural-language semantics in a wide range of areas.’ (Sweetser, 1991)

Studying the terms hardware and software, and their evolution, does help in the early identification of some of the salient differences between the two technical specialities—as will be refined and discussed later in Chapter 6. In this chapter, I briefly discuss the historical basis in support of analysing the digital hardware and software disciplines as broadly similar in terms of the macro-level issues that affect their output. Having discussed the historical roots of digital hardware and software design in mathematical logic, I draw reference to some pre-existing hypotheses from the field of linguistics, and their pre-existing crossover into computational logic—specifically the notion that language and culture may have a systematic relationship to comprehension and behaviour.

3.2 Definitions



The investigation of the meaning of words is the beginning of education.
— ANTISTHENES, 445–365 B.C., Greek Philosopher and Disciple of Socrates.

The term mathematical machine was used early on in describing machines capable of processing and manipulating information expressed in a mathematical format. Murray (1948) describes a mathematical machine as ‘. . . a mechanism which provides information concerning the relationships between a specified set of mathematical concepts’. Wartime secrecy restrictions prevented Murray from discussing the latest computing advances of the time, but the refinement of such machines into software and hardware (in terms of components and of philosophical stances) was very much in its infancy.


The original use of the term software to describe computer programs is attributed (Cipra, 2000; Peterson, 2000) to Massachusetts statistician John W. Tukey in 1958:1 ‘Today, the “software” comprising the carefully planned interpretive routines, compilers, and other aspects of automative programming are at least as important to the modern electronic calculator as its “hardware” of tubes, transistors, wires, tapes and the like’ (Tukey, 1958).

The earliest citations of hardware predate this (Niquette, 2006). Hartree uses the term ‘hardware’ when describing the ENIAC in 1947 (Ayto, 2006): ‘The ENIAC . . . I shall give a brief account of it, since it will make the later discussion more realistic if you have an idea of some “hardware” and how it is used, and this is the equipment with which I am best acquainted.’

Booth and Booth (1953) used the term in 1953: ‘The engineering difficulties encountered in this type of machine are great, and a considerable increase in the size and complexity of the “hardware” seems inevitable.’

We will now discuss each term individually in more detail.

3.2.1 Software If we are to consider software, we should be able to adequately define what we mean by the term. And yet common dictionary definitions are often less than adequate, failing to capture what the essence of software is, and its differentiation with respect to hardware components of a system: SOFTWARE: The programs, routines, and symbolic languages that control the functioning of the hardware and direct its operation. SOURCE: THE AMERICAN HERITAGE ® DICTIONARY OF THE ENGLISH LANGUAGE, FOURTH EDITION. COPYRIGHT © 2000 BY HOUGHTON MIFFLIN COMPANY.

SOFTWARE: (noun) programs and other operating information used by a computer. SOURCE: THE COMPACT OXFORD ENGLISH DICTIONARY.

In a seminal piece on improving the quality and productivity of software development, Brooks, Jr. (1995) defined the essence of a software entity as: ‘a construct of interlocking concepts: data sets, relationships among data items, algorithms, and invocations of functions.’

Brooks, Jr. (1986) refines this by presenting the inherent properties of ‘this irreducible essence of modern software systems:’

1 Although Niquette (2006) claims to have coined the phrase ‘software’ in October 1953.


• Complexity—the vastness of requirements, the large number of states and exception conditions to be addressed, the unintended interactions between elements in the software system, all of which affect both technical and management complexity, and thus Brooks’ concept of ‘conceptual integrity’;
• Conformity—the need for new software solutions to fit problems or constraints arbitrarily set by previous software and interfaces;
• Changeability—the fact that software is constantly subject to pressures for change;
• Invisibility—the abstractness of software, the invisibility of bits and bytes, the virtual nature of software models (flow charts, state diagrams, etc.).
Mays (1994) tackles what constitutes the essence of software. He contends that: ‘the essence of a thing is that which gives it its identity. It is the inherent, unchanging nature of the thing. Essential attributes are those properties that are intrinsic and indispensable, as opposed to coincidental or accidental.’ Mays claims that for those of us who are software practitioners, intimately familiar with the endeavour, the essence of it should be immediately obvious and universally applicable to all software. Building upon the work of Brooks, Jr. (1986, 1995) and Parnas (1972), Mays (1994) declares:

Mays’ claims that for those of us who are software practitioners, intimately familiar with the endeavour, the essence of it should be immediately obvious and universally applicable to all software. Building upon the work of Brooks, Jr. (1986, 1995) and Parnas (1972), Mays (1994) declares: ‘A software entity is in essence a construct of interlocking concepts characterised by a conceptual context derived from its problem with which it interfaces, by representations of its concepts both in the data it uses and in the functions it performs, and by the multiple sub-domains of its input domain that characterise the different transformations that will occur, depending on the conditions that are present during execution.’

Firmware Firmware is a sub-category of software that is traditionally highly coupled to a particular hardware platform. A formal definition for firmware is as follows: FIRMWARE: n. Computer programming instructions that are stored in a read-only memory unit rather than being implemented through software. SOURCE: THE AMERICAN HERITAGE ® DICTIONARY OF THE ENGLISH LANGUAGE, FOURTH EDITION. COPYRIGHT © 2000 BY HOUGHTON MIFFLIN COMPANY.
The Jargon File takes a more practitioner-oriented approach in describing firmware as follows:


FIRMWARE: Embedded software contained in EPROM or flash memory. It isn’t quite hardware, but at least doesn’t have to be loaded from a disk like regular software. . . it implies that the firmware could be changed, even if doing so would mean opening a box and plugging in a new chip. A computer’s BIOS is the classic example, although nowadays there is firmware in disk controllers, modems, video cards and even CD-ROM drives. — RAYMOND, E. S., ED. (2003)

An exact definition of firmware is difficult to achieve, but it appears that it is more commonly linked with the hardware than the term software. Firmware designers need to be more intimately familiar with the resources, capabilities and limitations of their hardware platform than desktop application developers—the desktop environment having been abstracted away from the nuts and bolts of the platform through the metaphors of virtual memory, virtual machines and complex operating system development frameworks. It is also true to say that there is some cross over here when dealing with extremely large systems, or systems which have very large transaction processing requirements—for example, web applications such as the Google search engine or Twitter. As with embedded developers, system architects and designers working on these extremely large scale platforms need to understand their platform nuances and limitations. As examples of what is required for large scale systems, see DeCandia et al. (2007); Gorman (2008). By comparison, most developers working on desktop platforms are abstracted and isolated from much of the resource limitations of the hardware: ‘Software architecture has abstracted the underlying hardware so much that many developers don’t have any idea how it really works. Furthermore, there is often a direct conflict of interest between best programming practises and writing code that screams on the given hardware.’ (Sage, 2009)

Definition: For the purposes of this thesis, the term ‘(embedded) software’ is used to describe the programs that run on semiconductor devices. This term will be used interchangeably with firmware in this context. This flexible use of software/firmware is common in embedded device development—for example, see Netgear (2009).
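To make this coupling concrete, consider the small sketch below, written in C. It shows the register-level style of code that is typical of firmware; the peripheral, the register addresses and the bit positions are hypothetical, chosen purely for illustration. A desktop application would instead call an operating system API and never see a register map; here the register map is effectively the only interface available.

#include <stdint.h>

/* Hypothetical memory-mapped UART registers; the addresses and bit
 * positions are illustrative only and vary from device to device. */
#define UART_BASE      0x40001000u
#define UART_STATUS    (*(volatile uint32_t *)(UART_BASE + 0x00u))
#define UART_TXDATA    (*(volatile uint32_t *)(UART_BASE + 0x04u))
#define UART_TX_READY  (1u << 0)

/* Busy-wait until the transmitter is free, then write one byte.
 * Firmware of this kind is meaningless without knowledge of the
 * underlying silicon: the register map is the interface. */
static void uart_putc(char c)
{
    while ((UART_STATUS & UART_TX_READY) == 0) {
        /* spin: no operating system to yield to */
    }
    UART_TXDATA = (uint32_t)(unsigned char)c;
}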

3.2.2 Digital Hardware

Common dictionary definitions for the term hardware are similarly lightweight. In contrast to the situation for software, they do however capture one essential difference between hardware and software: HARDWARE: Computer Science. A computer and the associated physical equipment directly involved in the performance of data-processing or communications functions.


SOURCE: THE AMERICAN HERITAGE ® DICTIONARY OF THE ENGLISH LANGUAGE, FOURTH EDITION. COPYRIGHT © 2000 BY HOUGHTON MIFFLIN COMPANY.

HARDWARE: (noun) 2 the machines, wiring, and other physical components of a computer. SOURCE: THE COMPACT OXFORD ENGLISH DICTIONARY.

Note in both definitions the use of the term ‘physical’. Hardware is a physical construct, unlike software. Hardware is differentiated into two basic types: analog hardware and digital hardware2. Analog signals are continuous over time and space, and thus analog electronics is used to interface to the real world in providing power (e.g. SMPS), clocks (e.g. PLLs) and input data for processing (e.g. from A2Ds). Analog designs are typically more concerned with signal fidelity and filtering, and are specific to a particular process technology. Digital hardware systems, or digital electronics, have some special characteristics that distinguish how they perform with respect to analog systems in general. Digital systems are discrete systems. Unlike analog systems, they operate in discontinuous steps and with discrete values. Digital signals are composed of samples at discrete points in time, and thus the signal is quantised (approximated). Digital signals are more immune to noise, and easier to process. Digital hardware designs have a high degree of abstraction from the physical manufacturing process, and can be ported from one manufacturing technology to another relatively easily. Digital electronics can modify their behaviour based on the information they process. They are representations of Boolean algebra, and thus can implement much the same sequential logic as software. They share many of the inherent properties of Brooks’ essence of software systems, namely those grouped under:

• Complexity—modern electronic systems contain sophisticated semiconductor devices, high speed digital and analog interfaces and a variety of peripherals and input/output functions; and

• Conformity—de facto hardware interfaces (such as the ARM AMBA® bus specification) make hardware design, interfacing and integration easier, and common requirements for semiconductor pin-compatibility often makes manufacture and mass production easier.

2 Digital hardware is actually a subset of analog hardware, where the discrete nature of its operation and the clear distinction between logic levels better suits the representation of Boolean logic.


However, unlike software systems, digital electronics are neither invisible, nor changeable.

Definition: For the purposes of this thesis, the term ‘(digital) hardware’ is used to refer to a specific type of digital hardware system, namely the semiconductor ASIC design—either in its final silicon instantiation, or prototyped/simulated in FPGA, emulation or simulation.

The intrinsic coupling of software/firmware with hardware is described in Kitchenham and Carn (1990): ‘Software does not exist in isolation from the hardware that it animates, or from the environment with which it interacts.’

and in Steiner (2008): ‘Software is of course incapable of acting in any fashion whatsoever without the means of its underlying hardware substrate.’

Steiner (op. cit.) further refines this, in presenting software as equivalent to information, and describing code execution as a process of: ‘. . . hardware modifying its information under informational control . . . Informally, one may say that information is unable to act on its own, and that hardware is unable to think for itself.’

3.2.3 Workflow Terminology

Both software and digital hardware are created through the use of various development methodologies—what hardware engineers typically term ‘design methodologies’ and what software engineers term ‘development processes’. Both are concerned with quality assurance of the finished product. They are also concerned with the efficiency of the process, the mechanisms by which the engineers get from concept to finished product. Quality of a product is conceptually familiar to us, and quite easy to understand—if difficult to pin down precisely: QUALITY: Degree or grade of excellence: yard goods of low quality. adj. Having a high degree of excellence: the importance of quality health care. SOURCE: THE AMERICAN HERITAGE ® DICTIONARY OF THE ENGLISH LANGUAGE, FOURTH EDITION. COPYRIGHT © 2000 BY HOUGHTON MIFFLIN COMPANY.

QUALITY: (noun) (pl. qualities) 1. the degree of excellence of something as measured against other similar things. 2. general excellence. 3. a distinctive attribute or characteristic. 4. archaic high social standing.


ORIGIN Latin qualitas, from qualis “of what kind, of such a kind”. SOURCE: THE COMPACT OXFORD ENGLISH DICTIONARY.

Quality of process is more subjective. Before its quality can be measured, we must first ask: what exactly is a process?

PROCESS: (noun) (pl. processes) 1 A series of actions, changes, or functions bringing about a result: the process of digestion; the process of obtaining a driver’s license. SOURCE: THE AMERICAN HERITAGE ® DICTIONARY OF THE ENGLISH LANGUAGE, FOURTH EDITION. COPYRIGHT © 2000 BY HOUGHTON MIFFLIN COMPANY.

PROCESS: (noun) 1 a series of actions or steps towards achieving a particular end. 2 a natural series of changes: the ageing process. PROCESS: (verb) 1 perform a series of operations to change or preserve. 2 Computing operate on (data) by means of a program. 3 deal with, using an established procedure. Origin: Latin processus “progression, course”, from procedere (see ‘proceed’) SOURCE: THE COMPACT OXFORD ENGLISH DICTIONARY.

In software terms, a software development process has been defined as: ‘A software process is a set of activities, methods and practices, and transformations that people use to develop and maintain software and associated products (project plans, design documents, code, test cases, user manuals, and so on).’ (Paulk et al., 1993)

Quality of process is concerned with how efficiently the process is working. How quickly and cost-effectively can engineers turn concept into product? How easy is it to replace individual members of a team? How well do they communicate? How efficient is the company in addressing its business goals through its technical planning and work output? The term software quality assurance is used to denote the monitoring of software engineering processes and methods used to ensure quality in the final product (NASA SOFTWARE ASSURANCE, 2004).

3.2.4 Coding as an Art

There is a delicate balance between process weight and stifling developer creativity. Are aspects of the work more artistic in nature, and thus suitable for less stringent checks and balances? Are other aspects more engineering in nature, suitable to methodological routine and enforcement? Vygotsky (1925) makes the following observation: ‘Thus, poetry or art is a special way of thinking which, in the final analysis, leads to the same results as scientific knowledge (Shakespeare’s plantation of jealousy), but in a different way. Art differs from science only in its method, in its way of experiencing and perceiving, in other words, psychologically. “Poetry and prose”, says Potebnia, “are first and foremost a certain way of thinking and perceiving. Without an image there is no art, especially no poetry”.’



Figure 3.1: A Perspective on Engineering Endeavour and its Relationship to Systems Analysis and Science.

Adapted from Lynd (2003).

What is the nature of the process of coding abstract Boolean logic into software algorithms or digital hardware that address real-world problems? Is it valid to consider coding (either software or hardware) a form of engineering, or is it purely an artistic creative endeavour? The word engineering derives from the Latin root ingenium, meaning ‘innate character, talent, nature’ (Lewis, 1969), and brings with it connotations of ingenuity and inventiveness (Méndez-Arocha, 2002). The American Engineers’ Council for Professional Development (ECPD) defines engineering as: ‘The creative application of scientific principles to design or develop structures, machines, apparatus, or manufacturing processes, or works utilising them singly or in combination; or to construct or operate the same with full cognisance of their design; or to forecast their behavior under specific operating conditions; all as respects an intended function, economics of operation and safety to life and property.’ (Engineering, 2010)

Engineering is the discipline of acquiring and applying knowledge of mathematics and science, within economic resource limitations, for practical purposes in order to develop new and better products. This is well illustrated in Figure 3.1. It involves informed compromise so as to best satisfy conflicting requirements of cost, schedule and quality, and in this respect bridges the realms of fundamental scientific discovery with commercial application. It is ‘both a driver of social growth and an integral part of the economic business cycle’ (Wicker, 2003). Engineering is aggregative and concerned with creation, whereas science is disaggregative and concerned with knowing. Science and engineering have different, but complementary objectives. McCarthy (2006) notes that ‘science aims to build theories that are true’, whereas ‘engineering aims to make things that work’. Engineering is important to the acquisition of knowledge—while it is often used in supporting theoretical work, it additionally delivers knowledge more directly through yielding ‘highly successful knowledge about how to control materials and processes to bring about desired results’ (McCarthy, 2006).

Software engineering was first coined as a term by the NATO science committee in 1967 as an experiment: ‘The phrase “software engineering” was deliberately chosen as being provocative, in implying the need for software manufacture to be based on the types of theoretical foundations and practical disciplines, that are traditional in the established branches of engineering.’ (NATO Science Committee, 1969)

Given its deliberately contentious start in life, the applicability of the term software engineering appears open for debate. Bond (2005) suggests that: ‘Casting software as an artistic medium might strike many readers as odd, or even objectionable, but there is a growing body of evidence to show that it is perceived and utilised in just this way.’

Knuth, however, in the foreword to (Petkovšek et al., 1997), presents a more fundamental perspective, describing how: ‘Science is what we understand well enough to explain to a computer. Art is everything else we do. During the past several years an important part of mathematics has been transformed from an Art to a Science . . . ’

McConnell (2007) also disagrees with the notion of artistic creation, taking the viewpoint that the debate should be about ‘in what circumstances should software development be treated as engineering’, rather than about whether software development can be approached as an engineering discipline. According to Reeves (1992), the engineering aspect of software design comes in when an issue is discovered, and ‘part of the effected design can not change for some reason’. This necessitates that ‘other parts of the design will have to be weakened to accommodate.’ Reeves (op. cit.) postulates that this is inevitable: ‘Despite all attempts to prevent it, important details will be overlooked. This is the difference between craft and engineering. Experience can lead us in the right direction. This is craft. Experience will only take us so far into uncharted territory. Then we must take what we started with and make it better through a controlled process of refinement. This is engineering.’

Kitchenham and Carn (1990) likewise disagree that coding is an art form, preferring to liken it to a youthful form of engineering: ‘Closer inspection of the software production process suggests that it is an engineering discipline like any other engineering discipline. It is not as mature as electrical or chemical engineering or even agriculture. Nonetheless, it is not an art form.’

Gabriel (1996) focuses on this immaturity of form, reflecting more on the history of programming and on how this differentiates software engineering:


‘Building software – some call it software engineering – is only 30 or 40 years old, and it shares with other engineering disciplines virtually nothing. . . . in software we frequently need to invent new techniques and technology. What’s easy and hard is not known, and there are very few physical principles to guide and constrain us...’

Cockburn (2004) notes that software engineering as a model for software development is unable to adequately predict project successes and failures, suggesting that software is not ‘ “naturally” a branch of engineering’ but a series of ‘resource-limited, goal-directed cooperative games of invention and communication’.

Osterweil (2007) comments that a retrospective look on the emergence of the discipline of software engineering suggests re-evaluating this model (and indeed a greater epistemology of software development and construction) in terms of addressing ‘problems arising from the difficulties of the real world’.

It is interesting to note at this point that the landscape of available literature on digital hardware design appears not as fertile as that for software when it comes to the concept of quality3. Perhaps this is due to the higher level of abstract conceptualisation at which software engineers generally work, leaving them more comfortable in dealing with and embracing less rigorously definable aspects of the engineering cycle, such as quality research. We shall return to the topic of software as an art or an engineering discipline in Subsection 8.5.1.

3.3 Duality of Computing Logic

Having struggled somewhat to find definitions for hardware and software, it is now appropriate to review their ontogenesis for those unfamiliar with it. Appendix C: Historical Roots of Computational Logic presents a very brief discussion of the historical development of computing. Tedre (2006) acknowledges the ‘importance of historical, cultural and societal self-understanding of computer science’, in arguing: ‘No matter in what terms the shaping of computer science is presented, if computer scientists wish to retrospectively understand the reasons why computer science and computing have shaped as they have, the methods of those computer scientists must include historical methods. This is because computer science and computing are always situated in some sociohistorical context.’

My purpose for suggesting that the interested reader refer to Appendix C (page 241) at this point is to place some of the deep-seated notions that may be present in current generations of hardware or software developers in context with the history of Computer Science and of the relatively recent specialisations into the disciplines of Hardware and Software Engineering.

3 This was subsequently borne out in field work: ‘The rigour that has been brought in (to the act of coding) from software engineering has not really, for some reason, reached RTL programming’ (Interviewee David, IC Engineering Manager).


The remainder of this section relies on this background knowledge and appreciation in comparing hardware and software as techniques for the expression of digital logic.

3.3.1 The Techno-Philosophical Argument

With definition and history proving insufficiently illuminating in distinguishing the essence of hardware from software, we next review the boundaries between them from a philosophical perspective. Steiner and Athanas (2005) appears to be among the first to examine the philosophical relationship between hardware and software, focusing on the boundary of interaction. Their paper describes how ‘our present understanding of software and its boundaries and interaction with hardware appears to be very incomplete’, with scant evidence of philosophical progress in this regard. However, in personal correspondence (June 2009), Dr. Steiner revealed that: ‘. . . we don’t even really know how to pose the right questions yet . . . ‘At the physical level, one of the most relevant but problematic questions is whether information is a subset of matter and energy or vice-versa (though we might also leave open the possibility of it being something else entirely). Given any kind of interaction between matter and energy, there ought to be some physical laws that play into that interaction, if they don’t outright govern it.’

Despite the shaky retrospective philosophical understanding of the interfaces between hardware and software, nonetheless both are used in practice for computational systems modelling. Paul et al. (1999) illustrates how: ‘. . . language-based behavioral specification for both simulation and synthesis in the hardware domain has made it possible to consider common models of computer system behavior with domain-specific inferences on the physical means of implementing the behavior.’

This paper demonstrates ‘the importance of preserving semantics of hardware and software modelling in separate languages’ whilst simultaneously illustrating the capacity for functionality to transfer across the hardware/software divide and presenting both as models for computation. Notwithstanding the modern conventions that hardware corresponds to a physical entity and software is a fluid abstract concept—‘“everyone” agrees that hardware is purely physical’ (according to Steiner and Athanas, 2005)—it certainly appears that the philosophical debate concerning the difference in nature of each is cloudy at best. Indeed, Brebner (1996) introduces the concept of reconfigurable hardware, listing its main aim as speed-ups through programming closer to the physical hardware in a mechanism that allows exploitation of parallelism. Taking this further, Brebner (1998) proposes ‘portability of circuitry in a network computing environment . . . that is, expressing applets in circuitry terms, rather than program terms’.

Plessl and Platzner (2004) presents the field of hardware virtualisation and its approaches, and identifies the motivations for the various strategies presented as including:


• mapping an application of arbitrary size to a reconfigurable device with insufficient hardware capacity;
• achieving a certain level of device-independence within a device family—where members of a family can differ in the amount of resources they provide but all implementations support the same programming model;
• achieving an even higher level of device-independence through mapping a virtual architecture to a concrete architecture.

In addition to virtualisation of hardware, the paper describes how ‘a reconfigurable computing system can swap in and out portions of the hardware by a reconfiguration process’. It also presents the concept of ‘reconfigurable hardware operating systems’ (Plessl and Platzner, 2004; Vuletić, 2006; Walder and Platzner, 2003; Wigley and Kearney, 2001) that: ‘treat reconfigurable devices as dynamic resources that are managed at runtime. Similar to software operating systems, these approaches introduce tasks or threads as basic units of computation and provide various communication and synchronisation mechanisms’

Steiner (2008) alludes to the fluid boundary between hardware and software through reference to reconfigurable systems for computation, and to autonomic cognitive systems that are self-aware, self-configuring, self-healing and self-protecting: ‘. . . because the hardware and information function jointly as a system, there is reason to develop a slightly more realistic understanding of the possible dynamics: System modifying its information; System modifying its hardware.’

From a philosophical perspective, therefore, not only have hardware and software come from a shared genesis, but the present boundary between software and hardware is not well explored beyond the superficial level. Recent innovations in reconfigurable hardware have done little to clear the waters further. This suggests it is worth exploring further the notion of treating hardware design language as a variant of software.

3.3.2 Hardware Design Language as a Variant of Software

Naggum (1996) describes the process of abstraction as consisting of: ‘. . . rejecting irrelevant data and selecting relevant data to form an idea. The more concrete an idea, the easier it is to see which was the relevant and which was the irrelevant data, the easier for others to judge why each was discarded or retained, and the easier to communicate it. Conversely, the more abstract an idea, the harder.’

Krishnamurthi (2007) declares that ‘languages are abstractions: ways of seeing or organising the world according to certain patterns, so that a task becomes easier to carry out’. Classical literature has shown that digital hardware and software are models of implementation for the abstract concepts of Boolean logic (see Figure 3.2).

Figure 3.2: Boolean Logic is Instantiated as Hardware, Firmware and Software Models.
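To make this shared root tangible, consider the one-bit full adder below, written in C purely for this discussion. The two Boolean expressions it contains are precisely what a gate-level netlist or an HDL description of the same adder would express; whether they end up as machine instructions or as standard cells is an implementation decision rather than a logical one.

#include <stdint.h>

/* A one-bit full adder expressed in C. The same two Boolean
 * expressions describe a gate-level circuit: sum is the XOR of the
 * three inputs, carry-out is their majority function. */
typedef struct {
    uint8_t sum;   /* a XOR b XOR carry_in     */
    uint8_t cout;  /* majority(a, b, carry_in) */
} full_adder_t;

static full_adder_t full_adder(uint8_t a, uint8_t b, uint8_t cin)
{
    full_adder_t r;
    r.sum  = (uint8_t)((a ^ b ^ cin) & 1u);
    r.cout = (uint8_t)(((a & b) | (a & cin) | (b & cin)) & 1u);
    return r;
}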

As both hardware and software are designed using domain-specific languages, there are suggestions in literature that hardware design can be treated as a form of software in certain contexts, and that there is scope for process and methodology transfer between the two disciplines. Indeed, programmer David Chisnall4, interviewed in Schwartz, Laporte, Chisnall and Mathé (2009), describes how ‘pretty much anything you do with a computer involves some level of programming . . . it’s just a question of whether you are using a general purpose language or a domain specific language’.

Kloos (1987) confirms both software programs and digital circuits as instantiations of algorithms, ‘first being suitable to be run on a general purpose computer and the second to be etched into silicon (for example, or realised with standard “off-the-shelf” components, etc.)’.

Berger (1998) alludes to the ‘similarity between both (digital hardware and embedded software) processes’, in an ASIC project, as being due to the logical similarity of their objectives:

‘The reason for this is quite simple. Today (1998), hardware designs are based upon compiler technology. Just like software. The processes are similar and the solution to the problem that the embedded system is designed to solve is based upon engineering design trade-offs . . . ’

Ghosh and Giambiasi (1999) describes the evolution of Hardware Description Languages (HDLs), stating that ‘as a subset of computer programming languages, hardware design languages (HDLs) must be necessarily precise and unambiguous.’ They further attribute concepts in the Ada software programming language as constituting the basis for equivalent entity concepts in ‘the leading HDL – VHDL5’.

4 Chisnall is a developer on the open source Étoilé project.

Kloos (1987) further notes that: ‘it is not surprising that essentially the same techniques that have been used in the last twenty years to overcome the “software crisis” for sequential programs6 /Bauer 71/ are now starting to be applied to master what could be called the “hardware crisis”. ’

Recognising that ‘the problem of hardware design is becoming more and more a problem of software design’, Mayrhauser et al. (2000) treats ‘a VHDL model as a software routine with some specific hardware information such as hardware delays and triggering mechanisms’. In this regard, it then discusses the potential benefits in the use of software testing techniques as applied to the verification of behavioural hardware designs described by VHDL. Smith and Gross (1986) asserts that the ‘application of concepts and principles from one domain to another’ (i.e. hardware and software domains) ‘has begun to break down the once rigid boundaries between these two engineering disciplines’.
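To illustrate the kind of transfer Mayrhauser et al. describe, the following sketch (illustrative C only, not VHDL, with an invented device-under-test and reference model) applies an ordinary software-style exhaustive unit test to a small piece of combinational logic, much as a directed testbench would exercise a behavioural model.

#include <assert.h>
#include <stdint.h>
#include <stdio.h>

/* Device under test: a 1-bit full adder. The reference model used
 * for checking is plain integer arithmetic. */
static void dut_full_adder(uint8_t a, uint8_t b, uint8_t cin,
                           uint8_t *sum, uint8_t *cout)
{
    *sum  = (uint8_t)((a ^ b ^ cin) & 1u);
    *cout = (uint8_t)(((a & b) | (a & cin) | (b & cin)) & 1u);
}

int main(void)
{
    /* Exhaustive test: all eight input combinations. */
    for (uint8_t a = 0; a <= 1; a++)
        for (uint8_t b = 0; b <= 1; b++)
            for (uint8_t cin = 0; cin <= 1; cin++) {
                uint8_t sum, cout;
                dut_full_adder(a, b, cin, &sum, &cout);
                uint8_t expected = (uint8_t)(a + b + cin);
                assert(sum  == (expected & 1u));
                assert(cout == (expected >> 1));
            }
    puts("all 8 input combinations verified");
    return 0;
}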

Bening and Foster (2001) gives an example of this—namely the software programming concept of information hiding (Parnas, 1972): ‘By applying the information-hiding principle to the functional grouping of state elements and other objects within the RTL and introducing a new level of design abstraction, we have successfully isolated design details within tool-specific libraries. ’ (Bening and Foster, 2001)
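A minimal sketch of Parnas-style information hiding may help here; it is illustrative C with hypothetical names, not drawn from Bening and Foster's RTL libraries. Clients see only an opaque handle and a small set of operations, so the hidden state layout and wrap-around policy can change without touching any client code.

#include <stdint.h>
#include <stdlib.h>

/* Public interface. In a real project only these declarations would
 * appear in the header; everything below the divider would live in
 * the implementation file, invisible to clients. */
typedef struct counter counter_t;

counter_t *counter_create(uint32_t max);
void       counter_tick(counter_t *c);
uint32_t   counter_value(const counter_t *c);

/* ---- hidden implementation details ---- */
struct counter {
    uint32_t value;
    uint32_t max;
};

counter_t *counter_create(uint32_t max)
{
    counter_t *c = calloc(1, sizeof *c);
    if (c != NULL)
        c->max = max;
    return c;
}

void counter_tick(counter_t *c)
{
    /* the wrap-around policy is a hidden design decision */
    c->value = (c->value == c->max) ? 0u : c->value + 1u;
}

uint32_t counter_value(const counter_t *c)
{
    return c->value;
}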

Furthermore, in an editorial for the EETimes online journal, Chapelle and Lewis (1999) describes how the same technique of metrics used to gain visibility into the software development process can be applied to hardware design, in order to streamline the hardware development process: ‘The hardware description languages (HDLs) serve to develop ASIC and FPGA in a manner similar to that of traditional software development. . . Fortunately, the distinct differences that do exist in development between software and VHDL don’t limit the application of software methods to hardware design.’

At a more practical level, Buckley (1992) illustrates how both disciplines benefit in the same fashion from certain strategies of process improvement, specifically in this case the implementation of a configuration management system.

5 VHSIC Hardware Description Language (VHDL).
6 I’m not sure I fully agree that the software crisis has been adequately resolved. Rather, with object oriented programming effectively running out of steam, with multi-core hardware altering the programming paradigm away from traditional von Neumann-influenced designs, and the increasing popularity of functional languages (e.g. Erlang, Haskell) and dynamic languages (Ruby, Python), I feel the landscape has become more colourful and rich, but equally as problematic – just on a different scale.


Not everyone agrees with the notion that hardware design is becoming more like software design. In personal correspondence (July 2009), Neil Steiner suggests that the ‘typical software developer’ would struggle with some alien concepts from hardware design:

‘While both hardware and software may well be described in text formats, HDL depends on a number of concepts that are foreign to most software developers, with true concurrency being a prime example. ’

Steiner’s remark suggests that, because (certain) software languages lack the ability to express major hardware concepts such as true concurrency, there is an important differentiating effect on the way both are built and used.
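To make the point about true concurrency tangible, the toy sketch below (illustrative C, not taken from the correspondence) shows how sequential software must emulate the simultaneous register updates that hardware performs on a clock edge, using an explicit two-phase compute-then-commit update that many software developers never have cause to think about.

#include <stdint.h>
#include <stdio.h>

/* Hardware registers all update simultaneously on a clock edge.
 * Sequential software has to fake this with a two-phase update:
 * compute every next-state value from the current state, then commit.
 * Writing the assignments one after another, as a software developer
 * naturally would, silently changes the behaviour. This toy swaps the
 * contents of two registers each cycle. */
int main(void)
{
    uint8_t a = 0, b = 1;

    for (int cycle = 0; cycle < 4; cycle++) {
        uint8_t next_a = b;      /* evaluated from current state */
        uint8_t next_b = a;
        a = next_a;              /* commit phase: the "clock edge" */
        b = next_b;
        printf("cycle %d: a=%u b=%u\n", cycle, a, b);
    }
    return 0;
}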

3.4 Cognitive Effects of Modelling and Abstraction, Linguistics and Culture



Civilisation advances by extending the number of important operations which we can perform without thinking about them. — ALFRED NORTH WHITEHEAD (1861–1947), English mathematician and philosopher. Taken from ‘Introduction to Mathematics’, 1911

Having established the historical separation of hardware and software through the successive refinement of specialisation, we have seen that literature demonstrates scope for the transference of engineering and engineering process design skills and best practice from one domain to the other. At this point, it is relevant to look at the cognitive aspects of the design of both digital hardware and embedded software products—specifically how models, tools, abstractions, languages and cultures mediate and influence cognition during development. First, we will cover the psychological aspects of logic modelling—how mathematics is both itself a model and the basis for the domains of hardware and software science/engineering. Next, we will look at one of the basic building blocks of scientific advance in Computer Science, the use of abstraction7.

7 Although best friends with the computing concepts of modularity and decomposition, abstraction stands as perhaps the basic building block of all scientific advance—the building of a theory or model to simplify and explain a complex phenomenon. Then again, Stoskopf (2005) makes a reasoned argument concerning serendipity’s “significant value in the advancement of science”.


Finally, we will look at the hypothesis that language, as a tool for the construction, communication and evaluation of ideas, has a significant effect on the very nature of these ideas, and on the ways in which they are fabricated and utilised to solve problems. From these, we will establish a set of research questions to be answered in the course of this work.

3.4.1 Psychological Aspects of Logic Modelling

Davis and Hersh (1995), cited in Borovik (2007), describe mathematics as ‘the study of mental objects with reproducible properties’. Borovik (2007) refers to this as the ‘Davis-Hersh Thesis’ and suggests that it works at three levels:

• allowing the placement of mathematics in the wider context of the evolution of human culture; thus
• allowing examination and study of the underlying cognitive aspects of mathematics within the mechanisms of the evolution of human culture;
• aiding the understanding of the mechanisms of learning and teaching mathematics, forcing an analysis of ‘the underlying processes of interiorisation and reproduction of the mental objects of mathematics’.

Furthermore, Borovik (op. cit.) remarks that: ‘. . . the development of neurophysiology and cognitive psychology have reached the point where mathematicians should start some initial discussion on the issues involved in mathematical cognition.’

Perry (1997) discusses how humans have the ability not only to process information through cognition, but also to create tools which aid and augment this cognition—tools known as ‘cognitive artefacts’. Perry continues to elaborate how cognitive artefacts ‘transform the task into a different one, allowing resources to be reallocated into a configuration that better suits the cognitive capabilities of the problem solver’—in essence, they allow translation from (‘modelling of’) one representative problem domain into a model which is easier to deal with. Mahoney (2002) describes how von Neumann redefined the traditional limitations of science through the application of abstract models, and argued against the need for ‘a physical model to mediate between nature and mathematics’, quoting von Neumann as follows: ‘the sciences do not try to explain, they hardly even try to interpret, they mainly make models. By a model is meant a mathematical construct which, with the addition of certain verbal interpretations, describes observed phenomena. The justification of such a mathematical construct is solely and precisely that it is expected to work—that is, correctly to describe phenomena from a reasonably wide area.’

The ability to represent and organise arbitrary information in mathematical form gave rise to the desire to automate the manipulation of this information, cf.: ‘. . . if we define a system of mathematical concepts as one which is determined by “postulates”, i.e., certain statements which can be used as the basis of a logical discussion, in general, such a system is obtained by a process of “abstraction” and the theory of mathematical machines may be of considerable interest in the study of this latter operation.’ (Murray, 1948)

The early mathematical machines for digital logic diverged ultimately into the separate contemporary philosophies of digital hardware and software design. Digital hardware and software development are both techniques for producing solutions to problems through the modelling of digital logic. Hardware does this via a physical model of sorts—through the manipulation of semiconductor physics. Software does this at another level removed, ‘abstracted’ through the manipulation of hardware circuits for its own end. Unlike pure mathematics, hardware and software systems give their users the ability to interactively verify and validate ‘theories’ (codings) against their problem sets through their computational power (Iverson, 1980). As hardware and software creation involves the instantiation of abstract mathematical logic into models which can be automated by machine, it is reasonable to accept that the implications of the ‘Davis-Hersh Thesis’ (as discussed in Borovik, 2007) also apply to both of these disciplines.

3.4.2 Scientific Advance through Abstraction

The semiconductor design industry has increased incrementally, generation by generation, in capability—primarily by raising the level of abstraction at which designers are able to work. Aart de Geus, Chairman and CEO of Synopsys, notes: ‘Struggling against a rate of increasing complexity of 10X every 6 years, semiconductor designers have moved to ever-higher levels of abstraction to keep pace.’ (Hassoun and Sasao, 2002)

Issues of material science are isolated from semiconductor physicists. Semiconductor physics as a domain is isolated from analog and mixed signal circuit designers through foundry libraries and design rules. Mathematical analysis models abstract designers from analogue circuits. Mundane Boolean logic is further removed from digital circuit designers through the use of high-level Hardware Description Languages (HDLs) and logic synthesis. Chang et al. (1999) describes how the:


‘. . . transition from transistor-based to gate-based design . . . provided huge productivity growth . . . and altered the relationship between designer and design by introducing a new level of abstraction.’

Integrated circuit hardware designs today are typically coded in high level design languages that more easily allow consideration of sophisticated logic constructs than the semiconductor physics they decompose to, and are arranged in design models of various hierarchies (see Figure 3.3, reading from left to right, then top right to bottom left).

Figure 3.3: Design Levels.

Taken from Dost and Herrman (1999), as cited in Schlosser (2001).

Of course, the pinnacle of abstraction is in the virtualised world of the software engineer. Software designers are abstracted from the quantum mechanical semiconductor effects of the electrons by simplified and idealised models of digital hardware. Software designers work upon hardware abstraction interface layers. Software designers are abstracted from each other through the development and standardisation of successive layers upon layers of framework designs. Software takes abstraction much further than hardware, with a vast variety of problem domain specific languages in both compiled and interpreted varieties.
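As one concrete flavour of such a hardware abstraction interface layer, the following sketch (illustrative C, with invented names) lets portable application code talk to a small table of function pointers, while only the board-specific driver underneath knows anything about the register map.

#include <stdint.h>

/* Illustrative hardware abstraction layer (all names hypothetical).
 * The application programs against a small interface of function
 * pointers; only the board-specific layer knows the register map. */
typedef struct {
    void (*init)(void);
    void (*write_byte)(uint8_t byte);
} serial_hal_t;

/* Board-specific layer: stubs here, but a real driver would configure
 * clocks and pins and write to the transmit data register. */
static void board_serial_init(void)       { /* stub */ }
static void board_serial_write(uint8_t b) { (void)b; /* stub */ }

static const serial_hal_t board_serial = {
    .init       = board_serial_init,
    .write_byte = board_serial_write,
};

/* Portable application layer: no register knowledge required. */
static void log_banner(const serial_hal_t *hal)
{
    static const char msg[] = "boot ok\r\n";
    hal->init();
    for (const char *p = msg; *p != '\0'; p++)
        hal->write_byte((uint8_t)*p);
}

int main(void)
{
    log_banner(&board_serial);
    return 0;
}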

Software systems are generally constructed as ’stacks’ of layers, each offering a defined interface of functionality to the layers above and below it—layers of code, layers of languages, layers of tools. Software libraries of code have in certain situations evolved into sophisticated application frameworks with various static and dynamic design rules to be understood and obeyed. Dijkstra (1972), in an ACM Turing Award lecture, presents abstraction as the most significant means of dealing with complexity:

‘We all know that the only mental tool by means of which a very finite piece of reasoning can cover a myriad cases is called “abstraction”; as a result the effective exploitation of his powers of abstraction must be regarded as one of the most vital activities of a competent programmer. In this connection it might be worth-while to point out that the purpose of abstracting is not to be vague, but to create a new semantic level in which one can be absolutely precise.’


Some software designers work in virtual machine environments which bear some resemblance to the hardware (e.g. VMware8, Parallels9, VirtualBox10, QEMU11). Others work in virtual machine environments and byte-code that bear little or no resemblance to the hardware (Java byte-code, JVM). Many application developers write code without concern for the availability of computing resources, such as processor, memory and disk storage. They just assume sufficient resources will be present in the system, and develop with a mental model in which resources are effectively infinite. Nevertheless, software is ultimately, by its very nature, an abstraction of the state of dynamic digital logic upon digital hardware. There is inefficiency carried all the way up through the abstractions, but nevertheless these abstractions are what allow us unprecedented levels of productivity, innovation and inventiveness (Brooks, Jr., 1995; Davis et al., 1998)—through the use of skill specialisation. Davis et al. (1998) describes that:

Brooks, Jr. (1995) concurs with this argument, claiming the following reasons that greater abstraction is desirable: ‘. . . programming productivity may be increased as much as five times when a suitable high-level language is used. . . productivity seems constant in terms of elementary statements, a conclusion that is reasonable in terms of the thought a statement requires and the errors it may include . . . ’

Digital hardware and software productivity has increased substantially through ever higher levels of design abstraction. According to Booch et al. (2007): ‘Experiments by psychologists, such as those of Miller, suggest that the maximum number of chunks of information that an individual can simultaneously comprehend is on the order of seven, plus or minus two. . . . As Miller himself observes, “The span of absolute judgement and the span of immediate memory impose severe limitations on the amount of information that we are able to receive, process and remember. By organising the stimulus input simultaneously into several dimensions and successively into a sequence of chunks, we manage to break . . . this informational bottleneck” . In contemporary terms, we call this process chunking or abstraction.’

8 http://www.vmware.com/
9 http://www.parallels.com/
10 http://www.virtualbox.org/
11 http://www.qemu.org/

The level of abstraction attainable through a multitude of layers of software is both a blessing and a curse. It is responsible for the sophistication and complexity of problem that software is able to address (Booch et al., 2007; Kunii and Hisada, 2000). Reuse is more likely to be possible (particularly from a design rather than code perspective) at higher levels of abstraction (Grady Booch, cited in Hoffman, 2009). Nevertheless, some would argue that our current ‘software’ models of abstraction are also responsible for some of the issues that software struggles to come to grips with in terms of reliability (Savain, 2009). In this vein, Wegner (1970) mentions the ‘dangers in pursuing abstraction as an end in itself’, noting that:

‘. . . Computer scientists should be aware of the dangers of losing touch with reality, and of losing a sense of direction through excessive abstraction.’

This notion of ‘losing a sense of direction through excessive abstraction’ is a topic we will return to later, in Subsection 8.5.4. Despite these potential misgivings about abstraction, the benefits are undeniable—the truly exponential increase in the pace of development and technical improvement is immediately obvious from the brief summary of just a few of the key technical advances in semiconductor development shown in Figure C.2 (on page 245).

3.4.3 Linguistic Relativity Hypothesis

In considering how we communicate thoughts and ideas to each other, Lanzara (2007) describes a medium as ‘any material carrier of objects and relations that work as signs conveying information and meaning’ and qualifies it in noting that ‘a medium is not simply a neutral carrier or channel of things’, rather that it ‘actively shapes the informational content that it carries’. Hayles (2002) (as cited in Lanzara, 2007) claims that ‘. . . one can only do and think what the medium allows one to do and think’. Language is a cognitive artefact which allows ‘humans to spread their cognitive load over a group, changing the task from an individual cognitive problem to a distributed problem dispersed over social space’ (Perry, 1997). In considering language as a medium to convey information and meanings, available literature shows that the hypothesis that language shapes our cognition and view of the world is well established in the field of linguistics (Gaby, 2008a). Polish-American philosopher Alfred Korzybski (1879–1950) is quoted in Pula (1992) discussing the hypothesis as follows:

‘We do not realize what tremendous power the structure of an habitual language has. It is not an exaggeration to say that it enslaves us through the mechanism of the s.r. (semantic reaction) and that the structure which a language exhibits, and impresses upon us unconsciously, is automatically projected upon the world around us.’


This hypothesis is most commonly known as the Sapir-Whorf Hypothesis12, and also called the Linguistic Relativity Hypothesis. It is named after German-born linguist and anthropologist Edward Sapir and Benjamin Lee Whorf, an American linguist and chemical engineer—and Sapir’s graduate student at one time. The hypothesis asserts a causal relationship between the grammatical constructs of the particular language a person speaks and the way that person subsequently comprehends, contemplates and functions. It postulates that different languages condition and influence the thought patterns of their speakers—that the ‘ “rational power” of the human animal’ is, to any significant measure, ‘determined by the formal properties of the linguistic game it has been taught to play’ (Brown, 1960).

Parr-Davies (2001) qualifies the Sapir-Whorf Hypothesis as follows:

‘The Sapir-Whorf Hypothesis is in effect two propositions, which in a very basic form could perhaps be summed up as firstly Linguistic Determinism (language determines thought), and secondly Linguistic relativity (difference in language equals difference in thought).’ (Parr-Davies, 2001)

Gaby (2008b) describes the effect of Sapir-Whorf when discussing grammatical gender systems as the influence of language in:

‘prompting us, not forcing us, to pay attention to things in different kinds of ways, and thereby to remember them differently . . . ‘Whorf is saying that we are no equivalent as observers. We are not just observing the world around us, but . . . the language we speak is actually pointing us to attend to different aspects of what we are perceiving around us . . . ’

On the degree of influence of Sapir-Whorf, Harley (2004) remarks on the different variants of the hypothesis, characterised by different gradients of influence:

‘In the strong version, language determines thought. In a weaker version, language affects only perception. In the weakest version, language differences affect processing on certain tasks where linguistic encoding is important. It is the weakest version that has proved easiest to test, and for which there is the most support. ’

The last quarter century has seen extreme scepticism concerning the extremist view of Sapir-Whorf and the degree of the possible effects of language on thought. Nevertheless, advances in cognitive psychology and linguistic anthropology have re-awakened the debate (Bowerman et al., 2003), with many linguists now entertaining a milder viewpoint on its effects (Gaby, 2008a).

12 Although perhaps more correctly as the Sapir-Whorf-Korzybski Hypothesis (Pula, 1992)?


3.4.4 Mediation of Culture

Culture also plays ‘an important role in mediating between language and cognition, though it’s often backgrounded more than it should be.’13 Misztal (2003) discusses how ‘we preserve the past by representing it to ourselves in words or through storing it in traditions and habitual conduct’: ‘Language, seen as the social mechanisms guiding memories, bodily practices, habits and religious symbolic systems, is the vehicle for the past’s influence over the present. According to Durkheim, it is to society that we owe these benefits of knowledge; society, which comprises past and present generations.’

I propose that it is not unreasonable to suggest that the separate and distinct cultures that have evolved for hardware and software development have, through the collected wisdom of generations of engineering, encumbered each discipline with a manner of working consistent with achieving its specific goals. By culture, I refer to the combined influences of individual team culture, culture inspired by technical specialisation, organisational culture and ethnic/national culture. Furthermore, I postulate that it is also not unreasonable to suggest that cross-pollination of these cultures is necessary in order to re-learn the reasons for our current behaviours and to adequately address their shortcomings with regard to the modern semiconductor ecology and marketplace. The gradual loss of the original reasons for many of our continuing (social) actions is succinctly phrased in: ‘We speak a language that we do not make, we use instruments that we did not invent; we invoke rights that we do not found; a treasury of knowledge is transmitted to each generation that it did not gather itself.’ — ÉMILE DURKHEIM (1858–1917), QUOTED IN MISZTAL (2003)

3.4.5 Computer Programming Languages

Does existing literature indicate that the Sapir-Whorf hypothesis of the linguistics realm has an equivalent cross-over effect in the realm of languages of logic? The suggestion is certainly present in classical mathematical/computer literature that the Sapir-Whorf Hypothesis is applicable (to some degree) to scientific and mathematical notations—and specifically to the dynamic expression of logic in both hardware and software design languages. For instance, mathematician George Boole, cited in Iverson (1980), asserted in his ‘Laws of Thought’: ‘that language is an instrument of human reason, and not merely a medium for the expression of thought, is a truth generally admitted.’

13 Email from Dr. Gaby, January 2009.


Iverson (1980) also describes how Whitehead claimed in ‘A History of Mathematics’ that: ‘By relieving the brain of all unnecessary work, a good notation sets it free to concentrate on more advanced problems, and in effect increases the mental power of the race.’

Wegner (1970) describes how: ‘In the early fifties, highly respected computer pioneers like von Neumann felt that computer users should be sufficiently ingenious not to let trivial matters such as notation stand in their way. But . . . the need for problem oriented languages had become almost universally accepted by 1960 . . . to solve more complex and more ambitious problems than would otherwise have been possible.’

The proposition that language (and possibly culture) is both a tool for thought and an influencer of thought is an intriguing theme as regards the comparison of hardware and software design philosophies. The capability of the toolset may have an effect on the attitudes and behaviour of the designers. Dijkstra (1972) suggests that the notion of abstraction, as a means of enhancing productivity, is coupled to the programming techniques and languages used:

Reeves (1992) writes that: ‘Ultimately, real advances in software development depend upon advances in programming techniques, which in turn mean advances in programming languages.’

Dijkstra (1972) further highlights this connection between tool and thought, in discussing his concepts of humble programmers and a modest programming language: ‘. . . The competent programmer is fully aware of the strictly limited size of his own skull; therefore he approaches the programming task in full humility, and among other things he avoids clever tricks like the plague . . . ’

Iverson, the originator of the APL programming language, examined the salient qualities of a computer programming language that could make it ‘an effective tool of thought’, illustrating that the Sapir-Whorf Hypothesis can be applied to computer languages (without explicitly mentioning it by name). His ACM Turing Award Lecture, ‘Notation as a Tool of Thought’ (Iverson, 1980), discusses this theme, and argues that the instantiation of algorithms in computer programs can be used to perform thought experiments. Wegner (1970) notes that ‘widely used programming languages’ have come to ‘represent a way of thinking to large groups of computer users’, and also refers to ‘new programming languages with the new way of thinking they represent’.


The many essays of venture capitalist and LISP hacker, Paul Graham, explore similar philosophical themes concerned with language design. Graham notes that languages are tools for use by people (Graham, 2001):

‘The point of programming languages is to prevent our poor frail human brains from being overwhelmed by a mass of detail . . . designing programming languages is like designing chairs: it’s all about dealing with human weaknesses.’

Furthermore, Graham (2002) identifies that higher-level languages can help achieve greater productivity through succinctness:

‘It seems to me that succinctness is what programming languages are for. . . I think that the main reason we take the trouble to develop high-level languages is to get leverage, so that we can say (and more importantly, think) in 10 lines of a high-level language what would require 1000 lines of machine language. In other words, the main point of high-level languages is to make source code smaller.’

Jones et al. (2003) makes the observation on language design which has implications for both software and hardware design language evolution:

‘We believe putting human concerns at the forefront of language design will become increasingly important. The ability to integrate programming language principles with human problem solving principles when evolving established programming systems may in the future be the factor that differentiates successful applied programming language design research and practice.’

There is widespread support in historical literature, therefore, that the Sapir-Whorf Hypothesis has a visible presence in languages of computer logic—at least as regards language efficiency. More succinct languages, which embody higher levels of abstraction, allow for greater productivity through the subservience of detail. Does this effect serve to differentiate between hardware and software development cultures through the design languages in use? In his critique of Sapir-Whorfism, Parr-Davies (2001) offers technological determinism (‘the study of to what extent technology influences our lives’) in addition to linguistic determinism:

‘Somehow, the idea of technological determinism—more of a prediction in a way than a statement, seems more acceptable than linguistic determinism . . . although where the linguistic determinists saw language as a prison, the technological determinists see technology as an enabler, not directly limiting our everyday thoughts and perception, but allowing us the possibility to adapt and evolve.’

This is an interesting proposition, as it suggests that software engineers in general, with their ability to create their own automated tools of control, observation and visualisation, may enjoy an advantage in this regard over their digital hardware designer colleagues.


3.5 Summary

Dijkstra (1978) notes how ‘the way in which people use – or misuse – words (is) always most revealing’. Dijkstra (quoted in Dale and Lewis, 2006) stated that ‘computer science is no more about computers than astronomy is about telescopes’. I agree with this, and suggest that the term computer science is at the very least misleading14. It is important, I believe, to separate the tool (the computer) from the activity (computing). It is the science of computing and computing automation that we are considering. To this end, Denning et al. (1989) describes computing as: ‘. . . the systematic study of algorithmic processes that describe and transform information: their theory, analysis, design, efficiency, implementation, and application. The fundamental question underlying all computing is “What can be (efficiently) automated?”’

As we have discussed, the notion that software and hardware are both fundamentally expressions of digital (computational) logic has basis in computing history—both are tools for the algorithmic processing and transformation of information. Having commonly evolved from the same philosophical realm, it is reasonable that they may share inherited characteristics that lend to the potential for process knowledge cross-pollination. Their subsequent differentiation (through successive specialisation) also suggests that there may be sufficient essential characteristics endemic to each to point to possible sources of development stress and friction, both inter- and intra-discipline. Indeed, the ease of technology transfer and concept sharing between both disciplines that is expressed in the available literature discussed serves to reinforce the notion that there is much to be explored in terms of their compatibility at a development issue level. American philosopher Wilfrid Sellars, in the opening of Sellars (1993), noted that: ‘The history of philosophy is the lingua franca which makes communication between philosophers, at least of different points of view, possible. Philosophy without the history of philosophy, if not empty or blind, is at least dumb.’

In this chapter, I have presented a historical perspective on the common origin of both digital hardware and software development disciplines; both evolving from the field of computational logic. I have also introduced the Sapir-Whorf hypothesis from the field of linguistics that suggests the impact that language and culture can have on thought and the structuring, ordering, and processing of ideas. I have discussed hardware design languages as a subset of the catalogue of software development languages, and how concepts from cognitive psychology such as abstraction and information hiding have enhanced their usefulness and productivity—a fact explained by the Davis-Hersh Thesis. I have analysed the fundamental differences between hardware and software, as used historically and in the current context. The ’hard’ in hardware has traditionally referred to the difficulty of changing the design once manufactured. Hardware is not malleable in the way software is—small-scale printed circuit board modifications or the use of Focused Ion Beams (FIBs) on semiconductor silicon are possible, particularly to support development work, but not practical for any sort of volume. The ’soft’ in software refers to the fact that software can be reworked—a new image programmed to a flash memory or downloaded via some means to the silicon at virtually any time. Firmware sits somewhere in between. Perhaps it has real-time constraints that make reworking it more difficult or challenging? Perhaps it requires some form of standards conformance qualification to ensure it is suitable for market? With a better appreciation of the etymology and shared history of hardware and software, as well as some hints to the validity of determinisms from cognitive sciences, it is now time to consider what research questions consequently unveil themselves.

14 Interestingly, Dijkstra argued against regarding the term ‘Computer Science’ as a misnomer, with his assertion that computers are ‘exceptional gadgets’ and that ‘it is very exceptional that a tool gives its name to a discipline’ (Dijkstra, 1978).

3.5.1 Research Questions

The research problem I address in this thesis is an etiology of development stress, which results in sub-optimal inter-group performance between digital hardware and software teams working on a semiconductor design – this problem is specifically motivated by the relentless pace and market aggressiveness of the challenging commercial landscape described in Chapter 2. Eisenhardt (1989) writes that research questions provide focus, and ‘without a research focus, it is easy to become overwhelmed by the volume of data’.

Through the research documented in this thesis, I am attempting to build as rich a picture as possible of hardware and software team interaction during semiconductor projects, and of the other emergent concepts and themes that are related directly to this. Literature covering linguistics and psychology alludes to the beneficial effects on productivity that abstraction has to offer. Both the digital hardware and software technical domains have evolved and self-refined to increasing levels of productivity primarily through the use of abstraction. Despite this continual incremental refinement, however, we can also see from a study of the history of computation that digital hardware and software had a single beginning—a shared genesis from the abstraction that is mathematical logic. Are the causes of development stress between these teams attributable to problems and tensions between the differently skilled individuals, which then manifest as a higher-level dysfunction in the interworking of their respective teams?


Are they related instead to deterministic effects of the different tools in use by developers of either specialisation? Or are they intrinsic to the technical subjects undertaken? Given the research problem presented in 2.2.7, I propose that analysis of the literature describing the shared history of both technical disciplines adds to the investigation the following set of research questions concerning successful and productive inter-dependency in a complex semiconductor project:

Research Question 1: Is there a different frame of reference regarding how digital hardware engineers and software engineers approach their work that causes development stress between the different technically skilled individuals themselves and, by consequence, their respective teams?

Research Question 2: If so, is this development stress related to an intrinsic quality of their discipline, or is it a mere artifact of the processes, techniques and tools they use in achieving their work? (a) Do the differences in inherent properties of the essence of software and digital hardware—namely the changeability and invisibility of software—affect development? (b) Does the ease with which the logic implementation can be changed have an impact at a fundamental level on how the logic implementation is created?

Research Question 3: As a result of technical specialisation, are there other effects from developer culture and mindset, from shared experiences or from the language and terminology in common use, that cause development stress through how digital hardware engineers and software engineers analyse, model, solve and test problems of logic?

Research Question 4: Can a solution be provided which will help to relieve the problem of development stress?


Chapter 4

Digital Hardware Flows vs. Software Development Processes



Virtually every company will be going out and empowering their workers with a certain set of tools, and the big difference in how much value is received from that will be how much the company steps back and really thinks through their business processes, thinking through how their business can change, how their project management, their customer feedback, their planning cycles can be quite different than they ever were before. — BILL GATES



1955–, American Entrepreneur and Founder of Microsoft Corporation.

4.1 Introduction

In order to consider team interworking of hardware and software development activities, it is important to understand the basics of each activity. As this thesis is aimed primarily towards software developers and managers, it is necessary to provide some background into the undertaking of digital hardware design: ‘If our understanding is inappropriate we will misunderstand the difficulties that arise in the activity and our attempts to overcome them will give rise to conflicts and frustrations.’ (Naur, 1985)

This chapter compares and contrasts the progression through various stages of development in software projects (typically called the “software development process”) with that of digital hardware projects (typically called the “hardware design flow”). It aims to demonstrate that conceptually both software and hardware flows are similar in nature—with an almost one-for-one correspondence when approached from a high-level vantage point. Additionally, the chapter introduces and discusses the differences between the flows: those introduced by technical and cultural degrees of separation between the two disciplines, and the associated additional stages in the software flow for reasons of ongoing customer support.


• Understanding the Problem (basic process: comprehension): knowledge domain of domain knowledge (e.g. mobile consumer electronics); mental representation of the situation model; external representations of requirements documents and specification documents.
• Design (basic process: composition): knowledge domains of design strategies, programming algorithms and methods, and design languages; mental representations of the solution model and plan representation; external representation of the design document.
• Coding (basic process: composition): knowledge domains of the programming language and programming conventions; mental representation of the problem representation; external representation of the code.
• Maintenance (basic processes: comprehension and composition): all knowledge domains, together with debugging and testing strategies and frequent kinds of error; all mental representations; all external documents.

Table 4.1: The Tasks of Programming, relating each programming subtask to its basic process, knowledge domains, mental representations and external representations. Taken from Pennington and Grabowski (1990).

4.2 Technical Discipline Comparison

Pennington and Grabowski (1990) summarises the tasks encountered in a typical software programming project, as shown in Table 4.1. Composition refers to the development of a design—in essence, mapping a natural language description of ‘what’ the program is to accomplish into a set of computer instructions describing ‘how’ to perform this task. Comprehension is described in Pennington and Grabowski (1990) as resulting in an understanding of a design—it is the reverse series of transformations to composition, i.e. from ‘how’ to ‘what’. When considered through the lens provided by Table 4.1, it is striking how similar digital hardware and software activities may look, superficially. Indeed, it is likely the case that all matters of an engineering nature have a similar flow of development, from a suitably high vantage point—as presented by Perry (1997) (see Figure 4.1). Projects with digital hardware and software aspects generally involve a common problem analysis phase (‘understanding the problem’), and a common system architecture phase (‘design’) where the design is apportioned between digital hardware and software. Both hardware and software specific tasks involve the design and implementation of Boolean logic to solve the task (‘coding’), and the subsequent verification of the solution. Finally, hardware and software components come together at some point for full system integration and verification. In addition, the ultimate goal of the endeavour is to get these designs into the hands of customers, and this entails some necessary amount of project sustaining and support (‘maintenance’). Figure 4.2 illustrates this high-level equivalence of the hardware and software tasks. Nevertheless, whilst these tasks are superficially equivalent, they have subtle differences which yield themselves upon closer examination—these differences will be examined in Section 4.3, but include an awareness of physical layer requirements such as timing closure and design layout.

Figure 4.1: The Cycle of Design, comprising information gathering, information collation, reporting, construction, generation of structural design, and organisation of activities. Taken from Perry (1997).

4.2.1 History of Hardware Development Approaches

Despite their shared genesis and early infancy, digital hardware and software approaches have developed separately in recent decades, with software practitioners reaping the benefits of abstraction more quickly than those of the digital hardware community. In a keynote lecture to celebrate the 40th anniversary of the Design Automation Conference, Sangiovanni-Vincentelli (2003) describes the trends of electronic design automation (EDA) through the literary device of comparison with the ages of mankind’s history—as presented in the Scienza Nuova of Giovan Battista Vico (1725)—the age of gods, the age of heroes and the age of men. The age of gods is characterised by the acquisition of knowledge through the use of the senses—‘events and natural phenomena are inexplicable . . . and attributed to “external” entities, like the ancient gods.’ The age of heroes establishes the first abstract interpretations and understandings of reality—when the first great milestones of creative achievement are reached. The age of men describes a trend towards rational analysis and fear of creativity and novelty. Vico presented these transitions as part of a repeating cycle, with the age of men epitomising the decay of society.


Figure 4.2: Naïve Digital Hardware Flow/Software Process Comparison: problem analysis and system architecture feed parallel HW and SW design, implementation and implementation testing, which converge into system integration, system testing and maintenance.


In terms of EDA and hardware development, Sangiovanni-Vincentelli maps technological advances to the following eras in computing history:

Figure 4.3: ASIC Abstraction, charting the rise in the abstraction of hardware models from transistor and gate-level models with capacity loads, through RTL with standard delay format wire loads and RTL clusters, to IP blocks with inter-IP communication performance models and software models, across the 1970s, 1980s, 1990s and 2000+. Taken from Sangiovanni-Vincentelli (2003).

The Age of Gods (1964–78): Fundamental inventions of circuit simulation (driving the birth of the EDA industry), logic simulation and testing, static timing analysis, wire routing;

The Age of Heroes (1979–93): Progress in verification and testing, layout and placement (including simulated annealing), logic synthesis, hardware acceleration for EDA, high-level design;

The Age of Men (1993–2002): System-on-Chip devices, a continuing drive towards smaller process geometries, intrusion of physical design considerations into digital designs, component integration in place of optimisation, software-rich solutions.

Figure 4.3 illustrates the progress made in EDA in raising the level of abstraction of digital hardware design.

Dramatic increases in hardware developer productivity have ensued from this, with potential for more to come from the adoption of high-level imperative and functional programming languages into behavioural synthesis tools: ‘. . . the development of such languages and associated tools will help to manage the increasing size and complexity of modern circuits.’ (Sharp, 2002)

4.2.2 Design Domains

A useful tool to conceptualise hardware design is to think of it in terms of three design domains (Gerez, 2005):


Figure 4.4: Gajski-Kuhn Y-chart, with radial axes for the behavioural domain (systems, algorithms, register transfers, logic, transfer functions), the structural domain (processors; ALUs, RAM, etc.; gates, flip-flops, etc.; transistors) and the physical domain (transistor layout, cell layout, module layout, floorplans, physical partitions).

• Behavioural Domain—this domain describes the algorithms, the high-level logic. It represents the temporal and functional behaviour of the system.
• Structural Domain—this domain describes the sub-circuits of the design, and their interconnection, for each level of abstraction.
• Physical Domain—this domain deals with the realisation of the design on a physical silicon chip. It contains information about the size, shape and placement of logical units on the silicon.

The three domains can be visualised on a diagram called a Gajski-Kuhn Y-chart (Gajski and Kuhn, 1983; Gerez, 2005; Healy and Gajski, 1985), as illustrated in Figure 4.4. The three domains of the Gajski-Kuhn Y-chart are on radial axes. Each of the domains can be divided into levels of abstraction, using concentric rings, as illustrated in Figure 4.5. At the top level (outer ring), we consider the architecture and system-level design of the chip; at the lower levels (inner rings), we successively refine the design into finer detailed implementation:

• Architectural—this level captures requirements and describes an overall design and structure to address these requirements;
• Algorithmic—this level includes functional descriptions of how subsystems behave and interact;


Figure 4.5: Gajski-Kuhn Y-chart showing levels of abstraction as concentric rings across the behavioural, structural and physical domains.

• Functional block or register-transfer—this level provides specific descriptions of what is occurring at a register level, i.e. a datum is transferred from what register and over which line to where;
• Logic—this level is concerned with logic cells (AND gates, OR gates, flip-flops and interconnects);
• Circuit—this is the actual physical hardware, presenting and describing the system as the sum of electrical characteristics of a network of transistors. This level comprises the design which, when fabricated onto various layers of the silicon, instantiates the design in a chip.

These various levels of abstraction in hardware development (‘design levels’) are illustrated in Figure 3.3, on page 51. Creating a structural description from a behavioural one is achieved through the processes of high-level synthesis or logical synthesis. Creating a physical description from a structural one is achieved through layout synthesis. These domains (and their constituent stages) are further ordered, in a temporal sense, into two design phases:

• Front-end/Logical Design is concerned with the design and engineering trade-offs required in behavioural and structural domains.
• Back-end/Physical design is concerned with the physical domain, of realising the design as a physical device.


In a simplistic view, the output of the front-end design can be considered as feeding into the back-end work as the hardware design is instantiated from a logical abstraction into a physical realisation. However, there are physical design issues which have an impact early in the front-end design process (Keating and Bricaud, 1998). For example, decisions need to be taken on the use of hard macros (which affect place and route for the entire chip), on floorplanning targets (which impact performance, timing and cost goals), and on clock tree hierarchy (which affects power consumption). Gerez (2005) shows that Y-charts are a useful tool to aid in visualising design methodologies. Figure 4.6(a) illustrates a methodology which is based on top-down structural decomposition—parts with known behaviours are broken down into smaller blocks with simpler behaviour and interconnections. This is followed by bottom-up layout, where transistors are grouped into gates, cells into higher-level logic units, and so on. In comparison, Figure 4.6(b) shows a floorplan-based methodology, where layout aspects are taken into account earlier in all design stages. Gerez (2005) presents such a design methodology as being based upon ‘insights’ from ‘structured design methods developed for software systems (e.g. the hierarchical partitioning of the software in small procedures)’.

4.3 High-level SoC Design Flow

Having gained a conceptual model for digital hardware design through the use of the Y-chart device, we next consider a more complete design flow for an ASIC device—from market analysis through to chip characterisation. As this thesis is focused more on the development of a design, rather than ongoing customer support and design sustaining efforts, the topic of maintenance is deliberately omitted. Figure 4.7 illustrates a typical high-level SoC ASIC development flow. It begins with the identification of a market opportunity, moving from requirements capture, through analysis, architecture, implementation and on to qualification and deployment (mass production). The most significant difference between hardware and software vis-à-vis this diagram is the quantity of ongoing support and sustaining activity that is typical of the software lifecycle after deployment. Software is still changeable at this point, whereas the hardware ASIC has gained significant inertia—warts included—due to the significant financial cost considerations of reworking it. Chang et al. (1999) recognises that modern SoC development is defined by platform-based design, based on ‘our ability to harness reusable virtual components (VC), a form of intellectual property (IP), and deliver it on interconnect-dominated deep submicron (DSM) devices.’


Figure 4.6: Gajski-Kuhn Y-charts showing design methodologies: (a) top-down structural decomposition with bottom-up layout reconstruction; (b) a floorplan-based design methodology. Taken from Gerez (2005).


Figure 4.7: High-level ASIC Design Flow Overview: market requirements capture (producing the Market Requirements Specification), architecture design (product specification, IC architecture document, IC functional specification, IC software specification), implementation (IC verification plans and reports, IC database, IC design reviews and checklists), IC characterisation (characterisation reports), IC production qualification (qualification reports, IC production release), and system test and product release (test reports, production release documents).


Coudert (2002) notes that current submicron process geometries are forcing the need for ever more sophisticated design signoff, with ‘every design variable (timing, area, power, congestion, signal integrity)’ needing careful consideration and monitoring—effectively marking ‘the end of the logical and physical dichotomy’. Martin and Leibson (2008) concurs with this view, agreeing that problems already occur ‘frequently at the 90-nm process node, and will only get worse in designs fabricated at smaller geometries’.

4.4 Computational Complexity

Appendix D describes the tasks involved in digital hardware design, specifically to educate any software engineering audience as to the additional considerations that bear an impact upon this activity. The various stages of hardware design flow discussed in Appendix D are algorithmic in nature, and thus are amenable to various computational tools known as electronic design automation (EDA) tools. All algorithms have an associated computational complexity—that is, a set of requirements in terms of processing time and space (memory) to get from input to solution, expressed as a function of the size of the input. The time requirement is more important, as space can be traded off (for example, through the use of virtual memory) for time1. The fundamental criteria used for assessing the time requirements are whether the computational complexity scales as a polynomial function of the input size (polynomial order) or as an exponential (or worse) function of the input size (exponential order). The Cobham-Edmonds Thesis (Kozen, 2006) asserts that problems can only be feasibly computed if they can be computed in polynomial time. Such problems are called tractable problems. Problems which are not solvable in polynomial time (but in exponential time or worse) are called intractable problems. Without delving too far into the realm of decision problem complexity classes, suffice it to say that solutions for certain problems (non-deterministic polynomial complete, NP-complete) can be approximated in polynomial time, without any guarantees of having found the optimal solution. The processing steps in VLSI design automation often involve problems of combinatorial optimisation—many of them being intractable, but NP-complete. Gerez (2005) presents the options for intractable problems as:

1 Although the problem cannot be solved if sufficient space of some sort cannot be provided.


• trying to solve the problem exactly, if the problem size is sufficiently small to allow the use of an algorithm that has exponential (or worse) time order—for example, using an exhaustive search, or using methods to limit and prune the search space where possible;
• using approximation algorithms—however, general-purpose approximation algorithms do not exist, and each algorithm needs tailoring to problem-specific issues;
• using heuristics—for example simulated annealing, Tabu search, and genetic algorithms.

Gerez (2005) claims that ‘for most NP-complete or NP-hard problems in CAD for VLSI, heuristics seem to be the only way to solve problems’. To better understand these heuristics, we will briefly consider the example of simulated annealing. In metallurgy, annealing is a heat treatment that alters and homogenises the microstructure of a metal for the purposes of relieving internal stress, and improving strength and hardness. It is done through heating the metal to higher than its crystallisation temperature (which results in a softening of the metal through the removal of the internal stresses caused by crystal defects), and then allowing the metal to cool slowly (which allows new strain-free grains to nucleate and grow). Inspired by these physical techniques and crafts, simulated annealing (Kirkpatrick et al., 1983; Černý, 1985) is a probabilistic meta-heuristic approach to solving global optimisation of large graph problems such as those that arise in VLSI layout (Smith, 1997). It has proven itself as ‘one of the best methods, if not the overall best one’ for placement (Gerez, 2005). Simulated annealing takes an existing solution and then makes successive changes to the system via a series of random moves. Each move is accepted or rejected based on an energy function, which is re-calculated for each new trial system configuration. The energy function is so designed that minima of the energy function correspond to possible solutions. The best solution is called the global minimum. Occasionally, the system can end up in a local minimum—that is, a configuration where more energy is required to move out of the current state, but yet that state may not be close to the global minimum. In order to avoid such states, we also accept moves that perturb the system occasionally, allowing it to escape from a local minimum and find other, better solutions. The name for this strategy is hill climbing. The critical discriminating parameter which governs the behaviour and optimality of the simulated-annealing algorithm is the rate at which the temperature is reduced. This is known as the cooling schedule. Finding a good solution—i.e., a local minimum close to the global minimum—requires a high initial temperature and a slow cooling schedule. This results in many trial moves and very long computer runs (Rose et al., 1990).
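To make the mechanics of the heuristic concrete, the following sketch anneals a toy one-dimensional placement problem. It is written in Python purely for illustration (the cells and nets are invented, and no real EDA tool works this way): cells are swapped at random, a simple wire-length measure acts as the energy function, occasional uphill moves are accepted with a probability that shrinks with temperature, and a geometric cooling schedule governs the run.

    import math
    import random

    def wire_length(order, nets):
        """Energy function: total span of each net across the cell ordering."""
        pos = {cell: i for i, cell in enumerate(order)}
        return sum(max(pos[c] for c in net) - min(pos[c] for c in net) for net in nets)

    def anneal(cells, nets, t_start=10.0, t_end=0.01, cooling=0.95, moves_per_t=200):
        order = list(cells)
        best = list(order)
        t = t_start
        while t > t_end:                                   # geometric cooling schedule
            for _ in range(moves_per_t):
                i, j = random.sample(range(len(order)), 2)
                trial = list(order)
                trial[i], trial[j] = trial[j], trial[i]    # random move: swap two cells
                delta = wire_length(trial, nets) - wire_length(order, nets)
                # always accept improvements; accept uphill moves with probability e^(-delta/t)
                if delta <= 0 or random.random() < math.exp(-delta / t):
                    order = trial
                    if wire_length(order, nets) < wire_length(best, nets):
                        best = list(order)
            t *= cooling
        return best

    if __name__ == "__main__":
        cells = ["a", "b", "c", "d", "e", "f"]
        nets = [("a", "d"), ("b", "e"), ("c", "f"), ("a", "b", "c")]   # invented netlist
        print(anneal(cells, nets))

A cooling factor closer to 1.0 explores more configurations and tends to finish nearer the global minimum, at the cost of the long run times noted above.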


The important point to take from this brief discussion on computational complexity is that, due to the specific types of problems being dealt with in digital SoC hardware (floorplanning, placement, routing, simulation), very often the tools take substantial amounts of time to run. Coudert (2002) notes that, with ever more complex designs, capacity problems are emerging such as that of pure scalability: ‘the raw number of objects that need to be managed and processed is stressing memory limits and computational resources (only linear or n log n algorithms can reasonably be applied to a complete netlist).’
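As a rough, hypothetical illustration of why only such low-order algorithms remain usable at full-netlist scale, the short Python sketch below compares the number of elementary steps implied by n log n, quadratic and exponential algorithms as the number of netlist objects grows. The object counts are invented solely for illustration.

    import math

    # Invented netlist sizes (number of objects to be processed in one pass).
    sizes = [1_000, 100_000, 10_000_000]

    for n in sizes:
        n_log_n = n * math.log2(n)      # typical of good sorting/partitioning passes
        quadratic = n ** 2              # all-pairs style analyses
        exponential = "2^%d (astronomically large)" % n if n > 60 else str(2 ** n)
        print(f"n = {n:>12,}: n log n = {n_log_n:,.0f}   n^2 = {quadratic:,}   2^n = {exponential}")

Even the quadratic column becomes unmanageable at ten million objects, which is why whole-netlist passes are, in practice, restricted to linear or n log n behaviour.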

Software engineers rarely experience similar tool or flow phenomena with modern tools. However, such was not always the case (Brooks, Jr., 1995). Houghton (1988) describes the early days of crude software tools in the 1960s, explaining that his ‘first goal as a software pioneer was to find ways to avoid compiling programs’, and thus to avoid the compiler itself. Houghton presents how the poor-quality tools had a technical deterministic effect on his engineering behaviour, claiming that ‘not having the right tools for the job leads to software engineering malpractice’. In a similar manner, having FPGA synthesis tools with long turn-around times can encourage FPGA engineers to speculatively set builds running (a good thing), but also to collate multiple patches and fixes into a single build—often a bad thing from the perspective of one patch, one build, one test.

4.5 Testing: Validation and Verification

Testing of SoC devices is a difficult task, and requires careful coordination between the digital hardware and embedded software functions. Andrews (2009) states that: ‘One of the most difficult challenges in SoC verification today is determining how to make sure the hardware and software work together at the SoC level. Hardware verification has advanced to the point where the verification of individual functional blocks in a design can be achieved with reasonable confidence using constrained random test benches, code coverage, assertion coverage, and functional coverage. Challenges remain in making sure the blocks work correctly when placed in the context of the SoC.’

There are two fundamental aspects to the testing of SoC devices:

• Validation—ensuring that the product built is the correct one in terms of meeting the specified requirements and target market; and
• Verification—ensuring that the product is correctly built—that it functions as intended.

Validation is an architectural exercise between the hardware and software architects and product marketing, comparing product specifications against market requirements documentation and product roadmaps. Verification involves product testing across a number of technical domains, using a number of different tools.

Digital hardware designers tend to use the simulation tools previously described as their preferred mechanisms to verify block operation. These tools model the behaviour of the hardware (at a chosen level of abstraction) but allow unlimited visibility into the design in terms of points of observation and control. The downside to the use of these simulators is their speed of execution—they run slower by many orders of magnitude than real-time. SoC devices typically involve integration of IP blocks from a variety of different vendors. These blocks may require software involvement and stimulus in order to correctly verify their interconnection, or their inner workings—if so, then either test code or device drivers will need to be written. Software and hardware designs meet for the first time at the coal-face of the FPGA, with virtually all ASICs prototyped to some degree on FPGA prior to tapeout (Jaeger, 2007). A typical FPGA-based verification flow is presented in Figure 4.8. The FPGA device allows prototyping of a hardware design, and enables software exposure to real hardware many months before the SoC arrives back from fabrication. It is a very valuable tool for system-level verification, helping to ensure that design assumptions are common between hardware and software designers2. FPGAs are again significantly slower than real-time, but typically by a much smaller factor—depending on the design, this may or may not cause a problem. FPGAs are also limited in terms of capacity, and may not fit a full SoC design. Splitting hardware designs across multiple FPGAs is possible, but this significantly reduces the speed at which the design can run, due to the multiplexing of various signals between the FPGAs. A variety of co-simulation frameworks (Engblom et al., 2006; Rowson, 1994) are used occasionally to create virtual prototypes of entire systems. Co-simulation in general describes the running of software on simulated hardware with the express purpose of verifying system functionality (both hardware and software) prior to tapeout—and with this perspective, running on FPGA can be seen as a form of co-simulation. However, the term is more commonly restricted to frameworks and environments which enable system modelling through different EDA and software simulation tools running simultaneously and exchanging information. Software designers can also verify their designs pre-tapeout by creating software models of the hardware, instantiating their own interpretation of the hardware-software interface.

2 Assumptions such as whether the hardware is sufficiently resourced (memory, MIPS, interfaces, etc.); whether it is feasible to adequately program the hardware to achieve system use-cases; and so on.


Figure 4.8: FPGA-based Verification Flow Overview: starting from an initial FPGA netlist, the design is partitioned and FPGA-specific models added; peripheral boards are implemented and tested; verification tests are developed and run against the firmware verification plan; updated netlists are integrated; and the final netlist (including ROM bootstrap and library) is signed off, producing an FPGA-based verification report.

This technique affords similar visibility of points of control and observation to the software engineer that simulation offers to the hardware engineer. It allows state machines in the software to be controlled and exercised, and allows code instrumentation for the purposes of capturing code coverage and performance data. However, the maintenance of such models can be a resource-intensive activity. Such software models can be much quicker than real-time, depending on the SoC application (i.e. one without significant DSP, for instance)3.
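As a purely illustrative sketch of this idea, the Python fragment below models a hypothetical memory-mapped UART block at register level so that driver-style code can be exercised, instrumented and unit-tested before silicon or an FPGA image is available. The register names, offsets and behaviour are invented and do not describe any real device.

    class UartModel:
        """Toy software model of a hypothetical memory-mapped UART block."""
        CTRL, STATUS, TXDATA = 0x00, 0x04, 0x08        # invented register offsets
        STATUS_TX_READY = 0x1

        def __init__(self):
            self.regs = {self.CTRL: 0, self.STATUS: self.STATUS_TX_READY, self.TXDATA: 0}
            self.tx_log = []                           # point of observation for tests

        def write(self, offset, value):
            self.regs[offset] = value
            if offset == self.TXDATA and self.regs[self.CTRL] & 0x1:   # block enabled?
                self.tx_log.append(value & 0xFF)       # 'transmit' the byte

        def read(self, offset):
            return self.regs[offset]

    def send_byte(dev, byte):
        """Driver-style code under test: enable, poll for readiness, write the byte."""
        dev.write(UartModel.CTRL, 0x1)
        while not dev.read(UartModel.STATUS) & UartModel.STATUS_TX_READY:
            pass
        dev.write(UartModel.TXDATA, byte)

    if __name__ == "__main__":
        uart = UartModel()
        send_byte(uart, 0x41)
        assert uart.tx_log == [0x41]                   # full visibility into model state

The same driver code can later be pointed at an FPGA or at silicon through a thin register-access layer, which is one way in which hardware-software interface assumptions can be checked early.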

Verification is concerned with getting an impression that the design is adequately tested and functional. There are a number of difficulties in establishing this (Wilson, 2008), such as the correct metric to use to determine the level of verification complete4 , and differences in interpretation of requirements and specification documents. Ultimately, it is a sign-off decision as to when verification is sufficient, with the criteria being that ‘you stop verification when you are virtually certain of the critical blocks’ (Wilson, 2008).

3 This is enabled by the fact that developer workstations are significantly more powerful than SoC-based devices, although this performance gap in terms of processing power is certainly closing.
4 Various coverage metrics can be used to establish the degree of verification complete—such as code coverage, functional metrics, constrained random testing, etc.


4.6 Abstraction Breakdown

The use of abstraction in hardware design is subject to the breakthrough of physical aspects of the design. Smith (1997) describes how:

‘Logic synthesis provides a link between an HDL (Verilog or VHDL) and a netlist similarly to the way that a C compiler provides a link between C code and machine language. However, the parallel is not exact . . . When talking to a logical-synthesis tool using an HDL, it is necessary to think like hardware, anticipating the netlist that logic synthesis will produce.’

As an example of this, consider floorplanning/placement. Floorplanning and placement are aspects of hardware design that separate the discipline from software, illustrating again the multiple design domains in the Y-chart (Figure 4.4). Floorplanning and placement are concerned with maximising the utilisation of chip area, and minimising the overall size of the chip by efficient routing of wiring. Gerez (2005) describes how:

‘Software has an arbitrary number of dimensions: there are no limitations in the number of procedures that one procedure can call. The hardware analog of a procedure call is the exchange of signals between modules. When the number of interconnections between modules increases, it becomes harder to design a satisfactory circuit.’

Additionally, hardware designers may occasionally use uncommitted gates in blocks as a crude form of late safety net—just in case a patch is required on a couple of layers of metal in the silicon masks (Wilson, 2008). Taking floorplanning considerations into account early and through all design stages allows the early consequences of structural design decisions to be dealt with. Digital hardware design is readily recognised as being multi-dimensional in terms of its coupling to its physical realisation, whereas software usually is not. A valid argument could be made, however, that abstraction breakthrough also occurs for embedded software development: for example, software partitioning across tightly coupled RAM, system/bulk RAM, external SDRAM, execute-in-place flash memory and ROM, where different wait states impact performance; cache partitioning and lock-down; and so on. Just as digital hardware design engineers have to be finely aware of how their code will ultimately synthesise to the other domains of the Gajski-Kuhn Y-chart, embedded software developers also have to be aware of the ramifications of their coding decisions—although perhaps not as acutely. Sage (2009) notes that:

‘It’s a common opinion that slow software just needs faster hardware. This line of thinking is not necessarily wrong, but like misusing antibiotics, it can become a big problem over time . . . Furthermore, there is often a direct conflict of interest between best programming practises and writing code that screams on the given hardware . . . Video game and embedded system developers know the hardware ramifications of their code. Do you?’
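An entirely hypothetical model of this effect is sketched below in Python: it estimates the average cost of an instruction fetch for different placements of the code (tightly coupled memory, cached external SDRAM, or execute-in-place flash), using invented cycle counts and hit rates simply to show how a placement decision ripples into performance.

    # Invented access costs, in CPU cycles, for each placement option.
    MEMORIES = {
        "tightly coupled RAM": {"hit": 1, "miss": 1, "hit_rate": 1.00},   # no cache involved
        "external SDRAM (cached)": {"hit": 1, "miss": 30, "hit_rate": 0.95},
        "execute-in-place flash": {"hit": 1, "miss": 60, "hit_rate": 0.90},
    }

    def average_fetch_cycles(mem):
        """Effective cycles per fetch = hit_rate * hit_cost + (1 - hit_rate) * miss_cost."""
        return mem["hit_rate"] * mem["hit"] + (1.0 - mem["hit_rate"]) * mem["miss"]

    for name, mem in MEMORIES.items():
        print(f"{name:<26} ~{average_fetch_cycles(mem):.1f} cycles per fetch")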


4.6.1 Trend towards Higher-Level Synthesis and Functional Programming

Hardware design languages (HDLs) are moving from register transfer level to higher-level languages capable of more easily describing the behavioural aspects of a system. Functional languages5 in particular look promising for addressing concurrency requirements in multi-core programming, but also potentially as new mechanisms for HDLs/hardware design (Axelsson, 2006; Mycroft and Sharp, 2001).
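The flavour of such behavioural, functional descriptions can be hinted at with a toy sketch, written here in Python rather than in a real functional HDL (such as those surveyed in the works cited above): gates are pure functions, a ripple-carry adder is composed from them, and the same description could in principle be interpreted either for simulation or, by a suitable tool, elaborated into structure.

    def half_adder(a, b):
        """Pure-function description of a half adder: returns (sum, carry)."""
        return a ^ b, a & b

    def full_adder(a, b, cin):
        s1, c1 = half_adder(a, b)
        s2, c2 = half_adder(s1, cin)
        return s2, c1 | c2

    def ripple_adder(xs, ys):
        """Compose full adders structurally over two little-endian bit vectors."""
        carry, out = 0, []
        for a, b in zip(xs, ys):
            s, carry = full_adder(a, b, carry)
            out.append(s)
        return out + [carry]

    # 3 + 5 = 8, expressed as little-endian bit vectors.
    assert ripple_adder([1, 1, 0, 0], [1, 0, 1, 0]) == [0, 0, 0, 1, 0]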

4.7 Detailed Integrated SoC Flow

Having considered hardware development flows, it is now apropos to deliberate on an overall integrated SoC development flow, which shows the activities of digital hardware and embedded software teams in tandem. Figure 4.9 presents a naïve rendition of such an SoC development flow, very much in the spirit of the Waterfall model familiar to most software developers. The naïvety comes from the fact that, for simplicity of illustration, no allowance is made in the diagram for rework or the necessity for repeated runs through certain parts of the overall flow. Additionally, the hardware development flow depicted does not allow for early backend design work or for EDA tool flow ‘pipe-cleaning’ activities, i.e. the pushing of an early (non-complete) hardware design database through the entire EDA flow to ensure all tools are understood and working satisfactorily. Nevertheless, this diagram captures a very salient point: there is a significant number of concurrent activities that need to be co-ordinated across both hardware and software development teams. These activities have to be co-ordinated both in terms of time, and in terms of task prerequisites and task output dependencies.
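One way to picture this coordination problem is as a dependency graph over the joint flow. The Python sketch below, using an invented and much-simplified task list, topologically orders such a graph; producing and tracking an ordering of this kind, across two teams, is essentially what a combined hardware/software project plan has to achieve.

    from graphlib import TopologicalSorter

    # Invented, simplified subset of the joint SoC flow and its prerequisites.
    DEPENDS_ON = {
        "system architecture": {"problem analysis"},
        "hw design": {"system architecture"},
        "sw design": {"system architecture"},
        "fpga prototype": {"hw design"},
        "driver development": {"sw design", "fpga prototype"},
        "backend physical design": {"hw design"},
        "es tapeout": {"backend physical design", "fpga prototype"},
        "chip bring-up": {"es tapeout", "driver development"},
    }

    # A valid ordering interleaves hardware and software tasks wherever dependencies allow.
    print(list(TopologicalSorter(DEPENDS_ON).static_order()))

Real plans must also respect calendar time and resource constraints, but even this skeleton makes the prerequisite and output coupling between the two teams explicit.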

4.8 Summary

In this chapter, the tasks of digital hardware and software were presented as being very similar from a high-level perspective. However, digital hardware design has a multi-dimensional set of inter-related problem domains that need to be considered when engineering from problem to solution—as illustrated in the conceptually amenable fashion of the Gajski-Kuhn Y-chart.

5 Functional programming advocates all computation as the evaluation of mathematical functions, avoiding changes in state and mutable data—thus it does not suffer many of the side effects with respect to concurrency that are commonplace in imperative programming.


Figure 4.9: Detailed SoC Flow Overview: problem analysis; SoC specification, system architecture and partitioning into HW/SW; parallel HW and SW design and implementation; component testing (block simulations, unit test); integration testing (FPGA prototype, co-simulation); backend physical design and pre-tapeout verification; ES tapeout (engineering sample ICs), chip bring-up and MP tapeout (mass production ICs); application software completion, release candidate and release v1.0 with release notes; and productisation through system test, production qualification and ongoing product maintenance.


Following this, we looked at the detailed stages of digital hardware design flow, and how much of this work is now automated via EDA software tools. We also discussed the very significant resources (both in terms of computing power, and of time) that these tools consume. From a semiconductor process technology perspective, pace has continued unabated in recent years. Yet the overwhelming complexity of SoC design has seen little in the way of a magic panacea in the years since Sangiovanni-Vincentelli’s keynote. The increasing power of computing workstations has enabled engineers to tackle more ambitious, larger designs, but has not achieved a dramatic breakthrough in productivity. Even the process technology itself is hurtling towards a fundamental obstacle—that of the size of a silicon atom and of the diamond cubic crystal structure into which it crystallises. And so perhaps we have entered the Vico-Sangiovanni-Vincentelli Age of Men in electronic design automation. Nonetheless, work is ongoing in raising the level of abstraction further to a behavioural level, which may bear fruit in terms of electronic design capability (Axelsson, 2006; Bjesse, 2001; Chiodo et al., 1994; Mycroft and Sharp, 2001; Sharp, 2002). Having seen the conceptual and philosophical similarities of digital hardware and embedded software exposed through their shared origins in Chapter 3, digital hardware design has been presented as a technology-specific workflow in this chapter—and yet one which, yet again, bears at least some resemblance to software design and development. The added complexity of the physical dimension is also present on occasion in software design (i.e. considering the limitations of the physical platform). The profitable elevation of abstraction level in software development is something that academics, EDA tool vendors and hardware designers are actively applying to hardware description and design entry. These facts further add credence to the notion of ease of technology transfer and concept sharing between both disciplines, within the overall context of governing social processes and structures.


Part II

Research Design

Chapter 1: Introduction

Part I: Initial Literature Review
Chapter 2: Semiconductor Ecosystem
Chapter 3: Etymology of Hardware and Software
Chapter 4: Digital Hardware Flows vs. Software Development Processes

Part II: Research Design
Chapter 5: Research Design and Investigation

Part III: Theoretical Model and Solutions
Chapter 6: Emergence of Theoretical Model
Chapter 7: Emergent Toolbox of Patterns for SoC Project Organisation
Chapter 8: Validation of Theoretical Model

Chapter 9: Conclusions

Chapter 5

Research Design and Investigation



Research is the process of going up alleys to see if they are blind. — MARSTON BATES



1906–1974, American Zoologist.

5.1 Introduction

This thesis examines the specific relationship between digital hardware and software development activities in the successful creation of modern consumer electronics devices. My aim is to determine which aspects of digital hardware and embedded software inter-disciplinary development caused the most difficulty to a semiconductor device for the consumer electronics market, specifically looking at:

Research Question 1: Is there a different frame of reference regarding how digital hardware engineers and software engineers approach their work that causes development stress between the different technically skilled individuals themselves and, by consequence, their respective teams?

Research Question 2: If so, is this development stress related to an intrinsic quality of their discipline, or is it a mere artifact of the processes, techniques and tools they use in achieving their work? (a) Do the differences in inherent properties of the essence of software and digital hardware—namely the changeability and invisibility of software—affect development? (b) Does the ease with which the logic implementation can be changed have an impact at a fundamental level on how the logic implementation is created?

Research Question 3: As a result of technical specialisation, are there other effects from developer culture and mindset, from shared experiences or from the language and terminology in common use, that cause development stress through how digital hardware engineers and software engineers analyse, model, solve and test problems of logic?


Research Question 4: Can a solution be provided which will help to relieve the problem of development stress?

In this chapter, I provide a context for the research undertaken to perform this study, through specific attention to focusing and bounding of initial research aims via a conceptual framework. I present and discuss the various conceptual design decisions for the study that shaped and directed the course of this research. The original conceptual framework for the study is conveyed, along with its subsequent refinements.

5.2 Conceptual Design

According to Miles and Huberman (1994), ‘Theory building relies on a few general constructs that subsume a mountain of particulars... Any research, no matter how inductive in approach, knows which bins are likely to play in the study and what is likely to be in them.’

The categorisation of research data is influenced by the theory of the scientific method in use, by the experience of the researcher, and by the general objectives of the study. Explicitly naming these categories early on helps specify what will and will not be studied. In quoting Wolcott (1982), Miles and Huberman (op. cit.) present the quality of ‘boundedness’ in a qualitative research design as follows: ‘There is merit in open-mindedness and willingness to enter a research setting looking for questions as well as answers, but it is “impossible to embark upon research without some idea of what one is looking for and foolish not to make that quest explicit”.’

From Chapter 3, we have seen that there is classical ex litterae evidence for treating software and hardware disciplines as very closely related at a conceptual level. Despite these shared beginnings, my original starting premise for this work was to suggest that discipline-specific technical and related “techno-cultural” differences are a major contributor to development stress. With this in mind, I purposefully tried to gain a clear perspective on the inter-relationships of the various interaction phenomena of digital hardware and software activities in consumer electronics ASIC and SoC projects. Creswell (2003) describes the three questions central to the design of research as:

1. What knowledge claims are being made by the researcher?
2. What strategies of inquiry will inform the procedures?
3. What methods of data collection and analysis will be used?


I will answer the first of these questions by presenting the conceptual framework I developed to bound the work and bring focus to the research. I will deal with the second and third of these questions through discussion of my choice of research methodology—specifically Section 5.3 for my method of investigation, and Section 5.4 for my procedures for data collection.

5.2.1 Conceptual Framework

Miles and Huberman (1994) present that a conceptual framework: ‘. . . explains, either graphically or in narrative form, the main things to be studied—the key factors, construct or variables, and the presumed relationships among them.’

They further elaborate that making conscious study design decisions early on that bound a research study can be seen as: ‘. . . analytic—a sort of anticipatory data reduction because they constrain later analysis by ruling out certain variables and relationships and attending to others’. Miles and Huberman (op. cit.)

The diagram in Figure 5.1 illustrates, in the form of a mindmap, the conceptual framework constructed in an effort to constrain and bound my research early on. The graphical conventions in this diagram are as follows:

• the non-directional links imply loosely related themes;
• the directional links (depicted with arrows) imply possible sources of clarification—for example, the node ‘What are the causes of stress?’ (the effect) may have a causal relationship with ‘Is there a cultural divide to be addressed?’ (the cause);
• the bubble-like patterns represent parent/child relationships (of categories to sub-categories or illuminating questions), with the nodes of ‘strengths’, ‘weaknesses’, and ‘technological aspects’ being the parent nodes (i.e. categories).

I found this framing of the research area to be crucial initially:


Figure 5.1: Initial Conceptual Framework for the Study.


• in allowing me to successfully discriminate and filter what categories of information were important to the research;
• in providing me with some accessible and specific areas to direct subsequent investigation;
• and in providing filtering bins into which I could place data collected during my field work, for subsequent processing and analysis.

My bounding assumptions for the study at this initial stage were a direct result of my own personal experiences in embedded development in the consumer electronics industry. Glaser and Strauss (1967) stress the importance of studying ‘an area without any preconceived theory that dictates, prior to the research, “relevancies” in concepts and hypotheses’. In tandem with this, Glaser and Strauss (op. cit.) make the separate point of encouraging that researchers ‘should deliberately cultivate such reflections on personal experiences’ for the purposes of developing theory from insights.

There is an important but subtle distinction between the two. It is important to leave your mind open to surprise discoveries, new avenues of inquiry or results that contradict expectations, whilst at the same time using personal knowledge of the subject areas as an instrument of navigation—focusing the research in the most interesting and fertile directions. Consequently, I was consciously aware of the risk of this limiting of the scope of my study becoming a premature restriction that could possibly blind me ‘to the important features in the case’, or cause me to misread ‘local informant’s perceptions’ (Miles and Huberman, 1994). Therefore, I did not attempt to further deductively refine, constrain, or weight in importance any of the phenomena covered by my conceptual framework at this early inception point in my research. Rather, I chose to let them self-refine inductively and iteratively, as further field work was undertaken. Miles and Huberman (op. cit.) indeed suggests it may be necessary to do this: ‘Conceptual frameworks are simply the current version of the researcher’s map of the territory being investigated. As the explorer’s knowledge of the terrain improves, the map becomes correspondingly more differentiated and integrated...’

For example, many causes of development stress presented themselves early on in my research, leading to refinements in my conceptual framework such as:

• Cultural and Techno-Cultural Issues;
• Business Model and Business Goals;
• Geo-spatial Issues;
• Technological Constraints;
• Social Issues.


I will further discuss these causes of development stress in detail in Chapter 6, when describing my emergent theoretical model.

5.3 Method of Investigation

Methodology is ‘the philosophical and theoretical underpinning of research that affects what a researcher counts as evidence’ (Balnaves and Caputi, 2001), referring to ‘the choices we make about appropriate models, cases to study, methods of data gathering, forms of data analysis etc. in planning and executing a research study’ (Silverman, 2009). When undertaking any research study, there are a variety of accepted scientific research methodologies which can be applied. There is an onus upon the researcher to clearly identify what the most appropriate method of inquiry is, based upon the advantages and disadvantages of each as applied to the particular research questions under study. Creswell (2003) describes the need in research to ‘match between problem and approach’, and comments that ‘certain types of social research problems call for specific approaches’. Based on the research questions presented in Chapter 3 and the conceptual framework in Subsection 5.2.1, in this section I will now discuss my rationale for selecting the qualitative method of grounded theory as my choice of research methodology.

5.3.1 Qualitative versus Quantitative versus Mixed Methods

Qualitative research describes an approach to inquiry that seeks to explore and derive understanding of the interaction of human groups in a social environment. It has been loosely defined as: ‘any kind of research that produces findings not arrived at by means of statistical procedures or other means of quantification’ (Strauss and Corbin, 1998)

The process of conducting such research involves a feedback loop between emerging questions and the generation of hypotheses, data collection in the subjects’ environment, data analysis that works from unstructured specifics to the emergence of generalised themes, and interpretation of the meaning behind the data (Creswell, 2003). In contrast to this, quantitative research is an approach to test objective hypotheses through the examination of specific relationships between accurately measurable variables. Data in quantitative research is usually amenable to analysis and protection against bias through the use of statistical techniques. In considering scientific methods, there are two basic forms of formal logic argument (Giere, 1984; King, 2006):

inductive reasoning – which is knowledge expanding in arguing from the particular to the general (Hawthorn, 2008); and


deductive reasoning – which is truth preserving in arguing from general premises to specific conclusions.

Qualitative research employs inductive reasoning (see Figure 5.2(a))—that is, it starts with observation of phenomena, and works towards a theory that explains these observations and predicts similar phenomena. Quantitative research employs deductive reasoning (see Figure 5.2(b))—that is, it starts with a predictive theory, and attempts to validate and confirm this theory through observation. Giere (1984) presents that: ‘The philosopher’s dream of finding a form of argument that would be both truth preserving and knowledge expanding is an impossible dream. You must choose one or the other. You cannot have both.’

Qualitative research is very adept at establishing the landscape of a research problem. Nevertheless, it is now recognised that there is significant merit to the application of both qualitative and quantitative procedures within a research study. Creswell (2003) recommends the use of qualitative research for such exploratory purposes, and the use of quantitative research as a means of further elaboration on the theme and of providing order and importance to the qualitative data—so-called ‘mixed methods’ approaches. Mixed methods research attempts to combine qualitative and quantitative research forms, mixing both in a study. The predictive capability of a qualitative hypothesis can be validated through subsequent quantitative work to see if it can be generalised (Creswell, 2007). Mixed methods research is described as ‘more than simply collecting and analyzing both kinds of data: it also involves the use of both approaches in tandem so that the overall strength of a study is greater than either. . . ’ (Creswell, 2003). A number of the potential synergistic benefits of linking both qualitative and quantitative methods are described in Miles and Huberman (1994).

5.3.2 Philosophical Perspectives of Research Paradigms

All research and formal inquiry, whether it be quantitative or qualitative, is underpinned by philosophical perspectives and epistemology which directs the work in terms of what is valid research and what methods of inquiry are appropriate. Creswell (2007) categorises these philosophical assumptions into five categories:

Ontological – a stance toward the nature of reality;

Epistemological – the study of the nature and scope of knowledge itself—in essence, how to acquire knowledge by observing the world;

Axiological – the role of values in the research;


Figure 5.2: Reasoning: (a) inductive (“bottom-up”) reasoning, working over time from observation through pattern recognition and hypothesis towards a general theory; (b) deductive (“top-down”) reasoning, working from an abstract theory through a specific hypothesis and observation towards confirmation. Based on illustrations from Trochim (2006).


Rhetoric – the language of research;

Methodological – the methods of inquiry employed.

The ontological issue relates to the concept of multiple realities—the fact that each individual contributing to a study brings their own perspective and understanding of the reality. Epistemologically, qualitative researchers try to get as close to the participants being studied as possible, and immerse themselves in the environment. Creswell (2007) presents the axiological implication of qualitative research as being that researchers acknowledge ‘the value-laden nature of the study and actively report their values and biases as well as the value-laden nature of information gathered from the field’. Qualitative studies tend to embrace and employ a style of rhetoric that becomes personal and literary in form. The methodologies of qualitative studies are characterised as inductive and emerging, ‘shaped by the researcher’s experience in collecting and analyzing the data’. The researcher’s chosen set of assumptions, embodied in the selection of an appropriate research methodology, adds further coherency and structure to the research by applying to the inquiry certain paradigms or worldviews—a ‘basic set of beliefs that guide action’ (Guba 1990, cited in Creswell 2007). This is explained succinctly in the following: ‘any research method, any approach to the systematic investigation of phenomena, rests upon epistemological and ontological assumptions; assumptions about the nature of knowledge and about the kinds of entity that exist. These assumptions are literally embodied in the practices of a scientific community, and in what this community takes to be the exemplars of paradigmatic inquiry. These assumptions typically go unnoticed because they are taken for granted. . . ’ (Packer, 2005)

Amongst the various philosophical stances I reviewed, the approaches I consider most relevant to this research include: Positivism – a philosophy that advocates the only authentic form of knowledge is that which is acquired based on actual sense experience, and excludes as useless any metaphysical speculation about origins or causes. Positivist research assumes that reality can be objectively described and measured independently of the observer. Interpretivism – a philosophy of research that believes all knowledge is a matter of interpretation, actively acknowledging the bias that a researcher introduces into the work. Critical Research – a philosophy that is interested in the prevailing social structures and aimed at emancipating and empowering its human research subjects. An important aspect of critical research is that it aims to make a practical difference to the quality of its subjects’ lives and social environments.


Constructivism – a philosophy which views all knowledge as being contingent upon human interactions in a social context, and thus may change depending on circumstances. These descriptions are simplistic and somewhat naïve, but serve well to illustrate the basic philosophies of the approaches.

5.3.3 Selection of Interpretation Methodology

An initial literature review had suggested the core research problem to be addressed in this thesis—that hardware and software teams need to operate as effectively and efficiently together as possible in order to compete in a very challenging global marketplace—and both a conceptual framework and a set of illuminating research questions. My conceptual framework had suggested that issues were to be found in the techno-cultural aspects of team interworking, while my research questions were focused on identifying whether these issues were related to developer interaction (collectively and individually), to deterministic effects from tool selection, or to deterministic effects related to the technical specialisations themselves. This initial literature review had also shown that despite widespread acceptance and usage of the terms, the remit and boundaries of what is understood by software and hardware are imprecise, and grounded neither etymologically nor historically in definition. Having identified an investigation of digital hardware and software team interaction as my research domain, and having surveyed various scientific methodologies, it became apparent from my research problem and research questions that a qualitative study was required to understand the nuances of interaction between the two groups, for the following reasons: the research is exploratory in nature, trying to understand the practitioners’ view of the essence of two specific technical specialisations, and trying to identify and elaborate upon the causes of development stress between inter-working, co-dependent teams of each specialisation in a social setting. We will now discuss why qualitative studies are appropriate to research of this type. Qualitative research is exploratory in nature (Creswell, 2003; Miles and Huberman, 1994), allowing its application when a phenomenon needs to be understood because little research has been done on it. Creswell (2003) describes the qualitative approach to research as: ‘. . . one in which the inquirer often makes knowledge claims based primarily on constructivist perspectives (i.e., the multiple meanings of individual experiences, meanings socially and historically constructed, with an intent to developing a theory or pattern). ’

On the applicability of qualitative research, Eisenhardt (1989) writes that it is appropriate when ‘little is known about a phenomenon’:



‘. . . (It) is most appropriate in the early stages of research on a topic or to provide freshness in perspective to an already researched topic.’

Figure 5.3: Components of Data Analysis: Iterative Model (data collection, data reduction, data display, and conclusions: drawing/verifying). Taken from Miles and Huberman (1994).

My research questions, presented in Subsection 3.5.1, are concerned with exploring and examining the interaction of digital hardware and software developers working on complex semiconductor projects. This involves achieving an understanding of the operation of two specific technical teams in a social environment. According to Ericksson (cited in Miles and Huberman, 1994), ‘social facts are embedded in social action, just as social meaning is constituted by what people do in everyday life’. Furthermore,

‘. . . social phenomena, such as language, decisions, conflicts and hierarchies, exist objectively in the world and exert strong influences over human activities because people construe them in common ways . . . Qualitative data, with their emphasis on people’s “lived experience,” are fundamentally well suited for locating the meanings people place on the events, processes, and structures of their lives . . . ’ (Miles and Huberman, 1994)

Figure 5.3 illustrates the basic characteristics of data analysis in qualitative studies. In research of this type, theories are often ‘grounded’ in data. The major components of this type of data analysis are: Data collection – the mechanism by which information about the social environment under investigation is faithfully captured and recorded, to address the purpose of qualitative research (in discovering, exploring, understanding or describing phenomena that may already have been identified but are not well understood). Data reduction – the process of condensing the source research material into a more compressed form amenable to the drawing of conclusions.


Data display – the organising and visualising of data such that it is immediately accessible and discernible. Miles and Huberman (1994) state that ‘extended text can overload humans’ information-processing capabilities and preys on their tendencies to find simplifying patterns’. Humans are very proficient at categorising data automatically (Gaby, 2008b), and data display is important to ensure the salient information is captured and not overlooked. Conclusion drawing – the mechanism by which the underlying themes permeating the data are drawn to the fore, based on the work done in continual analysis. As with most of the stages in this research, the process iteratively feeds back into subsequent stages of collection, reduction, display and conclusion drawing, as presented in Section 5.2. The ideal reaction to early conclusions formed is described as follows: ‘The competent researcher holds these conclusions lightly, maintaining openness and skepticism, but the conclusions are still there, inchoate and vague at first, then increasingly explicit and grounded . . . ’ (Miles and Huberman, 1994)
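To make the iterative character of this loop concrete, the sketch below is a minimal illustration in Python (not taken from the thesis; the keyword list and transcript snippets are invented). It condenses successive ‘transcripts’ into code counts and re-displays the running picture after every pass, mimicking how collection, reduction, display and tentative conclusion drawing feed into one another:

```python
from collections import Counter

def reduce_data(transcript: str) -> Counter:
    """Data reduction: condense a transcript into counts of candidate codes
    (here, naively, a fixed set of keywords stands in for real coding)."""
    keywords = {"risk", "schedule", "communication", "tapeout"}
    return Counter(w for w in transcript.lower().split() if w in keywords)

def display_data(counts: Counter) -> str:
    """Data display: a compact view so patterns are not lost in extended text."""
    return ", ".join(f"{code}({n})" for code, n in counts.most_common())

transcripts = [  # stand-ins for successively collected interviews
    "the schedule risk came from a late tapeout and poor communication",
    "communication broke down and the schedule slipped again",
]

running = Counter()  # tentative conclusions, held 'lightly' and revised each pass
for transcript in transcripts:
    running += reduce_data(transcript)   # collection feeds reduction...
    print(display_data(running))         # ...which feeds display and conclusion drawing
```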

5.3.4 Grounded Theory

The social aspect of my research suggests the use of qualitative research. The research is also highly exploratory in nature, hence my specific selection of grounded theory (Glaser and Strauss, 1967) as the most appropriate scientific research method for the study. Providing a mechanism to support the systematic discovery of theory in data makes grounded theory extremely useful for the exploration of a new area of research (Creswell, 2003), where little pre-existing literature is available. The scientific method of grounded theory is a methodology in the field of social anthropology that was developed by Barney Glaser and Anselm Strauss (Glaser and Strauss, 1967; Haig, 1995) in the 1960s. Miles and Huberman (op. cit.) explain that social anthropology is concerned with the genesis or refinement of theory explaining the way that people in particular work situations come to ‘understand, account for, take action and otherwise manage’ their daily activities. Social anthropological methods are ‘typically based on successive observations and interviews, which are reviewed analytically to guide the next move’.

Creswell (op. cit.) describes the strategy of grounded theory as being one of attempting ‘to derive a general, abstract theory of a process, action or interaction grounded in the views of the participants in a study’.

The process of grounded theory involves multiple stages of feedback between data collection, and the ‘refinement and interrelationship of categories of information’. The most important characteristics of research of this design are the ‘constant comparison of data with emerging categories’, and strategic ‘theoretical sampling of different groups’ of participants, in order to maximise the ‘similarities and the differences’ of collated information—thereby generating as rich an explanatory theory as possible.



Knigge and Cope (2006) note that grounded theory focuses on the ‘subjective experiences’ of an environment, ‘as they are influenced by broader historical, geographical, and structural contexts’. This makes grounded theory an apt choice for exploring ‘human agency and social structures’ for trends of ‘small-scale and large-scale social phenomena’.

Bailey et al. (1999) favourably qualify grounded theory, in describing how it: ‘demands critical inquiry: it starts from the premise that the world is in a constant state of flux, and that individuals are not all equally placed; it seeks not only to uncover conditions that are relevant to the research question, but also to build in process and change by exploring how individuals respond to changing conditions and to the consequences of their actions.’

Through the use of grounded theory, I was able to discover the important variables in digital hardware and software team interaction in the context of semiconductor development, and to establish some hypotheses concerning the causes of development stress.

Controversy and Split Opinion on Implementation

Since its inception, the scientific method of grounded theory has fractured into two distinct models. Bryant and Charmaz (2007) humorously refer to this dichotomy of opinion as ‘the internal conundrums contained within grounded theory methods that confuse students and instructors’. The models differ primarily in how pre-determined ideas are allowed to affect the emerging theory:

• the simplistic inductive model of grounded theory, championed by Glaser (1992);
• the sophisticated model of grounded theory, advocated by Strauss and Corbin (1997).

The simplistic inductive model is concerned with undertaking research with as few predetermined ideas as possible, in order not to pollute the emerging theory with research bias. Glaser and Strauss (1967) note that researchers should commence ‘without any preconceived theory that dictates, prior to the research, “relevancies” in concepts and hypotheses’.

Strauss and Corbin (1997) subsequently call instead for a ‘theoretical sensitivity’ that recognises the value that the background and experience of the researcher brings to the data in providing a theoretical lens through which to focus the research and the processing of data and themes. They also recognise the value of pre-existing literature in helping to sensitise the research to pertinent aspects of the area under study, whilst still allowing the theory to emerge organically. Glaser’s antagonistic and critical reaction to this model is captured in Glaser (1992), and further in Glaser and Holton (2004). Matavire and Brown (2008) note the popularity of the sophisticated “Straussian” approach to grounded theory in Information Systems research, with the simplistic inductive “Glaserian”


‘Glaserian’ Approach vs. ‘Straussian’ Approach:

• Glaserian: Beginning with general wonderment (an empty mind). Straussian: Having a general idea of where to begin.
• Glaserian: Emerging theory, with neutral questions. Straussian: Forcing the theory, with structured questions.
• Glaserian: Development of a conceptual theory. Straussian: Conceptual description (description of situations).
• Glaserian: Theoretical sensitivity (the ability to perceive variables and relationships) comes from immersion in the data. Straussian: Theoretical sensitivity comes from methods and tools.
• Glaserian: The theory is grounded in the data. Straussian: The theory is interpreted by an observer.
• Glaserian: The credibility of the theory, or verification, is derived from its grounding in the data. Straussian: The credibility of the theory comes from the rigour of the method.
• Glaserian: A basic social process should be identified. Straussian: Basic social processes need not be identified.
• Glaserian: The researcher is passive, exhibiting disciplined restraint. Straussian: The researcher is active.
• Glaserian: Data reveals the theory. Straussian: Data is structured to reveal the theory.
• Glaserian: Coding is less rigorous, a constant comparison of incident to incident, with neutral questions and categories and properties evolving; take care not to ‘over-conceptualise’, identify key points. Straussian: Coding is more rigorous and defined by technique; the nature of making comparisons varies with the coding technique; labels are carefully crafted at the time; codes are derived from ‘microanalysis which consists of analysis data word-by-word’.
• Glaserian: Two coding phases or types, simple (fracture the data then conceptually group it) and substantive (open or selective, to produce categories and properties). Straussian: Three types of coding, open (identifying, naming, categorising and describing phenomena), axial (the process of relating codes to each other) and selective (choosing a core category and relating other categories to that).
• Glaserian: Regarded by some as the only ‘true’ GTM. Straussian: Regarded by some as a form of qualitative data analysis (QDA).

Table 5.1: Key differences in Grounded Theory approaches. Taken from Onions (2006).

approach being the least often employed. In terms of its epistemological stance, there is a suggestion that methodologies such as grounded theory are independent of an underlying philosophical position (Matavire and Brown, 2008). Nevertheless, at least in the field of Information Systems research, ‘Glaserian grounded theory methodology is often alleged to be positivist’ (reflecting Glaser’s belief in the independence of the observation from the observer – cf. Bryant and Charmaz, 2007), while ‘the Straussian approach is often associated with interpretivism’ (Matavire and Brown, 2008). It is interesting also to note the suggestion that whilst the rhetoric of positivism vs. interpretivism in the literature of scientific methods may historically have been ‘useful as a way of laying the foundations for change—of unseating the positivist hegemony and allowing newer, interpretive forms of research to grow and prosper’ (Weber, 2004), it may currently be outdated and misleading. Weber (2004) considers it the responsibility of the researcher to deeply understand the ‘strengths and weaknesses’ not only of different research methods in the context of the research, but also of the knowledge which they yield about the phenomena under study. In this regard, Weber (2004) proposes that the ‘longstanding positivist versus interpretive rhetoric’ may be inhibiting, rather than facilitating.

In presenting criticisms of the method of grounded theory, Thomas and James (2006) note issues with whether the outcome qualifies as theory, and whether this outcome is inductively discovered or actually invented. I disagree with some of their positioning. There is obvious value in imperfect abstraction and modelling (see the discussion on abstraction in the context of languages of logic in Chapter 3, Section 3.4). Whilst scientific breakthroughs may indeed be achieved through rethinking existing assumptions—a process of ‘conjectures and refutations’ (Popper, cited in Thomas and James, 2006)—it is wrong to discount the potential for incremental scientific advance through the use of inductive reasoning. An additional criticism is that grounded theory is popular because it offers a methodology to researchers new to sociological research. This is unfair, as I feel that more sophisticated and experienced researchers in any field will use derivative methods that have been ‘tweaked’ over time. Nevertheless, they do make a reasonable argument for a decidedly interpretivist approach to grounded theory, in noting the ‘indissoluble’ inter-relationship between interpreter and interpretation: ‘Those disgraced “theories logically deduced from a prior assumptions” are no more or less sinister than the already existing hermeneutic brackets in the researcher’s head. . . For why is the researcher there at all? There must be some assumption that the chosen topic is a worthy field for study.’ (Thomas and James, 2006)

Thomas and James (2006) do applaud the importance of constant comparison within grounded theory, and in fact vaunt it as perhaps the method’s most significant contribution to the philosophy of social studies. I have consciously taken a decidedly Straussian approach to this research, and am content with the resulting bias I am bringing as a software developer in the field of semiconductor devices. I chose it not because it is more popular in Information Systems research, but because I felt most comfortable with using it. Figure 5.4 illustrates the hermeneutic circle of knowledge interpretation that I pursued in this research:

• My professional background provided the basis for an original research problem;
• An initial etymologically-focused literature review helped identify research questions;
• The method of grounded theory was then employed to produce a generalised theoretical framework.

5.3.5 Phases of Grounded Theory Research

Grounded theory consists of a number of distinct phases:

• Sampling and data collection;
• Coding;
• Data analysis, and emergence of hypotheses.


Figure 5.4: Hermeneutic Circle. The main research problem informs, and is informed by, the literature review; the literature review informs the detailed research questions, which in turn inform data gathering; data gathering informs the overall conclusions, which feed back into the main research problem. Taken from Brown (2007).

An important consideration of grounded theory is the way in which these phases are intertwined, concurrent and recursive. Constant comparison of emerging themes within the collected data is used to support the systematic discovery of theory—in effect, “grounding” the theory in the data, rather than trying to apply a theory to the data. Both data and emerging themes act in a feedback loop to continually improve the development of emerging themes, through influence on subsequent sampling, data collection and analysis. This is a defining characteristic of grounded theory. Coding is the term used for the identification of categories and concepts from the collected data, and is a fundamental step in capturing emergent theory from the data. Presenting coding as ‘one of the most important elements of grounded theory’, Knigge and Cope (2006) define coding as:

‘. . . a process of both data reduction (for example, making hundreds of pages of notes easier to grasp) and data analysis (that is, by evaluating data, looking for internal consistencies or inconsistencies, and identifying patterns, the researcher is analyzing her or his findings).’

Rigorous coding ensures that the researcher is scientifically analysing the data, rather than being unduly influenced by either pre-conceived ideas or by the point of view of the interviewees of the study. In the Strauss and Corbin (1997) model of grounded theory, there are three stages to the analysis of collected data: open coding, axial coding, and selective coding. During open coding, it is important to identify what is going on behind what the participant said, rather than just coding literally what is said. Axial coding is the process of relating codes to each other, through background contexts, cause-and-effect conditions and so on. It may suggest refinement of actual codes, may highlight the importance of certain concepts, and may link codes together in meaningful ways—either as peers or in a major-code / sub-code relationship. Selective coding is concerned with choosing one category as the central component, around which every other category is related. It helps to focus and improve the structural integrity of the emerging theory by integrating the axial coding into a narrative. ‘The act of constant comparison during coding helps . . . to generate abstract categories and their properties, which, since they emerge from the data, will clearly be important to a theory explaining the kind of behavior under observation.’—(Glaser and Strauss, 1967).
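As a minimal, purely illustrative sketch (in Python, not taken from the thesis; the codes, categories and excerpt below are invented or paraphrased from the memos solely for illustration), the three stages might be represented in simple data structures as follows:

```python
from dataclasses import dataclass, field

@dataclass
class Excerpt:
    interviewee: str
    text: str
    open_codes: list[str] = field(default_factory=list)  # open coding: label what lies behind what was said

excerpt = Excerpt(
    interviewee="John",
    text="these products are always designed by chat at the end of the day",
)
excerpt.open_codes += ["informal communication", "tacit knowledge"]

# Axial coding: relate codes to one another, here as a major-code / sub-code hierarchy.
axial = {
    "communication": {
        "informal communication": ["tacit knowledge", "design by chat"],
        "formal communication": ["weekly status calls"],
    },
}

# Selective coding: choose one core category and relate the other categories to it.
core_category = "development stress"
related_to_core = {
    "communication": "mediates stress between co-dependent teams",
    "business model": "sets the market window that generates stress",
}
print(excerpt.open_codes, "->", core_category)
```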

An important activity during continual analysis is the writing of memos: ‘Writing theoretical memos is an integral part of doing grounded theory. Since the analyst cannot readily keep track of all the categories, properties, hypotheses, and generative questions that evolve from the analytical process, there must be a system for doing so. . . Memos are not simply “ideas”. They are involved in the formulation and revision of theory during the research process.’—(Corbin and Strauss, 1990)

A memo is a note which identifies a possible hypothesis or relationship between emerging themes. The writing of memos is intrinsic to data analysis, and it occurs as a parallel activity to the process of data collection and coding. Glaser and Holton (2004) describe that while codes ‘conceptualize data’, memos ‘reveal and relate by theoretically coding the properties of substantive codes – drawing and filling out analytic properties of the descriptive data’. Glaser and Holton further explain how ‘comparative reasoning’ in memos:

Figure 5.5 shows a representative memo, dealing with the importance of incidental knowledge transfer. Grounded theory naturally terminates when the point of diminishing returns is reached on data collection vis-à-vis a particular emerging category: ‘The criterion for judging when to stop sampling the different groups pertinent to a category is the category’s theoretical saturation. Saturation means that no additional data are being found whereby the sociologist can develop properties of the category.’—(Glaser and Strauss, 1967).
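A minimal sketch of this stopping criterion (in Python, not from the thesis; the category, property names and window size are invented for illustration) might track, per category, whether recent interviews are still contributing new properties:

```python
def is_saturated(properties_per_interview: list[set[str]], window: int = 3) -> bool:
    """True once the last `window` interviews added no property not already seen
    for this category -- i.e. additional data no longer develop the category."""
    seen: set[str] = set()
    added_new = []
    for props in properties_per_interview:
        added_new.append(bool(props - seen))
        seen |= props
    return len(added_new) >= window and not any(added_new[-window:])

# e.g. properties observed for a 'geographical separation' category, per interview
history = [{"trust", "co-location"}, {"trust"}, {"incidental knowledge"},
           {"trust"}, {"co-location"}, set()]
print(is_saturated(history))   # True: the last three interviews added nothing new
```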

5.4 Data Collection

Once the concept of hardware and software as two similar but distinct forms of logic expression had gained purchase in my mind, I began to explore the notion that both disciplines might experience the same forms of progress-retarding stress during the development cycle. At this point, I began interviewing with the view to grounding a theory as to what these development stresses might be, and how best they might be alleviated. Additionally, I was interested in what either discipline could learn from its peer. I interviewed 20 industry experts with a variety of backgrounds and experiences. The mixture was predominantly of a software background, but also included some ASIC and SoC designers.


Memo: “Design by Chat”: Importance of Tacit/Informal/Incidental Knowledge Related To: Geographical Separation,Trust,Cultural Issues Have established that Informal/Incidental knowledge is seen by many as a ‘secret sauce’ that is lost with geographical separation: ‘My view on it is you know these products, they are always designed by chat at the end of the day, *laughs* . . . You got your charts and your la-la-la, and your design processes. . . They are not, they are designed by chat.’ — INTERVIEWEE JOHN ‘I think the nature of innovation is that guys love designing and they hate documentation whether that be lab notes or whatever. I know that skype/IM is used quite a bit for more hints or guidance in this area or that. If you are talking about innovation or design, you can’t stifle that.’ — INTERVIEWEE JOSEPH

However, it appears that technology doesn’t adequately alleviate the need for face-to-face and informal communication/social interaction: ‘(incidental knowledge sharing) is hugely important. It is the “secret sauce”, I think, that makes all the difference in the office. It is all very well to say you can use technology to get around the gap, but . . . I have yet to see that done right.’ — INTERVIEWEE MICHAEL

Nevertheless, social networking tools (blogs, wikis, IM) were seen as power tools for collaboration once social relationships had been established (“TRUST”). There is an important social aspect to informal communication: ‘what you would then be missing is the ability to have those conversations in the canteen. . . people getting to know each other, the relationships, and also people talking about what they are working on. . . there is this sort of indicental knowledge which is the key.’ — INTERVIEWEE MICHAEL ‘They (informal conversations) let you know a lot of the background as well, what is going on in the company, what the politics are. . . They make you feel a bit more part of the culture. It is sociable. About building up the informal relationships.’ — INTERVIEWEE WILLIAM

The effect of missing this information transfer is that communications becomes more structured, and hence slower. ‘we don’t actually hold formal or informal meetings where we bounce ideas between the two teams, but we do pick things up because we are in close proximity to each other. Take away the close proximity and you are missing a lot. You actually have an interface there that you then need to work hard on. And unless you are prepare to work hard - to actually recognise that there is an interface and are actually prepared to work at it - you are going to lose a lot, and it definitely will impact the schedule, for sure.’ — INTERVIEWEE THOMAS

Figure 5.5: Representative Memo.



Figure 5.6: Grounded Theory Method employed in this research. The figure shows the progression from mindmaps (establishing themes) through open coding (concepts, categories, sub-categories) and axial coding (hierarchical relationships) to selective coding (generalising and abstracting into a theoretical model) and model development, supported throughout by memos (coding, theoretical and operational memos, including the “Design by Chat” memo of Figure 5.5). The coded categories depicted in the emerging model include the development teams (digital hardware design: not easily changeable, formal approach to test, significant capital investment, costly to prototype; embedded software design: easily changeable and expected to be so, a differentiator), business realities (tight market windows, business models, justification of investment), risks (fundamental market understanding, product specification, technical, social), techno-cultural effects (linguistic, technical, ambiguity, time, culture), technical determinism (tool and tool-flow related issues), communication mediators, and GSD factors (familiarity, trust, irrelevance of specialisation, flow of informal information).


Both Intellectual Property and Fabless Semiconductor models were covered. The interviews were semi-formal and conversational in nature. An interview guide (presented in Appendix B) was developed and used as a basis for the interviews, but I allowed the interviews to wander off topic if the interviewee so wished. Details of the selection process are described under ‘Selection of Candidates’ below. Each interview was recorded digitally, and averaged approximately an hour in length. The interviews were transcribed into text, and this text was then coded for significant topics of interest to start building up the theory.

5.4.1 Data Collection Procedures

Qualitative research typically focuses on four basic types of data collection procedures (Creswell, 2003):

1. Observations;
2. Interviews;
3. Documents;
4. Multimedia Content.

In selecting amongst the various data collection techniques, I chose face-to-face semi-structured interviewing with experienced developers in the embedded software and digital IC hardware communities as the most suitable type for my study—9 hardware engineers and 11 software engineers were interviewed. Interviewing is a widely used method for collecting data in social sciences (Berry, 1999), and semi-structured interviewing is ‘especially useful in studies where the goals are exploration, discovery, and interpretation of complex social events and processes’ (Blee and Taylor, 2002). Furthermore, Fowler and Mangione (1990), cited in Foddy (1993), note that: ‘Exploratory research usually is not done best using standardized interviews. By design, in a standardized interview one only learns the answers to the questions that are asked.’

In comparison to structured interviewing, where respondents are asked a predefined schedule of questions with a limited set of expected responses, semi-structured interviewing relies on an interview guide to initiate a consistent discussion about a set of topics. Blee and Taylor (op. cit.) claim the interviewer is ‘allowed more flexibility to digress and to probe based on interactions during the interview’. Blee and Taylor (op. cit.) further propose that ‘the open-ended nature of such interviewing strategies make it possible for respondents to generate, challenge, clarify, elaborate, or re-contextualise understandings . . . based on earlier interviews, documentary sources, or observational methods’. This style of interview also allows the researcher to actively probe areas of interest, based on the responses of the interviewee. Participants were able to provide historical information based on their experiences across a number of projects, and also to reflect upon and compare these various experiences against each other. I successfully secured an average of one hour of one-to-one time with the participants in my study. I do not believe it would have been possible to gain access as a passive observer in the workplace environment; nor do I believe observation would have been successful in highlighting the various aspects I was attempting to explore. Many of the engineers work in an office setting whose appearance belies the separation of the disciplines into software and digital hardware—it could be any modern office environment. To illustrate this, consider the data collection method known as “Think-Aloud Protocol”:

‘The basic idea of thinking aloud is very simple. You ask your users to perform a test task, but you also ask them to talk to you while they work on it. Ask them to tell you what they are thinking: what they are trying to do, questions that arise as they work, things they read. You can make a recording of their comments or you can just take notes. You’ll do this in such a way that you can tell what they were doing and where their comments fit into the sequence.’ (Lewis and Rieman, 1993)

There are a number of practical real-world considerations which ruled out the option of observing developers whilst they participated in a think-aloud session: 1. Requesting developers to talk out loud whilst they perform their daily tasks is difficult in a shared-office space environment—it may serve as a distraction to other developers, and may introduce biases in terms of how the developers respond, through being self-conscious of how their thoughts will be received by their peer group;

2. It may also introduce artificial justifications for tasks performed—one of the facets of this research that surprised me most is the variation in the degree to which different developers are consciously aware of their craft;

3. A derivative data-collection method, ‘Talk Aloud’, encourages developers to describe their actions but not to interpret the rationale for these actions—however, this would fail to identify the areas which caused inter-team difficulties;

4. The timescales over which the observations would need to be performed may be quite significant, in order to gather a rich enough data set – this is due primarily to the duration of the development projects in question.


Selection of Candidates

In a similar fashion to theory, sampling in grounded theory is emergent:

‘Unlike the sampling done in quantitative investigations, theoretical sampling cannot be planned before embarking on a grounded theory study. The specific sampling decisions evolve during the research process itself.’ (Strauss and Corbin, 1990)

As the study concerns hardware and software team interactions, the initial sampling of interviewees for the study targeted hardware and software engineers directly involved at the coal-face of system development. Following grounded theory’s axiom of purposeful theoretical sampling, as categories emerged from the collected data through coding and constant comparison of codes with new data, other categories of participant were added to increase the diversity of viewpoints in a useful manner—for example, senior management (particularly with experience of managing both hardware and software functions) and semiconductor product marketing. Four aspects are commonly mentioned as regards the purposeful selection of participants and sites in qualitative research (Creswell, 2003; Miles and Huberman, 1994):

1. The Actors—i.e. who will be interviewed;
2. The Setting—where the research will be conducted;
3. The Events—what the actors will be researched doing;
4. The Process—the ‘evolving nature of events undertaken by the actors within the setting’ (Creswell, op. cit).

The actors, or interviewees, in the study are all personally known to me through my background working in the Irish semiconductor industry. As a group, they are of mixed technical specialisation, mixed backgrounds, and mixed levels of experience—ranging (at the time of the interviews) from Senior Engineer to VP Engineering level. Focusing the discussion on team interaction within complex semiconductor system development, I interviewed 20 industry practitioners working in Ireland on either digital system-on-chip (SoC) designs or embedded software development. This size of interviewee group was chosen for the following reasons: • Responses from the interviewees started hitting saturation after approximately 10 interviews—and incidentally Eisenhardt (1989) does note that ‘while there is no ideal number of cases, a number between 4 and 10 cases usually works well’.



• As Williams (2003) puts it, research design ‘is question-led’ and ‘resource-driven’. Each interview took significant time to schedule, organise, record, transcribe, code and process¹; Williams (2003) suggests that semi-structured interviewing is best suited to and most practical for ‘small scale or exploratory research’.

The deciding point on when to stop adding further cases to the research is ‘simply the point at which incremental learning is minimal because the researchers are observing phenomena seen before’ (Eisenhardt, 1989)—however, ‘in practice, theoretical saturation often combines with pragmatic considerations such as time and money to dictate when case collection ends’ (Eisenhardt, 1989).

The interviewees varied in experience from 10 to 20 years, working for intellectual property vendors, EDA and tools vendors, and fabless semiconductor vendors. These semi-structured interviews were conducted over a period of two years, between 2006 and 2008. Interviewees held the positions of hardware and software engineers directly involved at the coal-face of system development, senior management (particularly with experience of managing both hardware and software functions) and semiconductor product marketing. All interviewees were male, not due to any particular selection criterion but reflecting a gender bias in the available population of interviewees, which is seemingly consistent with the demographics of engineering as a profession in Ireland (Drew and Roughneen, 2004). Due to their geographical separation (as regards their residence and also their places of work), the settings for data collection varied from person to person. Typically interviews were conducted in the work environment of the participant. In certain situations this was not possible, so more informal settings were used, such as a cafe or hotel lobby. Four were conducted over the telephone due to distance and time constraints. I was primarily interested in finding out about the participants’ direct experiences and reflections upon the productivity impediments of co-development between software and digital IC hardware teams—where productivity in this context is the ability to get work which addresses business objectives completed to schedule, and a productivity impediment is anything which affects either the quality or timely delivery of this work.

Questionnaire Design

In creating an interview guide, I was conscious of the fact that individuals with different degrees of situational awareness and perception would react differently to specific questioning on their work methods and environment.

¹ I estimate that each hour-long interview took between 4 and 6 hours to transcribe, and between 12 and 14 hours to subsequently code.
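Taken together with the 20 interviews in the study, these per-interview figures imply (as a rough, purely indicative back-of-envelope estimate, not a figure reported elsewhere in this thesis) a total data-preparation effort of approximately 20 × (1 h interviewing + 4–6 h transcription + 12–14 h coding) ≈ 340–420 hours.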



For this specific reason, I was careful to provide some degree of limited and qualified scenarios in questioning (for example, “What is the biggest risk at project onset? Who is stricter in following their development methodology during crunch periods?”²) rather than asking big open-ended questions (for example, “What is the biggest difference between hardware and software engineering activities?”). The questionnaire included questions relating to the following topics, focusing on how technical specialisation into digital hardware / software disciplines affects productivity:

• risks;
• complexity;
• verification / validation;
• information sharing and communication;
• lessons to be learned across technical specialities.

Interview Procedure

I conducted semi-structured open-ended interviews, with an interview guide of mixed topics acting as a springboard for exploring other avenues of interest during the interviews themselves. Rather than focusing on following the interview guide in a strict fashion, I actively encouraged discussion of topics that the interviewee felt were significant to the interaction of teams. Interviews ranged from 40 minutes to 90 minutes, and averaged one hour. Interviews were digitally recorded, but participants were reassured that any details they revealed would not be personally attributable. Additionally, I gave reassurance that any private or company-sensitive information that I was exposed to would not be reported except, again, in a non-attributable, non-identifiable fashion. Blee and Taylor (2002) described how ‘in semi-structured interviewing, analysis and interpretation are ongoing processes’. During the interviews, I took quick notes on what I considered key and salient points—often these notes, on subsequent reflection, yielded points that re-enforced themes from previous interviews, or that presented new aspects to be investigated. This helped me to rank various interviews in terms of significance and of themes explored. It also provided a degree of fail-safe in case the recording equipment that I used failed, or in case the resultant audio was difficult to discern.

² The actual interview guide used is presented in Appendix B.



Afterwards, I transcribed the interviews so that I could print them and mark them up for the purposes of coding and code development. This was a laborious task, each interview taking on average approximately 4–6 times longer to transcribe than to conduct. However, it did enable a quick mechanism for implementing open coding/line-by-line coding on the transcribed text, and overall I feel it was a beneficial activity, as it often supported the need for constant comparison by refreshing the context of the overall interview in my mind just prior to coding it. Traditional grounded theory (Glaser and Strauss, 1967) advocates against the use of audio recording, or the transcribing of interviews. Glaser and Holton (2004) explain that ‘field notes are preferable’, and refute any tendency to ‘engage in descriptive capture in QDA fashion’ which potentially ‘attacks the main tenet of GT, that theory can emerge’. Nevertheless, there were a number of reasons why I felt it necessary to capture the interviews in detail (over and above what field notes would capture). I found working with the transcripts an excellent way to immerse myself in the content of the interviews. Additionally, the ability to replay interviews gave me the benefit of hindsight, and the potential to recognise where I had mis-interpreted the implications of certain conversations on earlier listening. So in practice, I compromised and recorded interviews to free me from the potential distraction of comprehensive field notes. This enabled me to take informal notes without any worry or guilt about missing significant concepts during the interviews themselves. Subsequently, I was able to use these recordings (and the generated transcriptions) as a source to compare my notes and themes against. I found this to be a natural and productive mechanism for me to perform data capture, and it was only subsequently that I discovered this hybrid approach recommended elsewhere—for example, Dick (2005) argues that Glaser’s recommendation ‘against recording or taking notes during an interview’ is flawed, suggesting instead that the researcher ‘take key-word notes during the interviews’ and ‘tape-record the interviews’, with the view to converting them to themes afterwards.

Ethical Considerations

Participants were fully informed about the nature of the study when I solicited their involvement. Participants were guaranteed confidentiality of their discussions. They were informed that I would be recording the interviews, and that these recordings would be transcribed by me personally for the purposes of data analysis. I also explained that excerpts may be used to illustrate certain points throughout the text, but that where this would happen anonymity was guaranteed—both for them personally, and also for any corporate entities or commercial products they might refer to. Where interviewee names are used in this thesis, they are fictional and used to humanise the text.


Potential Biases and Influences

There were also a number of drawbacks to this data collection method, which I will now present. By their very nature, interviews present an implicit filtering of experiences through the views of the interviewees. Whilst I perceive this as a beneficial factor for data validity, it does raise the risk of unintended biasing of the conversation during the interview:

• due directly to my presence;
• due to the unnatural setting of the interview;
• due to the fact that individuals are not equally perceptive and articulate concerning the environment in which they work.

As regards positioning the researcher vis-à-vis the interviewees, Blee and Taylor (2002) observe that: ‘. . . all social research involves what Thorne describes as a “problematic balance, a dialectic between being an insider, a participant in the world one studies, and an outsider, observing and reporting on that world.” That balance is absolutely fundamental for collecting rich data . . . ’

My background (see Section 1.3) as an embedded software engineer may have caused a bias in the data. It may have allowed software engineers to relax with someone they consider a friendly comrade. It equally may have deterred hardware engineers somewhat from fully expressing sentiment they would consider negative or derogatory towards the software discipline. In this respect, I believe my personal relationships with those interviewed helped gain a rapport with them, to ensure frank conversations. Williams (2003) claims that rapport is particularly important: ‘. . . in focused interviews where respondents are very often being encouraged to talk openly about issues which they may find sensitive or difficult.’

Deacon et al. (1999) describe how semi-structured interviews can help in the building of rapport, and the beneficial effects this has on the data collected. Interviews were typically conducted on the site where the participant works, though not in their immediate working area. Some interviews were conducted off-site and/or out of hours. I did not consider that demographic information (such as time, place, and absolute date of the field setting) was of significant importance to the data collected. Whilst fortunate in being able to secure access to a group of engineers who are very well respected amongst their peers, I discovered very early on that, even amongst great engineers, not all engineers are equally reflective on the nature of their work. Some do think explicitly about what they do, but there are others who are naturally talented. They have good engineering instincts, and may have powerful abilities to deliver on projects—and yet they may never have thought deeply about what exact process it is that has allowed them to achieve this productivity, or they may not have tried to put this process or protocol into words so that they can communicate it to someone else. I was very cognisant of these factors when creating my interview guide. Being a software engineer by profession, I was very careful to be neutral in my recording and analysis of the interview data, from both the digital hardware and software camps. Rather than attempting to weight viewpoints in proportion to (my interpretation of) their prominence, I have attempted to capture and analyse differences and variety of opinion. I have neither endorsed nor rejected any particular perspective, and have actively disengaged from attempting to resolve conflicts in the work—specifically through the use of formal coding of the interview data. The objective of this research is to capture as much of the domain under study as possible, and not to qualitatively rank any particular aspect of it in order of significance.

5.4.2 Interviewing Risks

The key goal of good question design is that it supports both comparability of answers and reproducibility of results. According to Foddy (1993): ‘. . . the success of any interview or questionnaire depends on good question design, yet most of the literature has devoted itself to interview techniques rather than the prior task of formulating questions for an interview or questionnaire.’

It is important to recognise that ‘human beings take one another’s viewpoint into account when interpreting each other’s behaviour’ (Foddy, 1993) when generating a questionnaire as a data collection instrument. As illustrated in Figure 5.7, it is important that: (I) the researcher be clear about the information required, and communicate this to the interviewee; (II) the interviewee correctly interpret the question, in the manner intended by the researcher; (III) the interviewee accurately address the question posed, and clearly communicate an answer to the researcher; (IV) the researcher correctly interpret the answer provided, as the interviewee intended. Belson, cited in Foddy (op. cit.), identified risks that are associated with interviewing and the use of questionnaires as data collection techniques: 1. ‘respondents’ failure to understand questions as intended’;


(Figure 5.7 depicts question–answer behaviour as symbolic interaction: the interviewer encodes a question and later decodes the answer, while the respondent decodes the question and encodes an answer, each taking into account their own purposes and their presumptions and knowledge about the other party.)

Figure 5.7: Model of Symbolic Interactionist View of Question-Answer Behaviour.

Taken from Foddy (1993). 2. ‘a lack of effort, or interest, on the part of the respondents’; 3. ‘respondents’ unwillingness to admit to certain attitudes or behaviours’; 4. ‘the failure of respondents’ memory or comprehension processes in the stressed conditions of the interview’; and,

5. ‘interviewer failures of various kinds (e.g. the tendency to change wording, failures in presentation procedures and the adopting of faulty recording procedures)’.

With respect to ‘respondents’ failure to understand questions as intended’, Foddy (1993) remarks it is important to recognise that: ‘It seems that respondents do their level best to answer all questions put to them. They even modify questions, in their minds, if they have to, so that they can answer them.’

A certain level of misinterpretation can be useful in teasing out differences in philosophical stances between interviewees and interviewee/researcher—especially in qualitative research. For instance, each interview is subjective: some interviewees are just more introspective than others. I was interested in problems/differences in perspectives amongst my interviewees, bearing in mind my own insights—derived from being employed as an embedded software practitioner. Technical vocabulary did not introduce problems during the interviewing—all my interviewees were experienced in the sector of semiconductor engineering and were comfortable with the basic terms of both disciplines. 110


Rather than there being ‘a lack of effort, or interest, on the part of the respondents’, I found that all my interviewees were more than happy to participate and talk about their work—particularly as the research topic provided them with an opportunity to discuss any frustrations they might have had when interacting with other teams of homogeneous or heterogeneous technical specialisation. As regards ‘. . . respondents’ unwillingness to admit to certain attitudes or behaviours’, respondents were assured of confidentiality with their answers—that neither they personally nor the institutions with which they are associated would be directly or indirectly identified. In addition, I was careful when introducing the research to reassure respondents that there were no expected right/wrong answers to any of the topics we would be discussing—rather, I was interested in collecting and studying different perspectives on the dynamics of inter-team interaction, each of which is valid in its own right. By keeping the interview environment as relaxed as possible, I hope to have mitigated any influence from ‘the failure of respondents’ memory or comprehension processes in the stressed conditions of the interview’. A questionnaire was used to guide the semi-structured interviews, but the interviews

themselves were kept conversational where possible. I also made certain to talk in general with the interviewee prior to actually pursuing research-related questioning, in an attempt to establish rapport and put them at ease. As previously mentioned, interviewees appeared enthusiastic to discuss their perspectives on their work, and to share their experiences. I further contend that we were not discussing subjects that could be considered emotionally traumatic or socially embarrassing. Considering the possibility of ‘interviewer failures of various kinds (e.g. the tendency to change wording, failures in presentation procedures and the adopting of faulty recording procedures)’, I propose

that the exploratory nature of the research means that cross-interview wording is not as crucial as it would be in quantitative/survey type research. Fowler and Mangione (1990), again cited in Foddy (1993), assert that:

‘At the exploratory stages of research, finding out which questions to ask is a major goal. . . Restricting or structuring answers . . . should not be done until the researcher is sure the answer options are comprehensive and appropriate.’

Furthermore, Curtis et al. (1988) suggest that ‘reshaping questions to match the participant’s role in the project presented few problems’ when ‘not attempting to derive quantitative data from the responses’.

Interviews were digitally recorded and subsequently transcribed strictly word-for-word to eliminate data recording errors.


5.5 Data Reduction

To reduce and categorise the data collected, I used open coding, axial coding and selective coding techniques (Glaser and Strauss, 1967; Strauss and Corbin, 1997) to identify categories and concepts. Miles and Huberman (1994) describe codes as ‘efficient data-labelling and data-retrieval devices’ which ‘empower and speed-up analysis’. Coding is a constituent part of the process of Grounded Theory, and is intimately linked to the emergence of hypotheses and the refreshing of the scope of data collection. It is critical that coding is seen as driving ongoing data collection, and not just as a method of data preparation prior to analysis (Miles and Huberman, 1994).

5.5.1 Open Coding

Open coding is the process of locating and assigning initial codes to pertinent themes during the first pass through recently collected data. The intention is to look for critical terms, events, comments or opinions, which are then noted along with an appropriate label, while remaining open to the creation of new themes and bringing the salient themes that are buried within the data to the surface. Figure 5.8 shows the codes I collected during this phase of my research, with emphasis on those terms which emerged as being the most significant.
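As a purely illustrative aside rather than the tooling actually used in this research, the minimal Python sketch below shows one way a first open-coding pass could be captured: each transcript fragment is noted together with the in-vivo labels attached to it, and the labels then double as data-retrieval and counting devices. The quotations echo interview excerpts presented in Chapter 6; the pairing of quotes and codes here is hypothetical.

```python
from collections import Counter
from dataclasses import dataclass, field

@dataclass
class CodedExcerpt:
    """A fragment of interview transcript plus the open codes noted against it."""
    interviewee: str
    text: str
    codes: list[str] = field(default_factory=list)

# Hypothetical first-pass labelling of two transcript fragments.
excerpts = [
    CodedExcerpt("Richard",
                 "You are buying IP for a reason -- to get you to tapeout faster.",
                 ["IP", "time-to-market", "risk"]),
    CodedExcerpt("John",
                 "It is more how well you know the people in the other offices.",
                 ["social-familiarity", "geographical", "communication"]),
]

# The codes act as 'data-labelling and data-retrieval devices': count their
# occurrences, and pull back every excerpt carrying a given label.
frequencies = Counter(code for excerpt in excerpts for code in excerpt.codes)
labelled_with = lambda label: [e for e in excerpts if label in e.codes]

print(frequencies.most_common(3))
print([e.interviewee for e in labelled_with("geographical")])
```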

5.5.2 Axial Coding

During open coding, I found I ended up with much duplication and redundancy, even after constant comparison with the data. Axial coding is performed on the second and subsequent passes through the collected interview data. The preliminary themes and concepts collected during open coding feed in as the primary source material for axial coding—although new themes may still emerge from reflection on the raw interview data. The purpose of axial coding is to review, examine and organise the codes gathered from the open coding, and also to identify the axes of key conceptual themes in the data. The important areas to look for during axial coding are causes and consequences, environmental conditions and interactions, and concepts that naturally cluster together. Through axial coding, I began to stitch related themes together, and some basic theme/sub-theme relationships started to emerge. This phase helped greatly in compacting my codes.
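Continuing the same illustrative (and hypothetical) sketch, an axial pass can be thought of as two small operations on the open codes: collapsing near-duplicates onto a canonical label, and clustering the survivors around candidate axes. The synonym table and axis groupings below are invented for illustration; they are not the categories that emerged from the research.

```python
# Hypothetical axial pass over open codes: de-duplicate, then cluster.
synonyms = {
    "communications-difficulties": "communication",
    "informal-chats": "informal-communication",
    "moving-software-into-hardware": "moving-SW-into-HW",
}

axes = {
    "business realities": {"business-model", "market-window", "time-to-market"},
    "risk": {"risk-mitigation", "cost-of-wrong-HW", "verification-risk"},
    "social/geographical": {"communication", "co-location", "informal-communication"},
}

def normalise(code: str) -> str:
    """Collapse duplicated open codes onto a single canonical label."""
    return synonyms.get(code, code)

def assign_axis(code: str) -> str:
    """Place a normalised code on the axis whose cluster it belongs to."""
    for axis, members in axes.items():
        if normalise(code) in members:
            return axis
    return "unassigned"   # a prompt to return to the raw interview data

print(assign_axis("informal-chats"))        # social/geographical
print(assign_axis("aggressive-schedules"))  # unassigned
```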


[Figure 5.8 renders the full set of in-vivo open codes as a tag cloud; representative codes include adherence-to-process, aggressive-schedules, business-model, changeability-of-SW, co-location, communication, complexity, cost-of-test, culture, experience, FPGA, geographical, gsd, HW-vs-SW, informal-communication, IP, market-window, mindset-gap, risk, schedule, social, SW-models, tapeout-set-by-hardware, technical, time-to-market, tools, validation and verification.]

Figure 5.8: Visualisation of Open Codes as a Tag Cloud.

Auto-generated from in-vivo coding of interview text.


5.5.3 Selective Coding

Selective coding is typically the final processing pass through the data, when all the major themes of the research have been identified and captured (Neuman, 2006). As the name implies, selective coding involves re-scanning collated data and previously identified codes, carefully selecting cases that illustrate and elaborate certain themes through the use of comparisons and contrasts. It provides an additional opportunity to look for discrepancies with emerging themes through comparison back to the original source data, and an opportunity for further theme emergence and theory re-balancing centred on the salient themes. Selective coding is a critical stage in ensuring higher-level understanding of the emerging theory—otherwise, the lure of the more basic stages of coding may mistakenly lead to sophisticated summary descriptions rather than the desired goals of in-depth analysis: ‘It is very tempting to neatly attach codes to data, as it gives the impression of an analytical ordering of data, particularly, when combined with a well structured coding tree. . . . the inductive coding of data, particularly when done in vivo, that is, the coding of text onto itself, leads often only to summary descriptions rather than analysis.’ (König, 2009)

Using a custom visualisation flow (developed in LaTeX) through which coded interviews were pushed, I was able to see my selectively coded themes evolve from interview to interview. This helped me greatly in seeing saturation. Initially, my ontology changed at a rapid pace, as each interview added potential new themes of inquiry. However, it quickly reached a point where subsequent changes tended towards embellishment and saturation—if anything, I was discovering minor sub-themes and minor qualifications of existing themes. After perhaps the first 3–4 interviews, certain themes started becoming more apparent in the coded data set. Using some visualisation experiments (see Appendix E), I managed to identify what I considered to be the most pertinent themes and perform a ‘re-balancing’—centring those significant themes within the locus of my inquiry. Risk was the first high-level category to emerge from coding, followed by concerns related to geographical separation. Teasing further into both of these yielded subsequent high-level categories: risk was informed by business realities, and geographical concerns yielded the separate but related category of social difficulties.
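The saturation effect described above can be made visible with very little machinery: for each successive interview, record how many codes appear that have not been seen before, and watch that count tail off as the ontology stabilises. The sketch below is a hypothetical stand-in for the LaTeX-based flow actually used, and the per-interview code sets are invented.

```python
# Hypothetical per-interview code sets, in interview order; in practice these
# would be read from the coded transcripts.
coded_interviews = [
    {"risk", "schedule", "communication", "IP"},
    {"risk", "market-window", "geographical", "FPGA"},
    {"risk", "geographical", "social-familiarity"},
    {"communication", "risk", "schedule"},   # nothing new: approaching saturation
]

seen: set[str] = set()
for number, codes in enumerate(coded_interviews, start=1):
    new_codes = codes - seen
    seen |= codes
    print(f"interview {number}: {len(new_codes)} previously unseen code(s): {sorted(new_codes)}")
```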

5.6 Data Display

The topic of data display is covered in detail in Appendix E, so I do not propose to deal with it extensively here, except in summary. I used a variety of graph techniques to visualise emerging


codes and their inter-relationships. Figure 5.8 illustrates an example of these, where font pitch signifies the frequency of occurrence of the code to aid salient theme identification. As my workflow for achieving these visualisations was automated, it did suffer to some degree from a propensity towards dense hierarchical clouds of codes, as described in König (2009): ‘Somewhat related to this danger is the tendency to identify a too large number of codes, even if coding in itself is an advisable practice for a given methodology. The ganglia of large, cluttered coding trees more often than not do not facilitate but preclude effective analysis of data.’ (König, 2009)

Nevertheless, I printed these code clouds on A3 paper, and circled clusters of inter-related codes as they emerged—this served as a viable form of data reduction and code de-duplication. From these, I selectively coded through the creation of various mind-maps (see Figure 5.9 for an example of visualising selective coding), which eventually matured into themes and, ultimately, detailed memos. I used these visualisations as instruments to facilitate the writing of descriptive narrative—at which point I felt the source illustrations themselves succumbed to this narrative, and diminished somewhat in importance.
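For completeness, the kind of frequency-to-font-size mapping behind a code cloud such as Figure 5.8 can be sketched in a few lines. The counts and the logarithmic scaling rule below are hypothetical; the cloud in this research was generated in LaTeX rather than with the code shown.

```python
import math

# Hypothetical occurrence counts gathered from the coded transcripts.
frequencies = {"risk": 42, "communication": 28, "geographical": 17, "FPGA": 6, "SQA": 2}

MIN_PT, MAX_PT = 8, 28   # smallest and largest font sizes used in the cloud

def font_size(count: int) -> float:
    """Map an occurrence count onto a point size using a log scale, so that the
    most frequent codes stand out without completely drowning out the rest."""
    low, high = min(frequencies.values()), max(frequencies.values())
    scale = (math.log(count) - math.log(low)) / (math.log(high) - math.log(low))
    return MIN_PT + scale * (MAX_PT - MIN_PT)

for code, count in sorted(frequencies.items(), key=lambda item: -item[1]):
    print(f"{code:15s} {count:3d} occurrence(s) -> {font_size(count):5.1f}pt")
```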

5.7 Validation

In discussing the context of theory assessment and critiquing, Eisenhardt (1989) notes that: ‘. . . good theory is parsimonious, testable, and logically coherent . . . a strong theory-building study yields good theory . . . which emerges at the end, not beginning, of the study.’

Eisenhardt (1989) continues: ‘. . . the assessment of theory-building research also depends upon empirical issues: strength of method and the evidence grounding the theory. Have the investigators followed a careful analytical procedure? Does the evidence support the theory? Have the investigators ruled out rival explanations?’

Golafshani (2003) notes that both qualitative and quantitative researchers need to test and demonstrate that their studies are credible. However, qualitative and quantitative research are ‘fundamentally different approaches to research and therefore need to be considered differently with regard to critiquing’ (Ryan et al., 2007).

Knigge and Cope (2006) agree that ‘the standards of rigorous quantitative work cannot and should not apply to qualitative work; however, there is still an imperative to insure(sic) robust work’. Strauss and

Corbin (1998) recognise that the conventional quantitative testing methods ‘require redefinition in order to fit the realities of qualitative research’.


Figure 5.9: Early Selective Coding Data Analysis.


Indeed, much literature concerns the use of the terms reliability and validity (‘the positivist convention of reliability and validity’—Speziale and Carpenter, 2007) as regards their applicability to

qualitative research (Golafshani, 2003; Morse et al., 2002). Lincoln and Guba (1985) suggest the use of four equivalent terms applicable to qualitative research (Speziale and Carpenter, 2007):

credibility – a judgement on how credible the work is overall;

dependability – a criterion met once the credibility of the findings has been demonstrated, and thus similar to reliability in quantitative research;

transferability – guides to the applicability of the work in similar contexts;

confirmability – an audit trail of recorded activities in time for another individual to follow in order to assess the evidence and thought processes that led to the conclusions drawn.

Whilst recognising the contribution of Lincoln and Guba (1985), in slight contrast Morse et al. (2002) argue for the applicability of the terms validity and reliability to qualitative research. Morse et al. note that specifically in qualitative research ‘verification refers to the mechanisms used during the process of research to incrementally contribute to ensuring reliability and validity, and thus, the rigor of a study’. The iterative nature of the research (Creswell, 2007) is such that errors are identified and

corrected ‘before they are built into the developing model and before they subvert the analysis’ (Morse et al., 2002). The ‘congruence of reliability and validity’ (Golafshani, 2003) in the context of qualitative research provides that ‘demonstration of the former [validity] is sufficient to establish the latter [reliability]’ (Lincoln and Guba, 1985).

Morse et al. describe a number of verification strategies for qualitative research, including ensuring:

methodological coherence – congruence between research question and chosen scientific method;

appropriate sampling – participants must have appropriate knowledge of the research topic to allow for saturation to be achieved, which leads to ‘replication in categories; replication verifies, and ensures comprehension and completeness’;

concurrent data collection/analysis – constant comparison by continued questioning and redirection of questioning is ‘the essence of attaining reliability and validity’;

theoretical thinking – emerging ideas are reconfirmed in new data, and new ideas are reconfirmed in old data;

theory development – a deliberate move from a ‘micro perspective of the data’ to a ‘macro conceptual/theoretical understanding’.


Morse et al. contend that these verification strategies ‘incrementally and interactively contribute to and build reliability and validity, thus ensuring rigor’. Judged against these strategies, the mechanics

of the process of grounded theory compare very favourably, with grounded theory’s focus on purposeful saturation, constant comparison, theme emergence and the creation of well-developed, parsimonious and comprehensive theory (Glaser and Strauss, 1967). Knigge and Cope (2006) agree that the process of grounded theory is itself self-validating, with strategies for rigour built-in (as recommended by Morse et al., 2002). Considering grounded theory, Bailey et al. (1999) present how, ‘through research activities’, theory emerges which is subsequently used to ‘guide a fresh collection of data, to review the original data and literature, to appraise new literature and to form new explanations’

in a continuing process. Evaluation is ‘not carried out post hoc, but continues throughout the research process’ (Bailey et al., 1999).

Knigge and Cope (2006) recognise grounded theory as ‘a robust method of ensuring rigorous research’ by virtue of it being ‘deeply concerned with enabling rigorous qualitative research in which theory is held accountable to empirical data’:

• grounded theory requires evaluation of ‘findings and theories’ and reflection on the role and biases of the researcher throughout data collection;

• the researcher must be transparent in describing research procedures so that overall findings can be evaluated.

In summary, it is an inherent characteristic of grounded theory that it ‘seeks to find rigorous, verifiable, and explicit ways to draw conclusions’ (Knigge and Cope, 2006). Rigour is important in

qualitative research ‘in demonstrating to . . . ’ (those) ‘who read qualitative research that it is a respectable approach to science’ (Speziale and Carpenter, 2007). The rigorous use of grounded theory thus

ensures credibility and dependability of the findings. In discussing literature review, Creswell (2003) notes: ‘One of the chief reasons for conducting a qualitative study is that the study is exploratory. That means that not much has been written about the topic or the population being studied, and the researcher seeks to listen to participants and build an understanding based on their ideas.’

He suggests that literature is used to ‘frame’ the problem in the introduction to the study, and also that comparison against it is presented at the end of the study so that ‘it becomes a basis for comparing and contrasting findings of the qualitative study’.

Furthering this topic, Eisenhardt (1989) writes: ‘Overall, tying the emergent theory to existing literature enhances the internal validity, generalisability, and theoretical level of the theory building from case study research. . . because the findings often rest on a very limited number of cases.’


In addition to the use of favourably accordant literature, Eisenhardt (1989) identifies the significant value of contradictory literature, in writing: ‘By examining conflicting literature and seeking explanations for why there are contradictions one can preempt criticisms and enhance confidence in the findings, but looking at conflicting theory also represents an opportunity for deeper insights. By looking at literature with similar findings, one connects the current study to previous work, corroborate the findings and strengthen confidence.’

Specifically with regards to grounded theory, Creswell states that: ‘the researcher may incorporate the related literature in the final section of the study, where it is used to compare and contrast with the results (or themes or categories) that emerged from the study.’ (Creswell, 2003)

Additionally, Couros (2004) notes that, with grounded theories, ‘theoretical constructions are perceived to be strengthened and better “grounded”’ through the application of ‘thorough data analysis and the examination of a wide range of related literature’.

In addition to ensuring that the scientific method was applied in a rigorous manner, I use literature to compare and contrast with the findings of the research, thus demonstrating transferability and confirmability. Chapter 8 (“Validation of Theoretical Model”) compares the theory that is presented in Chapter 6 (“Emergence of Theoretical Model”) against pre-existing literature. This ex post facto comparison increases the ‘truthfulness of a proposition’ through triangulation (Golafshani, 2003). Beyond establishing validity and reliability, Power (2002) notes in her own grounded theory research that the pertinent question to be asked about any model or theory is not whether it is simply true or false, but whether it is useful. At the conclusion of my research, I will provide answers to the originally posed research questions, describe implications for practitioners and for educators, and list potential future research directions this work suggests.

5.8 Summary

In this chapter, I described the design and implementation of my research. I introduced my initial conceptual framework (an elaboration of my research problem), and justified my selection of a qualitative research approach—the grounded theory method. I presented details on my data collection procedures (semi-structured interviews, involving candidate selection and questionnaire design) and described how I performed data reduction through the employment of grounded theory’s various coding stages towards the generation of an encompassing theoretical model.


Part III

Theoretical Model and Solutions

Chapter 1: Introduction

Part I: Initial Literature Review

Chapter 2: Semiconductor Ecosystem

Chapter 3: Etymology of Hardware and Software

Chapter 4: Digital Hardware Flows vs. Software Development Processes

Part II: Research Design

Chapter 5: Research Design and Investigation

Part III: Theoretical Model and Solutions

Chapter 6: Emergence of Theoretical Model

Chapter 7: Emergent Toolbox of Patterns for SoC Project Organisation

Chapter 8: Validation of Theoretical Model

Chapter 9: Conclusions


Chapter 6

Emergence of Theoretical Model



We only think when we are confronted with problems. — JOHN DEWEY



1859–1952, American Philosopher, Educator.

6.1 Introduction

In the previous chapter I described my approach to data collection and coding of the data. This chapter will cover the theory that emerged from these activities. I will present my theoretical model showing the various influences on hardware/software team interworking within the context of semiconductor design. Each of the main themes will be discussed individually, and their inter-relationships examined in detail. Selective quotations from interviews will be used to enrich the discussions where appropriate.

6.2 Theory Building

One of the ambitions of Grounded Theory research is to not only identify the various themes that influence a particular phenomenon, but also to obtain a deeper sense of the inter-relatedness of these themes, through the generation of a rich theoretical model. A theoretical model is a postulated explanation of observed phenomena. My theoretical model for the themes that affect interaction of digital hardware and embedded software teams is presented in Figure 6.1.

Digital Hardware and Software development are similar endeavours philosophically—they are both concerned with the implementation of instantiated models for Boolean logic. However, the fundamental difference between them is the scope for changeability. Software is, by definition, more malleable than hardware. Both of these activities come with an associated cost—a cost which must be funded and justified as a means to achieving the business goals of a particular organisation. Through this research, I have discovered that business goals have a significant influence on the scope of engineering activities, and especially on the form that these activities take.


[Figure 6.1 depicts the theoretical model as inter-connected blocks. Business Realities (tight market windows, business models, justification of investment) generate Risks (fundamental market understanding, product specification, technical, social, weight of risk) and limit investment in engineering activities. Risks strongly influence the Development Teams, in which Digital Hardware Design (not easily changeable; formal approach to test; significant capital investment; costly to prototype) differs in changeability from Embedded Software Design (easily changeable, and expected to be so; a differentiator). Communication Mediators comprise Techno-Cultural Effects (linguistic, technical, ambiguity, time, culture) and GSD (familiarity, trust, irrelevance of specialisation, flow of (informal) information). Technical Determinism (tool and tool-flow related issues) influences work practice.]

Figure 6.1: Theoretical Model of Influence on Hardware / Software Interworking.


Like all businesses, semiconductor organisations employ business models to generate revenue from market-appropriate products or services. My research hints that certain business models may not be sustainable in the longer term. This is a natural ecological occurrence in the semiconductor marketplace. While individual technologies come and go, the technology ecosystem evolves over time—and business models must adapt to address new market needs and challenges or suffer being made redundant1.

Business realities influence risk. The requirement to capitalise on market opportunities within the sector generates a degree of commercial and technical risk. Through this research, it is apparent that this risk affects digital hardware and software design practices to differing degrees. For digital hardware and software design teams to work together, efficient communication is essential. The communication between these groups is subject to a set of communication mediators—specifically related to how these groups are separated geographically, socially, and by technological specialisation. Finally, deterministic effects from tools and tool-flow related issues were observed to play a role in shaping the techno-cultural biases and stances of the design teams. We will now discuss each block in Figure 6.1 in greater detail.

6.3 Business Realities: Impact on Technical Flow

As a practitioner in the industry, I did have some expectations about the sorts of categories and themes that would emerge from formal coding and analysis of the data. For me personally, one of the unexpected themes that emerged early on in this research was the impact of market segment and the organisation’s choice of business model on how it runs its engineering model, particularly its attitudes to software risk and to (the cost of) test and documentation. As a result of this research, I propose that market segment and the choice of business model have a significant impact on the way in which technology is designed, developed and verified. This is stronger in its impact than any other theme that emerged from my research. Its effects are seen within every other major theme that emerged. Business realities are coupled closely with risk determination. All other emerging themes are then concerned with mediation of this risk in some manner. Business realities impede the amount of verification possible due to the requirement for aggressive schedules to meet market windows. The choice of market segment—for example,

1 For example, interviewees felt the IP business model is one that is becoming more challenging. It is difficult to differentiate it from design services, except for very specific platform IP like processors, or highly discrete blocks like crypto engines, UARTs, USB, etc.


Table 6.1: Emergent Themes.

Business Realities: Inherency of Risk; Aggressive Schedules; Tight Market Windows; Intellectual Property; Cost of Test.

Approach to Risk: Risk is fundamental to semiconductor system development; Risk is primarily dominated by Human Factors; Risk of Poor Market Research; Product Specification Risk; Technical Risk; Weight of Risk; System Validation and the Influence of Discipline Specialisation on Inter-Team Cultural Differences.

Socio-Geographic: Teams that did not know each other interacted poorly; Degree of irrelevance of technical specialisation in terms of geographical separation; Information flowing and knowledge transfer is key to distributed collaboration; Importance of regular distributed team meetings; Importance of informal communication tools; The importance of regular face-to-face meetings and contact; Team Inertia; Team Conflict; NIH Syndrome; SW-SW vs. SW-HW interaction dynamics.

Techno-Cultural: Linguistic Determinism; Technical Determinism; Hardware engineering aversion to ambiguity; Parallel and Temporally deterministic vs. Sequential Non-Deterministic; Cultural Determinism; Lack of Shared Tools; Appreciation of Other Discipline.


Figure 6.2: Axial Coding of Business Reality Concepts.

(For a larger version, see Figure E.6 on page 256).

[Figure 6.3 is a mind-map centred on ‘Business Realities: Impact on Technical Flow’, with nodes for: CE Market Intercept Window; CE: Price, Features vs. Quality/Reliability; Market Intercept Windows are Tight; IP needs Strong Lead Customer; IP needs Broad Testing; IP needs to Educate; Business Models involve Different Engineering Trade-offs; Fabless needs Focused Testing; Fabless needs End Product Awareness; Justification Required for Investment.]

Figure 6.3: Emergence of Business Realities Theme.


[Figure 6.4 shows three competing pulls: Higher Quality, Lower Cost and Shorter Time.]

Figure 6.4: Competing Influences of Time, Quality and Cost.

consumer electronics—has the greatest impact in this regard, followed closely by choice of business model within the market segment (e.g. design services, IP, fabless, IDM). The amount of verification undertaken attempts to mediate this risk. Aggressive schedules and razor-thin margins affect investment in both product verification and, more fundamentally, in the development process. This implies a need for a certain ‘agility’ in the development approach.

6.3.1 Consumer Electronics

The consumer electronics industry, like most industries, is subject to the confluence of three orthogonal marketing requirements, depicted in Figure 6.4. There is at the same time the need to get the highest quality product to market, as quickly as possible, for the lowest cost. Specifically for the CE industry, the window of market opportunity roars loudly—often necessitating trade-offs in the other requirement dimensions.

‘I think the risk is actually well founded in consumer electronics because you are trying to hit market windows and you are trying to get products out that do what the marketing guys tell you they have to do. So as long as the attitude you take to risk feeds in well enough with the schedule of when you are expected to deliver it, then I think that is the way it should be.’ — INTERVIEWEE THOMAS

Figure 6.5 illustrates how my interviewees differentiated CE from other semiconductor industries (not to any scale)—specifically medical and automotive.

Figure 6.5 is intended to indicate

the relative directions of market pressures for each sector, with the influence of each pressure described as either weak, moderate or strong. It is worth emphasising clearly that my interviewees are primarily of CE backgrounds with limited exposure to medical and automotive product design. Nevertheless, they unanimously felt that the window of market opportunity is one of the strongest differentiating features of the three markets, along with the requirement for differentiation through features, and potential sales volumes. These market window pressures introduce a need for a consistently quick pace of development. The market realities are such that aggressive


[Figure 6.5 plots Consumer Electronics, Automotive and Medical against six axes: Long Market Life, Mission Critical, Limited Market Window, Achievable Margin, Anticipated Volume and Feature Integration.]

Figure 6.5: Radar Plot of Market Pressures.

schedules are necessary to be competitive. By comparison, the automotive industry requires that parts are subject to significant production qualification standards2, and even the non-critical car ‘infotainment’ electronics are required to provide a significant usable lifespan in an outdoor (i.e. harsh) environment (Servais, 1999). In addition, the CE market is fickle, as illustrated by the strength of the requirement for feature integration/differentiation in Figure 6.5. This is represented in the interviewee data by the many late-changing requirements that interviewees had to deal with (‘changing goalposts’). This impacted software teams most significantly—in being asked to address feature requirements which were outside the original hardware design remit, and for which the hardware was not properly dimensioned, as well as needing to develop workarounds for discovered hardware design and implementation flaws that either escaped pre-tapeout checks or were discovered too late in the implementation phase. Chapter 2 introduced a number of business models within the CE semiconductor sector. It is interesting to note that there are degrees of friction and tension at the interfaces between teams of different business models—even within the same market sector. Quite often these interfaces are cross-company, and bring the additional baggage of customer-supplier relationships. Nevertheless, there is a perception that IP companies can underestimate the amount of work required to take an IP design out to the market as a fabless product—and that IP companies don’t always adequately address fabless business model requirements in their product design. Similarly, fabless companies underestimate the testing burden/scalability burden of the IP company, and the fact that they can

2 For instance, see the Automotive Electronics Council AEC-Q100 standards.


work more efficiently with their IP vendor if they provide market guidance as to feature and test prioritisation (‘the need in IP for a strong lead customer to pull you through’).

From a fabless customer’s perspective, the purpose of purchasing intellectual property is to reduce development costs and time to market. Yet this research shows that the use of IP is certainly not risk free:

• Reducing development costs—you are assuming that your IP supplier’s business model is such that they can leverage multiple sales from their particular IP. This is clearly what separates IP from design services, and yet is not necessarily always the case.

• Accelerating your product development cycles, and reducing your time to market—you save on development but spend on learning curve and validation:

‘Even though you may not be designing most of the IP, you still have to read all of the app. notes to understand how to use it, test it. So effectively you end up doing 75% of the work. There is definitely 25% which is the bought-in design that you don’t do.’ — INTERVIEWEE RICHARD

With IP vendors there is typically as much value in the test bench as in the IP itself: ‘If you were selling into the hardware intellectual property, you know, that value is in the test bench . . . ‘In IP, if you don’t have it (a comprehensive test bench), no-one is going to buy it (the product) from you.’ — INTERVIEWEE JOHN

An intellectual property vendor is aiming to get to market early with IP. As a consequence of market immaturity, it is not possible for the IP vendor to clearly identify the business opportunity for their customer or how the IP will be deployed in a product, and thus the breadth of feature implementation and test for an IP vendor is generally quite large. In this case, business realities intrude on the technical reality of test planning. It was felt that testing needs to be changed based on whether you are in an IP-model or fabless-model from the point of view of having to offer a broader level of support for features:

Additionally, as the (IP) supplier you must anticipate customer demand, and have your features verified and validated, ready for market when the market requires them. This has an associated impact on the amount of testing you must perform, and the fact that it needs to be more horizontally focused across the IP block rather than vertically focused on the actual end application usage scenario:


‘. . . one of the problems is that you must have the IP ready when your customers decide they want it—which means you have to be ahead on the risk curve . . . to some degree. ‘. . . you have to cover all of the things that the customer might want. . . If you think your customers are going to want feature xyz in two years time, you have to start now—whereas they will make their choice in two years as to whether (the market for) feature xyz is going to take off or not.’ — INTERVIEWEE DAVID

The IP test harness is typically exhaustive and as a consequence can be difficult to maintain. In addition, delivering it en masse to the customer can be challenging from the viewpoint of cost of support—thus a subset of tests in the form of an ‘acceptance test’ deliverable may occasionally be agreed between vendor and customer. Fabless companies assemble IP from various different vendors and, depending on the brand and reputation of the supplier, either rely on detailed test reports from the vendor for the validation of the individual blocks or repeat vendor testing entirely. Their testing efforts are more concerned with the correct integration of the component blocks of IP—ensuring connectivity is verified, and basic operation from a system level.

‘. . . you have a block that you assume is fully verified by the IP vendor—though that is not always true—when you are scheduling you assume a certain level of confidence so you verify to a certain degree at RTL level—but not fully, because you don’t have the time. You are buying IP for a reason—to get you to tapeout faster.’ — INTERVIEWEE RICHARD

Often, they deal with second-order effects, such as whether the design will work at the required speed, with acceptable levels of electromagnetic interference (EMI) emissions, and with a minimised bill-of-materials. In addition, they will frequently encounter the inadequacies of the IP as regards the presence of test points: ‘One problem with guys doing IP previously was that they didn’t think about how the thing gets integrated into a chip. So you start off with a block of IP and all your interfaces are exposed. And you can do what you like. You can create SystemC models and bus functional models and it is all great stuff. And test the ∗ ∗ ∗∗ out of it. And then you get over someone else like me, as to what is put in the chip. And you realise that you can’t really test any of this at chip level.’ — INTERVIEWEE RICHARD
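To make the contrast between the vendor’s horizontally focused suite and the integrator’s vertically focused subset concrete, the sketch below shows one plausible (and entirely hypothetical) selection mechanism: each regression is tagged with the features it exercises, and the fabless integrator, or an agreed acceptance deliverable, keeps only the tests that matter to the end product. Neither the test names nor the tagging scheme come from the interview data.

```python
# Hypothetical regression list for a bought-in IP block, tagged by feature.
ip_regressions = {
    "uart_loopback_basic":    {"uart", "smoke"},
    "uart_flow_control_full": {"uart", "corner-case"},
    "usb_enumeration":        {"usb", "smoke"},
    "usb_suspend_resume":     {"usb", "corner-case"},
    "crypto_aes_vectors":     {"crypto", "corner-case"},
}

def select_tests(required_features: set[str], smoke_only: bool = False) -> list[str]:
    """Keep only tests touching features the end product actually uses,
    optionally trimmed further to a smoke-level 'acceptance' deliverable."""
    chosen = []
    for name, tags in ip_regressions.items():
        if not (tags & required_features):
            continue                         # feature unused in this product
        if smoke_only and "smoke" not in tags:
            continue                         # outside the agreed acceptance subset
        chosen.append(name)
    return chosen

# A fabless integrator shipping only UART and USB on this part:
print(select_tests({"uart", "usb"}))
# The slimmer acceptance deliverable agreed with the vendor:
print(select_tests({"uart", "usb"}, smoke_only=True))
```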

6.4 Risk: Approach to it



Risk is the chance of something happening that will have an impact on objectives. — STANDARDS AUSTRALIA AND STANDARDS NEW ZEALAND



‘Risk Management’, AS/NZS 4360:2004.

The generalised coding ‘risk’ has the highest quantity of logical connections to other themes in my axial coded data. The following themes were identified in this category:


• Risk is fundamental to semiconductor system development—Risk is an intrinsic aspect to be wrestled with when developing semiconductor systems, particularly in the consumer electronics segment. This is primarily due to the significant investment in, and subsequent rework cost of, the hardware platform. It is a risky business, due to the volatility of the market and the fickle nature of consumers. The best technology (or technology provider) does not always prevail—interviewees mentioned the importance of relationships, particularly in Asian cultures.

• Risk is primarily dominated by Human Factors—It was generally felt that the consequences of risks affecting the correct running and execution of a complex multi-discipline semiconductor project were primarily dominated by human factors, such as social issues (team contentment, efficient interaction and communication)—as shown in Figure 6.6.

[Figure 6.6 compares three contributing factors: Social/Political, Business and Technical.]

Figure 6.6: Perceived Consequence Effect of Influencing Factor contribution to Total Project Risk.

• Risk of Poor Market Research—There is a risk that market research is flawed, particularly for a new emerging technology/standard. Very often, early movers in a market, such as IP creators, have to inform and lead the market.

• Product Specification Risk—The risk that the wrong solution for the market is being developed was identified by both software engineers and business development staff as the second most critical area of risk. Interestingly, it was very strongly felt that software practitioners were much more likely to understand the application of the hardware platform in the marketplace than their hardware counterparts. There is a significant risk that the product is misspecified, over-engineered, or under-engineered for the target market. Maybe it bears the additional silicon area and time-to-market costs of an unwanted feature. Maybe a trade-off was taken between product marketing/sales and engineering to drop a feature to meet a particular schedule, and the consequence of this trade-off affects the competitiveness of the product adversely. The pertinent questions to ask in this context are:


Figure 6.7: Axial Coding of Risk Theme.

(For a larger version, see Figure E.7 on page 257).

[Figure 6.8 is a mind-map centred on ‘Risk: Approach to it’, with nodes including: Fundamental and Intrinsic to Semiconductor Design; Mis-timing Market Window; Poor Market Research; Categories of Risk; Product (Mis-)Specification; Technology Implementation and Validation Risk; Social Risk; Weight of Risk; Technical; Perception; Changeability of SW; HW Mask Costs; Human Mistakes; Dependent on Job Function; Social Dysfunction.]

Figure 6.8: Emergence of Risk Theme.


– Did the organisation get its market analysis right, and specify its product correctly?

– Did engineering build the correct product required by the product specification (validation)?

– Did engineering correctly implement the product (verification)?

• Weight of Risk—Hardware mistakes are costly, due to the capital investment in mask sets and also the loss of time to market. With many companies working towards first-time-right silicon, hardware mistakes can very easily derail a business plan and upset funding opportunities. Hardware engineers, through training and culture, are very cognisant of this, and the ‘weight’ of responsibility in tending to this risk bears heavily upon them. In contrast, software engineers enjoy the benefits of the changeability of software, and do not experience this weight to the same degree—unless they are made jointly accountable for pre-tapeout design verification.

• Technical Risk—The risk that a mistake was made in implementing the solution was considered the most manageable risk by interviewees. Nevertheless, there was significant concern on the part of the hardware engineers in ensuring that sufficient design verification was conducted to catch any design flaws in the hardware. Hardware engineers are much more focused than software engineers on ensuring that what they have developed is validated against a tight product specification (i.e., they have built the correct product, as tasked) and that they have verified its correct expected behaviour and operation (i.e., they have built the product correctly). This can be a source of tension between hardware and software teams. Hardware designers may look to push complexity into software, especially if feature requirements are vague or involve complex control logic. Software designers, conversely, will be looking to ensure that all heavy lifting (processor-intensive algorithms, heavy bit manipulation, etc.) is implemented in parallel hardware.

• Funding Risk—The risk that the corporate entity runs out of cash during its strategy execution is one that was mentioned only once during all my interviews (Interviewee Seán). This may indicate a selection bias towards identification of risk where the interviewees understand the risk and have some mitigation strategies against it—what is referred to as ‘the availability heuristic’ (Jørgensen, 2009; Schneier, 2009).

The heuristic ‘implies that we use the mental

accessibility of events and experience as indicator of its importance’ (Jørgensen, 2009).


6.4.1 System Validation



When we join the verification plan from software with our matrix, it will then be time to get out the shovels and fill in some holes. — INTERVIEWEE RICHARD



SoC Technical Lead, in conversation with the researcher.

In addition to the impact on the cost of test engineering imposed by business model, a further imposition differentiates the cost afforded to hardware versus software test and verification. There is no intrinsic technical characteristic of software that precludes its formal verification. Rather, it is a case of business model and risk mediating the decision on when sufficient engineering effort has been expended on the task. As identified in Figure 6.5 (on page 129), a level of resource investment equivalent to mission-critical industries such as medical or automotive is simply not commercially justifiable for a consumer electronics product. Reeves (1992) mentions that (certain) software design activities naturally curtail or yield themselves to specific methodologies, as these methodologies are the most cost-efficient for the work required. Based on the ideas discussed in Chapter 3, I believe it is rational to suggest the application of this reasoning to hardware design activities also. Hardware is not easily changeable—it requires significant capital investment in its mask sets, and the cost of a prototype is large. By comparison, a software product is easily changeable—and indeed is expected to be so. Software is seen as a potential differentiator in terms of ease-of-use and feature addition. The economics of hardware changeability deserves more study. It is the business impact of the cost of hardware change (in terms of financial costs, and also lost-opportunity cost via potentially increased time to market) that requires a formal approach to hardware testing. These economics exert a significant deterministic effect on technical activity. As an example, Interviewee Chris mentioned the alternative approach taken to board design and prototyping by Asian competitors. Given their lower cost base, they are able to quickly spin and re-spin design prototypes at much lower cost, and are able to use this degree of freedom as an additional mechanism to iron out bugs from the product. Irish teams would tend towards more formal schematic design review, with a much greater expectation of a first-time-correct product. Both of these approaches have their merits, and it is difficult to clearly differentiate which, if any, is superior at a purely technical level.


6.4.2 The influence of discipline specialisation on inter-team cultural differences

It appears generally accepted by the participants in this study that software engineers work at a higher level of abstraction, whereas hardware design engineers apply a tremendous degree of rigour to precise testing. Hardware engineers are usually more comfortable on a block-by-block verification programme (a bottom-up approach), whereas software engineers usually sit at system level verification (a top-down approach).

‘The software guys understand the application better than the hardware guys. The hardware guys understand the nitty gritty more than the software guys. And the two aspects have to come together, in terms of verification planning.’ — INTERVIEWEE RICHARD

It has been mentioned that in projects of mixed-skills hardware verification—chosen either as a deliberate verification strategy, or by necessity of resourcing—hardware engineers can over time feel comfortable sharing code with the software team, and that the software team can be trained to approach the verification from a more hardware-oriented focus in searching for corner conditions. Thus, it seems likely that the difference in initial aptitude/reception towards a verification focus (bottom-up vs. top-down) is more one of cultural training than of pre-disposed tendencies of the engineers attracted to the individual disciplines.

‘In hardware we are more paranoid. We have to be more paranoid because you can’t change it too easily.’ — INTERVIEWEE RICHARD

Software specifications tend to be “much softer, much more vague than they are in hardware . . . because there is no . . . perceived opportunity” in hardware “for changes.” The participants of the study considered it an intrinsic part of the nature of these complex systems that the hardware engineers take on the bit that they can specify rigidly, and the software developers are left to deal with the abstraction. Hardware is generally much more concerned with formal verification, and traceability back to requirements.

‘. . . always open for change. you know, software guys see things as being very flexible, we can always get you a firmware upgrade to fix a particular issue, that type of thing, whereas the hardware guys would agonise over the choice or if they wanted like a 14-bit ADC they’ll go to a couple of different vendors and it will be thoroughly spec’d out as to exactly what it does.’ — INTERVIEWEE JOHN

This tendency is correlated somewhat with the business model. In order to achieve the best price per unit for high-volume, low-margin consumer electronics parts, hardware designs must be converted into dedicated silicon. If the volume were lower and the margin higher (for example, in


an industrial application), it may be possible to implement the design in FPGA instead. In this case, hardware developers aren’t as “paranoid”: ‘Totally different story if you could change it, yeah. You’d be less thorough. ‘Yeah. You’d make different decisions. You wouldn’t make . . . Say this morning, integrating something, the first thing I am thinking of is how can I get myself out of jail if something doesn’t work. ’ — INTERVIEWEE RICHARD

The converse also appears true—software engineers tend to act like hardware engineers when it comes to the most critical portion of the software code—the Boot ROM: ‘You’d put in as many get out of jails as you could. ‘If that was a piece of soft code, you would not have gone to as much detail at all, would you? ‘But that is okay though, it is the nature of software.’ — INTERVIEWEE RICHARD

There is an apparent contradiction here between the importance of FPGA prototyping for designs which are ending up as SoC ASIC tapeouts (Baldwin, 2009; Jaeger, 2007) and the resourcing of the FPGA porting and generation activity.

‘. . . I think that think FPGA work is often under-resourced. . . The guys developing the FPGA see their value reduced within the development team, to the point of almost being ostracised by the development team . . . I would see an FPGA as a pure prototyping tool, and to me a prototyping tool equals a learning vehicle. And they are not used that way. . . if you can even find one or two real bugs through the FPGA process, I would say that is successful.’ — INTERVIEWEE JOSEPH ‘I am surprised how much effort FPGA takes, still, for a major product of this type of size and scope. But then again I don’t underestimate the value of the FPGA activity. The fact that we are finding issues earlier is not to be . . . You wouldn’t underestimate the value of that, let’s put it that way.’ — INTERVIEWEE DAVID

One noteworthy aspect of this is the suggestion that FPGA work is somehow beneath pure ASIC design work in terms of prestige, which can lead to hardware designers not sufficiently engaging in the FPGA support activity. Quite often, the lag that inevitably gets introduced between the FPGA port and the main development stream serves as an additional impediment to good FPGA support: ‘. . . It wasn’t sufficiently resourced, and because of those teething problems the effort of the software team that was doing the FPGA testing could have been better used or could have been more effective. The key designers tended to say “it (the problem) is not in our database”, or “our database is more up-to-date than that one, so therefore the problem might not be in ours so get your one up to date”. And there could be a number of weeks lag before you get that. You test again and you find out that either the problem isn’t there, and they were right . . . or the problem was still there, so maybe they should have looked at this earlier. . . . But the resourcing to get it (the FPGA database) up-to-date (vis-à-vis the SoC database) should have been more adequate, so that it could have all been turned around faster and with less problems. Incidentally, we did end up with getting good turnaround build times towards the even of the development, which maybe something that happens anyway.’ — INTERVIEWEE THOMAS


It appears from this research that technical specialisation influences developer culture, but its remit of influence is primarily due to the changeability of design in software and the rigidness of design in hardware. Ironically, the confluence of hardware and software—the invaluable validation coalface that is FPGA development—is often insufficiently resourced.

6.5 Social/Geographical Factors

It proved impossible to entirely separate social and geographical factors. Many social influences on team behaviour occur irrespective of where the team members are located, but the separation of teams into distinct geographical locations also contributes a significant number of impediments to efficient product development.

6.5.1 Social Factors Independent of Location

The following sub-themes emerged from the research:

• Teams that did not know each other interacted poorly—The most significant impairing factor was when two teams did not know each other socially. One interviewee commented that ‘products of this type are designed . . . not by process or flowchart . . . but by chat’. Knowing the other team socially acted as a ‘social lubricant’ for easing the mechanics of all technical discussions to follow.

One qualification that emerged on this sub-theme is that experience counteracts social impediments to some degree at senior level. For more junior developers, social acquaintance appears to be vital. Even at senior level, it is important to get the key designers and architects to meet face-to-face—at the very least during project start-up.

• Degree of irrelevance of technical specialisation relative to geographical separation—Interviewees were questioned as to whether the nature of the work carried out by a team has an impact on overall schedule or on inter-team friction. While it was agreed that hardware and software teams traditionally document and design their components through different vocabularies and cultures, it appears somewhat irrelevant to the outcome whether the hardware and software teams are geographically dispersed or whether two collaborating software teams are geographically dispersed. The adverse effect on overall productivity and progress is equivalent in either case.

Figure 6.9: Axial Coding of Social Theme. (For a larger version, see Figure E.8 on page 258.)

Figure 6.10: Emergence of Social Theme. (Recovered node labels from the two figures: Social Factors; Location Independent; Location Dependent; Team Familiarity; Technical Specialisation; Mediate against ownership issues; Trust; Not-invented-here; Information flow; Recognise importance in GSD Context; Regular core team meetings.)

‘At one stage I thought, back in my previous intellectual property days, that it would be brilliant to have a couple of VHDL guys in with the software team. And we did have software people here. But I found that if you have that, you just create a new boundary in a different place—between the RTL people in one location and another. So, on balance, it is more the location than the functional split.’ — INTERVIEWEE DAVID

‘I do think multiple offices do affect the development schedule adversely in general. Having said that, I don’t think that it matters whether you have a hardware team in one office or a software team in one office, or two software teams in two different offices.

‘Ideally, you would have the hardware team and software team in the one place, I think. But if you don’t have that, that is not necessarily a bad thing.’ — INTERVIEWEE JOHN

Social acquaintance and familiarity are a more significant influence on productivity than technical specialisation: ‘. . . it is more how well you know the people in the other offices rather than what it is that they are doing.’ — INTERVIEWEE JOHN

• Information flow and knowledge transfer are important to cross-functional team collaboration—Effective communication between digital hardware and software teams was seen as an absolute requirement for productivity. Lack of productive communication was seen as leading to ‘them vs. us’ attitudes, allowing conspiracy theories to fester, demotivating individual teams, and increasing a sense of isolation from what was going on.

‘the problem is the information flow at a level that is appropriate to both teams. Information flow in an understandable, common language for both teams is what is required. There is a tendency for both sides to perceive that the other ones are talking gobbledygook. So once the software guys mention words like UML and sequence diagrams, interaction diagrams and Harel state charts, the hardware guys just switch out. And if you actually explained these are very simple message sequence diagrams, just state diagrams, all the same concepts you guys are using in all your normal flows, except that we just have different names for them, you might be a lot of the way to closing the gap.’ — INTERVIEWEE JAMES

• The importance of regular distributed team meetings—Weekly conference calls, in which all team leaders assemble to discuss overall issues relevant to the project and to update common milestones and effort estimates, were seen as highly productive in keeping all parties informed of the progress of the programme as a whole. Feedback from product marketing was seen as particularly important to the troops on the ground.

• The importance of regular face-to-face meetings and contact—Despite technological advances in telecommunications, it was widely felt that there is no adequate substitute for face-to-face meetings, whether formal or informal. Compromise was easier to reach once people were in the same room. Over communication tools such as email, instant messaging, teleconferencing, or even video-conferencing, there was a perception that the various groups were much more likely to entrench in their positions during an argument. The importance of informal meetings was also a recurring theme—this corresponds well with the social dimension of inter-team collaboration. One interviewee mentioned the significance of informally discussing issues over a cup of coffee.

‘if you are trying to moderate between, it might be customers and the development teams, or it could be between different partners or whatever . . . if there is a standoff, with face-to-face I have seen it generally gets resolved in a couple of hours. . . People compromise much quicker on decisions. I have not seen that in any other of the technologies, including video conferencing. Video conferencing is them and us. . . ’ — INTERVIEWEE JAMES

‘The ideal scenario is that you bring them all together in a social aspect, because humans are animals. Basic psychology applies to a lot of this.

‘If I have a problem, who do I ring? If I ring a guy in the States, why should he go off and help out? He might give me advice on the phone, but he might say “I’ll look into it” and really it’s not his problem . . . I personally think it should be not only the key-individuals. The key-individual might be site/project-manager, waving the location flag inadvertently. He is a project manager, but when times get tough it is the designers who do the actual work, not the managers.’ — INTERVIEWEE JOSEPH

‘Asking over email is pointless. Asking over the phone is better. And arriving on the doorstep. . . Well, you can only do that a couple of times before you’ve burnt all your bridges. You are better off to make sure you have rang them a number of times on the phone fifteen or twenty times before you actually arrive on their doorstep. Arriving on their doorstep sometimes can be like putting a gun to their heads.’ — INTERVIEWEE WILLIAM

The issues raised above were found in mixed teams with both software and hardware components. However, many of them parallel the issues facing pure software projects. The degree of impact seems to be unrelated to the geographical location of skill sets; it correlates much more strongly with the social awkwardness (or lack thereof) between the teams. Difficulty in team interaction leads to miscommunication, misunderstandings, increased costs due to rework, and delays in reaching the market.

6.5.2 Social Factors that are Exacerbated by Geographical Separation

With specific reference to the Irish semiconductor industry, having globally dispersed teams adversely affects the ability of any overall programme to deliver its outputs:

• Recognition is required of the need to foster information flow and knowledge transfer—Information flow is especially important to distributed collaboration, where the opportunity for communication is limited by geography. Information flow issues are exacerbated by geographic separation—particularly the flow of informal information, the non-documented cultural ‘glue’ that is lost across sites.

• The importance of informal communication tools—Interviewees suggested that informal communication tools, such as wikis, blogs, message boards, and instant messaging tools such as Skype, can be very useful for sharing information between teams. Information sharing was identified as one of the key concerns in dispelling the ‘them vs. us’ entrenchments that can result from geographical separation.

‘I think that a lot of it can be mitigated by informal communication where physically people move between offices.’ — INTERVIEWEE JAMES

Informal knowledge is often tacit knowledge that is difficult to communicate consciously. Many times the possessor is not aware of the knowledge they possess, or of its applicability and usefulness to others.

‘. . . we have different disciplines here in the one office, and we don’t actually formally interact. We have our team meetings, they have theirs. So there is that interaction that gets lost. That is the first thing, and that is on the same site! When you put that across sites, you are even less likely to pick things up. We tend to pick things up across functions here through casual conversations. So we don’t actually hold formal or informal meetings where we bounce ideas between the two teams, but we do pick things up because we are in close proximity to each other. Take away the close proximity and you are missing a lot. You actually have an interface there that you then need to work hard on. And unless you are prepared to work hard—to actually recognise that there is an interface and are actually prepared to work at it—you are going to lose a lot, and it definitely will impact the schedule, for sure.’ — INTERVIEWEE THOMAS

• Cross-functional conversations in the context of informal knowledge loss—We previously discussed interviewees’ perception that technical specialisation is relatively insignificant compared with the more pressing need for social familiarity. Nevertheless, whilst a long history of working together is the ideal situation between two interworking sites, an even mix of cross-functional skills appears to go some way towards mitigating this requirement:

‘You need a balance, you need to have, if you are on a mixed project, you need the skills on, at least some skills on each site, so there can’t be an artificial barrier by distance between the teams, which is often (the case) if you only have single functional sites. So at least if you have a hardware engineer with the software team then they can actually represent the hardware project to the rest of the software engineers.’ — INTERVIEWEE JAMES

‘These middle ground guys—who spend most of their time on the hardware/software interface—they don’t have one foot fully in either camp. These are the guys that are also most valuable in interfacing with customers. These are the guys that are most critical when integration issues come up. These guys spend more time on planes than anybody else, going to customer sites.’ — INTERVIEWEE WILLIAM

Of course, even with mixed skills, you occasionally come across the odd issue whose symptoms are not clear enough for an immediate diagnosis as either a hardware or a software issue:

‘There was a voltage drop in a small island on the XYZ silicon. The toolchain had, probably, misrouted the ground and VDD signals for this island, to a ground/VDD a significant distance away from the island instead of routing closer to the island. At least some parts of tightly coupled memory were in this island. The net result of this was that at high speed, at times of high load elsewhere on the chip, some of the memory cells in certain cuts can fail. This was resolved in the subsequent revision of the chip, but initially this looked like a bizarre software bug involving stack or memory corruption.’ — INTERVIEWEE JOHN

The exact nature of the mix of skills is interesting. When probed further, there appear to be downsides to having technical functions duplicated across sites. In fact, there is a suggestion that technical specialisation (and indeed demarcation) across locations may be a benefit for the purpose of avoiding not-invented-here syndrome:

‘You are only going to get your not invented here syndrome between two teams of the same discipline . . . Your software teams will be less likely to want to interact with each other on a goodwill basis because . . . Again it has to do with the culture at the location you are talking about as well as the fact that not invented here comes into it. And it comes down to the fact that if you haven’t built up an inter-personal relationship, you haven’t built up respect, you are always inclined to . . . it is easier to slag off, or not respect somebody you don’t know. . . . ’ — INTERVIEWEE THOMAS

• The need for Trust—Trust, and the development of trust, is an essential pre-requisite to efficient team inter-working, independent of technical specialisation. Trust is required to ensure effective co-operation across team members. In this regard, Interviewee Thomas made some insightful remarks:

‘Every group of people that works together builds up a kind of relationship, a relationship of respect and trust. They know they can count on people that they know . . . When you don’t have a (pre-existing) relationship you are going to lose that, and you have got to start building that up. And when you are building that up across site, then it makes it more difficult.’ — INTERVIEWEE THOMAS

Trust is established through the development of inter-personal relationships, and through the delivery of shared, mutually dependent technical commitments. Humans naturally nurture bonds of trust through direct contact, and so the challenge of establishing trust across geographically separated teams is significant. One potential remedy is to jump-start the relationship of trust at the beginning of a project:

‘You get everyone together the first day, do some meetings, whiteboard sessions, and then before very long, maybe 3–4 weeks, have everyone out on the beer—when they have already built up some sort of working relationship together. Then that breaks down inter-personal barriers, hopefully. I think things start working better when you have some sort of interpersonal relationship rather than just a working relationship. So that would be something I’d do very early on.’ — INTERVIEWEE THOMAS

It is more difficult to establish inter-personal relationships over artificial means of communication (e-mail, telephone, instant messaging, social networking), as they reduce informal and nuanced information flow:

‘Email is too easy to ignore. There is something great about hearing a voice. Even if he is not in the cubicle alongside you, . . . you can laugh, you can talk, you can ask how the kids are, how life is. . . And you can build up a rapport with the person. . . They let you know a lot of the background as well, what is going on in the company, what the politics are. . . They make you feel a bit more part of the culture. It is sociable. About building up the informal relationships.’ — INTERVIEWEE WILLIAM

Even though relationships and trust build more quickly with face-to-face contact, they still require time to mature and solidify:

‘You can have all the friendly friendly stuff, but as soon as you leave the site, up go the barriers again. Grand and friendly, everything can be social. But you don’t shift mindsets over a few pints, or over dinner. . . You may get small movement, but you don’t shift it.’ — INTERVIEWEE WILLIAM

Without this investment of time, interactions of all sorts may experience awkwardness: ‘Asking over email is pointless. Asking over the phone is better. And arriving on the doorstep. . . Well, you can only do that a couple of times before you’ve burnt all your bridges. You are better off to make sure you have rang them a number of times on the phone fifteen or twenty times before you actually arrive on their doorstep. Arriving on their doorstep sometimes can be like putting a gun to their heads.’ — INTERVIEWEE WILLIAM

Being confident in the knowledge that work can be handed off successfully to a peer, or that work will be provided on schedule and to specification, allows developers to plan and schedule their work more effectively. Beyond this, having trust in the wider social context of working for the one goal, and of being part of the one team, gives developers the confidence that socio-political issues are less likely to appear. Key tactical business strategies for success in consumer electronics are to be either an early market leader or the cheapest and most technically competitive solution provider for mass-market adoption.

In either case, pace of development was noted as critically important in differentiating competitors and achieving success. This is particularly true in light of the growing trend for increasingly feature-rich devices (Park, 1998; Wilson, 2004), where it is software usability rather than hardware innovation that can determine market winners (for example, Ranger, 2008). As we will see in Chapter 8, the loss of trust and informal knowledge across sites is not specific to the semiconductor domain. Many of the strategies developed through GSD can potentially be successfully employed in a mixed hardware-software environment. In addition to this, teams with technical specialisation can be made to work more effectively through the development of interfaces of mixed-skill developers, bearing in mind the need not to over-mediate and stray into not-invented-here territory.

6.6 Techno-cultural

The following themes were identified in this category:

• Linguistic Determinism—It became readily apparent that hardware designers’ attitudes to risk and test are rooted in established culture, rather than derived from any form of linguistic determinism. Nevertheless, there is definitely a language barrier between hardware designers and software programmers (see Table 6.2). Hardware uses the term block for a piece of functionality, indicating an individual block to be tested; software uses the term module, suggesting an interconnected module of a larger system. Hardware uses the term design flow for its work process, whereas software has a much more elaborate vocabulary: a development process has a number of distinct attributes and phases that are explicitly named. Interestingly, the same phases are discernible in the hardware flow, but are not explicitly named as in software. The hardware term flow suggests a lightweight character compared with the software use of process. Hardware, on the other hand, tends to use much greater descriptive power than software when it comes to testing—verification and validation versus test.

Table 6.2: Digital Hardware vs. Software Vocabulary.

Hardware Phrase              Software Phrase
Design flow                  Development Process (Analysis, Design, Development, Test)
Block                        Module
Verification, Validation     Test
Design entry                 Programming (specifically the typing in of code)
Simulation                   Modelling
Synthesis                    Compilation, Assembly

One curious outcome is the acknowledgement amongst my interviewees that, as devices become more featureful, software is increasingly becoming the product differentiator.

In this regard, linguistics may be seen as betraying the true location of complexity. Hardware integration is commonly more straightforward from the peripheral component perspective, and a consequence of this is that software—the easiest portion to change—adds a ‘stickiness’ to any design/socket win: once your software is integrated into an overall customer solution, it becomes harder to design you out.

• Technical Determinism—There is evidence of technical determinism. The slowness of hardware simulation tools means that hardware designers often do not engage with bug reports from the software team unless there is a sufficiently concise description of a test case that they can feasibly simulate. They tend to retreat from high-level bug reports, and place an onus on the software team to develop as small a test case as possible. Yet it is not always feasible for the software team to do so—many conditions are apparent only when full system dynamics and interactions are in play, for example DMA arbiter issues, bus timing and inter-working issues. Deficiencies in hardware developers’ models of system usage can come into effect here too—‘if you were paranoid and you didn’t trust did the hardware guy simulate correctly or not? Did he understand what he was simulating?’ (INTERVIEWEE RICHARD). And at times even the software team does not fully understand how the system will be used until it is coded, debugged and developed. Many of the hardware interactions are deterministic in a temporal sense (at least as regards their reduction into simulations), whereas software interactions are vague and non-deterministic—it is hard to know exactly what thread is running or what synchronisation objects are locked, etc. The counterpoint to the slowness of hardware simulation is the enhanced level of visibility into their design that hardware developers enjoy as compared to software developers: every signal transition in the system can be captured and visualised.

Software developer interviewees commented on what they termed the maturity of EDA tools—and of the build flows around them. The suggestion was made that hardware engineers get used to such arcane systems with flaky tools (‘sensitive to command line argument order’—INTERVIEWEE MICHAEL) in part because they do not have the technical capability to understand the tool implementations or how they could be improved. Additionally, I noted the curious phenomenon that digital hardware engineers are typically reticent to work directly on boards (preferring the environment of software simulation), whereas embedded software engineers require real reference platforms to completely test and validate their work products.

• Aversion to Ambiguity—Hardware designers dislike ambiguity in all aspects of their work, as a reaction to the weight of risk. This ranges from risk in specifications (market requirements specifications, product design specifications) to bug reports.

• Parallel and Temporally Deterministic vs. Sequential and Non-Deterministic—My research demonstrates that digital hardware designers are more comfortable dealing with concurrent behaviour in systems. This may in part be due to the ease of implementing such concepts in hardware (i.e. just instantiate another block), versus the necessity to serialise operations in software to run on the processing core—and also to the degrees of abstraction provided by HDL tools and languages versus software languages and software development kits (SDKs)3. Multi-core SoCs are likely to have an impact on this, by virtue of introducing a greater impetus and necessity for software engineers to come to grips with true distributed multi-core processing. (A brief sketch illustrating this contrast in C follows the quotations below.)

‘There are different mentalities between digital hardware, mixed-signal, analog and software—totally different mentalities. You can even see it here. Take mixed-signal. Mixedsignal people here are very sequential. Very methodical, and think of all the corners. When it comes to doing stuff in parallel, all at the same time, with two or three things moving but not getting locked up, good luck. They lose it. And digital guys can do all these concurrent transactions. Everything is not black and white, because you take this bit at 7% and drop it down here and I’ll come back to that when . . . That is the way they have to do it.’ — INTERVIEWEE RICHARD ‘. . . software guys are thinking sequentially, because the code is sequential. Whereas hardware is actually all concurrent. When you are writing Verilog, it is all about concurrency. Everything starts at the same time, so you have to think about all these interactions.’ — INTERVIEWEE MICHAEL
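To make the contrast concrete, the following minimal C sketch (the block names, the shared counter and the loop counts are purely illustrative, and not drawn from any interviewee's project) shows the explicit thread creation and mutex synchronisation that software needs in order to express what, in RTL, would simply be two blocks instantiated side by side:

    /* Minimal sketch: two activities that an RTL designer would simply
     * instantiate as parallel blocks must, in C, be explicitly created as
     * threads and serialised around any shared state. */
    #include <pthread.h>
    #include <stdio.h>

    static unsigned shared_counter;              /* state touched by both "blocks" */
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *block_a(void *arg)              /* stands in for one hardware block */
    {
        (void)arg;
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&lock);           /* serialise access to shared state */
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    static void *block_b(void *arg)              /* stands in for a second block */
    {
        (void)arg;
        for (int i = 0; i < 1000; i++) {
            pthread_mutex_lock(&lock);
            shared_counter++;
            pthread_mutex_unlock(&lock);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t a, b;

        pthread_create(&a, NULL, block_a, NULL); /* "instantiate" the blocks by hand */
        pthread_create(&b, NULL, block_b, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);

        printf("counter = %u\n", shared_counter); /* 2000 only because of the mutex */
        return 0;
    }

The point of the sketch is not the arithmetic but the ceremony: every piece of concurrency and every shared resource must be named, created and locked explicitly, whereas in Verilog or VHDL the concurrency is the default and the designer must instead reason about every interaction.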

• Appreciation of Other Discipline—As personal computers get more and more sophisticated, there is a move in abstraction further away from hardware in the software community. This research shows that software developers have lost the link to what is happening at a register or clock level. ‘I think there should be much more focus, at least from an educational aspect, on much more common training. And the idea of re-emphasising through educational work, that the differences are very minimal. So if students are working on VHDL or Verilog, they are essentially doing a software design representing a piece of hardware. And likewise from the software guys perspective, if they are doing embedded design, that they need to understand exactly what is happening underneath in the hardware.’ — INTERVIEWEE JAMES

More concerning is the suggestion that this separation in abstraction is something that is deliberate amongst developers, particularly software developers:

3 For example, the C language doesn’t have an adequate multi-threaded programming model (Jones, 2009), and as such it is burdened with clunky threading and synchronisation APIs.

Figure 6.11: Axial Coding of Techno-cultural Theme.

(For a larger version, see Figure E.9 on page 259.)

‘The worst thing is that more and more of them have made a conscious decision to do this. And they have absolutely no desire to go behind the processor or operating abstraction. Visual Basic, Java, . . . The idea of being exposed to hardware now is a no-no.’ — INTERVIEWEE JAMES

• Influence of Discipline Specialisation on Techno-Cultural Effects—Reconsidering the influence of discipline specialisation on inter-team cultural differences within the category of techno-cultural effects, some interesting issues arise from each discipline’s approach to risk. As software appears more in tune with end-product requirements, the software team can become frustrated that the hardware platform may not be competitive (or may not be perceived by them as sufficiently competitive) because of bugs that the hardware team, out of risk aversion, are reluctant to resolve. These issues can lead to social and political tensions and conflicts between the teams.

This is exacerbated by the market pressures of the business model: the narrow market windows, the predictable time periods of market volumes (‘often want to ship end product for the Christmas rush’—INTERVIEWEE JOSEPH) and the rapid obsolescence of unmaintained technological platforms. The software developers I interviewed did not appear to readily appreciate the pressure hardware developers are often under to achieve first-time-working silicon. Additionally, software developers may feel indignant about taking on workarounds for hardware. Yet this is an intrinsic part of the business model, and so it would appear prudent for management to keep the software team agreeable to this:

‘I think it is certain that because it (a last minute feature request for the software usually) can be done, you have to do it. That is why feature creep happens. We are in a competitive world. If you won’t do it, your competitors will do it. And because the facility is there (in software), it comes with the territory. One of the reasons why software is powerful is because you can change, and because you can, you have to.’ — INTERVIEWEE DAVID

• Lack of Shared Partitioning Tools—Interviewees also mentioned the lack of shared techniques, tools and conventions for sharing design information. Some felt that (relevant subsets of) design languages such as UML were of use in this regard—perhaps specifically statecharts (Harel, 1987) and use-case diagrams.

6.7 Summary

Through the use of grounded theory, this research has developed the theoretical model presented in Figure 6.1 (in Section 6.2, on page 124). This model highlights that business realities in the CE semiconductor industry generate risks. These risks have a moderate influence on embedded software design, but a much more pronounced impact on digital hardware design—primarily due to the differing degrees of changeability in the work products of the two disciplines. In both cases, business realities act to limit the investment in engineering activities—both in terms of monetary capital investment and non-recurring engineering costs, and in terms of timescales. The effectiveness of digital hardware and embedded software design team interworking is influenced to some degree by technical determinism, although most of the discipline-specific influences are a result of the influence of risk. Techno-cultural effects and disparate distributed development effects are seen to have a more significant influence on inter-team cooperation and effective inter-working than technical determinism.

The theme of situation-appropriate methods is mediated by both the sub-theme of commercial realities (of the business model and the market ecosystem) and the sub-theme of risk (i.e. attitude and approach to risk). Many of the issues seen in complex multi-discipline semiconductor projects are also commonly reported in the software-only literature. This is not surprising, considering the importance of the social aspects of managing large teams of people:

‘One of the things I have learnt over the years is that everything is personal. Everything is personal. There is no such thing as ‘it is only business’. Sure, it is only business, but you conduct business through people.’ — INTERVIEWEE THOMAS

Nevertheless, it is very encouraging that there is a volume of pre-existing techniques and processes in the literature (i.e. GSD in the software community) that can potentially be applied to this area of semiconductor design. The risks inherent in the industry, and in specific business models within it, are difficult to resolve. There will always be pressure on consumer electronics companies to meet aggressive market windows in order to wow the customer with the latest and greatest gadgets and devices. Notwithstanding that, interviewees felt more comfortable in general in dealing with technical risk than in handling social and geographical issues. In order to successfully engineer a market-appropriate solution, information flow is essential—especially informal communication. Familiarity, trust and the free flow of information need to be established and nurtured.

Chapter 7 Emergent Toolbox of Patterns for SoC Project Organisation



Each problem that I solved became a rule which served afterwards to solve other problems. — RENÉ DESCARTES



(1596–1650), French philosopher, mathematician, physicist and writer. Taken from ‘Discours de la Méthode’

7.1 Introduction

As a result of this research, certain themes emerged, as discussed in Chapter 6 and shown in Table 6.1. In analysing these, a toolbox of project organisation techniques that would be useful to both digital ASIC hardware and embedded firmware practitioners alike also emerged. I developed this toolbox in pattern form. In this chapter, I discuss what the concept of a (design) pattern is, and its historical roots. After this, I list each pattern, explaining its context, the problem it addresses, and the forces which act upon it. For the purposes of full disclosure and transparency, it is important to note that the high-level patterns became apparent during the course of my interviewing, but the textual embellishment of their descriptions is my own understanding of the various patterns. This toolbox of patterns is promoted for use by the development community as a set of techniques to help mitigate cross-functional development stress.

7.2 What are Patterns?

Berger and Luckmann (1967) noted that:

‘All human activity is subject to habitualization. Any action that is repeated frequently becomes cast into a pattern, which can then be reproduced with an economy of effort and which, ipso facto, is apprehended by the performer as that pattern.’

Various patterns have been shown to be beneficial to productivity and progress, and others have been shown to be ineffective or counterproductive. The term design patterns is used to describe documented, effective ‘answers to design problems’ (Alexander et al., 1977)—forms of rules for performance improvement, invoked by certain associated circumstances and documented in a consistent manner. They form a literary mechanism to share situational experience and design expertise, and offer empirically validated solutions to problems that occur commonly in the field of work. The idea was first proposed by architect Christopher Alexander (Alexander, 1979; Alexander et al., 1977), but has since been adopted for use in various other technical disciplines and specialisations (Gamma et al., 1994). Alexander describes the patterns as arising from ‘conflicting forces’ of context and circumstances:

‘As an element in the world, each pattern is a relationship between a certain context, a certain system of forces which occurs repeatedly in that context, and a certain spatial configuration which allows these forces to resolve themselves. ‘As an element of language, a pattern is an instruction, which shows how this spatial configuration can be used, over and over again, to resolve the given system of forces, wherever the context makes it relevant. ‘The pattern is, in short, at the same time a thing, which happens in the world, and the rule which tells us how to create that thing, and when we must create it. It is both a process and a thing; both a description of a thing which is alive, and a description of the process which will generate that thing.’ (Alexander, 1979)

Alexander’s patterns are more concerned with the repeating traits and requirements of human living and organisation than with any specific architectural constructs—the pattern in this context is a process for solving a particular problem through the application of a specified set of predetermined design and implementation choices. Alexander et al. (1977) describes how each pattern has the same format, beginning with a name. The ‘search for a name’ is a ‘fundamental part’ of pattern development: ‘So long as a pattern has a weak name, it means that it is not a clear concept, and you cannot clearly tell me to make “one”.’ (Alexander, 1979)

Additionally, Alexander (1979) describes a pattern as ‘a three-part rule, which expresses a relation between a certain context, a problem, and a solution’.

Based upon the pattern language suggested in Alexander (1979) and Alexander et al. (1977), each pattern presented here contains the following structure of elements, which may be seen as a variant of ‘Coplien Form’ (Fowler, 2006):

• Name—for ease of reference to a problem/solution pairing;

• Context—the circumstances and environment (in which we find and attempt to solve the problem) that invariably impose constraints on the solution;

• Problem—the specific problem to be solved;

Table 7.1: Toolbox of Patterns.

Social Interaction:
  Mitigate tacit knowledge loss through Social Networking Tools
  Actively Seed Social Interaction amongst Groups
  Provide Project-level Focal Point through Core Team Structure
  Drive continual progress through Daily Calls during Crunch Issues
  Manage IP Deliveries Efficiently

Development:
  Perform Regular Builds
  Share Code across Test Platforms and Technical Disciplines
  Minimum Test Case Example
  Keep the Firmware Design Simple
  Keep the Firmware team involved in C code for simulations
  Consider Agile Methods, Test-Driven Development
  Communicate in Diagrams Early On
  Implement Recovery Mechanisms for Boot ROMs
  Provide Software Test Plans Early, Hardware Features Early

FPGA:
  Automate FPGA Design Traceability through Version Tracking
  ASIC Synthesis Scripts
  Automate FPGA Programming
  Implement Best Practises for FPGA Development
  Keep ASIC/SoC team involved in FPGA Development

• Solution—the proposed solution to the problem. Many problems can have more than one solution, with the context dictating the trade-offs made in resolving the forces affecting the problem;

• Forces—the considerations that must be taken into account when choosing an effective solution to the identified problem.

7.3 The Pattern Groupings

The research in this thesis identified the following maxims as extremely important to SoC development:

• Communication amongst teams is essential;

• Cross-familiarity with the other team’s skill sets and capabilities is extremely important.

The patterns presented in this chapter (and summarised in Table 7.1) are organised into three distinct groupings, the first two of which relate directly to the aforementioned maxims:

• Patterns of Social Interaction—team work involves extremely high levels of human interaction, so these patterns address some of the soft skills that may make this more manageable.

• Development Patterns—patterns directly related to the technical aspects of semiconductor system development.

The third grouping was originally intended as a subset of patterns within the Development category, yet many interviewees recounted similar issues with FPGA-related activities, warranting the escalation of the patterns that address these issues to a high-level category of their own:

• FPGA Patterns—FPGA systems are such an important resource in the period before tape-out that they deserve a category of their own. This category describes mechanisms that may assist in keeping the FPGA process running smoothly.

As discussed, the contextual setting is an intrinsic and essential part of the pattern description. Thus, whilst some of the specific patterns listed in this chapter may have general applicability outside the specific context in which they are described, this is to some degree incidental. These patterns are specifically aimed at helping teams of digital hardware and embedded software developers deal with the practicalities of working together.

7.4 Patterns of Social Interaction

7.4.1 Mitigate tacit knowledge loss through Social Networking Tools

Context

Teams of designers are geographically located in different offices.

Problem

In situations of geographical dispersion, the incidental knowledge that normally permeates through the close social contact of team members is lost.

Solution

Web-based social networking tools such as wikis, blogs, and RSS aggregators can help mitigate (but not cure) some of this loss of incidental knowledge.

• Wikis (Cunningham, 2006) are tools intended for “unfinished ideas” (Schwartz et al., 2008). They achieve a critical mass of knowledge that promotes their use. They are very useful in semiconductor projects for collecting such information as development board schematics, pin-outs, test set-ups, etc.;

• Personal blogs can collect the same useful snippets of information as engineering journals (such as chip bring-up/set-up notes), with the added benefit of being electronically indexable and searchable; • RSS aggregators can collect information from various sources (bug databases, configuration management check-ins, team blog postings, recent wiki page changes etc.) and present them in a single, accessible dashboard type view for project members. • Centralised sections of corporate intranets can be used to collate project-related information (such as datasheets, functional specifications, design documents, team meeting minutes, project-specific organisation charts).

Forces

Social networking tools require a critical mass of users and user-generated content before they become self-sustaining. Once sufficient information is in the database to make it of use to the casual user, it is likely that the user will continue to visit and use the site: the information on the site satisfies their present need and enables them to get their work done—and positive behavioural feedback is affirmed. Conversely, if there is insufficient information and the user’s queries go unanswered, the user is likely to stop visiting the site.

7.4.2 Actively Seed Social Interaction amongst Groups

Context

The undertaking of complex semiconductor development requires substantial teams of analog hardware, digital hardware and software/firmware engineers working together in an effective manner.

Problem

With large social groupings of any sort, human interaction issues can impede the rate of progress. The sheer potential scope of this problem is evident from the fact that dysfunctional inter-team relationships were consistently cited as the most significant issue facing a project. The following quotations bear witness to this:

‘. . . if you have a lot of people coming into the room, the sheer ease of working with people (that) you’ve worked with before vs. bring in for argument’s sake the best part of 10 to 15 people who don’t know each other . . . There is a fair bit of time in overcoming just the social barriers and shyness etc. etc. that come with that . . . ’ — INTERVIEWEE JOHN

‘Having said that, the more experience there is and the better kind-of, you know, even if it is only at a social level that people know each other in the office that you can overcome huge chunks of that (impact on schedule due to working from multiple sites).’ — INTERVIEWEE JOHN

‘The social aspects. . . I think you can get over technical issues a lot easier than if you have poor cooperation from the start. Poor cooperation is the wrong mentality to tackle the problem . . . So people can get around technical issues if you have the right people.’ — INTERVIEWEE MICHAEL

This risk is due to the difficulty often faced in dealing with people you don’t know socially: ‘ I do find it more difficult dealing remotely with hardware teams, but that is because I deal more with the software guys. I am friends with them, I interact with them more frequently . . . It is a little more difficult dealing with the hardware guys, but largely that is because I don’t know them as well.’ — INTERVIEWEE WILLIAM

Solution

Interviewees felt that the most productive environment for encouraging inter-personal relationships is direct face-to-face contact. As one interviewee put it:

‘You can’t build team spirit over video conferencing, or web, or Internet . . . A cup of coffee is really important because then what happens is that you get a real perspective.’ — INTERVIEWEE JAMES

Social occasions were suggested as a good means of encouraging engineers and designers to get to know each other, and to help break down the barriers of social awkwardness between them. ‘Even soft skills like going for a social night out with guys on the other side, to show you are human, you’re one of the lads... That is hugely beneficial. Otherwise, when you put down the phone or mute the phone on a conference call, you explain what the . . . are they on about? And it builds a momentum, and suddenly the team on the other side is vilified and is the stupid team. They probably think your team is the stupid team. So the issue is not only lack of communication, it is also the ancillary soft-skills of interacting and spending time and developing a rapport with those guys.’ — INTERVIEWEE JOSEPH

This seems to stem from the theme that humans are biologically designed to think of people as faces they recognise. When you see a faceless email come in, it could be anyone. But if you put a face to it, it immediately identifies the person and you respond differently. ‘Until you have that relationship, it is something that happens to somebody else.’ — INTERVIEWEE THOMAS

People tend to go that extra distance for others they know and have a pre-existing social relationship with. Conversely, they tend to focus on more locally scoped goals without this:

‘When there is that sort of friendly relationship between people, they tend to be more aware of what the other side are going through, and are more willing to take it on board to find the right way to do it, as opposed to the way that suits their schedule the best.’ — INTERVIEWEE MICHAEL

It is especially important to kick-start the building of relationships through face-to-face meetings during the project start-up phase (Finholt and Birnholtz, 2006). Once relationships have been established, they can continue to be kept alive through more indirect techno-centric means of communication, such as email, telephone, and instant messaging. In addition, it is important for team management to lead by example, and to portray the idea of good working relationships between the teams to their respective team members—as exposure to positive intergroup contact may be associated with more positive intergroup attitudes (Ortiz and Harwood, 2007).

‘Every group of people that works together builds up a kind of relationship, a relationship of respect and trust. They know they can count on people that they know . . . When you don’t have a (pre-existing) relationship you are going to lose that, and you have got to start building that up. And when you are building that up across site, then it makes it more difficult. The other things that makes it more difficult are the cultural differences. The fact that on the remote site they have different ways of and approaches to working that we maybe didn’t understand. So it is just . . . There are things to take into account other than just technical skills.’ — INTERVIEWEE THOMAS

Team building is important in addition to social familiarity to ensure that everyone pulls their weight: ‘If you have one guy . . . Everyone has to get on, more or less . . . You can’t have one guy who is pretty arrogant, or one guy who is pretty lazy. Then it just gets people annoyed. So you need a team, actually, that . . . We are actually kind of lucky, because everybody knows everybody for years and you can slag them off, and there is a bit of craic then . . . ’ — INTERVIEWEE RICHARD

Even with the best of working relationships, human memory is fragile and often unintentionally biased. It is important that work expectations and commitments are formally documented, and are continually refreshed and re-agreed with relevant other parties.

Forces

Cost may be a factor, particularly if the teams in question are geographically separated.

7.4.3 Provide Project-level Focal Point through Core Team Structure

Context

Working on a complex multi-discipline project with various inter-dependent work items.

Problem

Progress deadlock can occur easily in complex multi-discipline projects for a variety of reasons:

• Lack of authority or knowledge to make prompt decisions;

• Lack of awareness amongst separate disciplines of shared issues/tasks/problems;

• Incorrect prioritisation of work across disciplines, especially as regards inter-dependencies.

Solution

Holding a regular weekly conference call with the technical decision makers in each discipline helps ensure consistent and ongoing communication across groups, and keeps decision procrastination from settling in. In addition, the responsible stakeholders who attend these regular meetings have a duty to inform the remainder of their respective teams of the outcomes of such meetings. Status reports should be prepared and circulated prior to the team meetings to ensure that sufficient information is available during the meeting for quick and prompt action.

Forces

There may be a tendency for status reports to arrive late—i.e. just prior to or even during the meeting—which negates their usefulness in keeping the meetings short. There may be a tendency to work through status reports line-by-line, or to get distracted by specific technical issues. The purpose of the meeting is to keep focus on ensuring that all aspects of the project are moving forward towards completion and that deadlock does not occur—not to resolve specific technical issues. There may be a tendency for stakeholders to hoard information, or to misunderstand the importance of dissemination of this information amongst the wider team. The records of the meeting should be kept in a publicly accessible electronic forum to ensure their availability to the wider team members.

7.4.4 Drive continual progress through Daily Calls during Crunch Issues

Context

A project has hit a critical period, perhaps coming close to an important demonstration, such as a trade show or a customer engagement/release.

Problem

What is the best way to keep focused momentum in a distributed team during a critical period?

Solution

Interviewees unanimously claimed that, despite modern technological advances, face-to-face meetings are the most productive way of interacting with peer groups. Nevertheless, it was suggested that succinct, focused daily calls (lasting no longer than 15 minutes) at a regular time each day (17h00, for instance) are a very good way of ensuring controlled and efficient progress in the short term. In general, interviewees also felt some preventative medicine was useful in avoiding crunch periods where possible:

• Travel early if necessary—pre-empt integration issues where possible.

• Establish a clear protocol of issue ownership and escalation to ensure resources are in place when needed.

Forces

Problems of interaction across multiple sites.

7.4.5 Manage IP Deliveries Efficiently

Context

Working on a large multi-discipline project where certain key technical components are not being designed in-house—instead the designs are being outsourced to IP or design services companies.

Problem

The need to ensure that outsourced components are delivered to schedule and within budget, and that external companies are putting in place the agreed commitment in terms of resources to complete the work in a timely fashion.

Solution

In addition to a clear statement of work at the outset (listing dates, deliverables, and responsibilities), the best way to tackle this issue was felt to be gaining as much visibility into the subcontractor/vendor as possible:

• Look for deliverables as early as possible—there are always logistics and set-up times associated with the internalisation of outside IP;

• For bespoke designs, ensure formal, regular intermediate stages (Alpha, Beta, etc.) to accurately gauge progress;

• Align payment milestones with these regular delivery stages to ensure leverage over the subcontractor/vendor.

Forces

It is likely the subcontractor/vendor may be balancing the needs of other customers with your needs. It is likely the subcontractor/vendor has provided aggressive schedules in order to compete for and secure your business. It is likely that agreed deliverable requirements, schedules, and costings will all need to be revisited during the course of the project—but having these clearly defined up front in a statement of work will assist greatly in providing leverage. Plan early for how to deal with the difficulties that will be encountered when internalising IP:

‘Then there is the mental hurdle of really taking ownership of third party IP – due to the headaches of the integrating and reworking of build systems, harmonising naming conventions, introducing fundamental and architecture modifications versus traceability against original design and associated IP vendor support complexities . . . ’ — INTERVIEWEE JOHN

7.5 FPGA Patterns

7.5.1 Automate FPGA Design Traceability through Version Tracking

Context

FPGA image files are a regular component of a system for the purposes of configuration management. As such, it is necessary that they are efficiently and uniquely tracked for a variety of purposes—for example, to determine in which version of the digital hardware configuration management database an issue first appeared.

Problem

The need to uniquely identify an FPGA image and to provide traceability back to the source configuration used to create it.

Solution

Use source configuration management tool version numbers in an FPGA register to identify a particular build, rather than filenames with associated spreadsheets. This makes the subsequent checking of changes as simple as diff’ing different configuration management tags. Brooks, Jr. (1995) describes the phenomenon of unstable hardware platforms, and its consequences, in the 1960s and 1970s: ‘. . . Hardware failures, usually intermittent, are worse. The uncertainty is worst of all, for it robs one of incentive to dig diligently in his code for a bug—it may not be there at all.’

Interviewees, particularly the firmware developers, felt that the effective use of FPGA devices is heavily dependent on good traceability between FPGA images and the underlying SoC database. FPGA images should be individually named, and a mapping maintained between this naming and the source configuration management numbering of the SoC database. If different functional subset families of FPGA devices are required1, or if different pin-outs are required due to FPGA limitations versus real silicon, then the naming scheme should take these aspects into account also—for example, see Table 7.2 for a real-world scheme as used in a complex SoC project.

Table 7.2: Example FPGA Naming Scheme

Field   Description
pp      0x15 corresponds to pinout version 1.5
ii      identifies the bit file within the series
ss      bit file series: 0x00—Standard ‘Peripheral’ builds; 0x02—‘Peripheral-XBar’; 0x03—‘Peripheral-PDP’; 0x01—Standard ‘Core’ builds; 0x04—‘Core and USB’
cc      chip identifier

These fields may be combined to give a traceable FPGA image name of ‘ppii_sscc’—for example 1809_03c3.
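As a rough illustration of how firmware might consume such an identifier, the C sketch below decodes a 32-bit build identification register laid out according to the ppii_sscc scheme above and prints it in a form that can be quoted directly in bug reports. The register address and exact bit positions are assumptions made for the example, not details of any particular device:

    /* Sketch only: decodes a 32-bit FPGA build identifier laid out as
     * ppii_sscc (Table 7.2). Register address and bit positions are
     * illustrative assumptions. */
    #include <stdint.h>
    #include <stdio.h>

    #define FPGA_BUILD_ID_REG ((volatile uint32_t *)0x40000000u) /* hypothetical address */

    struct fpga_build_id {
        uint8_t pinout;  /* pp: e.g. 0x15 corresponds to pin-out version 1.5 */
        uint8_t index;   /* ii: bit file within the series                   */
        uint8_t series;  /* ss: build family (Peripheral, Core, ...)         */
        uint8_t chip;    /* cc: chip identifier                              */
    };

    static struct fpga_build_id decode_build_id(uint32_t raw)
    {
        struct fpga_build_id id;
        id.pinout = (raw >> 24) & 0xFF;
        id.index  = (raw >> 16) & 0xFF;
        id.series = (raw >>  8) & 0xFF;
        id.chip   =  raw        & 0xFF;
        return id;
    }

    void report_fpga_build(void)
    {
        struct fpga_build_id id = decode_build_id(*FPGA_BUILD_ID_REG);

        /* e.g. a raw value of 0x180903C3 is reported as build 1809_03c3 */
        printf("FPGA build %02x%02x_%02x%02x (pin-out %u.%u, series 0x%02x, chip 0x%02x)\n",
               id.pinout, id.index, id.series, id.chip,
               id.pinout >> 4, id.pinout & 0xF, id.series, id.chip);
    }

Logging this identifier at boot, and insisting that it accompanies every bug report, makes the mapping back to the source configuration management tag mechanical rather than a matter of memory.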

1 For instance, if the design is too large to fit in a single FPGA image.

Forces

FPGA designers may not be keen to keep the source components for every single build in source configuration management. This may be cultural, due to the pressure on hardware designers to achieve first-time-right success.

7.5.2 ASIC Synthesis Scripts

Context

Building SoC databases targeted for simulation, FPGA or silicon synthesis.

Problem

Some of the software engineers I interviewed noticed that the ASIC builds on projects they worked on relied on home-grown build systems. Synthesis builds are painfully slow for digital hardware designers as it is—yet they can take longer than necessary if they redo work from earlier revisions. This can occasionally be the case when home-made build systems based on shell scripts are used.

Solution

Invest the time in using a rule-based, dependency-tracking build tool, such as GNU make, rather than hodge-podge build scripts. GNU make tracks which files have changed since the last time the build was run, and invokes tools to operate on only those modified source code files and their dependencies.

Forces

None.

7.5.3 Automate FPGA Programming

Context

During the verification and validation of a complex semiconductor design prior to tape-out, FPGA models are invaluable for verification, but they are also expensive resources.

Problem

FPGA boards are expensive when used for silicon prototyping—costs increase with capacity, and with speed. Additionally, there is typically significant churn of FPGA images during the development phases of the project as both digital and firmware teams rush to verify the silicon pre-tape-out.

Solution

Having a mechanism that allows developers to program FPGA systems from their desks, and to debug via Ethernet-enabled JTAG, allows expensive FPGA resources to be productively shared amongst a number of developers. Some of my interviewees implemented such a system on a project, using Secure Shell (SSH) to remotely access command-line Xilinx FPGA programming tools. They found that an essential component in making this solution work with limited hardware resources (i.e. fewer platforms than developers) is the use of instant messaging tools as a lightweight and quick mechanism for co-ordinating who is accessing which resource.

Forces

None.

7.5.4 Implement Best Practises for FPGA Development

Context

Problem

The use of FPGA platforms is a good substitute for hardware pre-silicon, but care needs to be taken to ensure that the FPGA is representative of the ASIC and that time is not wasted on FPGA-only bugs. It is not always possible to fit all the memory and all the complex blocks of an ASIC design into a single FPGA. As FPGA capacity increases, so too does the size of the ASIC designs being targeted at them.

Solution

• Use separate FPGA builds to verify subsets of functionality at a time—each family of builds following a unique naming/numbering convention (cf. Table 7.2);


• Create regression and sanity tests for FPGA peripherals, to allow new FPGA images to be quickly accepted or rejected—before time is unnecessarily wasted on a troublesome FPGA;
• Create a bit-mask register in the FPGA build to let software identify which blocks are present (see the sketch after this list);
• Perform regular FPGA builds to ensure that the FPGA database is at most 2 weeks stale versus the ASIC / SoC trunk;
• Perform FPGA place and route at the same time as FPGA simulations to pre-empt good FPGA creation ("baking");
• Keep FPGA dummy blocks/models to a minimum so that the FPGA image is as close to the ASIC as possible—it is usually the case that certain block designs cannot be (easily) targeted to FPGA (for example, clock and power management blocks). These blocks will require models to be created for the FPGA that mimic the behaviour of the functionality in the ASIC. This creates a danger that what is being tested is not what will be taped out in the final ASIC;
• If a bug is found in software, but it is not reproducible in ASIC functional simulation or in FPGA functional simulation, try FPGA gate-level simulation. It is possible that the FPGA synthesis has made a decision on a signal that is present in RTL, but that ends up being heavily optimised in the FPGA design because one of its dependent input signals is undriven.
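As a minimal sketch of the bit-mask idea above (the register address, bit assignments and function names are hypothetical, not taken from any real project), test software can interrogate a single presence register and skip peripherals that a given FPGA image does not contain:

#include <stdint.h>
#include <stdio.h>

/* Hypothetical FPGA 'blocks present' register and bit assignments. */
#define FPGA_BLOCKS_PRESENT_REG  (*(volatile uint32_t *)0x40000010u)
#define BLOCK_UART_PRESENT       (1u << 0)
#define BLOCK_USB_PRESENT        (1u << 1)
#define BLOCK_XBAR_PRESENT       (1u << 2)

static int block_present(uint32_t mask)
{
    return (FPGA_BLOCKS_PRESENT_REG & mask) != 0;
}

void run_fpga_sanity_tests(void)
{
    /* Only exercise peripherals that this FPGA image actually contains. */
    if (block_present(BLOCK_UART_PRESENT)) {
        /* run_uart_tests();  -- hypothetical test entry point */
    }

    if (block_present(BLOCK_USB_PRESENT)) {
        /* run_usb_tests(); */
    } else {
        printf("USB block not in this FPGA image; skipping USB tests\n");
    }
}

This allows a single test suite to be pointed at any member of the FPGA build family without per-image configuration, which in turn makes the regression and sanity testing of new images quicker.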

Forces
FPGA activities are invaluable in verifying designs pre-silicon. However, they tend to be under-resourced, and the activity is often seen as a distraction by members of the digital IC design team. FPGA activities also tend to incur a significant degree of overhead: it is common enough that a majority of the issues found during FPGA development are FPGA-specific issues that will not affect the ASIC design.

7.5.5 Keep ASIC/SoC team involved in FPGA Development

Context
FPGA activities are resulting in a significant number of FPGA-specific issues—related to the FPGA build flow, or to block functional models specific to the FPGA.


Problem
The digital IC design team feels that FPGA work is a distraction from the main activity of reaching tape-out.

Solution
• Agree an appropriate priority and importance for the FPGA task amongst the team;
• Establish a clear understanding of the benefits of FPGA testing for tape-out success.

Forces
This problem is usually more difficult to address if the FPGA issue is not on the critical path for the ASIC / SoC team.

7.6 Development Patterns

7.6.1 Perform Regular Builds

Context
Parkinson's Law—'work expands so as to fill the time available for its completion' (Parkinson, 1955, 1957)—and Boehm's Deadline Effect—'the amount of energy and effort devoted to an activity is strongly accelerated as one approaches the deadline for completing the activity' (Boehm, 1981)—are recognised as significant influences in commercial software development (Potok and Vouk, 1997). Based on my etymological investigation of the terms 'hardware' and 'software' (in Chapter 3), I propose that this is also strongly true for hardware design.

Problem
Project management techniques that focus solely on final deadline and critical path models 'fail to account for work force behavioral effects on the expected project completion time' (Gutierrez and Kouvelis, 1991). Without regular project checkpointing, it is very difficult to notice project delays accumulating. This is especially true in large complicated systems projects, with many interdependent technologies. Potok and Vouk note that:


‘a rigorous enforcement of final project deadlines, coupled with a lack of incentive to finish intermediate project tasks early may trigger Parkinson’s Law delays and negatively influence productivity.’ (Potok and Vouk, 1997)

FPGA work is typically based on ported snapshots of a real-ASIC database. Software development relies on the accuracy of FPGA work. ASIC work relies on feedback from the software team as to the usability of the design for real use-cases.

Solution
Regular build release cycles enforce a discipline of incremental improvement. They provide waypoints to measure the progress of improvement (via a short test cycle). They encourage a sense of urgency in completing work, by providing specific times and measures for productivity. FPGA builds should be re-synchronised with the main ASIC trunk regularly to ensure they stay in sync—perhaps again on a fortnightly schedule. Software releases should be made into formal test on a regular schedule to ensure tracking of improvement.

Forces Parkinson’s Law, Deadline Effect, Goal Theory.

7.6.2 Share Code across Test Platforms and Technical Disciplines

Context
In simulation testing, test code (written in C) is often used to stimulate the hardware through a certain sequence of operations via direct register writes. Hardware engineers are focused on testing of the hardware blocks, and not on code reuse. As a result, the 'make it work' patterns of register programming sequences, magic bit enables etc. can occasionally get coded as direct magic-number writes without appropriate commenting. Firmware developers typically abstract away from error-prone direct magic-number writes to macros or functions that perform the register interaction.


Problem Problems have been discovered in software / FPGA testing that need to be investigated in RTL simulation. However, the software test case that is failing is not easily portable to the RTL simulation environment.

Solution
In order to achieve the greatest efficiencies, test framework reuse across the various stages of the verification and characterisation life-cycle is a critical goal for automation. Both my hardware and software interviewees agreed that, ideally, the same C code should run across all test platforms—both software in nature (simulation, models, etc.) and hardware (FPGA devices, bring-up boards, form-factor boards). In addition to this, automated testing frameworks developed pre-tapeout should be designed with a view to targeting real silicon, once it returns from the fab. Test code should be shared across digital design teams and software teams—for example, C accessor macros for memory-mapped peripheral registers (a sketch of such macros follows at the end of this solution). Portability of unit test cases from FPGA to simulation allows problems discovered in FPGA environments to be rapidly reproduced in simulation environments, where much greater visibility into the hardware design can be exposed. Register definitions should be captured and formally controlled and numbered. C and RTL code for register addresses, bit-masks, and bit-shifts should be automatically generated from the same register definition spreadsheet, to ensure consistency. The following help in this regard:
• Generate register #defines from the spreadsheet—both C code and Verilog/VHDL;
• Share hardware accessor macro code across the firmware and ASIC / SoC teams;
• Try to reduce software test cases to the minimum number of instructions and register reads/writes in order to reduce and constrain simulation time.
By enabling the hardware team to use this code, there is an additional implicit degree of code verification and code sharing across the two teams. This has a number of beneficial impacts:
• it helps the firmware team in migrating patterns of code from hardware simulation land into SoC setup code;
• it enables the firmware team to contribute test cases to the hardware test bench in an easier and more natural fashion—all the infrastructure required to build with their macro systems will be present;


• where descriptive register names form part of the actual coding as opposed to magic number values, it encourages more legible code and permits greater traceability back to hardware functional specifications. This helps keep coding near the data, which is better for maintainability and documentation (Appleton, 2005; Dyson, 2005; The Source Code is the Design, 2008). Appleton notes that ‘the likelihood of keeping all or part of a software artifact consistent with any corresponding text that describes it is inversely proportional to the square of the cognitive distance between them’. Appleton elaborates that ‘the phrase “out of sight, out of mind” gives a vague indication of what is meant by “cognitive distance”’ . . . ‘it doesn’t refer to a Euclidean notion of distance, but to the amount of conceptual effort required to recognize that the corresponding “item” exists and needs to be changed to stay “in sync”’.
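The following is a minimal sketch of the kind of shared accessor code described in this solution. The peripheral, addresses, masks and shifts are hypothetical stand-ins for values that would normally be generated from the controlled register-definition spreadsheet:

#include <stdint.h>

/* Hypothetical, spreadsheet-generated register definitions for a 'UART0' peripheral. */
#define UART0_BASE             0x40001000u
#define UART0_CTRL_OFFSET      0x00u
#define UART0_CTRL_EN_MASK     0x00000001u
#define UART0_CTRL_EN_SHIFT    0u
#define UART0_CTRL_BAUD_MASK   0x0000FF00u
#define UART0_CTRL_BAUD_SHIFT  8u

/* Shared accessor macros: usable by firmware and by C-based simulation test cases. */
#define REG32(addr)              (*(volatile uint32_t *)(addr))
#define REG_READ(base, off)      (REG32((base) + (off)))
#define REG_WRITE(base, off, v)  (REG32((base) + (off)) = (uint32_t)(v))
#define FIELD_SET(reg, mask, shift, v) \
    (((reg) & ~(mask)) | (((uint32_t)(v) << (shift)) & (mask)))

void uart0_enable_with_divisor(uint8_t divisor)
{
    uint32_t ctrl = REG_READ(UART0_BASE, UART0_CTRL_OFFSET);

    ctrl = FIELD_SET(ctrl, UART0_CTRL_BAUD_MASK, UART0_CTRL_BAUD_SHIFT, divisor);
    ctrl = FIELD_SET(ctrl, UART0_CTRL_EN_MASK, UART0_CTRL_EN_SHIFT, 1u);
    REG_WRITE(UART0_BASE, UART0_CTRL_OFFSET, ctrl);
}

Because the names trace back to the register definitions rather than to magic numbers, the same sequence is legible to both the firmware and digital design teams, and the macros can be retargeted (for example, at a simulation bus-functional model) without changing the test cases themselves.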

Forces
The hardware team may be reluctant to adopt software programming macros until a build environment is established for them, and the benefits of this approach are explained to them along with some example code for using the macros.

7.6.3 Minimum Test Case Example

Context
A bug occurs on the boundary of hardware/software, and the software team is having difficulties in getting the hardware team to proactively look at the issue.

Problem Problem descriptions may be at too high a level of abstraction, or may be sufficiently vague that it is not feasible (in time) to reproduce in hardware simulation.

Solution
Communication difficulties between the teams often mean that the software team gets the impression that the hardware team is not working on a bug, whereas the hardware team may actually be technically prevented from working on the bug due to the size/scope of the bug description. Therefore, it is important to ensure good, regular communication between the hardware and software teams on the topic of open bug reviews.


Additionally, the software team should aim to provide a minimum working example test case to the hardware team for simulation purposes—i.e., the smallest amount of code (perhaps test code, perhaps extracted from the main system) necessary to reproduce the issue.
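A minimum test case is often little more than a handful of register accesses. The sketch below is purely illustrative (the register addresses and bit meanings are hypothetical), but it shows the level of reduction that makes a failure practical to replay in RTL simulation:

#include <stdint.h>

/* Hypothetical register addresses for the block under suspicion. */
#define DMA_CTRL    (*(volatile uint32_t *)0x40002000u)
#define DMA_STATUS  (*(volatile uint32_t *)0x40002004u)

/*
 * Minimum test case: the smallest register sequence that reproduces the
 * observed failure, with no RTOS, drivers or application code involved.
 */
int reproduce_issue(void)
{
    DMA_CTRL = 0x00000001u;   /* enable the block               */
    DMA_CTRL = 0x00000003u;   /* request a zero-length transfer */

    /* Expected: 'done' bit (bit 1) set; observed on FPGA: it stays clear. */
    return (DMA_STATUS & 0x2u) ? 0 : -1;
}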

Forces None.

7.6.4 Keep the Firmware Design Simple

Context
Maintaining and debugging software and firmware for an embedded semiconductor device.

Problem
Complex code is difficult to understand and difficult to debug. It can introduce many variables to a system, and in particular it can introduce indeterminism in the state of system resources at specific times:
'And particularly as time goes by, you turn into a software state machine three or four years after it has been build and that will teach you what's complex and what's isn't.' — INTERVIEWEE JOHN
'In most of the systems I have seen, there have been significant more hardware state machines where there should have been significantly more software state machines. So often a lot of software is not done in state machine design. In terms of complexity, the software state machines that I have seen are often significantly harder than the hardware state machines—largely as a (result of a) lot of conditioning on the transitions.' — INTERVIEWEE JAMES

Solution
Interviewees appear to subscribe to many of the claims and statements in Hoffman (2009)—specifically that 'simpler programs are easier to verify with tools . . . '
• Avoid RTOSes if possible—prefer a simple custom scheduler and a state-machine approach (a minimal sketch of such a scheduler follows this list). RTOSes can, if care is not taken, introduce indeterministic latencies which are difficult to debug.
• Keeping the design simple increases the likelihood that a single engineer can grok the entire system. This is not usually the case with a system of multiple threads, semaphores, mutexes, message queues, mailboxes, ...


• Keeping the design simple can also help keep code size and MIPS requirements down—compilers can potentially optimise better with less complicated code.

‘Ultimately. we (team-A) were vindicated in my initial design decisions when the customer came back and wanted to know why the resource requirements for the second generation version (from team-B) were so high. All the threads and stuff. Our version wasn’t a big piece of code, but theirs (team-B) went bananas.’ — INTERVIEWEE MICHAEL

• By all means include error handling, but consider its use or relevance at a system level.

‘So when I’ve seen them over the last couple of years, what I saw in the software state machines was significantly more complex than it needed to be. In other words, the lower level state machines were handling significant amounts of error situations that nobody could have handled at any of the higher levels. So even if this state machine did report this error, there was nothing that could be done with it. If I raise an event, nobody can do anything with it.’ — INTERVIEWEE JAMES
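By way of illustration only (the task names and transition triggers are hypothetical), a 'simple custom scheduler and state machine' approach can be as small as a run-to-completion loop over a table of state-machine handlers:

#include <stdint.h>
#include <stddef.h>

/* Hypothetical run-to-completion scheduler: each task is a small state machine. */
typedef void (*task_fn)(void);

enum uart_state { UART_IDLE, UART_SENDING };
static enum uart_state uart_state = UART_IDLE;

static void uart_task(void)
{
    switch (uart_state) {
    case UART_IDLE:
        /* if (tx_pending()) uart_state = UART_SENDING;  -- hypothetical */
        break;
    case UART_SENDING:
        /* if (tx_done()) uart_state = UART_IDLE; */
        break;
    }
}

static void watchdog_task(void)
{
    /* kick_watchdog();  -- hypothetical */
}

static const task_fn tasks[] = { uart_task, watchdog_task };

void scheduler_loop(void)
{
    for (;;) {
        for (size_t i = 0; i < sizeof(tasks) / sizeof(tasks[0]); i++) {
            tasks[i]();   /* every task runs to completion; no preemption, no mutexes */
        }
    }
}

Because every handler runs to completion there is a single stack and no blocking primitives, which helps keep the whole system small enough for one engineer to hold in their head.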

Forces None.

7.6.5 Keep the Firmware team involved in C code for simulations

Context
The digital hardware team are writing simulation test cases to test the functionality of a block in the system.

Problem
Digital hardware engineers are focused on testing of the hardware blocks, whereas firmware engineers are more focused on system interactions—on how the actual constituent blocks will be used in concert to implement the working application of the design. As a result, digital hardware engineers may occasionally misinterpret how a peripheral will be used in the larger system environment. Likewise, firmware engineers are more likely to miss nuances in part datasheets, may misinterpret how a particular peripheral will be used in an interworking environment, or may not identify a specific mode of operation of the peripheral that needs testing.


Solution Ensure the firmware team is kept involved in the specification and development of C code for simulation testing. Where possible, recycle test code between the two teams. This ensures that each team reviews how the other is driving a particular part.

Forces None.

7.6.6 Consider Agile Methods, Test-Driven Development

Context
Facilitating knowledge transfer between digital hardware and software development teams.

Problem Semiconductor development is a highly complex activity, involving tacit information transfer across technically specialised groups in a rapidly changing business environment with tight market windows. Tacit information transfer is dependent on the establishment of trust and the maintenance of good social ties between the teams.

Solution
Agile development methods place people and social interactions above processes and tools, and responding to change above comprehensive (rigid) plans. Process needs to be kept light and simple to ensure it is used. Consider applying similar processes to both HW and SW design. FPGA phases of hardware development in particular may be amenable to Agile methods with short deadlines, as may home-grown IP. Hardware development in general may adapt well towards test-driven development approaches. Fortnightly milestones (Ousterhout and Muzaffar, 2008) and Scrum (Schwaber and Beedle, 2001; Taft, 2005) are ways of benefiting from Parkinson's Law and Boehm's Deadline Effect. Otherwise, consider blended approaches of both agile and plan-driven methods (Beck and Boehm, 2003; Boehm, 2006; Boehm and Turner, 2003).


Forces Developers may eschew the imposition of process, and their buy-in is essential. Murphy (2002) notes that: ‘Regulating design is tricky. If the process allows numerous variations on a design path, the description will be too vague. If the process is too restrictive, it may cut off paths that make sense.’

Successful deployment of agile methods requires the establishment of an agile culture, and may be ‘most suitable in democratic type organisations’ (Siakas and Siakas, 2007).

7.6.7 Communicate in Diagrams Early On

Context
Digital hardware teams focus on the block functionality, and may not have complete knowledge of the system functionality. Conversely, software/firmware teams focus on the system functionality, and may not have complete knowledge of block functionality.

Problem The detailed nuances of system architecture need to be fully thought through and agreement reached between digital hardware and firmware teams to ensure a working system is possible. Many design decisions increase in cost the later they are made in the development cycle.

Solution
Providing certain sequences of system architectural operation in easily-absorbable diagrammatical form is an effective way of sharing both static and dynamic functional knowledge of the design. This ensures that the approach taken to key peripheral integration is correct and workable from the onset. An example list of peripheral interactions to consider would include Boot ROM flow (register write sequences and timings) to ensure a correct power-up sequence, interrupt servicing (to ensure no lost interrupts), etc.
'the problem is the information flow at a level that is appropriate to both teams. Information flow in an understandable, common language for both teams is what is required. There is a tendency for both sides to perceive that the other ones are talking gobbldy-gook. So once the software guys mention words like UML and sequence diagrams, interaction diagrams and Harel state charts, the hardware guys just switch out. And if you actually explained these are very simple message sequence diagrams, just state diagrams, all the same concepts you guys are using in all your normal flows, except that we just have different names for them, you might be a lot of the way to closing the gap.' — INTERVIEWEE JAMES

As early as possible, it is important, between both hardware and software teams, to ‘bottom-out’ various scenarios such as system boot-strapping, interrupt handling, DMA transfers (and crossbar interlocking/arbitration), major I/O modes and other significant use-cases to ensure hardware design is valid and provides a sufficient platform for software requirements.

Forces The firmware team may assume that the digital hardware team is correctly integrating the peripherals. The digital hardware team may follow the integration guidelines supplied by the peripheral IP provider, without direct reference to their own firmware team.

7.6.8 Implement Recovery Mechanisms for Boot ROMs

Context
Software that is to go into a ROM effectively becomes a piece of hardware, as it loses its 'malleability', its ability to change. ROM is typically cheaper, smaller and uses less power than RAM, so it may make sense to put library code that is stable and unlikely to change into ROM. Additionally, most systems have a boot loader in ROM that is responsible for bootstrapping the system into operation and loading the firmware, either over a remote interface or from some local storage mechanism.

Problem
Nevertheless, it is possible to patch around these library routines if a bug is subsequently discovered. If the bug affects the boot loader portion of the ROM, it may prevent the chip from booting at all. It is prudent when designing a ROM to brainstorm as many recovery mechanisms as possible in the event of a boot-related problem being discovered on the silicon. Digital ASIC designers and firmware developers alike are acutely aware of the importance of the Boot ROM:
'When the chip comes back and the hardware technical lead can download software onto RAM, and read/write registers, that is a huge sigh of relief—because he knows then that from there on, he is confident in the test tools that everything else should be okay. It is the fuzziness of taking the boot code and resetting everything correctly at boot up. Hardware guys are really nervous of boot code until that happens.' — INTERVIEWEE JOSEPH
'I think the whole system is more relaxed about the software aspects of it. The whole company takes a different approach. Let's be clear—it is not down to personalities, it is down to what software is. I do absolutely agree. It is fundamental. Where software gets like hardware is in Boot ROMs.' — INTERVIEWEE DAVID

An apt example of what can happen was a problem described to me regarding a PLL block, where the polarity of the enable signal differed from the documentation and from the FPGA digital models supplied by the IP provider prior to a tape-out.

Solution The mindset that the firmware developer needs to adopt was described to me as follows: ’You’d be thinking about if something doesn’t work. It is a different way of thinking. Always focusing on the system not working. . . . You’d put in as many get out of jails as you could. . . If that was a piece of soft code, you would not have gone to as much detail at all, would you?’ — INTERVIEWEE RICHARD

One model that works well is to decompose the firmware image into separate logical blocks, where each block consists of a command header and an associated data payload (a sketch of such a block header follows this list). This allows the inclusion of additional debug/rescue techniques inline with the actual firmware download. Such techniques include:
• Peeks and Pokes: include support for special peek/poke command blocks, which are useful early on in the boot process for small register patches etc.;
• Gosubs: subroutines are useful for larger hardware reconfiguration routines;
• Default Clock: boot off a slow, sane clock (e.g. a reference crystal) and allow the firmware download image to change to a PLL if required for quicker boot speed;
• Timers vs. Lock Bits for PLLs: don't trust the lock bits in PLLs in case they prove troublesome—guard the lock with a timeout after a reasonably large period of time regardless;
• Checksums and Retransmission Protocols: apply checksums and a retransmission strategy to each block transfer;
• Keep Booting Forever vs. Timeout: decide on your boot philosophy—is it to continually try to boot (why give up after 'n' attempts?) or to try a deterministic number of times and then to provide some form of appropriate error indication (e.g. turn on an LED).
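A minimal sketch of such a block header follows. The field layout, command values and checksum choice are hypothetical, intended only to illustrate the header-plus-payload decomposition described above:

#include <stdint.h>

/* Hypothetical boot-block command identifiers. */
enum boot_cmd {
    BOOT_CMD_LOAD  = 0x01,   /* copy the payload to the given address        */
    BOOT_CMD_POKE  = 0x02,   /* write a single register (small patches)      */
    BOOT_CMD_PEEK  = 0x03,   /* read back a register for diagnostics         */
    BOOT_CMD_GOSUB = 0x04,   /* execute the payload as a subroutine          */
    BOOT_CMD_END   = 0xFF    /* last block: jump to the firmware entry point */
};

/* Hypothetical fixed-size header preceding each data payload in the image. */
struct boot_block_header {
    uint8_t  cmd;        /* one of enum boot_cmd                      */
    uint8_t  flags;      /* e.g. payload compressed, payload signed   */
    uint16_t reserved;
    uint32_t address;    /* load/poke/peek target address             */
    uint32_t length;     /* payload length in bytes (may be zero)     */
    uint32_t checksum;   /* simple additive checksum over the payload */
};

Because every block is self-describing, peek/poke or gosub blocks can be placed ahead of the main firmware payload after the silicon returns, without any change to the ROM itself.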


Forces
The following requirements add to the complexity of the Boot ROM functionality, and must be taken into account when assessing Boot ROM risk and related recovery techniques:
• the desire to reduce the time taken to bootstrap the device may necessitate variable-length blocks and compression techniques;
• the desire to reduce the size of firmware storage devices may necessitate compression techniques;
• the desire to keep the device secure for the purposes of Digital Rights Management (DRM) may necessitate secure firmware and booting techniques.

7.6.9 Provide Software Test Plans Early, Hardware Features Early

Context
• Software teams tend to be lax when it comes to test plans, due in part to software's changeability if bugs are found in it;
• Hardware teams tend to be reluctant to commit to aggressive design changes, due in part to hardware's non-changeability if bugs are introduced.

Problem Bugs and design oversights can slip through the validation/verification net due to lack of rigorous definition of software test and mis-design of hardware functionality and features. An intrinsic aspect of testing the software on an SoC ASIC device is to test not only the hardware implementation, but also to validate the hardware design.

Solution
Software teams need to focus on test plans early on, to focus developers on ensuring all applicable use cases are considered—especially as to how they will be implemented upon the available hardware. Significant feature deficiencies in the hardware design need to be identified, debated, and acted upon early in the design cycle to stand any chance of having solutions implemented and verified in hardware. Otherwise, software may be required to implement cumbersome workarounds that degrade the product's ability to address the market.


Forces Developer inertia and culture.

7.7 Summary

This chapter presented, in pattern form, techniques that emerged from my research data to help mitigate cross-functional development stress. The patterns, summarised in Table 7.1, describe workflow organisations that are advantageous for cross-functional reduction of development stress. The patterns presented in this chapter are organised into three distinct groupings. Communication amongst teams is essential, and the theoretical model presented in Figure 6.1 shows that GSD and techno-cultural effects mediate communication between digital hardware and embedded software developers. The first grouping addresses these communication mediators:
• Patterns of Social Interaction—patterns in this group focus on mitigating loss of tacit knowledge between groups through fostering both formal and informal means of social interaction. Cross-familiarity with other teams' skill sets and capabilities is extremely important, both for fostering good social interaction and trust, and for enabling tacit knowledge transfer—all of which help in technical risk reduction, and in expediting resolution of technical stumbling blocks.
The theoretical model presented in Figure 6.1 shows that, along with techno-cultural symptoms, technical determinisms do have influence upon work practice. The remaining two groupings of patterns address these influences—specifically FPGA-related issues (where hardware and software logic commonly first meet for integration purposes), and concurrent development issues:
• FPGA Patterns—patterns in this group focus on ensuring that development best practices aren't neglected for FPGA, such as ensuring design traceability, proper dependency management, a predictable FPGA build schedule, and formal build numbering;
• Development Patterns—patterns in this group focus on gaining and maintaining development momentum through regular builds (thus benefiting from Parkinson's Law and Boehm's Deadline Effect), sharing hardware driver code across teams, keeping the firmware team involved in hardware testing and familiar with the restrictions and limitations of simulation environments, and introducing agile development practices where practical within the development cycle.


Chapter 8 Validation of Theoretical Model



Prolonged, indiscriminate reviewing of books is a quite exceptionally thankless, irritating and exhausting job. It not only involves praising trash but constantly inventing reactions towards books about which one has no spontaneous feeling whatever. — GEORGE ORWELL



1903–1950, British Author.

8.1 Introduction

Building on the analysis of the emerged themes identified by my theoretical model (in Chapter 6, and summarised in Table 6.1), this chapter validates these themes through comparison with pre-existing consumer electronics and semiconductor project development literature, and pre-existing literature in other disciplines.

8.2 Business Themes in Literature

My theoretical model shows that market segment and the choice of business model have the strongest impact on the way in which technology is designed, developed and verified, as illustrated in the model extract presented in Figure 8.1. Business themes that emerged include tight market windows, the influence of the business model within the sector, and the requirement for justification of engineering investment (specifically verification). That a company's business model has a dramatic effect on engineering practice is certainly intuitive, and has a basis in the literature. According to Jacobson et al.:
'a business model shows what the company's environment is and how the company acts in relation to this environment. By environment we mean everything the company interacts with to perform its business processes, such as customers, partners, subcontractors and so on. It shows employees at every level what must be done and when and how it should be done' (Jacobson et al., 1994, as cited in Potok and Vouk, 1997).

Commercial software development, and by extension hardware development, always needs to be considered in the context of the business model (Hansen, 1996).

Figure 8.1: Influence of Business Themes in Theoretical Model.

The inherency of risk is a fact of conducting business in the semiconductor industry, especially for small start-up companies focusing on technological niches:
'By Mitchell's own admission, building a semiconductor company is fundamentally different from building a traditional business. Movidia will not have any revenues until late next year at the earliest, will have "losses of millions and millions for years" and will need investors with deep pockets and lots of patience.' (Daly, 2009)

Sangwan and Neill (2007) recognise that the architectural design of software needs to be aligned with the corporation's business model and goals. However, their work focuses more on software automation of enterprise resource planning than on software as an end product. Hohmann (2007) takes issue with some aspects of Sangwan and Neill (2007), specifically noting that business goals evolve as businesses change over time. Additionally, Hohmann notes that goals (and, by extension, business processes and software architectures) need tailoring to the market:
'The authors also imply that the business goals of a company can apply uniformly to all of the companies products and services. As organisations become more complex, the business goals become more abstract. Divisional goals set in. And these must be further re-interpreted to the needs of specific markets.' (Hohmann, 2007)

For a more systematic approach to addressing these issues, Hohmann recommends his own book, which notes that:
'You need to move beyond software architecture and move toward understanding and embracing the business issues that must be resolved in order to create a winning solution.' (Hohmann, 2003)

Accepting that business objectives determine engineering goals and affect engineering practice, what exactly are the business concerns in the consumer electronics semiconductor sector? The consumer electronics market is characterised by low-cost, highly functional products with aggressive schedules that are driven by tight windows of market opportunity. Analyst Jackie Fenn of US research firm the Gartner Group created a graphical modelling tool called the Hype Cycle (Fenn, 1995) (illustrated in Figure 8.2) to represent the commercial maturity and market acceptance of an emerging technology.

Figure 8.2: Gartner's 'New Technology Hype Cycle'. Taken from Fenn (1995).

Gartner have successfully used Hype Cycles since 1995 to track the pattern of early over-enthusiasm and subsequent disappointment that occurs with the introduction of new technology. The Hype Cycle is composed of five distinct phases:
• 'Technology Trigger' - the first phase is a breakthrough, product launch or other event that generates significant media and (early-adopter) consumer interest;
• 'Peak of Inflated Expectations' - following this trigger typically comes a frenzy of publicity which generates over-enthusiasm and unrealistic expectations;
• 'Trough of Disillusionment' - failing to meet the expectations of early adopters, technologies enter the 'trough of disillusionment' and quickly become unfashionable with the media;
• 'Slope of Enlightenment' - at this point, technology maturity and market understanding converge (as illustrated in Figure 8.2) to enable some successful experimentation and application of the technology, and its practical benefits are realised;
• 'Plateau of Productivity' - once these benefits are widely demonstrated and accepted, the technology becomes stable, and may evolve into subsequent technology generations. The overall market size achieved in the plateau is determined by the scope of applicability of the technological solution.
The Hype Cycle is a useful visualisation tool that shows the potential addressable market for a technology. It is important to realise that the successful progression of a technology from phase to phase within the model is not guaranteed.

Figure 8.3: Components of the Hype Cycle: (a) Hype Level; (b) Technology Maturity / Engineering Capability; (c) Combined. Taken from Bresciani and Eppler (2008).

Many technologies fail to emerge from the 'trough of disillusionment'. The model also serves to show the fleeting temporal nature of market opportunity windows - especially the early 'peak of inflated expectations':
'It is better to time it to arrive too late to a market than too early. If you arrive too late, you've just sacrificed some of your potential addressable market to your competitors. If you arrive too early, particularly as a small Irish start-up, chances are you've burned through your venture capital before the market takes off.' — INTERVIEWEE ROBERT

8.2.1 Commercial Realities of Software Development

Business factors introduce risk through complex feature requirements, aggressive schedules and tight market windows. These risks are addressed in part through validation of design and verification of implementation. We now consider the differences between hardware and software in how verification effort is justified and costed. Reeves (1992) provides some insightful views into the realities of modern software development, arguing that:
• Software is cheap to build, and getting increasingly cheaper as computers get faster;


• In contrast, software is expensive to design, due to ever-increasing inherent product complexity; ‘Designing software is an exercise in managing complexity. The complexity exists within the software design itself, within the software organization of the company, and within the industry as a whole.’ (Reeves, 1992)

• As a result, historical formal engineering validation methods are not directly applicable—the economics are such that it is cheaper to build and test designs than to prove them formally;
• Software development is currently still more a craft than an engineering discipline because it lacks sufficient rigour in validation;
• Real software development advances require new developments in programming paradigms, which ultimately yield more effective programming languages.
This argument is based on the flexibility and changeability of software, versus the hardness of other engineering disciplines. Reeves (1992) suggests that this fundamentally affects the engineering approach that needs to be taken to software development. Reeves further takes the approach that the ultimate outcome of any engineering activity is documentation—documentation of a design that is to be handed off to some manufacturing process. With this definition, all aspects of developing software are part of the design process—from state machine visualisation through coding and verification. Coding is a more tangible form of validation of the theoretical design (i.e. varying degrees of physicality/virtuality/abstraction—this mirrors, to some degree, the consistency checking that occurs in hardware design). A software design is not, therefore, complete until it is coded and tested. Naur takes the approach that the code itself is the documentation, and that any additional documentation is a secondary activity:
'. . . programming in this sense primarily must be the programmers' building up knowledge of a certain kind, knowledge taken to be basically the programmers' immediate possession, any documentation being an auxiliary, secondary product.' (Naur, 1985)

In contrast with the nature of software development, Reeves proclaims the following about hardware design:
• Complex hardware designs have expensive build phases—thus there is a much smaller number of companies producing truly complex hardware than complex software;
• As a result, 'the software industry is not likely to find solutions to its problems by trying to emulate hardware developers'—rather, with EDA tool advances, 'hardware engineering is becoming more and more like software development'.


Reeves summarises that coding, test and debugging should be recognised as legitimate design and engineering phases of software development—and that more formal methods of validating designs are not performed (in general) because of 'the simple economics of the software build cycle.' Humphrey (1990) notes that:
'For software, much as with hardware, . . . (the cost of defect prevention) generally includes the costs of testing, repair, customer dissatisfaction, and warranty and field service. Unless one considers both sides of this equation, it is impossible to make an intelligent decision on the matter.'

Test investment is a business decision, as is the expending of any commercial resource. That software can be rigorously tested like hardware is certain. For example, Feynman (1986) applauds the strict and rigorous engineering approach employed in the verification of software in the safety-critical systems of the shuttle—whilst also acknowledging the associated cost of this rigour:
'. . . there have been recent suggestions by management to curtail such elaborate and expensive tests as being unnecessary at this late date in Shuttle history. This must be resisted for it does not appreciate the mutual subtle influences, and sources of error generated by even small changes of one part of a program on another. . . Changes are expensive because they require extensive testing. The proper way to save money is to curtail the number of requested changes, not the quality of testing for each.' (Feynman, 1986)

It is important to realise that, in order to meet the aggressive schedules proposed for the product, software testing is time and resource constrained. The cost of test and customer-facing defect prevention through test needs to be carefully considered and an appropriate engineering compromise made. In consumer electronics semiconductor projects, project 'guilt' as regards constrained testing is subordinate to the facts that judicious use of resources on testing software is necessary, and that products can (and do) ship with known (hopefully minor, or cosmetic) bugs (Boran, 2009).

8.3 Risk Themes in Literature

My theoretical model shows that risks are generated by business realities, but differ in their strength of influence on hardware and software development activities (Figure 8.4). Risk themes that emerged include misunderstanding market opportunities, specifying a product incorrectly to address a market need, technical implementation error, social interaction difficulties, and the difference in how risk itself is perceived ("weight of risk"). The general concept of commercial risk is well documented and understood in literature (Burton, 2008; Risk Management, 2004). There are many different risks present in the various business models of the semiconductor industry. It is relevant at this point, however, to specifically note that ITRS Update (2008) declares that 'embedded software . . . has emerged as the most critical challenge to SOC productivity'. In an industry that has traditionally been hardware-dominant, software does not always get the attention and due care that it needs.


Figure 8.4: Influence of Risk Themes in Theoretical Model.

Wolf (2006) describes how embedded computing needs 'a new generation of managers who better understand embedded software'. We have already discussed the finding that hardware engineering is acutely averse to risk, and strives to migrate this risk up to the software realm where possible. With this in mind, consider Finholt and Birnholtz (2006), who state some of the types of difficulties encountered when cultures of different technical disciplines meet—particularly cultures with different levels of risk aversion. They note that there is 'a greater than normal chance for misunderstanding and mistrust' due to these cultural differences, resulting in 'awkward first contacts' and subsequent difficulties in establishing social amicability and trust. In this research, techno-cultural differences have exhibited themselves in the approach to system validation. Interviewees have mentioned the rigour brought to system verification by hardware engineers (verifying the "nitty-gritty"), and the system knowledge/use-case appreciation brought by software engineers (verifying the product in its intended use). Park (1998) acknowledges this duality of approach, in proposing that:

‘. . . only by adopting a methodology that supports the concurrent design and verification of the hardware and software elements of the system can the concept of SOCs be converted into a reality.’


Software skills in terms of automation are valuable assets to hardware development in catching system-level functional interactions (Babin, 2003), particularly as they allow much more testing to be performed than is possible in hardware simulation testing. Mayrhauser et al. (2000) highlight some of the terminology differences between hardware and software teams when considering verification. They consider 'a VHDL model as a software routine with some specific hardware information such as hardware delays and triggering mechanisms', and differentiate between hardware verification and software verification as follows:

‘We deliberately use verification instead of test to distinguish our work from some other existing research activities. “Verification” in our hardware design means a process to uncover design faults rather than manufacturing defects. However, in the software engineering community, “verification” refers to verifying a software program using formal methods, whereas “test” means exercising a program in order to uncover faults, including design faults’ (Mayrhauser et al., 2000).

8.4 Socio-Geographic Themes in Literature

Figure 8.5: Influence of Social Themes in Theoretical Model.

My research has emphasised the importance of sociological aspects in team coordination for semiconductor SoC projects, as illustrated in the model extract presented in Figure 8.5. Social themes that emerged include the requirement for team familiarity, the establishment of trust, the need for tacit information transfer and the irrelevance of technical specialisation as regards the importance of social issues.


This section looks at the references in the literature to the sociology of general computing, leading into the sociology of SoC development specifically.

8.4.1 The Sociology of Computing

In hindsight, it is not at all surprising that social aspects are central to getting SoC projects to run efficiently and effectively. It is through social interactions that developers 'determine what is to be taken for granted and what is to be changed' (Eder, 2007). This is true in all aspects of human life, and certainly in the workplace. Gutierrez and Kouvelis (1991) recognise the need to 'account for work force behavioral effects on the expected project completion time'. Perry (1997) notes that:
'One of the central premises of sociology is that all activity is social in nature (Schmidt, 1991): it is situated within a social context, mediated by social pressures and learned in a social milieu. Work is a social activity, with goals and operations defined by the social context that individuals are immersed in.'

There are obvious parallels both within the domain of computer science (especially software—literature referring to the sociology of hardware development is scarce), and also outside it in other distributed team activities (El-Tayeh and Gil, 2007). Computer science and engineering are no different in this respect to any other field of human endeavour. Tedre (2006) proclaims that:

This applies equally to the technical specialisations of hardware (both digital and analog) and software. Gabriel (1996) writes that: ‘Software development is done by people with human concerns; although someday this component will be a smaller part of the total picture, today it is the high-order bit. The software community approaches these issues with high hopes and a pride in the term engineer. I approach it as a critic. . .

Gabriel argues that ‘technology, science, engineering and the company organization’ are all secondary to the human concerns in the endeavour, and that they will ‘ultimately fail when humanity is forgotten’. He continues with an observation, also made by Nerur and Balijepally (2007), namely that: ‘Alexander knew this, but his followers in the software pattern language community do not. Computer scientists and developers don’t seem to know it, either.’

The interviewees of this research made similar points, in ranking the risk of social incompatibility higher (in terms of potential consequences) than technical or business risks—see Figure 6.6 (on page 132).


Figure 8.6: Domain Bounds of Technology. Adapted from Botha (2005).

Botha (2005) presents a simple but informative graphic (depicted in Figure 8.6) which describes technology as 'the culmination of scientific effort, being applied in such a way that it makes economic sense within a specific social setting'. Tedre (op. cit.) is concerned with arguing for the need to broaden computer science with perspectives from the disciplines of sociology, history, anthropology and philosophy. In this regard, he proposes that:

‘. . . ethnomethodological approaches in social studies of computer science may benefit computer science (both as an activity and as a body of knowledge) to the extent that they can expose how the philosophical, theoretical, conceptual, and methodological frameworks of computer science are created, maintained, and managed.’

Furthermore, Tedre (op. cit.) claims that such studies can expose how the ‘processes’ through which practitioners develop are later ‘perceived as something other than human products’: ‘Social studies of computer science can explicate implicit assumptions, shared attitudes, and tacit knowledge. Social studies of computer science produces unique meta-knowledge of computer science. Meta-knowledge (knowledge about knowledge) is an important aspect of understanding computer science, because it can offer insight into even the most insightful theories of computer science.’

I believe that this research has indeed generated meta-knowledge about the digital hardware and embedded software teams working on a single semiconductor SoC project—specifically with reference to influencing factors such as business model factors (especially risk), socio-cultural factors and techno-deterministic factors. Berger (1998) makes reference to the 'sociology of SOC development' when touting the benefits of 'HW/SW Co-design and Co-verification':
'. . . working together implies that HW and SW designers must be aware of each other's needs and development process. This may, or may not, be a significant change for the development team and how they do their job.'

Berger's paper covers the mechanics of ensuring software involvement on 'virtual hardware' early enough to catch errors before they require ASIC re-spins:


‘What if the HW and SW designers worked together through the design and integration process so that debugging became an incremental rather than cataclysmic process? How would such a heretical suggestion actually occur in real life?’

Berger (op. cit.) states that ‘co-verification’, a process which he describes as ‘akin to incremental debugging’, ensures that: ‘ . . . the correctness of the hardware and the software pieces can be tested at the unit test level so that what eventually gets sent to the foundry is correct by design, not with the expectation of removing the bugs when the prototype comes back from the fab.’

Nowadays, this practice is common, with FPGAs (relatively inexpensively) filling the role of virtual hardware (Jaeger, 2007), as identified by Berger. However, it is interesting to note that Berger (op. cit.) makes reference to the sociological aspects of digital hardware and embedded software interaction—albeit with the techno-centric focus of tools and engineering process. Wilson (2003) furthers the discourse on ‘non-technical issues for SoC design’ eponymously, claiming that the ‘organizational requirements that a system-level IC imposes upon its design team’ are infrequently considered: ‘. . . structural issues assert themselves in SoC designs that occur infrequently in other types of chip design. These issues involve how the SoC design team is partitioned and how the subgroups communicate amongst themselves.’

Coudert (2002) acknowledges the problem of: ‘overall complexity of a chip design, which is often divided into several blocks, with several independent design teams working in parallel, each at a different page. This problem is inherent to project management . . . ’

Wilson (2003) focuses on the aspects of SoC design that differentiate ‘system-level chips from other IC undertakings’, namely:

• ‘system-level’—the fact that SoC designs contain a large number of functionally important circuits affects how the system design group, the verification team and the software team will inter-relate; • diversity of function that must be integrated—this level of diversity generally has the consequence that the various blocks ‘within the finished design will have to come from different groups, some within and some outside of the design team’;

• the need to communicate with foundry process engineers—this is ‘a significant organizational issue that must be explicitly considered by the design team’.

Wilson (op. cit.) considered specifically the interfaces 'between members of the design team and other groups with whom they must share information', but did not look in depth at the perspective differences intra-design team, i.e. specifically between digital hardware and embedded software.


Donnellan and Kelly (2005) note many of the forces which impact business agility within the semiconductor industry, and separate these forces into two categories:
• Inter-organisational factors—competing through standardisation, and vendor/customer relationship management;
• Intra-organisational factors—virtual teams, cross-functional collaboration.
In a keynote address to the Design, Automation and Test conference in Europe 2006, Rhines (2006) recognises that modern electronic systems require collaboration and information flow across groups of 'design specialists who are becoming more dispersed geographically and organisationally'. Rhines discusses the impact this has on future design methodology evolution in the context of Electronic Design Automation (EDA) tools provision.1 He briefly touches on the conflict between the roles of software architect and chip designer, and on the potential use of "C synthesis" as a more productive design sharing tool than a system-level specification. C synthesis is presented as 'enabling faster architectural exploration and shorter time to RTL'.

Pennington and Grabowski (1990) present the view that:
'Programming is a complex cognitive and social task composed of a variety of interacting subtasks and involving several kinds of specialised knowledge.'

Studies of (social) behavioural issues and cognitive effects in software development have a rich history tracing back to the late 1970s (Rosson, 1996). Beyond this, however, are the studies which place the programmer in the organisational context. The motivation is thus: ‘Professional programmers spend considerable time communicating with others in their organization, both individually and as part of a group. Thus the analysis of communication problems. . . is a key element in understanding how to better support the software development process.’ (Rosson, 1996)

Curtis et al. (1988) describe psychological paradigms that have been used in studying programming—as illustrated in Figure 8.7. Beyond basic cognition and personal motivation, these include: Group Dynamics - organisation and imposition of social behaviour on cognitive requirements of programming skills, interaction of methodology and team process; and Behaviour in Programming Organisations - impact of organisational factors on programming, communication and co-ordination breakdown.

1 Rhines is chairman and CEO of Mentor Graphics, a leading vendor of EDA technology, and there is an evident EDA-centric bias in the presentation.


Curtis et al. (1988) notes that the techno-deterministic effects of tools and the effects of processes can be seen to be relatively small in comparison to that of behavioural (human and organisational) factors on software productivity. The consequence of this is that ‘an effective model of Information Systems Development (ISD) processes must therefore support behavioural factors as well as technical ones’ (Gasson, 1993).

The layered behavioural model presented in Figure 8.7 focuses attention on ‘the behavior of those creating the artifact, rather than on the evolutionary behavior of the artifact through its development stages’ (Curtis et al., 1988).

These results are very similar to what I uncovered in my research, albeit in my case with the slightly different technical disciplines of digital hardware and embedded software included. Additionally, this gives extra credence to the notion that digital hardware design, from a sociological perspective, can be considered as an equivalent activity to embedded software design.

(The figure depicts the layers of analysis: business milieu, company, project, team and individual, set against cognition and motivation, group dynamics, and organisational behaviour.)

Figure 8.7: Layered Behavioural Model of Software Development.

Taken from Curtis et al. (1988).

Based on the supposition that digital hardware SoC development is, or can be treated as, a form of programming, I propose that the layered behavioural model of the influences in software development (Curtis et al., 1988) is also valid for digital hardware development, and specifically with a view to understanding development stress. The model is interesting expressly because it focuses attention on the motivations and behaviours of the engineers/developers creating the artifacts, rather than on the iteratively increasing functionality and capabilities of the product itself.

Curtis and Walz (1990) discuss the communication and co-ordination breakdowns that can occur in this model, noting that 'good people must become involved in the myriad social and organizational processes'—performing such tasks as resolving conflicts, negotiating, ensuring a shared consistent vision, and fostering communications between groups.


Curtis and Walz (1990) argued at the time that the scientific research community needed more focus on the psychology of software development, as opposed to solely the cognition of programming:

‘Better research on team and organizational factors may increase our ability to account for variation in software productivity and quality, and thus our ability to manage large systems development.’

Wilson (2003) identified that:

‘the “systemness” of the SoC causes a profound change in the way the chip design team relates to other teams: more specifically the system design group, the verification team and the software team. . . . when information must flow between groups that are isolated from each other . . . an interface is created. We use the word interface intentionally, because the analogy to an electronic interface is quite strong.’

Wilson’s concept of interface is illustrated in Figure 8.8. The model usefully identifies the conduits of communication that must be nurtured to achieve a successful project. Based on my research, this model is slightly unbalanced—and we will return to this suggestion in Subsection 8.5.1.

Figure 8.8: Wilson’s Concept of (Social) Interface in SoC Design, showing the SoC design team at the centre, interfacing with system design, software development, IP providers and the foundry. Taken from Wilson (2003).

8.4.2 Globally Distributed Teams

Mitchell and Zigurs (2009) state that teams exist to create value for organisations, and that globally dispersed teams exist to bring together individuals from different areas of experience, allowing

8. VALIDATION OF THEORETICAL MODEL

8 . 4 . S OC I O-G EOG RA P HI C T HEM ES

access to a global market.

Ortiz and Harwood (2007) list the conditions (identified by Allport, 1954) necessary for inter-group social contact2 to produce positive outcomes, namely:
• equal status amongst the groups;
• shared common goals;
• working together to achieve these goals;
• a social culture which favours inter-group cooperation and interaction.

Thomson et al. (2007) discuss the concept of distributed design teams. Thomson et al. note that although communication is less likely between team members as their physical separation increases, there is more to geographically distributed design than just sharing information: “a wide range of factors affect its effectiveness and success”. Thomson et al. note that design is a highly diverse social activity, relying heavily for productivity on trust of peers, effective communication and flow of information, and ensuring work satisfaction.

Pearn Kandola were commissioned by Cisco Systems to produce a report on the increasing trend of globalisation and its implications for knowledge management and distributed team interworking (Shearsmith, 2006). The key points in the report are:
• People must have trust in order to communicate effectively—however, communication is essential in order to establish and develop trust;
• Spontaneous and clear communication is the key to reducing conflict in teams—especially so in virtual teams, where there is more ambiguity about what remote colleagues are doing;
• Both of these are vastly more difficult to achieve in a distributed working environment.

Mayer et al. (1995) define trust as ‘willingness to be vulnerable, based on positive expectations about the actions of others’. There is agreement that trust is a significant indicator of more effective team

interworking, and that lack of trust results in frictions, and difficulties in remote delegation. Mitchell and Zigurs (2009) note that ‘when team members trust one another, they typically produce higher quality outcomes’ whilst, conversely, Zheng et al. (2002) state that ‘when people mistrust, they most often manifest it in withholding group investment rather than in outright defection—making promises and not keeping them.’

Mitchell and Zigurs (2009) summarise existing research on trust in virtual teams into a conceptual framework, noting that ‘. . . virtuality in geography, time, and other dimensions can create barriers to trust.’

2 Allport is credited with the development of Intergroup Contact Theory and the Contact Hypothesis, which suggests that properly managed interpersonal contact between groups can reduce interaction problems between them—specifically in the context of racial tensions.

Bos et al. (2002) note that ‘richer media’ are better for building trust, qualifying

the trust formed in terms of its fragility and in how quickly it is established—trust from richer media tends to be less fragile and more quickly established; trust from highly computer-mediated communication tends to be fragile and delayed. Zheng et al. (2002) discuss a number of implications of the significance of trust:
• of how lack of trust has been shown to be a hindrance to certain tasks;
• of how trust of remote colleagues is significantly lower than that of co-located colleagues;
• of how face-to-face meetings facilitate the development of trust more effectively than email;
• of how engaging in social activities (even in a computer-mediated context) can be surprisingly effective in jump-starting the relationship of trust between individuals.

It is interesting to note the cognitive propensity towards face-to-face meetings, despite the ironic suggestion in research of a tendency for deception within face-to-face as a medium (Hancock et al., 2004). Finholt and Birnholtz (2006) and Bos et al. (2001, 2002) also identify face-to-face as important in establishing social familiarity and helping to build trust. Finholt and Birnholtz (2006) suggest that:

‘Projects should encourage a number of ways to communicate, both formal and informal. . . The ability to associate a face and a friendly relationship with a name that otherwise appears only in one’s email inbox often protects against harsh attributions that can arise between participants from different professional cultures.’ (Finholt and Birnholtz, 2006)

This informal, tacit knowledge (also referred to as sticky information—von Hippel, 1994) is difficult to catalogue (developers are not aware they have it), and difficult to transfer and use in other locations (developers in these remote locations are not aware that they do not possess this knowledge, and likewise do not know what information to solicit). The importance of establishing familiarity and trust to informal knowledge transfer is demonstrated by Collins (2001) as ‘. . . social contact . . . can transmit not only tacit knowledge but trust in a result, even before it has been accomplished or witnessed’: ‘Russian measurements of the quality factor (Q) of sapphire, made 20 years ago, have only just been repeated in the West. Shortfalls in tacit knowledge have been partly responsible for this delay.’ (Collins, 2001)

Social familiarity and amicability of team members work well in beneficially mediating communication across distances, preventing ‘uninhibited behavior’, ‘flaming’, formation of location-based groups and cliques, and over-assertiveness / unwillingness to compromise (Orenga Castellá et al., 2000).


One related aspect to that of trust establishment is ‘Not-Invented-Here’ syndrome (Katz and Allen, 1982), the unfortunate tendency amongst developers to attribute value to code developed within their local group whilst devaluing code written by other groups. In this respect, it is similar to the biases of Fundamental Attribution Error (Ross, 1977) and Correspondence Bias (Gilbert and Malone, 1995) when attributing notional blame to an individual’s actions. Katz and Allen (1982) noted that the performance of a distributed team increases over time up to 1.5 years of tenure, holds steady until 5 years, and then declines. Teams suffering from Not Invented Here (NIH) syndrome can impose hostile barriers to innovation and to process and technology transfer between groups, and thus collective team tenure needs to be taken into consideration when establishing trust with external groups.

Ernst (2004) looks specifically at the limits of modularity3 and their impact as a catalyst for change on business organisation and industry structure. In doing so, Ernst (op. cit.) specifically addresses SoC design, describing how:

‘Of equal importance however is the second objective of “iterated co-design”, i.e. to coordinate the multiple interfaces that reflect the growing complexity of SoC design. . . The diversity of functions that must be integrated into the chip means that “various blocks within the finished design will have come from different groups, some within and some outside the design team. Some of these groups . . . may not share a vocabulary, or even a language and culture with the primary chip design group.” (Wilson, 2003).’

Ernst (2004) introduces the term Global Design Network (GDN) to cover the groups of firms that ‘participate’ in large distributed design ecosystems: this list includes ‘system companies; integrated device manufacturers (IDMs); providers of electronic manufacturing services (EMSs) and design services (the so-called ODMs, or “original-design-manufacturers”); “fabless” chip design houses; “chipless” licensors of “silicon intellectual properties” (SIPs); chip contract manufacturers (“foundries”); vendors of electronic design automation (EDA) tools; chip packaging and testing companies; and design implementation service providers; and institutes and universities (both private and public)’.

Figure 8.10 summarises the concept of trust and its significance in team interaction. Chiu et al. (2006) describes both Social Cognitive Theory (which defines human behaviour as ‘a triadic, dynamic, and reciprocal interaction of personal factors, behavior, and the social network (system)’) and Social Capital Theory (which suggests that ‘social capital, the network of relationships possessed by an individual or a social network and the set of resources embedded within it, strongly influence the extent to which interpersonal knowledge sharing occurs’) as important aspects of models attempting to understand the motivations behind people’s knowledge sharing in virtual communities.

3 Modularity in this context is the concept of building complex systems from smaller subsystems that are designed, implemented and verified separately, and yet function together as a larger unit—a great example being the many hardware blocks and software modules of a complex SoC design.


Chiu et al. (2006) found that ‘social interaction ties, reciprocity, and identification’ were important to increasing the quantity of information sharing, but not directly to knowledge quality. Additionally, they found that trust did not have a significant impact on the quantity of knowledge shared—however, they do present the argument that trust is perhaps ‘not crucial in less risky knowledge sharing relationships’. In this respect, trust is of vital importance when both a hardware team and a software team are tasked with verification and validation of an SoC design prior to tape-out. Interestingly, Chiu et al. (2006) found that ‘shared language did not have a significant impact on quantity of knowledge sharing’, and postulate that this may be due to the fact that ‘with shared language and vision, contributors focus more on quality rather than the quantity of contributions’. This is consistent with my findings that teams of the same technical specialisation (e.g. two software teams) are more likely to be critical of each other’s work than teams of different specialisation (e.g. a software team and a digital hardware team).

8.5 Technical Themes in Literature

My theoretical model presented techno-cultural effects as a communication mediator, influencing the transfer of design and process knowledge between digital hardware and embedded software teams. This is illustrated in the theoretical model extract presented in Figure 8.9.

Figure 8.9: Influence of Techno-Cultural Themes in Theoretical Model, showing techno-cultural effects (linguistic, technical, ambiguity, time, culture) acting, alongside social/GSD factors and business realities, as communication mediators between the embedded software and digital hardware design teams.

Futurist Ray Kurzweil has claimed that collective human knowledge is doubling every 12 months (Wolf, 2008), in line with Kryder’s Law and ahead of Moore’s Law.

Figure 8.10: Refined Model of Trust (from Literature). The model links the need for trust (Mayer et al., 1995; Schoorman et al., 2007), its establishment (jump-started through face-to-face contact: Zheng et al., 2002) and its maintenance factors (Jalali and Zlatkovic, 2009) with its consequences: established trust facilitates the transfer of knowledge, especially tacit knowledge (Collins, 2001), whereas mistrust stifles it (Mitchell and Zigurs, 2009); trust formed face-to-face tends to be secure, whilst trust formed through CSCW tends to be fragile and delayed (Bos et al., 2001, 2002); fear of reprisal and blame (Shearsmith, 2006) relate to perceived risk, control and power, with culture and reciprocity (Siakas and Siakas, 2007) shaping whether attributions are made to the environment (if trusted) or to the individual (if untrusted), risking Fundamental Attribution Error (Ross, 1977, cited in Gilbert and Malone, 1995) and Correspondence Bias (Gilbert and Malone, 1995); and NIH syndrome sets in after 5 years of team tenure (Katz and Allen, 1982).

Future Research Question: Is the differentiator of trust security dependent on how trust was formed, or how it is continually reinforced (as suggested by Bos et al., 2002)?

The skill sets of workers in the EU in 2015 are being altered by the relentless progress of science, technology and engineering (Directorate-General for Research of the European Commission, 2006):
• The amount of information in all fields, particularly technological/scientific, is increasing;
• The half-life of this new knowledge is decreasing; and
• There are concurrent pressures of generalisation (for management) and specialisation (for workers) on the workforce.

The resultant diminishing half-life of knowledge is certainly prevalent in the semiconductor industry, and particularly within the Consumer Electronics sector. Technical specialisation is necessary for competitiveness, but not sufficient—lifelong learning is essential in all related disciplines. It is appropriate at this point to review available literature on the effects of this specialisation. Whilst direct literature pertaining to semiconductor development is not prevalent, we will draw parallels with related work in other fields. We will look at tools, their social context, and the impact they have on cognition and problem solving. We will look further into the nature of coding, and revisit the debate on whether it is a form of art or engineering. We will discuss the importance of mixed skill sets and look at developments within the technical domain which may be bringing a language convergence between the software and digital hardware realms.

8.5.1 Techno-Cultural Perspective Differences

Subsection 3.2.4 discussed the question as to whether software development is a form of engineering or an art. The debate over whether software development is an intuitive or rational process is long standing within the software community (Parnas and Clements, 1996). Software engineering was introduced as a model (NATO Science Committee, 1969) for software development in 1968. Cockburn (2004) argues that this model fails to explain project successes or failures, and fails to aid practitioners to ‘formulate effective’ (and possibly remedial / corrective / contingency) ‘strategies on the fly’. Instead, Cockburn suggests an alternative model4 which depicts software creation as a series of resource-limited, goal-directed cooperative games of invention and cooperation. The fact that it is a series illuminates the ongoing requirement for modification and change through software maintenance. Naur (1985) presents program modification as an intrinsic and expected part of the task of programming.

4 Cockburn sees his alternative model as having applicability not just in the realm of software: ‘it is seen that much of engineering in general belongs in the category of resource-limited cooperative games’ Cockburn (2004).


Kent Beck, creator of Test-Driven Development and eXtreme Programming (XP), defines engineering as the application of theory to practice, and declares software development to be a form of engineering, but one with a subtly different workflow to other forms of engineering (Schwartz, Laporte and Beck, 2009). The reason for this difference, Beck asserts, is the cheapness of reworking software. In other words, it is the cheap malleability (economically, and in terms of time) of software that distinguishes the activity from other engineering endeavours, and indeed from hardware. Despite the similarities of task at a coding level, hardware as a product output is not malleable like software—by its very nature, it is hardened into gates and transistors, conductive tracks and insulators. This metaphysically bifurcated nature of hardware and software therefore undoubtedly has an effect on the engineering processes required to coordinate and manage their creation.

Social Cognitive Theory is a branch of learning theory based on the premise that individuals learn by observing others and imitating those observed actions that are perceived as replicating the behaviour of interest (Bandura, 2001). Social cognitivists argue that individual cognition, the desired behaviour, and the social environment all act in concert to reciprocally influence personal development, as shown in Figure 8.11.

Figure 8.11: Triadic Reciprocal Causation of Social Cognitive Theory, showing personal, behavioural and environmental determinants in reciprocal interaction. Taken from Bandura (2001).

Indeed, the notion that technical undertakings are affected by social environment has not been lost on the software community in particular—cf. Naur (1985) and his discussion on the practice of programming. According to Bertelsen (2000):

Park (1998) alludes to the differences in mindset and perspective that digital hardware and software designers bring to a complex SoC project:

‘Complex electronic designs contain an increasingly significant amount of software code embedded within the final system. Typically, hardware and software designers approach their work from radically different perspectives and use very different tool sets, vocabularies and processes. This multi-faceted approach leads to disconnects throughout the design.’ (Park, 1998)


These disconnects only become apparent during system integration of hardware-software components, when: ‘errors in the hardware/software interfaces and misinterpretation in the system specification first become apparent, although they have most likely existed from very early in the design.’ (Park, 1998)

With the concept of cultural differences between programming teams focused on different problem areas (and thus existing in different social environments) previously documented in literature (Bertelsen, 2000; Naur, 1985), if we then allow the assumption that digital SoC hardware design is a form of programming, then the different cultural perspectives are not only understandable, but to be expected (Green, 1990) as social cognitive theory might suggest: ‘The differences between programming cultures are neither accidental nor short lived . . . How are these cultural differences maintained? There are several mechanisms: Firstly, the pedagogic traditions are very different . . . Secondly, there is considerable social pressure to conform to local culture . . . Thirdly, in certain circumstances there are clear demands for a particular culture’

As previously presented, Wilson’s concept of interface in SoC Design (see Figure 8.8 on page 190) has the SoC design team in the centre of the universe, with the software team (amongst others) on the periphery. Whilst this is a powerful diagram in capturing the interfaces of social discourse and communication that need careful tending during a project, my research disagrees with the impression this figure imparts that the SoC design team is the most central and important entity in the SoC design, with software functions on the periphery. In fact, the term ‘SoC design team’ is under-descriptive—rather, it makes more sense to think of the SoC design team as comprising part of the digital hardware team and part of the embedded software team, as per Figure 8.12. Ernst (2004) notes that a typical SoC design group needs to manage at least six main types of design interface: with digital designers, with Intellectual Property vendors, with software developers, with verification teams, with EDA tool vendors, as well as with foundry services. Additionally, I have introduced an additional group—the SoC architecture team—which is comprised of the key stakeholders from marketing, hardware and software (and possibly also foundry and test) to specify the architecture of the SoC. The reasons for this include the fact that the digital hardware team alone is not best qualified to make the architectural decisions on hardware/software partitioning within the device, or to ascertain whether the device is suitable for addressing market needs. The interviewees in this research agree with this. Some software viewpoints include:

‘I’d actually bring the software guys in the same morning that you’d bring in the hardware guys. . . Right from the get-go, yeah. Otherwise you end up bringing them in three months in(to the product), and. . . software guys have a lot of influence over what it is. . . I mean. . . Software guys are often what you might call the system architect guys at the end of the day.’ — INTERVIEWEE JOHN


Figure 8.12: Modified Concept of (Social) Interface in SoC Design, showing the SoC design team, comprising the hardware and software development teams, interfacing with the SoC architecture team, system design, system verification, IP providers and the foundry. Based on Wilson (2003).

‘From the very beginning the software team should have input on the decisions that shape the hardware architecture. The way I see it is that the hardware is just there for the software. The software guys should definitely have a lot of input on the way that happens.’ — INTERVIEWEE MICHAEL

Hardware engineering interviewees also clearly agreed that software involvement is essential from a system validation perspective at the outset of architectural work: ‘Oh, you are starting off a new chip, and when do you bring in the software team? At the very start. Pull in all the disciplines at the very start. Basically the architecture team, very high level. It is a pattern that works well, because you get results that bit quicker.’ — INTERVIEWEE RICHARD

. . . as did project engineering management, who noted (social) benefits in terms of a sense of shared design ownership, but also of shared incidental knowledge transfer and serendipitous acquisition: ‘Oh, I believe (software team should be brought onboard a joint product development) from the beginning. ‘. . . I think that if you actually build up a mutually agreeable specification of where the partitioning is agreeable to both groups, then you can remove the blame culture that typically builds up between hardware and software teams.


‘. . . I’ve seen the massive mistakes that hardware guys make in terms of assumptions as to what software can actually do. Software will generally pick up all the problems that they (hardware) can’t address. Software will be expected to work around any issue, so let’s leave that for the software guys. And often these issues become the project bottlenecks. Even a very simple one—a system where the hardware guys did not understand that software could not respond to some micro-second timers. The hardware question was “Why can’t they do that?” ’ — INTERVIEWEE JAMES ‘I think (software aspects should be brought into a system design) at the architecture stage, at the start effectively. The earliest opportunity.’ — INTERVIEWEE DAVID

8.5.2 Technical Determinism

Figure 8.13: Influence of Technical Determinism Themes in Theoretical Model, showing technical determinism (tool and tool-flow related issues) influencing the work practices of the digital hardware and embedded software design teams.

Having seen evidence in literature for the deterministic effects of culture—in this case, techno-culture—we now turn our attention to the devices that assist problem solving within these cultural and social environments, i.e. the various tools of the trade. Naur presents the view that people familiar with different tools understand problems and their solutions differently—‘when the tool changes, the problem is not the same anymore’ (Naur, 1965).

Figure 8.14: Naur’s Symmetrical Relation between Tools, Problems and People. Taken from Naur (1965).

The previous sections have shown that all interactions between human beings are mediated by social and cognitive-behavioural psychological systems, but these interactions are also affected by tools and technology:

‘. . . the introduction of a new mediational means creates a kind of imbalance in the systemic organization of mediated action, an imbalance that sets off changes in other elements such as the agent and changes in mediated action in general.’ (Wertsch, 1998)

Technical tools can affect the way we create work artefacts (code, documentation, plans etc.), but also the ways we communicate and interact, and our resultant (technical) culture:


‘. . . (a) seemingly increasing proportion of what people do and seek within practices mediated by new technologies - particularly computing and communications technologies - has nothing directly to do with true and established rules, procedures and standards for knowing.’ (Lankshear and Knobel, 2006)

Gauvain (1998) mentions the ‘human ability to develop intellectual and social skills adapted to the circumstances in which growth occurs’, and how this ‘relies on social and cultural practices that support and maintain desired patterns of development’:

‘Material and symbolic tools, or artifacts (Cole, 1996), are developed and used by cultural communities to support mental activity. Such tools not only enhance thinking but also transform it, and in so doing they channel cognitive development in unique ways.’ (Gauvain, 1998)

These ‘tools for thinking’ enable the sociohistorically formulated capturing and conveying of conventions, knowledge, and practice, from one mind/generation to the next. Cognitive development emerges from social situations, argues Gauvain (1998), and for research to advance, such ‘social systems’ need to be studied in the context of both the ‘developmental processes they help organize’ and the ‘cultural system of meaning and practice they represent’.

Soviet psychologist Lev Semyonovich Vygotsky (1896–1934) proposed theories that higher cognitive functions are a product of social and cultural development—the so-called ‘social dimension of intelligence’ (Moll and Tomasello, 2007). Vygotsky focused on aspects such as culture, collaboration, communication and teaching, and argued that the cognitive abilities of children are mediated by their interactions with ‘others in the culture or with the artefacts and symbols that others have created for communal use’ (Moll and Tomasello, 2007), specifically the culturally provided tools of language and socialisation (Shalizi, 2004). Vygotsky was interested in ‘the ways people “internalize” such tools, learning to do without such (cultural) scaffolding, though it’s necessary for the acquisition of the skill’ (Shalizi, 2004).

Aleksandr Romanovich Luria (1902–1977), a disciple of Vygotsky (Luria, 1973), researched in Uzbekistan in the 1930s, and demonstrated that his subjects were capable of solving problems presented in a concrete fashion which were formally identical to problems they were incapable of solving when presented in a more abstract fashion. Luria discusses the Uzbekistani orality5—and how having an oral-based culture without the mechanisms of literacy favours concept use in a situational way that minimises abstraction (Luria, 1976). Whilst not without its flaws (Shalizi, 2009), Luria’s work raises the notion that a level of abstraction inappropriate to one’s literacy impedes problem solving. This notion is intriguing vis-à-vis hardware engineers presented with abstract system-level problem descriptions—either they

5 Structure of thought, communication and verbal expression, when literacy is not common amongst the population.


disassociate from the problems due to ambiguity’s connotations of risk, or they lack the tools with which to manipulate the problem. Doug Belshaw’s Ed.D. blog6 talks about new literacies (technological and information literacies, amongst others) from the viewpoint of the educator (Belshaw, 2009a,b). Belshaw’s model (Belshaw, 2007) presents literacy as a dynamic, historically situated concept, reflective of the society in which it is defined. Belshaw sees literacy in contemporary philosophy as the combination of skills and the application of knowledge using technology. Based on Belshaw’s concept of a multitude of new information literacies, it is interesting to postulate that software may provide a form of literacy that enables certain forms of cognition, through the cheapness of experimentation it provides. Ihde’s concept of ‘technological intentionality’ (Ihde, 1990) is described in Verbeek (2006) as follows:

‘Artifacts are able to mediate our sensory relationship with reality, and in doing so they transform what we perceive . . . In their mediation of the relationship between humans and world, technologies have “intentions”; they are not neutral instruments but actively help to shape the nature of the relationship that comes about . . . On the basis of their mediating role in human perceptions and interpretations, therefore, technologies can indirectly influence human actions as well.’

Verbeek (2006) further elaborates that:

‘. . . technological mediation appears to be context-dependent, and always entails a translation of action and a transformation of perception. The translation of action has a structure of invitation and inhibition, and the transformation of actions7 a structure of amplification and reduction’.

This may further help explain why hardware designers, when presented with problems they cannot simulate in RTL, tend to switch off and become uninterested in the bug report. The use of technology which is particularly slow and cumbersome tends to have an inhibiting effect on the actions of hardware developers when it comes to trying to simulate the complex, imprecise or large-system bug reports raised by the firmware team. Bruister (2001) takes the viewpoint from the software camp, and alludes to the concepts of technical determinism explored in this thesis when describing the difficulties faced by software designers working on system-on-chip (SoC) solutions:
• Software designers are forced to wait until hardware prototypes are available to finish their development;

6 Belshaw’s blog about working towards the degree of Doctor of Education is at http://dougbelshaw.com/blog.
7 [sic]. Should read ‘perceptions’.


• The co-simulation tools used by hardware designers to verify their designs functionally are ‘rarely used for any useful software development’, as the cycle-accurate simulations they perform are excruciatingly slow for the software developer.

The article describes the plight of software developers who, despite outnumbering ‘hardware developers almost two to one for any given SoC-based design’ (Bruister, 2001), are given short shrift when it comes to tools (Ganssle, 2004b). Furthermore, Bruister (2001) claims that:

‘despite the large software effort, SoC and ASIC design methodologies are very hardware oriented. Software developers are developing embedded-system software the same way system designers develop board-level software’

and concludes that:
• New methods of design are required in order to improve the chances of an SoC project being completed on time with fewer defects;
• Emphasis must be placed on getting the system design finalised early before committing hardware to implementation;
• Bridging the design gap is important to ‘seamlessly bringing software and hardware development’ together to reduce both time-to-market and system flaws.

In addition to the issues identified in Bruister (2001), this research has established that software designers are also faced with the following issues:
• As the software is at the end of the design lifecycle, software design and test schedules get shortened as hardware design cycles slip their due dates;
• Hardware designers rarely see the complete system, and quite often system architecture flaws only become apparent during the software design or system integration phases of the lifecycle;
• Hardware designs are physically committed to chip once tape-out occurs. Fixing a hardware design error is costly, as it may require a re-spin of the chip. Software workarounds are, as a result, seen as cost-effective solutions by hardware design engineers;
• There are large cultural differences between hardware and software designers, and quite often there are technical language difficulties in communicating issues and problems from one group to the other. Barr (2003) concurs with this, stating that ‘too much of the terminology embedded systems engineers use in their everyday oral communications and written documentation is only vaguely defined—at best’;


• The software life cycle lives on long after the hardware design has crystallised into silicon. Software is typically expected to be changeable and upgradable because it can be. However, often the hardware designers have moved on to new projects (both in terms of remit, but also in terms of cognition) by the time bug reports from a previous chip have been identified by the software maintenance and support team as having a hardware relationship;
• Software as an activity is often not credited with the same value as hardware because it is not as physically tangible a product—it is often only seen as a means to an end to sell hardware, despite the significant investment in resources required to develop the software (Vereen, 2004).

Damaševičius et al. (2003) presents the argument that (hardware-centric) design patterns (Gamma et al., 1994) could be used to simplify the integration and customisation of IP hardware components into SoC designs. This is an interesting paper in that it looks across disciplines into the realm of software methodologies and process, and uses the design patterns method for describing structural or behavioural relationships between components. The authors argue that the benefits to be derived from this include:
• Describing components in an abstract and implementation-independent manner, such as design patterns, can significantly raise the level of abstraction;
• It permits the use of standard diagrammatical notation (such as UML in this instance) to ease communication difficulties between the different design teams.

Rincón et al. (2005) continue this argument, suggesting that there is applicability of higher-level behavioural design patterns to hardware design. Rincón et al. (2005) contend that design patterns offer a different form of reuse to the IP/component-based reuse normally found in hardware designs—one which is independent of the implementation specifics.

For the past decade or so, digital hardware designs have been coded in Hardware Description Languages (HDLs) such as Verilog or VHDL. Recent industrial trends in hardware development have involved the creation of new languages for describing hardware designs. New generations of languages have evolved, such as Agility’s proprietary Handel-C (Agility, Inc., 2007; Page, 1997) and the SystemC (Grötker et al., 2002) standards through industry consortia. One of the motivations behind both Handel-C and SystemC is to create languages that would empower the large software development community to develop hardware as well as software—to raise the level of abstraction to a more powerful and expressive syntax, just as the transistor raised it from electrons, and the HDLs and libraries raised it to the abstract hardware constructs of gates and blocks. This is not surprising, since the distinction between hardware and software has become considerably more cloudy of late:


‘At what point does hardware written in Verilog (or VHDL) and compiled into an executable binary format become indistinguishable from software written in C (or any other language) and compiled into an executable binary format?’ (Barr, 2003)

These ‘recent trends towards blurring the boundaries between SW and HW domains’ (Damaševičius et al., 2003) illustrate that there really is little difference between hardware and software design at present—and what difference exists is continually being eroded by developments such as SystemC. Hardware and software developers are using conceptually similar tools, to do conceptually similar tasks, and go through conceptually similar stages in their product lifecycles. Why then does the body of work on process implementation and process improvement that exists for software design and development not exist for its hardware counterpart? Perhaps this is again a cultural issue. Damaševičius et al. (2003) allude to this with ‘Conceptual gap: software designers think in terms of object-oriented design (objects and messages) whereas hardware designers are used to think (sic) in terms of the component-based design (components and wires)’.

Bruister (2001) contends that ‘software developers can write high-level applications’ whereas the concern of the ‘SoC (ASIC or FPGA) developers . . . is to verify the hardware with a cycle-accurate, and ultimately timing-accurate, simulation’. Software designers typically think in terms of system-level frameworks, of components and building blocks. They think of the big picture—and of how the complete solution is intended to work. Hardware designers think in terms of microseconds, or gates, or transistors—‘HDL design, gate-level synthesis, and timing analysis are very involved and relatively slow design processes’ (Bruister, 2001).

Embedded systems development has been a tough audience for development methodologies and process improvement—both from the software/firmware aspects, and also the hardware aspects (Ganssle, 2004a). This may be due to the dominance of the hardware-centric culture. It is the author’s viewpoint that the processes and methodologies brought to bear in software development over the last 25 years or so can not only be used to great effect in hardware design, but also to effect a synergy between the software and hardware components of SoC development. This is particularly of interest to the creation and maintenance of Intellectual Property, where the time to market is a crucial differentiator. IP providers need to be ready for early stages of the Hype Cycle (see Figure 8.2, on page 179).

8.5.3 A Language Convergence

John Backus noted:

‘While it is perhaps natural and inevitable that languages like Fortran and its successors should have developed out of the concept of the von Neumann computer as they did, the fact that such languages have dominated our thinking for twenty years is unfortunate. It is unfortunate because their long-standing familiarity will make it hard for us to understand and adopt new programming styles which one day will offer far greater intellectual and computational power.’ (Backus, 1979)

There are at least two pressures that are encouraging a convergence of language between software and digital hardware:
• From the hardware perspective, the limitations of current logic synthesis techniques, the verbosity of HDLs and their inadequacies for dealing with more abstract concepts, and the significant simulation time required even at the highest levels of the tool flow;
• From the software perspective, the onward move to multi-core as a means of satisfying Moore’s Law and providing more computing power within a practical power envelope, which is forcing a rethink of the Von Neumann architecture that has dominated embedded devices to date.

Logic Synthesis Limitations

HDL synthesis came late into hardware design, being adopted from the software world of compilers. Smith (1997) presents some of the shortcomings of logic synthesis:

‘The current state of synthesis software is rather like learning a foreign language, and then having to talk to a five-year-old. When talking to a logic-synthesis tool using an HDL, it is necessary to think like hardware, anticipating the netlist that logic synthesis will produce.’

Smith (1997) also noted that ‘this situation should improve in the next five years, as logic synthesizers mature’—and whilst the state-of-the-art has certainly moved on between then and the current date (2010), it remains a fact of life that digital hardware developers need to be cognisant of the way the synthesis tools will translate their behavioural designs into structural and physical representations. Up to the mid-90s, most of the delay in a circuit was in the gates, and the impact of the netlist on timing could be disregarded. However, as process geometries scale down, ‘interconnect becomes the predominant factor in determining delay’ (Coudert, 2002). This results in a necessary feedback step from back-end characterisation to the behavioural domain and re-synthesis. There is currently a breakdown in information hiding, specifically in relation to hardware designers’ (required) knowledge of what is happening at lower abstraction levels8:

‘With experience it is possible to recognize what the logic synthesizer is doing by looking at the number of cells, their types, and the drive strengths.’ (Smith, 1997)

8 This may also be inevitable to some degree, with software abstractions (cf. the ‘Law of Leaky Abstractions’ concept— Spolsky, 2002).


Conventional software programming languages are not well suited to the description of hardware circuits. Hardware design languages need to support the following constructs (Dart, 2009):
• hierarchical design;
• well-defined modules with clear inter-module communication mechanisms;
• concurrency;
• a notion of time.

On this last point, Edwards (2005) notes that:

‘Time is absent from the C programming model. It guarantees causality, but says nothing about execution time. A simple model for both programmers and compilers, it can make achieving timing constraints difficult . . . Meeting a performance target under power and cost constraints is usually mandatory in hardware, since it is always easier to implement a function in software. Thus, any hardware synthesis technique needs a way to meet timing constraints.’

Both Handel-C and SystemC have similar goals and aspirations (Edwards, 2005). Both take existing software languages (in the case of Handel-C, the ANSI C programming language; in the case of SystemC, ANSI C++) and extend them through the use of new constructs or classes and templates into a form powerful enough to describe hardware systems. Perhaps the most significant difference between describing hardware and software is the ability to do tasks in parallel in hardware which cannot be done in software9 . As a result, both languages have added mechanisms for running tasks and functions in parallel. Such higher level behavioural languages are making inroads (mostly academic) into hardware design, with SystemC proving popular (Dart, 2009), particularly for modelling. Jones et al. (2003) eloquently hypothesises in this regard: ‘The ability to integrate programming language principles with human problem solving principles when evolving established programming systems may in the future be the factor that differentiates successful applied programming language design research and practice.’
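To make these constructs concrete, the following is a minimal sketch of the style of description that SystemC enables. It assumes the open-source SystemC reference implementation, and the module names, data values and timings are purely illustrative rather than drawn from any design discussed in this research. Two processes execute concurrently, communicate through a well-defined channel, and advance simulated time explicitly, none of which plain C can express directly:

    // Minimal SystemC sketch (illustrative only): two concurrently executing
    // processes communicating over a bounded channel, with explicit simulated time.
    #include <systemc.h>
    #include <iostream>

    SC_MODULE(Producer) {
        sc_fifo_out<int> out;                 // module port onto a FIFO channel
        void run() {
            for (int i = 0; i < 4; ++i) {
                wait(10, SC_NS);              // a notion of time, absent from plain C
                out.write(i);                 // blocking write into the channel
            }
        }
        SC_CTOR(Producer) { SC_THREAD(run); } // registered as a concurrent process
    };

    SC_MODULE(Consumer) {
        sc_fifo_in<int> in;
        void run() {
            while (true) {
                int v = in.read();            // blocks until the producer delivers data
                std::cout << sc_time_stamp() << ": received " << v << std::endl;
            }
        }
        SC_CTOR(Consumer) { SC_THREAD(run); }
    };

    int sc_main(int argc, char* argv[]) {
        sc_fifo<int> channel(2);              // well-defined inter-module communication
        Producer p("producer");
        Consumer c("consumer");
        p.out(channel);                       // hierarchical design: bind ports to the channel
        c.in(channel);
        sc_start(100, SC_NS);                 // both processes run concurrently in simulation
        return 0;
    }

The sketch is deliberately small; the point is that hierarchy, module interconnect, concurrency and time all appear as first-class constructs layered on ordinary C++, which is precisely the abstraction-raising ambition attributed to Handel-C and SystemC above.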

However, it may be with the Actor Model and functional languages (Nyström et al., 2008) that a convergence point is achieved between multi-core software design and hardware design. Mycroft (2007) describes the use of functional programming languages (such as Erlang) and Lee et al. (2003) the use of the Actor Model to replace imperative languages as a high-level focus for hardware behavioural models and synthesis.

9 For example, in hardware it is possible to perform a calculation on n different sets of inputs simultaneously by replicating, or instantiating, the particular calculation block involved n times. The closest software equivalent to this is something like a for-loop construct, or the use of multi-threading.


The dichotomy between academic hardware design synthesis experimentation and industry de facto synthesis techniques is perhaps responsible for some of the technical determinism encountered between both teams. Hardware designers, being risk averse, appear slow to move from their established languages to embrace more radical software-oriented development systems.

Requirement for True Parallelism in Modern Software

As of 2010, one of the trends increasing in importance in computer devices is the march towards multiple CPUs (Bright, 2008; Merritt, 2009; Torre, 2008), both in the desktop environment and increasingly in the embedded space—be that through the use of multiple discrete processing units, or through the use of multi-core devices. Sutter (2005) quotes American science fiction author Robert A. Heinlein (‘there ain’t no such thing as a free lunch’10) when discussing the move of high-end silicon devices towards multi-core architectures in order to continue to meet the processing capability expectations of Moore’s Law. Mycroft (2007) states that chip design has ‘passed a threshold whereby exponentially increasing transistor density . . . no longer translates into increased processing power for single-processor architectures’.

Martin and Leibson (2008) report that:

‘When Denard scaling (sic)11 (also called classical scaling) ended at 90 nanometers, transistors continued to get much smaller at each IC fabrication node but no longer got much faster or dissipated less power.’

In order to satisfy Moore’s Law, and simultaneously dissipate the heat generated in a smaller and smaller area, Merritt (2009) describes how system designers are forced to adopt new techniques to provide extra processing capability whilst maintaining the power envelope:

Merritt (op. cit.) alludes to a degree of linguistic/technical determinism in that the language used for much pre-existing software development is not readily amenable to multi-core devices:
• The ‘ubiquitous C language’ obscures any parallelism intent/opportunity inherent in an algorithm due to its sequential programming model;

10 1907–1988, American science fiction author. Popularised the 1930s adage in ‘The Moon is a Harsh Mistress’.
11 The correct spelling is Dennard, after Robert Dennard (1932–), the inventor of dynamic RAM, and of a set of principles for scaling (MOSFET) devices (Bohr, 2007) that effectively underwrite Moore’s Law.


• Applications not written for multiple cores and multiple threads need (non-trivial) refactoring to take advantage of these extra computing resources (see the sketch after this list);
• Researchers have developed various parallel programming languages, but all ‘face a long road to commercial adoption’.
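By way of illustration of that refactoring burden (a hedged sketch only: the function names and the simple summation task are assumptions of the example, not taken from Merritt or any other cited source), the contrast below shows a sequential C-style loop whose latent parallelism is invisible to the language, and the manual partitioning and synchronisation needed to spread the same work across multiple cores using standard C++ threads:

    // Sketch of the refactoring needed to expose latent parallelism to multiple cores.
    #include <algorithm>
    #include <cstddef>
    #include <numeric>
    #include <thread>
    #include <vector>

    // Sequential form: the C-style loop model hides any opportunity for parallelism.
    long sum_sequential(const std::vector<int>& data) {
        return std::accumulate(data.begin(), data.end(), 0L);
    }

    // Refactored form: work is partitioned by hand across the available hardware threads.
    long sum_parallel(const std::vector<int>& data) {
        const unsigned workers = std::max(1u, std::thread::hardware_concurrency());
        const std::size_t chunk = (data.size() + workers - 1) / workers;
        std::vector<long> partial(workers, 0L);
        std::vector<std::thread> pool;
        for (unsigned t = 0; t < workers; ++t) {
            pool.emplace_back([&, t] {
                const std::size_t lo = t * chunk;
                const std::size_t hi = std::min(data.size(), lo + chunk);
                for (std::size_t i = lo; i < hi; ++i) partial[t] += data[i];
            });
        }
        for (auto& th : pool) th.join();       // explicit synchronisation point
        return std::accumulate(partial.begin(), partial.end(), 0L);
    }

Even for so trivial a computation, the parallel form must decide how to partition the data, avoid sharing mutable state between workers, and join the threads explicitly, which is the kind of non-trivial restructuring the bullet above refers to.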

8.5.4 Importance of Mixed Skill Sets

One theme that became apparent through this research is the value of individuals with cross-discipline skills and experience. For example, having the confidence to investigate both domains in tackling a problem:

‘The worst types of issues we have seen is where an issue or error occurs and you didn’t know which side of the boundary it was, whether it was hardware or software, because you just did big-bang integration. The challenge then is, if the problem is tough, nobody wants to look for the bug.’ — INTERVIEWEE JAMES

‘. . . I think it is definitely better to have mixed skills.’ — INTERVIEWEE MICHAEL

The divergence of skill sets is a natural consequence of the need for specialisation and domain experts:

‘One of the big issues arising in it that causes the largest difficulties here is not having people who. . . Who have both skills. . . It is still very difficult to get people with a balanced hardware / software profile. It is usually skewed strongly one way or the other . . . Just because of the pressures of specialisation I guess causes you to actually . . . To. . . To not have the. . . It’s not down to interest, it’s often down to time, or the scope to actually develop the broad skills necessary to do both. ’ — INTERVIEWEE JAMES

Furthermore, engineers with mixed-skill sets can act as a binding force between the different technical cultures of embedded software and digital hardware: ‘I think sometimes there is that reluctance to provide more information (to the peer team) than they think is strictly necessary. There is a perception that “the software guys don’t understand our (i.e. hardware) stuff, so let’s try to protect them from knowing too much about it”, as much as possible. I get that vibe a lot, that you don’t need to know (about) that. Whereas I like to know it all. I just like what’s not important. That’s a big one. In the other direction, there is probably something similar going on. We (i.e. software) probably, not to the same extent, but probably tell them what is necessary again. Which is not the way to do it.’ — INTERVIEWEE MICHAEL

Mixed-skill sets are seen in literature as a potential means of ‘improving teamwork, team task outcomes, and aspects of communication behavior’ (Cannon-Bowers et al., 1998). These benefits may arise ‘by the more efficient communication strategy (i.e., volunteering more information) observed in cross-trained teams’ (Cannon-Bowers et al., 1998) or ‘by giving team members crucial knowledge about one another’s jobs’ (Cannon-Bowers et al., 1998). With specific focus on the activities of

telecommunications software development, Downey (2006) describes an artefact-centric conceptual framework as a tool to help clarify software development job descriptions, where an artefact is a ‘tangible entity, such as a document or a piece of code’. Downey’s lifecycle model of artefacts has four phases:
• Trigger: Firstly, an artefact is triggered—that is, some event happens which causes the inception of an artefact;
• Analysis: Secondly, its requirements are defined and raw data gathered;
• Design: Thirdly, its concepts are designed;
• Creation: Finally, these concepts are embodied as artefacts that are disseminated to interested parties.

Downey proposes that this lifecycle model helps to explain:

Considering Downey’s work in light of the common design phases presented in Perry’s cycle of design (see Figure 4.1 on page 63), I propose that this is equally applicable in a broader sense across both digital hardware and embedded software disciplines. Practitioners, on both sides of this technical discipline divide, broadly need similar skill sets to work on SoC projects: an appreciation of the other discipline, and the abilities to collaborate and communicate effectively. In examining the argument that ‘effective teamwork depends on the emergence of shared knowledge representations or mental models’, Marks et al. (2002) found that ‘team processes fully mediated the relationship between mental model similarity and team performance’.

Marks et al. noted that ‘shared understandings of inter-role behaviors reduce coordination losses (thereby resulting in tighter team coordination), facilitating goal attainment’. Marks et al. further noted that they ‘believe that most work teams . . . would benefit from providing members with at least enough understanding of their teammates’ roles to discuss trade-offs of various strategies and behaviors related to team performance’.

Nevertheless, there was a feeling that, in Ireland, academic specialisation is resulting in software engineers who have lost the link to the underlying hardware. The hardware / software boundary is conceptually illustrated in Figure 8.15, with detachment from the restrictions and constraints of real hardware mattering less as we move up with increasing level of abstraction through real-time firmware to application software.


‘. . . The worst thing is that more and more of them have made a conscious decision to do this. And they have absolutely no desire to go behind the processor or operating abstraction. Visual Basic, Java, . . . The idea of being exposed to hardware now is a no-no.’ — INTERVIEWEE JAMES

This immediately draws interesting recollections of the heady warning from Subsection 3.4.2, wherein Wegner (1970) mentions the ‘dangers in pursuing abstraction as an end in itself’, specifically that: ‘Computer scientists should be aware of the dangers of losing touch with reality, and of losing a sense of direction through excessive abstraction.’

Figure 8.15: System Architecture: HW/SW Boundary, showing application software communicating via RTOS system calls with real-time firmware (device drivers), which in turn interfaces via hardware interrupts with hardware accelerators, other logic functions, I/O and analog.
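To ground the boundary of Figure 8.15 in code terms, the following is a small illustrative sketch; the register address, the names and the omission of any status polling or interrupt handling are assumptions made for brevity, not details of any system studied in this research. The application layer calls an abstract logging service, while the real-time firmware beneath it performs the memory-mapped register access:

    // Illustrative sketch of the HW/SW boundary in Figure 8.15 (hypothetical names).
    #include <cstdint>

    namespace firmware {                      // device-driver layer: hardware-aware
    // Hypothetical memory-mapped transmit register of a UART peripheral.
    constexpr std::uintptr_t UART_TX_REG = 0x40001000u;

    inline void uart_put_char(char c) {
        // Direct, volatile access to the peripheral register below the boundary.
        *reinterpret_cast<volatile std::uint32_t*>(UART_TX_REG) =
            static_cast<std::uint32_t>(c);
    }
    }  // namespace firmware

    namespace application {                   // application layer: hardware-agnostic
    void log_message(const char* msg) {
        // The application sees only an abstract service; registers, interrupts and
        // timing constraints remain hidden inside the firmware layer.
        for (const char* p = msg; *p != '\0'; ++p) {
            firmware::uart_put_char(*p);
        }
    }
    }  // namespace application

An engineer working only at the application layer of such a sketch never encounters the register, which is precisely the detachment from real hardware that the interviewees describe.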

Kairus et al. (2003) identifies this loss of hardware familiarity as a concern to be addressed in the formal education of hardware and software engineers: ‘As digital designs grow in both size and functionality, the importance of software development concepts has increased. This has led to a demand in the industry for engineers with good skills in both electrical engineering and computer science. However, the general trend in undergraduate education has been a divergence of these two historically close fields of study.’

Mange (1993) also identifies the growing educational chasm between hardware and software, when proposing a new course to ‘tie together both sides of computer science (hardware, such as logic and digital systems, and software, such as classic procedural . . . programming)’. The course was:

‘... based on the central idea of the equivalence between hardware and software; and this equivalent, which at the mathematical level rests on the concept of the algorithm, is exhibited by means of one preferred representation, the binary decision tree’ (Mange, 1993).

In an opinion piece on what differentiates embedded development from other forms of software development, Ganssle (2009) concurs with this, commenting that:


‘. . . colleges are under-serving the peculiar needs of this industry. . . Where will the next generation of writers of deeply-embedded firmware come from? I fear it will continue to be largely the EE (electronic engineering) community, simply because so many schools are poisoning the CS (computer science) well with glorified web developers.’

Figure 8.16: The Embedded Software Education Gap, showing embedded development sitting between electrical engineering and computer science. Taken from Barr (2009).

An interesting elaboration on this is a reflection on the process research that computer science brings into play in embedded projects, and on the potential for cross-pollination of skills and techniques. Ganssle (2009) notes this also, and proposes that:

‘Many systems are big, sometimes cramming millions of lines of code into a single application. A division of labor and skills is required to manage projects of these scales. CS folks with little exposure to the underlying hardware are commonly found building embedded applications, which is in my opinion a “Good Thing”. Computer scientists bring (or, at least can bring; too often they don’t) more discipline to the process than the average EE. The latter probably has had no exposure to software processes in his or her formal education. While heroics can work on small-scale projects, they just don’t scale to larger systems.’

In contrast to this, Khosla et al. (2001) claims that mixed skill sets are again of most benefit: 'The success of new System-on-a-Chip (SoC) initiatives depends on the availability of well-trained SoC designers who are able to bridge the gap between software centric system specification and hardware-software implementation in novel architectures. . . . Mastery of the complexity and heterogeneity of next generation Integrated Circuit design requires profound interdisciplinary understanding of the key design issues.'

Barr (2009) notes what he terms the embedded software education gap, referring to the gradual drift of Computer Science courses away from the needs of embedded electronics programming over time: ‘American institutions of higher learning largely fail to teach the practical skills necessary to design and implement reliable embedded software. . . ’


Barr notes the 'positive trend' of the introduction of Computer Engineering courses in serving as bridges between the two cultures. Shared language may facilitate 'a common understanding of collective goals' through not only its ability to help 'share ideas', but also in how it 'enhances the efficiency of communication between people with similar background or practical experience' (Chiu et al., 2006). Having team members with mixed skills provides opportunities to 'actively involve (sic) in knowledge exchange activities and enhance the quality of shared knowledge' (Chiu et al., 2006). The use of common tools, such as UML, for shared logic description is also recognised in literature as being of benefit in this regard (Drusinsky and Harel, 1989; Mellor et al., 2005; Schattkowsky, 2005; Zhu et al., 2005)—as a tool with perceived usability and usefulness which has prominent endorsers and promotion, it may find easier cross-functional acceptance (Bresciana and Eppler, 2009).

8.6 An Opportunity for Agility

In order to make software development more predictable, engineering knowledge from other disciplines was incorporated into the body of knowledge of software process improvement (Siakas and Siakas, 2007). Although SPI methodologies have improved the quality and productivity of software creation (Siakas and Siakas, 2007), heavy Deming-inspired quality processes (such as ISO9000—Murphy, 2002) have not found much favour in the consumer electronics industry, where they are often seen as offering significant overhead (Siakas and Siakas, 2007) for very little perceived benefit:

'Since the software crisis of the 1960's, numerous methodologies have been developed to impose a disciplined process upon software development. It is now widely accepted that these methodologies are unsuccessful and unpopular due to their increasingly bureaucratic nature.' (Conboy and Fitzgerald, 2004)

The ‘Manifesto for Agile Software Development’ (Beedle et al., 2001) was written as an attempt to find common ground and legitimise the use of various new low-overhead software development methodologies such as XP (Beck, 1999) and Scrum (Schwaber and Beedle, 2001). The document reads as follows:

Manifesto for Agile Software Development

We are uncovering better ways of developing software by doing it and helping others do it. Through this work we have come to value:

Individuals and interactions over processes and tools;
Working software over comprehensive documentation;
Customer collaboration over contract negotiation;
Responding to change over following a plan.

That is, while there is value in the items on the right, we value the items on the left more.

In contributing to the debate on whether software development is a form of engineering, Nerur and Balijepally (2007) position it as more akin to Horst Rittel's concept of 'wicked problems'—where large upfront design, driven by the philosophical perspective of technical rationality typical of more traditional engineering, is eschewed in favour of the 'interaction between the source of knowledge and experience, and the decision maker' (Christopher Alexander, cited in Nerur and Balijepally, 2007). In this manner, agile methods reflect a fundamentally 'new epistemology of software development' (Nerur and Balijepally, 2007).

Reeves (1992) suggests agile methods as almost being obligatory for software development, and espouses the notion that coding and test are intrinsic components of the engineering design task. Conboy and Fitzgerald (2004) recognises there is nothing intrinsically coupling agility to software development, it having first appeared in the domain of mainstream business literature in 1991—the fundamental principles of agility in the enterprise emerged from industry-collaborative studies at Lehigh University in the 1990s, with agility recognised as the new 'competitive frontier for the 21st century' (Dove, 2005).

I have already identified the market pressures that consumer electronics semiconductor design is under, and the need to be reactive. Additionally, this research has shown that social/geographical issues are perceived as more significant issues by my interviewees than technical challenges. SoC projects are highly complex undertakings, requiring significant amounts of tacit knowledge transfer between digital hardware and embedded software teams in order to verify and validate the system. Luqi and Zhang (2003) talk about agile methods specifically for embedded, hardware-coupled systems. Is there scope, therefore, for applying the principles of the Agile Manifesto for software design to semiconductor projects, specifically to digital hardware design? Ashby's Law of Requisite Variety12 (Dove, 2005; Nerur and Balijepally, 2007) suggests that any process must be as flexible/reactive as the environment within which it is expected to function.

Jacobson and Meyer (2009) note that while agile methods 'have made a number of significant contributions and reminded us of the central role of people in software engineering', the manifesto is 'long on emotional appeals' and 'short on facts'. They continue to suggest that in practice, methods in commercial development are a combination of elements from the methodology literature, mixed with domain- or business-specific extensions—one of which may be the need to address agility. Abrahamsson et al. (2003) discuss the perhaps limited lifecycle coverage of current agile methods, and suggest that situation-appropriate rather than universal methods are required. They also suggest that the emphasis should be placed on methodological quality and not on method quality.

12 W. Ross Ashby (1903–1972), English psychiatrist and pioneer in cybernetics.

8.6.1 Agile Manifesto versus Agility

The characteristic of agility can be defined as: 'The continual readiness of an entity to rapidly or inherently, proactively or reactively, embrace change, through high quality, simplistic, economical components and relationships with its environment.' (Conboy and Fitzgerald, 2004)

Agility is a cultural philosophy that is focused around dealing with risk and uncertainty. Understanding its basic tenets (its 'cornerstones'–see Figure 8.17) allows its judicious application to the specific problem domain/business environment at hand.

Figure 8.17: Cornerstones of Agility (Response Ability, Knowledge Management, Value Proposition). Taken from Dove (2005).

The concept of agility as a business method has scope for application throughout various organisations and business models, particularly within semiconductor/embedded industries:

• Wu et al. (2006) describes the success enjoyed by Taiwanese semiconductor foundries, due to the agility ingrained in their business processes.

• The introduction of agile practices within the embedded design community seemingly has exhibited a spectrum of results. For instance, Fitzgerald and Hartnett (2005) note the successful adoption of agile software methods within Intel, whilst Abrahamsson et al. (2005) suggest inconclusive results in employing agile methods in the mobile telecommunications sector. They note that 'few if any' agile methods can be successfully introduced 'without proper, systematic software process improvement tactics'.

• Donnellan and Kelly (2005) note the forces impacting agility within the semiconductor industry (both inter-organisational and intra-organisational) and present various IT-based approaches to supporting agility, specifically the use of design repositories and eCatalogs13.

Interestingly, much of the published work in the area of the applicability of software-derived agile methods to hardware development (beyond general principles of corporate agility) has focused on bringing changeability to hardware design as a means of gaining agility (through the use of FPGAs–for example, Allen et al., 2009; Athanas, 2009), at the expense of the social aspects and the principle of 'individuals and interactions over processes and tools'. One possible explanation for this is perhaps a desire to make digital hardware more similar to software vis-à-vis its greatest differentiator—changeability—so that the Agile Manifesto may equally apply to hardware development.

FPGAs are commonly used as a mechanism for SoC verification and validation prior to tapeout. They usually require porting of the SoC database to the FPGA platform. One potential application of agile methods in SoC development is within the realm of FPGAs: a combination of short iterations (Ousterhout and Muzaffar, 2008) with regular re-baselining on the mainline SoC trunk gives software developers fresh hardware upon which to work, and also provides a reasonably sustainable pace without risk of burnout for the FPGA resources.

Many SoC projects have hardware phases where the activities are primarily verification efforts—testing the 'stitching' together of various third-party blocks of IP. In these cases, formal and rigorous verification matrices are employed, and thus the mainline SoC trunk work in these projects may not be amenable to agile methods. Other projects have a mixture of third-party and homegrown IP. The internally-developed IP work-items may be amenable to an agile approach.

Barry Boehm refers to the alternative to agile methods as plan-driven methods. He is a proponent of a blended approach, synthesising 'the best from agile and plan-driven methods to address our future challenges of simultaneously achieving high software dependability, agility and scalability' and notes that:

'. . . both agile and planned approaches have situation-dependent shortcomings . . . The challenge is to balance the two approaches to take advantage of their strengths in a given situation while compensating for their weaknesses.' (Boehm and Turner, 2003)

Siakas and Siakas (2007) distinguish organisational culture into four groupings (clan, hierarchical, democratic, disciplined, as illustrated in Figure 8.18), categorised by the attributes of (need for) Uncertainty Avoidance (UA) and authoritative Power Distance (PD). They suggest that democratic (European) organisations are the most suitable organisational culture for embracing the agile professional culture. They mention the extreme success of Deming's Total Quality Management (TQM) in Japan (where it may be a better fit, culturally), and the non-replication of this in Europe, noting '. . . the TQM approach is extremely bureaucratic in comparison to the flexible agile development, which fast respond (sic) to changing requirements' (Siakas and Siakas, 2007).

Figure 8.18: The C.HI.D.DL typology of Organisational Cultures (clan, hierarchical, democratic and disciplined cultures plotted against strength of Power Distance (PD) and Uncertainty Avoidance (UA)). Taken from Siakas and Siakas (2007).

13 I.e., lists of previously designed products in the NPD (New Product Development) community.

Agile methods have traditionally been focused on small, co-located teams (Ambler, 2006a). Beck (1999) recognises the limitations of team size on the applicability of agile methods, so this is a pertinent issue when considering agile methods for hardware design:

‘Size clearly matters. You probably couldn’t run an XP project with a hundred programmers. Nor fifty. Nor twenty, probably. Ten is definitely doable.’ (Beck, 1999)

Boehm (2006) finds agile methods most workable in ‘small projects, with relatively low at-risk outcomes, highly capable personnel, rapidly changing requirements, and a culture of thriving on chaos vs. order’.

Agile may be possible in larger, distributed teams (Ambler, 2006a,b; Sureshchandra and Shrinivasavadhani, 2008), but this seems to rely on a divide-and-conquer approach—a hierarchy of agile implementations ('scrums of scrums of scrums of. . . '—Ambler, 2006b)—the fostering of a shared philosophy and vision, and regular deliverable milestones to ensure sub-teams are working towards the same goals. In particular, the fostering of shared culture is likely to prove troublesome. This culture of process, philosophy and goals is a form of tacit knowledge (Olson and Olson, 2003). Thus, whilst agile methods may offer potential for an Irish SME involved in SoC design, the realities of dealing with distributed teams (either in-house, or organisationally external) and the capital investment risk of tapeout suggest the need for a hybrid approach, as Boehm and Turner (2003) suggest.


Figure 8.19: Implications of Agility for SoC Development.


In a semiconductor context, security of communication is an important consideration. Agile methods promote the sharing of tacit knowledge through knowledge identification, categorisation and management and the fostering of strong social bonds and trust. However, intellectual property protection is a significant concern within the fast-paced consumer electronics industry. Organisations need to reflect on the best method for managing their information, within the context of an agile environment. Figure 8.19 summarises the implications that SoC programmes need to take into account when considering the introduction of agile methodologies for their work.

8.7 Summary

This chapter considered the themes developed in Chapter 6 in the light of extant literature, either directly related or, where directly related literature was not apparent, indirectly related. The impact of a company's business model on engineering practice was discussed, along with the commercial realities of software development versus hardware. Software was identified as the most critical challenge to SoC productivity (ITRS Update, 2008). Socio-geographical issues were discussed, especially concerning geographically dispersed teams. Techno-cultural deterministic effects were examined, including the apparent trending convergence of development language between multicore software and digital hardware. Finally, the opportunity for the application of agile methods to SoC digital hardware development was entertained and discussed.


Chapter 9 Conclusions



If we knew what we were doing, it wouldn’t be research. — ALBERT EINSTEIN



1879–1955, German-born American Physicist.

9.1 Overall Conclusions

In this chapter, I will present a summary of the research documented in this dissertation by discussing the contribution this thesis makes to the global body of knowledge:

• I will review the alternate approaches taken by previous related studies;
• I will address the research questions previously identified;
• I will present scope for further study and research.

9.2 Research Contribution

In this dissertation, I have presented an approach to examining the causes of development stress between embedded software and digital SoC hardware teams in indigenous Irish semiconductor companies—where development stress is anything which impedes progress towards a finished product, suitable for purpose and for the market. The alternative approaches and remits of pre-existing studies have focused on:

• the social and human factors that are acknowledged predominantly in the context of software development (Bertelsen, 2000; Cockburn, 2004; Curtis et al., 1988; Curtis and Walz, 1990; Egan et al., 2006; Green, 1990; Naur, 1965, 1985; Pennington and Grabowski, 1990; Rosson, 1996), with narrow comparable research on hardware design (Donnellan and Kelly, 2005; Khosla et al., 2001; Rhines, 2006; Wilson, 2003);


• globally dispersed teams, but again not specifically covering teams of different technical specialisations such as hardware and software (Casey and Richardson, 2004, 2008; Collins, 2001; Mitchell and Zigurs, 2009; Powell et al., 2004; Shearsmith, 2006);

• identifying the interfaces between stakeholders on semiconductor system development (Ernst, 2004; Rhines, 2006; Wilson, 2003);

• arguing that the social processes that 'create and maintain computer science' are an important constituent part of understanding computer science (Boehm, 1981; Botha, 2005; Cockburn, 2004; Gutierrez and Kouvelis, 1991; Hansen, 1996; Jørgensen, 2009; Naur, 1985; Parkinson, 1957; Tedre, 2006);

• clearly defining the boundary between hardware and software, and the fundamental mechanisms by which they interact (Steiner and Athanas, 2005; Steiner, 2008);

• the application of software development techniques to hardware development problems (and vice-versa) (Andrews, 2009; Brebner, 1996, 1998; Buckley, 1992; Chapelle and Lewis, 1999; Damaševičius et al., 2003; de Geus, 2008; Ghosh and Giambiasi, 1999; Lee et al., 2003; Luqi and Zhang, 2003; Merritt, 2009; Page, 1997; Plessl and Platzner, 2004; Walder and Platzner, 2003; Wigley and Kearney, 2001);

• to a limited degree, the technology transfer of such techniques between hardware and software teams, without detailed consideration of such aspects as linguistic determinism, technical determinism and risk aversion (Berger, 1998; Hoffman, 2009; Kairus et al., 2003; Park, 1998; Paul et al., 1999; Smith and Gross, 1986).

The setting for my research, the highly technical and specialised field of semiconductor design in Ireland, is a field which has a limited history of introspection from a design process perspective1, which is capital intensive and very averse to risk, particularly hardware risk. It is also predominantly historically hardware-focused and hardware-managed, with software activities as an afterthought. Software, however, is becoming an increasing proportion of its market differentiation, its capital investment and its strife. As an illustration, Ernst (2004) describes how:

'In short, chip design has become itself a highly complex technology system, where multiple communication and knowledge exchange interfaces must be managed simultaneously. Obviously, the idea of translating technical modularity into organisational modularity through vertical specialisation has many attractions. Yet, its implementation requires a mind-boggling degree of cooperation among the diverse participants of design networks.'

1 In comparison to PC-based software development, for example.


Through this in-depth grounded theory study of the industry, I have found that despite the significant technical difficulties involved, the main sources of project friction are social and geographical. These are common across many other technical disciplines. Nevertheless, the technical specialisation into hardware/software does appear to have a smaller modulating effect on culture and cognition. I have found these sources of friction present specifically in the Irish semiconductor development industry, and hypothesise that these are common problems geographically within this industry—to varying degrees, as mediated by cultural differences (for comparison, see Egan et al., 2006 and Tremaine, 2007 for research focused on software only).

What specifically does this work contribute to the global body of knowledge? In establishing a Theoretical Model of Influence on Hardware / Software Interworking (Figure 6.1):

• firstly, it identifies that digital IC hardware engineering is very similar in many respects to software engineering. It verifies through the use of existing philosophical, technical and historical literature that both disciplines have a common ancestral root2 and similarities in their work flow. As the technology is similar, it identifies that it is reasonable to suggest similar cognitive mechanisms at play in their design and engineering. This has implications for the potential application of the existing body of empirical and fundamental software engineering research to digital IC hardware—for example Software Engineering Institute (SEI) Capability Maturity Model-Integrated (CMMI), Agile Methods. Hardware design workflows can benefit from the body of knowledge that comprises software development philosophy. However, these philosophical underpinnings of software, whilst more mature process-wise than those of hardware, are far from complete or even self-consistent (cf. the debate on whether software is an art form or a form of engineering—Bond, 2005; Cockburn, 2004; Gabriel, 1996; Kitchenham and Carn, 1990; Osterweil, 2007; Reeves, 1992).

• secondly, it illustrates that the business models that survive in the consumer electronics semiconductor ecosystem rely on a tailoring of engineering work to meet market intercept windows. There is, potentially, a tangible commercial value to being first to market. Likewise, there is a tangible commercial value to having a significantly low cost base and razor-thin margins. As a result, the costs of product verification and validation must be curtailed to fit within temporal and monetary budgets. This illustrates the fact that the 'one verification methodology fits all' approach does not adequately address all business models. Design processes and standard quality-assurance methods need costing and tailoring to ensure an adequate quality is costed and engineered into the end product.

• thirdly, it indicates that geographical separation of design teams is a significant detraction from productivity potential in complex consumer electronics semiconductor projects. Semiconductor electronics is very much a global industry. In the rich marketplace of consumer electronics, Irish companies are dealing with remote development teams across the planet, but mainly centred in the US, in Europe and in Asia. These teams may be internal or external, and are customers of Irish product (design services, IP, or silicon) or are suppliers to Irish development teams. In addition, the products in question may be either digital hardware, software, or multi-discipline in nature. In such an environment, it is vitally important that social, geographical and cultural issues are recognised and addressed upfront to ensure effective and successful commerce.

• fourthly, it acknowledges that, despite the shared ancestry, there is a growing separation of technical mindset between software and digital IC hardware—in terms of abstraction, fundamental development model (waterfall versus iterative/spiral) and in terms of focus/risk aversion. It illustrates that there is a techno-cultural dynamic between hardware and software teams that needs appreciation.

• fifthly, it provides strong evidence for the applicability of Agile methods in the development of embedded semiconductor devices (as suggested by Donnellan and Kelly, 2005; Luqi and Zhang, 2003), specifically consumer electronics semiconductor devices. These projects can be seen through this work as being characterised by aggressive schedules, changing feature requirements, and multiple teams that are often geographically and culturally disparate. Agile methods were specifically devised to deal with the problems of rapid change and of tacit design knowledge flow amongst developers through fostering communication and collaboration from a people-centric management and organisation perspective.

• sixthly, it presents a list of patterns of organisation and workflow (Chapter 7) for semiconductor projects that the interviewed practitioners have asserted help in the development process.

2 i.e. the provision of dynamic models for the abstraction of mathematics.

Now, I will discuss each of the research questions that were identified previously in Subsection 3.5.1 in turn:

Research Question 1: Is there a different frame of reference regarding how digital hardware engineers and software engineers approach their work that causes development stress between the different technically skilled individuals themselves and, by consequence, their respective teams?

Hardware engineers are much more cognisant of risk than software engineers. Risk comes from insufficient verification. Ambiguity can introduce risk through insufficient verification coverage, and thus hardware engineers are also averse to ambiguity. Hardware engineers are more familiar with dealing with true concurrency: it is quite easy in hardware to instantiate multiple blocks in a design that are intended to run in parallel. In software, setting aside for a moment software running on multi-core systems, multi-threaded applications are typically sequenced serially at some point by the operating system. Software engineers would claim to be closer to the details of intended system use, and thus better appreciative of functional requirements as a whole.
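To make the contrast concrete (a purely illustrative sketch of mine, not drawn from any interviewee): in a hardware description language, two counter blocks instantiated side by side genuinely advance on the same clock edge, whereas the two POSIX threads below only appear concurrent; on a single core their execution is interleaved and sequenced serially by the operating system scheduler.

    #include <pthread.h>
    #include <stdio.h>

    /* Two 'parallel' workers. Two instantiated hardware blocks would
     * literally update on the same clock edge; these threads are instead
     * time-sliced by the OS, so their steps are ultimately sequenced
     * serially, with the interleaving decided by the scheduler. */
    static void *worker(void *arg)
    {
        long id = (long)arg;
        for (int i = 0; i < 3; i++)
            printf("worker %ld: step %d\n", id, i);
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, (void *)1L);
        pthread_create(&t2, NULL, worker, (void *)2L);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        return 0;
    }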

Research Question 2: If so, is this development stress related to an intrinsic quality of their discipline, or is it a mere artifact of the processes, techniques and tools they use in achieving their work? (a) Do the differences in inherent properties of the essence of software and digital hardware—namely the changeability and invisibility of software—affect development? (b) Does the ease at which the logic implementation can be changed have an impact at a fundamental level on how the logic implementation is created?

Due to the fear of risk, hardware engineers tend to push risk and ambiguity up towards software because it can (usually) be changed at a later date. Changeability is the determining factor here. Hardware designers working on FPGA platforms can afford to take more risk, and similarly software designers working on boot ROMs are anxious about ensuring good simulated test coverage of the ROM prior to tape-out. Technical determinism through tool-flow issues does play some part in influencing hardware designer attitudes. Hardware designers are already averse to ambiguity, and this aversion is further strengthened by the length of time that system simulations take when attempting to debug a vague problem description from software or product test colleagues.

Research Question 3: As a result of technical specialization, are there other effects from developer culture and mindset, from shared experiences or from the language and terminology in common use, that cause development stress through how digital hardware engineers and software engineers analyse, model, solve and test problems of logic?

Developer culture plays a role in reinforcing engineers' behaviours. Quite often, it appears teams do not appreciate the motivating factors behind these cultures (risk aversion in the case of digital hardware, attempts to deal with and make sense of ambiguity of specifications and requirements in the case of software). Language also introduces additional barriers to the transfer of tacit design knowledge—different rhetoric is used to describe similar logical constructs, and the lack of a de-facto shared design language (textual and visual) is puzzling3.

Research Question 4: Can a solution be provided which will help to relieve the problem of development stress?

The causes of development stress between digital hardware and embedded software teams are complex and varied. Development stress is strongly influenced by communication difficulties between the two disciplines. This cross-functional communication is strongly mediated by the effects of degree of social familiarity, establishment and maintenance of trust, and difficulty of tacit knowledge transfer; and to a lesser extent by linguistic, techno-cultural and technical determinisms. Market realities impose a significant burden of risk upon the shoulders of hardware developers—a burden which directly shapes their approach towards verification and ambiguity avoidance. Development stress arises from a number of factors, but we can do things differently:

• Practitioners of both disciplines can be better educated as to the social aspects of communication and collaboration as part of larger distributed cross-functional teams;

• A better understanding of the business realities (and the burden of risk imposed on hardware developers) can assist each discipline appreciate the established culture of the other;

• Trust can be jump-started through initial face-to-face meetings at project inception, and maintained through the regular use of face-to-face and rich media communication;

• Hardware design workflows can benefit from the greater maturity and volume present in the body of knowledge that comprises software development philosophy—additionally, there is scope for the judicious application of Agile methods to phases of SoC development.

3 Software interviewees tended to suggest à-la carte adoption of UML fits this bill, hardware interviewees tended not to be familiar with UML.

Perhaps more illuminating than the answers found to these founding questions were some of the answers that emerged un-asked from this research, and the corresponding unposed questions that may have prompted them. Eisenhardt (1989) notes that:

‘Although easy identification of the research question and possible constructs is helpful, it is equally important to recognise that both are tentative in this type of research. No construct is guaranteed a place in the resultant theory, no matter how well it is measured. Also, the research question may shift during the research.’

I discovered that the largest cause of team-interworking difficulty between hardware and software teams is not, in fact, due to any effect of technical specialisation, but to the difficulty in transferring tacit design knowledge from one team to another. The pace of progress within the consumer electronics semiconductor industry means this tacit knowledge is generated rapidly, and has an ever diminishing half-life—both of these attributes conspiring to impede successful communication and collaboration. Success in this regard relies on the establishment of trust through social familiarity, good working relationships and dependability in the peers' ability to deliver. In common with software-only projects, and indeed many other team-based activities, this tacit knowledge transfer is hampered further by introducing separation through geographical, social or cultural means.

9.2.1 Implications for Practitioners

Practitioners of either digital hardware design or embedded software development in the CE market should review the toolbox of patterns presented in Chapter 7: Emergent Toolbox of Patterns for SoC Project Organisation. They need to be aware of the wealth of research knowledge with regard to effective and efficient group collaboration. They should consider the implications of trust establishment and maintenance, and of focusing on social development within distributed teams, bearing in mind that groups can be considered distributed even in a co-located environment (Allen, 1977). They should identify the scope for the introduction of people-centric agile methods within their development processes. This requires that they understand the potential for agile applicability, assess the implications (security and risk), and realise that it doesn't have to be an all-or-nothing approach—agile practices can be applied where situationally appropriate and hybridised with plan-driven practices where required.


9.2.2 Implications for Educators

Be they corporate or academic, educators have a responsibility to better equip industry and society with practitioners who possess the skills required to foster a culture of cross-functional collaboration. The need for a focus on mixed skills was identified to bridge the domains of digital hardware and computer software, which specifically calls for computer engineering graduates. What differentiates Computer Engineering vis-à-vis Computer Science?

• Closeness to hardware—breaking through that abstraction; and

• Exposure to the discipline of engineering—applying real world constraints to problem solving.

Educators need to be cognisant that SME organisations that are culturally hardware-dominant in the fast-moving CE market may tend to shy away from what they see as software-oriented processes. They may be reticent about investment in heavy, bureaucratic software process. Quite often they may not have an adequate appreciation of the current state of the art in areas such as global system development, agility and agile methods, the dynamics of individuals within groups, and the dynamics of cross-functional groups in relation to shared projects. This information needs to be better presented in a format more readily digestible by our SME organisations.

9.3 Limitations of This Work

The limitations of this work include:

• this research presents a snapshot of a particular set of semiconductor developers working in corporations within Ireland at a particular instant in time (2006–2008);

• the research method employed, grounded theory, generates a substantive theory (where the theory matches the data generated from the specific situational area of inquiry), rather than a general formal theory (which is more concerned with a conceptual area of inquiry);

• this work does not propose strategies to measure development stress in any meaningful manner other than by project goals being achieved (schedule/cost budgets) and ultimately the degrees to which organisations are successful in the marketplace;

• this work presents patterns for mitigating development stress, but does not yield a quantifiable metric for measuring the extent of improvement due to any remedial action undertaken.

These limitations are discussed next in the context of future research.


9.4 Scope for Future Work

Based on the results of this thesis, I recommend the following topics for future study and investigation.

• By its very nature, qualitative data analysis lends itself well to subsequent quantitative analysis. The qualitative work is a powerful means to establish theory grounded in the data, whilst the quantitative techniques are effective tools in validating this theory in a larger context.

• Consider gender issues—there is an inherent gender imbalance in the Irish engineering community. How would a more balanced gender profile affect team dynamics, inter-site and inter-discipline communication, and overall productivity? What new issues would it introduce?

• Broadening the scope from Ireland specifically to see the commonality of differences on a global level—the market is global, as is the development force of engineering talent that is addressing the market.

• How do company maturity and size affect business exit strategy (over and above business model)? For example, small start-up heading for merger or IP trade-sale, versus mature start-up heading for Initial Public Offering (IPO), versus stable incumbent public company. The small start-up is trying to grow revenues and demonstrate clear potential, and may trade off engineering effort and time to market for more saleable intellectual property and the potential to secure a strong sales pipeline in the future. The mature start-up is looking to keep sustained growth and a path to profitability as it heads towards its IPO date, and the public company is trying to stave off stagnation and continue to deliver shareholder value through consistent performance and growth.

• How do nationalities and cultural differences affect development issues across mixed teams versus homogeneous team disciplines? For example, concepts such as the cultural bias of temporal urgency amongst software development teams (Egan et al., 2006; Tremaine, 2007) are of interest here, as is a comment made by Interviewee Robert on having to construct and generate bite-sized chunks of work for his 'offshore' out-sourced work team to digest—are these due to learning curve or social culture?

• Communication was a major theme that emerged during the field work, and communication difficulties were seen as one of the most significant impairments to pace of development. Social networking tools, such as blogs, wikis etc. were suggested as potential mechanisms to cheaply and efficiently increase information sharing. Baker and Green (2005, 2008) suggest some of the impacts social media tools will have, but it is interesting to consider the potential cultural and security implications of social media outlets for an industry such as CE semiconductors that is traditionally extremely concerned with protection of intellectual property through patenting and trade secrets.

• The traditional problems of distributed/computer-mediated communication/culture shocks were a strong theme within this research. Having reviewed literature, this has obvious parallels to much broader Computer Supported Collaborative Work (CSCW) research, and even to the specific field of GSD. My research correlates with earlier work by Bos et al. (2001, 2002) and Zheng et al. (2002) on the hierarchy of trusted communication (face-to-face vs. video call vs. phone vs. instant messaging vs. email). Bos et al. mention that trust created over digital communications is often delayed (in forming) and fragile – introducing the concept of late round defections. Late is a relative term in the context of the experiments of Bos et al. Potential future work is to examine this concept, specifically as to how this trust atrophies without continual reinforcement through face-to-face/more immediate or nuanced forms of communication.

• A toolbox of project organisation techniques also emerged as part of this research, which is intended for use by engineers and developers. Whilst not within the scope of this thesis, industrial validation would be very useful.

• Specifically within the Irish semiconductor sector, it would be interesting to look at the particularly demotivational effect that economic downturns and position uncertainty create amongst a workforce. Assuming the commercial organisation is able to remain a viable concern, then effective managerial process during a downturn is an interesting subject area, as downturns offer opportunity to put the head down and build value in product, with the ambition of capitalising on the eventual rebound.

• Neurological studies have found that we cannot accurately separate cognitive artifact ('how we think') from emotional effect ('how we feel') (Damásio, 2004). The influence that emotions have on decision making, particularly when developers are feeling frustrated (perhaps over perceived hardware inertia, over cavalier software attitude towards pre-tapeout verification, etc.), is an interesting topic to explore within consumer electronics semiconductor development. Consider the following:

'The emotions that can be generated by controversies over programming languages are quite remarkable. . . new programming languages with the new ways of thinking they represent constitute a challenge to the “computer culture” that has been built up around established programming languages . . . The psychological factors involved in learning a programming language should not be underrated.' (Wegner, 1970)


An example of this is the reluctance, mentioned during several of my interviews, of hardware designers to move to a C-synthesis-type language such as System-C.

• Given the increasing acceptance and uptake of social networking amongst 18–24 year olds (Yehuda et al., 2009), a longitudinal study monitoring the trend of effectiveness of computer-mediated communication tools on trust (establishment and maintenance over time) between hardware and software teams might show interesting results as new engineers and developers enter the workforce.

• Distributed open source development solves a number of the GSD-related issues in a community environment, through agile collaboration. Does the potential exist for education or technology transfer from the open source community to the proprietary world of (consumer electronics) semiconductor design?4

9.5 Summary



Enough research will tend to support your conclusions. — ARTHUR BLOCH
An American writer, author of the Murphy's Law books



Moore’s Law has thus far continued into its fifth decade, and has provided for an exponential increase in the hardware capability in consumer devices. This increasing silicon capacity has been filled with ever more ambitious device and product functionality—all of which come at the cost of additional complexity that needs managing not only through the examination of current tools, techniques and abstractions (improvement of technical technique), but through the efficient interworking of our hardware and software development teams (the better appreciation of social context). This research has show that both embedded software and digital hardware are representations of Boolean/digital logic. Both are socially mediated activities. They involve the generation of mental models (what Naur refers to as ‘theory building’), and more challengingly the sharing of these models (tacit knowledge) is socially and cognitively constrained. Tools and language have deterministic effects on development (through their effects on developer behaviour) but are not as significant in their effect as the difficulty in transferring tacit knowledge. These facts suggest that Agile methods may find purchase when introduced in the context of SoC system development, which

4 Albeit bearing in mind ‘the disparity between the quality and maturity of typical open-source software projects and that of open-source hardware projects’ (Neil Steiner, personal correspondence, June 2009).

231

9. CONCLUSIONS

9.5. S UM M A RY

is a very fast paced commercial environment technologically. In its current form, digital hardware design may more legitimately be a form of engineering than software—due primarily its lack of changeability. The Irish government is attempting to encourage a restructuring of the economy into a smarter, high-value and export-led economy (Cowen, 2009). Consumer electronics semiconductor design is a high risk business, of rapid technological innovation and ever changing business models, but it does have the potential to generate significant amounts of intellectual property. As a result of this research, I now hold the belief that Irish CE semiconductor organisations which recognise and address the social factors in digital hardware and embedded software team inter-working are giving themselves the potential to gain significant competitive advantages in de-risking their technical endeavours, and positioning themselves well for the recovery of the global economy and a resultant upswing in the semiconductor market and in semiconductor development.


Part IV

Appendices


Appendix A Interviewee Biographies

Table A.1: Biographies for Interviewees.

Interviewee: Experience

James: Due-Diligence Consultant; Engineering Director (Software) at Intellectual Property Company (4 years); Ph.D. in Electronics

John: Principal Software Engineer at Fabless Semiconductor Company (4 years); Principal Software Engineer at Intellectual Property Company (6 years); Research Officer in University (3 years); M.Eng. in Computer Engineering

Robert: Chief Technology Officer at start-up Fabless Semiconductor Company (3 years); Embedded Software Contractor (2 years); Senior Software Engineer at Intellectual Property Company (5 years); M.Eng. in Computer Engineering

Michael: Principal Software Engineer at Fabless Semiconductor Company (3 years); Senior Software Engineer at Embedded Systems Debug Tools Vendor (8 years)

William: Staff Software Engineer at an Israeli Fabless Semiconductor Company (4 years); Staff Software Engineer at Intellectual Property Company (8 years); M.Eng. in Computer Engineering

David: General Manager, SoC Division at Fabless Semiconductor Company (4 years); Engineering Director at Intellectual Property Company (6 years)

Richard: Staff Hardware Engineer at Fabless Semiconductor Company (4 years); Staff Software Engineer at Intellectual Property Company (8 years)

Charles: Product Marketing Specialist at start-up Fabless Semiconductor Company (1 year); Project Manager at Fabless Semiconductor Company (5 years); Principal Hardware Engineer at a Fabless Semiconductor Company (4 years); Senior Hardware Engineer at Intellectual Property Company (8 years)

Joseph: Founder of start-up commercialising new technology research; Principal Digital Engineer at Intellectual Property Company (5 years); Director of Marketing for various design service and fabless SMEs (7 years)

Thomas: Principal Software Engineer at Fabless Semiconductor Company (3 years); Principal Software Engineer at Consumer Electronics Manufacturer (5 years); Senior Research Engineer in University Research Centre (2.5 years); Development Engineer at Medical Devices Manufacturer (4 years)

Chris: Program Manager at Fabless Semiconductor Company (4 years); Senior Hardware Design Engineer at Consumer Electronics Manufacturer (1 year); Principal Applications Engineer in Intellectual Property Company (5 years); Hardware Engineer at Consumer Electronics Module Manufacturer (5 years)

Daniel: Senior Staff Engineer at an Intellectual Property Company (10 years); Research Officer in University (5 years); M.Eng. in Computer Engineering, MBA

Kevin: Chief Technology Officer at start-up Fabless Semiconductor Company (3 years); Chief Technology Officer at Intellectual Property Company (8 years)

Seán: Entrepreneur—founded semiconductor start-up (5 years); Chief Technology Officer at Fabless Semiconductor Company (2 years); Engineering Director (7 years)

Paul: Senior Hardware Engineer at various Fabless Semiconductor Companies (12 years)

Mark: Principal Software Engineer at Fabless Semiconductor Company (5 years); Principal Applications Engineer in Intellectual Property Company (8 years)

George: Senior Software Engineer at various Fabless Semiconductor Companies (9 years)

Jack: System Architect (Software) at Fabless Semiconductor Company (12 years)

Brian: Staff Hardware Engineer at Fabless Semiconductor Company (14 years); Software Engineer at Fabless Semiconductor Company (2 years)

Steve: Staff Software Engineer at Semiconductor Company (6 years); Software Engineer at various Software Development Companies (8 years)


Appendix B Interview Guides

B.1 Initial Interview Guide

The purpose of my interview guide was to provide a starting point for subsequent discussion. The interview guide was useful in acting as a general conceptual framework of topics to be explored, but discussions were not kept rigorously to it if they happened to naturally wander off the listed topics.


Questionnaire on Digital Hardware and Software Team Interaction during Semiconductor Development

Name:
Title:
Company:
Experience (Summary):

1. In your experience, what are the biggest issues facing projects that include both hardware and software development phases?

2. Where does the largest amount of risk lie at the onset?

3. From the HW perspective, where does the largest amount of risk lie in a new hardware design?

4. From the SW perspective, where does the largest amount of risk lie in a new software design?

5. What can be done to mitigate against the risk in a new multi-discipline project from the onset?

6. Which set of activities has a greater effect on overall project schedule—HW or SW? Why?

7. Does working from multiple office sites affect the development schedule? If so, in what manner?

8. If the answer to question 7 is yes, is this multiple sites issue exacerbated by having functional splits between the offices, i.e. HW design in one, SW design in another? Or would a cross-functional split be theoretically better? (i.e. HW/SW in both, working on different system components in a larger design).

9. What can be done to increase the information sharing between HW and SW development teams?

10. Does the use of modern telecoms or IT technology (net meeting, video conferences, web/intranet, email, ...) preclude or reduce the need for face-to-face meetings?

11. Does the use of new social engineer tools (Wiki, Blogs, Aggregators, ...)

12. Where does the largest amount of complexity lie in the system?

13. In your opinion, which state machines are more complex—the SW state machines or the HW state machines?

14. In your opinion, what are the benefits and trade-offs of implementing parts of the design in SW vs Hardware?

15. In your opinion, at what stage during a joint project development does the SW team need to be brought on board? How much influence should the SW team have on the HW functionality?

16. What are the difficulties in moving SW designs into HW for acceleration?

17. What do you think the most significant difference is between the HW Development Flow and the Software Development Process?

18. What communication difficulties exist (if any) between HW and SW development teams?

19. What suggestions would you have for improving communication and traceability between HW and SW development teams?

20. Are there any tools or methodologies applicable to help HW and SW engineers share designs etc?

21. Whom do you think is more suited to verify and validate the system? HW designers or SW designers? Why?

22. What is the biggest lesson that SW developers can learn from HW designers?

23. What is the biggest lesson that HW designers can learn from SW developers?

24. Which types of design are more complex—DSP oriented designs, or functional oriented designs (i.e. complex protocols, state machines)?

25. In your experience, from the cross-discipline projects you have worked on, who was more rigid in following a rational development methodology—SW teams or HW teams? Especially during 'crunch' periods when development methodologies tend to go out the window?

26. In your opinion, what single aspect of the HW development flow causes the most difficulty?

27. In your opinion, what single aspect of the SW development process causes the most difficulty?

28. Can SW be tested in the absence of working HW? What are the trade-offs and limitations? Can the production SW be tested without working silicon?

29. Can HW be tested in the absence of working SW? What are the trade-offs and limitations? Can real silicon be brought up without the final SW release?

30. What role does modelling play in a large system development?

31. Blue Skies Question: If, in the morning, you were to manage two teams (that hadn't worked together before—a HW team and a SW team) and you were charged with developing a new piece of technology (50–100 person-years of effort), what practices would you first implement, and what areas of inter-working between the teams would you focus on? What would you do different the next time around?


Appendix C Historical Roots of Computational Logic



It is not worth while to try to keep history from repeating itself, for man’s character will always make the preventing of the repetitions impossible. — MARK TWAIN 1835–1910



Taken from ‘Eruption: Hitherto Unpublished Pages About Men and Events’, ed. Bernard DeVoto, Harper, 1940.

C.1 Introduction

This appendix presents a very brief discussion of the evolution and development of computing, placing the (relatively) recent specialisations into the disciplines of Hardware and Software Engineering in an historical context. As such, this appendix forms additional background reading for the content of Chapter 3: Etymology of Hardware and Software, specifically Section 3.3.

C.2 Advances in Mathematical Logic, and the Entscheidungsproblem

Charles Babbage (1791–1871), a British mathematician and mechanical engineer, is traditionally credited with having built the first programmable 'computer'. His 'Difference Engine' device was a special-purpose mechanical digital calculator designed to tabulate polynomial functions. A German engineer, J. H. Müller, conceived the idea for a 'Difference Engine' in 1786 (Swedin and Ferro, 2005), but failed to secure funding to progress it further. Babbage suggested the use of such a machine to the Royal Astronomical Society on 14 June, 1822, in a paper entitled 'Note on the application of machinery to the computation of very big mathematical tables', and with the assistance of government funding managed to construct a prototype (Swade, 2001; Williams, 1997).

George Boole (1815–1864), a British mathematician, invented what is now known as Boolean Algebra. As a mathematical tool for subsequent generations, it has become the basis of all modern computer arithmetic. During his lifetime, this work seemed to be of no practical use. It wasn't until approximately 70 years after his demise that Claude Shannon (1916–2001), an American electrical engineer and mathematician, proved that circuits with electromechanical relays could be used as a model for, and as an aide in solving, problems of Boolean algebra (Campos and Martínez-Gil, 1992). Shannon's 1937 MIT master's thesis, 'A Symbolic Analysis of Relays and Switching Circuits' (Shannon, 1937), earned him the 'Alfred Noble American Institute of American Engineers Award' in 1940. The thesis has been described by Howard Gardner (1943–), psychologist and Harvard University Professor, as 'possibly the most important, and also the most famous, master's thesis of the century' (MIT News Office, 2001, as cited in Campos and Martínez-Gil, 1992).
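Shannon's observation can be restated as a short sketch in modern code (an illustration of mine, not taken from Shannon's thesis): contacts wired in series conduct only when both are closed, behaving as a Boolean AND, while contacts wired in parallel conduct when either is closed, behaving as an OR; analysing a relay network then reduces to evaluating a Boolean expression.

    #include <stdbool.h>
    #include <stdio.h>

    /* Series contacts conduct only when every contact is closed (AND);
     * parallel contacts conduct when any contact is closed (OR). */
    static bool series(bool a, bool b)   { return a && b; }
    static bool parallel(bool a, bool b) { return a || b; }

    int main(void)
    {
        /* A small relay network: contacts a and b in series, with c in parallel. */
        for (int a = 0; a <= 1; a++)
            for (int b = 0; b <= 1; b++)
                for (int c = 0; c <= 1; c++)
                    printf("a=%d b=%d c=%d -> conducts=%d\n",
                           a, b, c, parallel(series(a, b), c));
        return 0;
    }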

German mathematician Friedrich Ludwig Gottlob Frege (1848–1925) published ‘Begriffsschrift’ (Concept Notation) in 1879. In it, he formally discussed the ideas of functions and variables (Vilkko, 1998). Inspired by the concepts in logic developed by Frege (Zalta, 2009), British mathematicians Alfred North Whitehead (1861–1947) and Bertrand Russell (1872–1970) published a 3-volume manuscript on the foundations of mathematics between 1910–1913 called ‘Principia Mathematica’ (Whitehead and Russell, 2008). It was an attempt to ground mathematics on the laws of Logic by deriving all mathematical truths from a well-defined set of axioms and inference rules in symbolic logic. The book had a notable influence on predicate logic, set theory and analytical philosophy and ‘initiated a tradition of common technical work in fields as diverse as philosophy, mathematics, linguistics, economics and computer science’ (Irvine, 2008).

Tymoczko (1998) echoes the importance of the work, stating that: ‘Russell and Whitehead called their masterpiece Principia Mathematica, deliberately echoing Newton’s Philosophiae Naturalis Principia Mathematica. Principia Mathematica was to do to the philosophy of mathematics, if not to mathematics proper, what Newton’s work did to physics and its philosophy.’

David Hilbert (1862–1943), a German mathematician, proposed a research project concerned with meta-mathematics in 1920. His objective, which eventually became known as Hilbert’s program, was to formalise all existing mathematical theories into a finite and complete set of axioms, and to prove that the axioms in this set were consistent. Hilbert posed a number of very significant questions—questions that he could not answer, but whose answers he believed to be ‘yes’:

1. Was mathematics consistent? Was it free from contradiction, such that a false statement could never be proven with the rules of maths? (Coveney and Highfield, 1995; Hilbert, 1900)

2. Was mathematics complete? Could every mathematical statement be proven or disproven strictly with the rules of maths, without recourse to methods outside the logical system in question? (Hilbert and Ackermann, 1928)

3. Was mathematics decidable? Was there a definite finite number of steps (i.e. an algorithm) that would prove or disprove a given mathematical assertion? (Church, 1936; Hilbert and Ackermann, 1928)

The third problem has become known as the Entscheidungsproblem, German for ‘decision problem’.

Kurt Gödel (1906–1978), an Austrian-American mathematician and philosopher, was inspired by the work of Hilbert. In 1930 Gödel announced results—published the following year—that were the first to prove that certain mathematical assertions can neither be proved nor disproved. In addition to this, Gödel demonstrated that it is never possible to prove that a mathematical system is itself logically self-consistent (Coveney and Highfield, 1995). Through this work, Gödel had dealt a considerable blow to Hilbert’s first question. He had proved that arithmetic was incomplete, and also that mathematics could not be proven consistent and complete.

However, he had not answered the question of decidability, the Entscheidungsproblem. Coveney and Highfield describe that:

‘Despite being demolished by Gödel, Hilbert’s ambitious plan for the foundations of mathematics gave us the crucial idea of computation and the notion of computability, originally abstract concepts that have led to the modern computer and computer science . . . ‘Given the formal definition of an algorithm as a recursive procedure for solving a given problem in a finite number of mechanical steps, it can in principle also be performed by “mindless” machines. Hilbert’s Entscheidungsproblem was nothing less than an attempt to prove that all mathematics could be obtained from the mechanical action of algorithms acting on strings of mathematical symbols . . . ’ (Coveney and Highfield, 1995)

C.3 Computing Advances

Alonzo Church had given the first negative answer (Church, 1936) to the Entscheidungsproblem in 1936. Subsequent to this, a doctoral student being supervised by Church, British mathematician Alan Turing (1912–1954), wrote a momentous 1936 paper, ‘On Computable Numbers, with an Application to the Entscheidungsproblem’ (Turing, 1937a,b). In it, Turing reformulated Gödel’s results on the limits of proof and computation, replacing Gödel’s universal arithmetic-based formal language with what are now called universal Turing machines—formal and simple devices (see Figure C.1). Turing proved that such a machine would be capable of solving any conceivable mathematical problem if it were representable as an algorithm. His ground-breaking paper proved enormously influential and had remarkable consequences for computing and computer science. The universal Turing machine was a theoretical blueprint of a modern computer (Coveney and Highfield, 1995). At the time, an actual Turing machine would have been unlikely to have practical applications, being much slower than the alternatives. Prior to the development of Turing machines, the separate concepts of hardware (the Turing machine) and software (the algorithm) had not really existed.

Figure C.1: Turing Machine (tape, read/write head, program). Taken from TEXample.net, by Ludger Humbert, retrieved 2009-02-27.

Babbage had sequences of programmed steps in his Difference Engine, but that was more concerned with automation than the actual implementation of logic. Wegner (1970) describes how:

‘The theoretical concepts underlying computer science were smothered after 1940 by a great technological explosion. This resulted in a takeover of the computer field by technologists and in a de-emphasis of mathematical, philosophical, and scientific motivations for the study of computer science. Computers, programming languages, and other computer-related concepts were valued for their usefulness as tools in problem solving, rather than as objects of interest in their own right.’

In 1945, John von Neumann published his ‘First Draft of a Report on the EDVAC’ (von Neumann, 1945), a work which was significant in ‘transforming Turing’s abstract scheme into a general design for a physical device’ (Mahoney, 2002). Indeed, Campos and Martínez-Gil (1992) describe von Neumann as ‘perhaps the first and foremost computer scientist’, claiming that ‘actual electronic computer development sprung as much from von Neumann’s work as from the theoretical work of Turing, which is why von Neumann is sometimes taken to be the major figure in the history of computer science’.

In 1956, John Bardeen, Walter Brattain and William Shockley shared the Nobel Prize for physics for the invention of the transistor, the fundamental building block of modern electronic devices (Riordan, 2007). And thus began a continual quest for smaller and more powerful platforms for computational logic. According to Mahoney (2002), the Turing/von Neumann concept of a computer has: ‘. . . remained the basic structure of the vast majority of computers. The processors have become faster and logically more complex, the memory . . . has expanded in size and speed of access, the modalities of input and output have grown more varied, all by orders of magnitude. This has happened largely through the development of the transistor and then of integrated circuits of ever larger capacity and ever more complex circuitry . . . ’

At this point in history, the differentiation between the modern concepts of hardware and software is cloudy at best. The software wasn’t all that ‘soft’. At its essence, the adjective qualifier ‘soft’ or ‘hard’ applied to the ‘-ware’ stem qualifies the ease with which the logic implementation can be changed.

Figure C.2: Timeline of Fundamental Developments in Computational Logic. (The timeline spans 1780–2009 and groups events into Logic Advances, Computing Advances and Semiconductor Advances: 1786, Müller’s Difference Engine fails to find funding; 1822, Babbage’s Difference Engine paper; 1864, Boole’s death; 1879, Frege’s Begriffsschrift; 1913, Whitehead/Russell publish the last volume of Principia Mathematica; 1920, Hilbert’s Program and the Entscheidungsproblem; 1930, Gödel’s incompleteness results; 1936, Turing introduces the concept of HW/SW; 1937, Shannon models Boolean algebra with relays; 1945, von Neumann publishes ‘First Draft of a Report on the EDVAC’; 1952, Geoffrey Dummer conceives the idea for an integrated circuit; 1956, Bardeen, Brattain and Shockley receive the Nobel Prize in Physics for inventing the transistor; 1958, Jack Kilby constructs the first integrated circuit at Texas Instruments; 1971, Intel introduces the 4004 microprocessor, a ‘computer-on-a-chip’.)


Appendix D Digital Hardware Toplevel Design



Real computer scientists despise the idea of actual hardware. Hardware has limitations, software doesn’t. It’s a real shame that Turing machines are so poor at I/O. — ANON Found in the Unix program /usr/games/fortune



D.1 Introduction

This appendix looks at the steps involved in digital hardware toplevel design, for the purpose of better understanding the tasks involved within the discipline. Specifically, this is to educate the software engineering audience as to the additional considerations that bear an impact upon the work of a digital hardware designer.

D.1.1 Digital Toplevel Design

Figure D.1 presents a typical toplevel digital hardware design flow. The chip toplevel is the uppermost functional view of the chip—in this instance, toplevel flow is used to refer to the activities involved in stitching together the entire chip, including the interconnection of the various individual logic blocks—but not including the work for the individual blocks themselves. The up-front creation of a chip toplevel allows for early size estimates, and allows the physical design work to begin ‘pipe-cleaning’ its flow early, before the block-level work is complete. It also allows for some amount of early I/O and pin-out decisions, which facilitates the early start of board and module design.

Figure D.1: Digital Hardware Toplevel Design Flow. (The flow comprises: Functional Specification; Toplevel RTL Coding; Toplevel RTL Code Review; Toplevel RTL Verification; FPGA Netlist; FPGA-based verification report; Toplevel Verification Plan; Logic Synthesis; Floor Planning; Floor Plan Review; Design For Test (DFT) Insertion; DFT/Design for Manufacturing (DFM) Review; Physical Synthesis; Clock Tree Insertion; Clock Tree Review; Route; Verification Review; Parasitic Circuit Extraction; Timing and SI Analysis; Timing and SI Review; Chip Finishing (Metal Fill, Antennae, DRC, LVS); ESD Review; Analog Hook-up Review; Tapeout Check Lists; Final Verification Review; MRS/Business Case Review; Tapeout Signoff.)

The front-end portion of the flow involves design entry, typically in the form of VHDL or Verilog. This code is reviewed and verified through the use of behavioural simulations. In hardware parlance, synthesis is the process of reading a formal description of hardware logic and turning it into an equivalent description of the design, but at a much lower level (Gerez, 2005). In some respects, this can be considered as being similar in purpose to the software activities of compiling high-level code and assembling assembly-language code—indeed, synthesizers can be considered to be silicon compilers.

High-level Synthesis maps a behavioural description at algorithmic level (HDL) to a structural description in terms of functional units, memory and interconnections. Logical Synthesis takes these functional elements and, in combination with a target cell library (the logic cells—NAND gates etc.), turns them into a netlist, a structural model description of an electrical circuit. Electronic Design Interchange Format (EDIF) is a widely used vendor-neutral file format for netlist output. Following the synthesis stages, the design is simulated again so that the results can be compared for consistency with the earlier behavioural simulation. Functional equivalence checking tools such as Synopsys Formality are used to find RTL to gate-level mismatches between behavioural simulation and synthesis.

The back-end work is more focused on taking the structural netlist design and preparing it for manufacture. Coudert (2002) describes the physical design process as ‘producing a GDS-II (a mask describing the complete layout that will be fabricated) from a gate-level description while satisfying a set of constraints’—challenges such as timing, area, routing, power consumption, etc.

Floorplanning/Placement is used to estimate the chip size and to establish initial relative positions of the various blocks in the ASIC—covering such considerations as shape, allocation of space for clock and power wiring, and location of I/O and power pads (i.e. the I/O ring, or I/O fabrics).

Design-for-Test involves the insertion of boundary-scan chain logic, which is used to ensure economical device testing. Testing is important in order to isolate faulty parts early on, to establish yield, and to screen faulty parts from reaching customers—where the repair costs increase substantially. Boundary-scan test is a method for testing devices using special logic cells added to each ASIC I/O pad. During normal operation, data passes between the pins and internal chip logic as if these boundary-scan cells were not present. In test mode, special test signals (‘vectors’) are passed into a device, and responses returned for analysis.

Whereas Logical Synthesis maps an RTL description to a gate-level netlist—a set of gates that realize the same functional design—Physical Synthesis implements the netlist on a given targeted floorplan, whilst applying further constraints in terms of the physical characteristics of the design—constraints such as power, timing, routability, manufacturability etc.

Clock Distribution Insertion ensures the clock distribution has minimum clock skew and minimum clock latencies for a given power dissipation and clock buffer area. It ensures net delays to leaf nodes are equal by balancing interconnect and buffer delays—providing synchronous clock edges to all parts of the chip.

Routing is the process of interconnecting logic cells that have been assigned positions as an output of the floorplanning/placement process. The inputs to the routing process are the position of the terminals, the netlist (which indicates the required terminal interconnections) and the available area for routing. Martin and Leibson (2008) note that Moore’s Law is ‘both a blessing and a curse’—as each new manufacturing process generation shrinks circuitry and enables twice the functionality in the same die area, doubling the circuit density ‘leads to more wire congestion, which . . . can make it extremely difficult to achieve a successfully detailed route’.

At this point, various ‘parasitic’ electrical effects are associated with the design interconnects: interconnect capacitance (from wiring and routing), inductances and resistance. These parasitics have to be accounted for in the formal verification, to ensure they do not impinge upon the design functionality—and their effects become increasingly predominant as process technologies shrink.

Circuit Extraction is the process by which an integrated circuit layout is translated back into the netlist electrical circuit. Parasitic Circuit Extraction is performed at this point for the purposes of signal delay calculation, timing analysis, circuit simulation, and signal integrity analysis.

Timing and Signal Integrity (SI) Analysis involves a variety of tasks—ensuring that timing constraints between various interconnects will be met, and that the deep sub-micron physical effects are controlled to ensure the correct functioning of the chip (for example, inter-wire coupling capacitance inducing timing issues or signal noise, or self-heating effects). With full timing information for the chip, the post-route interconnect delay information obtained at this point must be returned to the schematic, in a process called back-annotation.

Chip Finishing involves the final activities before the design is taped out. Metal Fill and Antenna are two manufacturing concerns that must be addressed during the physical design phase of an IC (Rittman, 2004). Design-Rule Checks (DRC) ensure that errors have not been introduced in assembling the logic cells and routing—checking for such problems as shorts, spacing violations, etc. Layout versus Schematic (LVS) checking involves circuit extraction from the final physical layout and comparing it to the netlist to ensure equivalence between the logical and physical design processes.


Appendix E Interview Codes



Any fool can write code that a computer can understand. Good programmers write code that humans can understand. — MARTIN FOWLER



English author and international speaker on software development.

E.1 Introduction

This appendix describes the mechanism by which axial coding was performed in-place in typeset interviews through the use of a custom LaTeX macro. This workflow automation has culminated in the release of ulqda (Griffin, 2009), a LaTeX package supporting Qualitative Data Analysis. ulqda assists in the analysis of textual data such as interview transcripts and field notes, and can generate various forms of data visualisation.

Figure E.1: QDA Workflow. (Record Interviews → Typeset in LaTeX → CSV generated through macro → Parse CSV file in Perl.)

E.2 Workflow

Figure E.1 illustrates the basic workflow I employed in turning my recorded interview material into codes for the purpose of theory emergence.

Figure E.2: Typesetting Workflow. (interview.tex is typeset with PDFLaTeX to produce interview.pdf and the qda_codes.csv file; the CSV file is parsed by Perl into codes.dot, which GraphViz (neato) renders as graph.pdf.)
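In command-line terms, one pass of this workflow looks roughly like the following (a sketch only: the Perl script name, here parse_qda.pl, is illustrative rather than the actual file name used):

    pdflatex interview.tex                         # typesets the transcripts; the macro writes qda_codes.csv
    perl parse_qda.pl qda_codes.csv > codes.dot    # convert the QDA codes into a GraphViz description
    neato -Tpdf codes.dot -o graph.pdf             # lay out and render the code graph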

In the next few subsections, I will present how I used LaTeX to record codes, and how I used GraphViz (Gansner and North, 1999) to visualise my ontology.

Figure E.3: Typesetting Feedback. (The .csv output of a LaTeX/PDFLaTeX run is converted by Perl into a .dot file, rendered by GraphViz (neato) to PDF/JPG, and fed back into the typeset document.)

E.2.1 Coding through Typesetting

I recorded all semi-structured interviews conducted using an Apple iPod with Griffin iTalk accessory. The recordings were then synchronised with my computer, converted to MP3s, and then downloaded via a playlist to the iPod. I then transcribed the interviews into LaTeX, being careful to remove potentially sensitive information such as company names, site/location names, names of people, etc. The interviews were then typeset and printed. With a highlighter and pen, I manually coded the interviews as I went along. Next, I marked up the LaTeX source (using my special macro described in subsection E.3.1) with the codes I had written on the hardcopy. This was then typeset again—the act of which caused the codes and associated quotations to be written to a comma-separated values file (CSV file, with filename suffix .csv).

What follows is an example of an interview excerpt that has been coded. The main body text is itself highlighted so that it stands out from surrounding text, and the codes are present in the margin.

IG: Do you think the social aspect of face to face is important for the project? . . .

Interviewee James: . . . A cup of coffee is really important because then what happens is that you get a real perspective. My general experience of having a functional group in one site, while I was in the other one, working for you and using video conferencing, if you really wanted to get things done you had to jump on a plane and fly over, there was nothing that could make up for sitting in a room with people to both get across the urgency and to ensure that communication among the team took place to address any of the issues. . . .

Margin codes: geographical!social, geographical!face-to-face, geographical!telecoms.
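For illustration, the excerpt above might be marked up in the LaTeX transcript source roughly as follows (a sketch only: the quotation is abbreviated here, and the code spellings follow the margin annotations above):

    \PhDThesisQDACode{geographical!social, geographical!face-to-face,
                      geographical!telecoms}
    {\ldots{} if you really wanted to get things done you had to jump on a
    plane and fly over, there was nothing that could make up for sitting in
    a room with people \ldots}

When the document is typeset, the macro appends one record per code to qda_codes.csv, each of the schematic form: <page>, <section>, <code>, "<quoted text>".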


E.2.2 Visualisation of the Ontology

Knigge and Cope (2006) describe visualisation as: ‘a broad term that refers to an array of methods that are used to provide insight into data through visual representations.’

Slocum et al. (2005), cited by Knigge and Cope (2006), state that the term has informally been ‘used to describe any recently developed novel method for displaying data.’

At this point, I was able to post-process the CSV file with some custom Perl scripts into a variety of visualisations, and to generate both Table E.1 and my axial codes for subsequent rationalisation and categorisation. My Perl scripts converted the CSV files into graph description input files for GraphViz, specifically its neato tool. These scripts also took account of the number of connections each code had with others, reflecting this in the font size of the code when creating the neato graph description input file.

Mishra (1999) notes that whilst visualisations are ‘powerful tools to communicate ideas’, it is important to understand the artistic conventions upon which the illustrations are based, and the hidden assumptions and biases within which the illustrations are grounded: ‘. . . scientific illustrations function within the matrix of science, with its hidden assumptions and biases. Quite often, these biases are invisible to us at this moment in time and thus are quite insidious in their effect. Illustrations in a given domain are very dependent on the theory they are based on. This is not a one way street—a theory helps us “see” certain facts and then illustrate them; and these illustrations, in turn, support the theory.’
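Returning to the mechanics of the visualisation, a neato graph description for two linked codes might look roughly like the following (an illustrative fragment only: the node naming, colouring and exact font sizes used by my scripts are simplified here):

    graph codes {
      overlap=false;
      "risk"   [fontsize=28];   /* heavily connected code: larger font */
      "social" [fontsize=14];   /* fewer connections: smaller font     */
      "risk" -- "social";       /* the codes co-occurred in coded excerpts */
    }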

Figure E.4 shows a neato-generated graph of the raw/open codes extracted from the CSV file. This diagram is in the form of a heat map, where both colour coding and font sizing are used to indicate the number of connections a code has to other codes. This embellishing information came later, during axial and selective coding.

Figure E.5 shows a neato-generated graph of the codes after the raw codes had been refined into axial codes, which show some degree of relationship/hierarchy. Again, it is in the form of a heat map, where both colour coding and font sizing are used to indicate the number of connections a code has to other codes. Similarly, Figure 5.8 (in Chapter 5, Subsection 5.5.2, page 113) presented the same code listing in the form of a tag cloud.

Figure E.4: Visualisation of Open Codes.

Figure E.5: Visualisation of Axial Codes.

Figure E.6: Axial Coding of Business Realities Theme.

Figure E.7: Axial Coding of Risk Theme.

Figure E.8: Axial Coding of Social Theme.

Figure E.9: Axial Coding of Techno-cultural Theme.


Table E.1: QDA Open Codes for Interviews.

risk business model culture HW vs SW geographical complexity social technical verification mindset gap competitiveness risk mitigation HW fear of risk market window changeability of SW communication software importance of face to face hardware IP mitigation geographical mitigation product specification project inception cost of changing HW fabless process information sharing experience schedule changing market requirements specialisation system understanding co-location social tools social familiarity lack of mixed design skills SQA technical language barrier cost of wrong HW fluid requirements techno-geographical split gsd methodology cost-benefit of process telecoms validation friction technical determinism aggressive schedules cost of test SW workarounds

SW influence on System Arch IP underestimates Fabless work informal is most important greatest impact SW models consumer electronics informal communication specifying HW resources verify SW without HW weight of HW risk importance of cross-functional skills FPGA value in test bench incidental is most important adherence to process SW focus workaround market change schedule impact social risk HW focus market risk test code sharing resource requirements SW is changeable limitations of SW models bring in software expertise early confidence through test fluid specifications overconfidence verification risk moving schedule design modelling HW bias focus tool problems opportunity for HW change value of SW models moving software into hardware requires hardware focus visibility false perception moving SW into HW implementation tapeout set by hardware what SW can learn from HW confidence multi-disciplinary complexity in SW control code approach to test gsd mitigation resource usage analysis

market analysis HW attitude to risk essence of SW internalising agile methods cross-functional essence of HW competitive analysis realtime missing from SW model weekly status calls perception of other discipline reluctance to change team building informal chats communications difficulties dimensioning HW early prototype HW reluctance to design change adverse algorithmic software fabless risk of new technology control software internalising IP engineers over-simplify freedom to innovate constraints time to market tools HW is fixed value of reference platforms face to face opportunity to change underestimate learning curve HW web tools unedited keep SW model in sync with HW ambition new platform testing HW without final SW organisation system resources learning curve management geographical more impact than technical IM incidental knowledge what HW can learn from SW inadequate testing confidence from experience

E.2.3 Codes

Table E.1 presents the list of low-level open codes that were used to form higher-level themes and theory. Table E.2 presents a representative excerpt from the output file generated as part of the LaTeX typesetting process, formatted as a table.

Table E.2: QDA Open Codes and Interview Excerpts (Code / Description).

technical!organisation!weekly status calls

"Definitely, you do want to have your weekly status reports and calls and. . . You do want to be trying to drill down into specific issues to see what’s going on, and you definitely want to be trying to get to decisions. . . And get decisions made. . . At the right pace. You don’t want to be knee-jerking but at the same time you don’t want to be procrastinating. . . But you do want to be moving the decisions fairly lively."

business model!competitiveness!market window, geographical, social

"Yeah. It (working from multiple sites) slows things up. Definitely. Slows it up big-time. It slows it up big-time with small teams because if you are in a small company with small teams, small teams move fast. If you are in a big company where you have more time to do it, and more structured, then things probably get done better in terms of quality, but they take a longer time to get there. So there are the tradeoffs. There is a happy medium somewhere. I’d prefer . . . "

verification!value of reference platforms, verification!value of SW models

"But even across a whole product’s lifetime, and particularly in the IP business, where at the end of the day you won’t have copies of all the hardware that that IP will ever run on, then you definitely need it need it in that case."

business model!IP!value in test bench

"If you were selling into the hardware intellectual property, you know, that value is in the test bench."

specialisation, business model!specialisation, risk!lack of mixed design skills

"So it is very difficult even from . . . Just because of the pressures of specialisation I guess causes you to actually . . . To. . . Too. . . To not have the. . . It’s not down to interest, it’s often down to time, or the scope to actually develop the broad skills necessary to do both."


geographical!mitigation!social familiarity, geographical!geographical mitigation!experience

"Having said that, the more experience there is and the better kind-of, you know, even if it is only at a social level that people know each other in the office that you can overcome huge chunks of that."

product specification, risk!HW fear of risk

"Yes, hardware guys see vagueness as a risk. I would see, being the marketing guy who helps spec-out the chip, that nearly 80% of the questions back from the team will be from the hardware team. As you go over to more mixed-signal/RF, they get more and more specific. It is not only digital logic then. As you go more towards mixed-signal and RF, the guys get even more paranoid concerning risk. THat is an example of the fact that the more and more further away from the software, let’s say into mixed-signal and RF, the more questions and the more specific the parameters of functionality are being requested from marketing by the RF team..."

geographical!geographical mitigation!co-location!social familiarity

"Ideally, you would have the hardware team and software team in the one place, I think. But if you don’t have that, that is not necessarily a bad thing. Again, in our case, with the COMPANY NAME REMOVED hardware team in Dublin and the software team in Shannon, it is virtually as good as having the . . . There wouldn’t be 5% in the difference between having everyone in the one office."

risk!hardware, culture!HW fear of risk

"if they can’t work off a 5-page hard specification with timing charts on it then they are . . . You know it is very difficult for them."

risk!HW fear of risk, culture!HW vs SW

"The training of hardware engineers is to get it right first time. So they are much more conservative in terms of their creativity in building the design. The focus is on really keeping it simple, and getting it working. That’s it."

culture!HW vs SW!adherence to process

"Hardware guys are. . . "

risk!social risk

"Yeah. and it is probably because we just don’t know each other and it takes that. . . To me, it literally takes a couple of years to . . . "

risk!changeability of SW, business model!competitiveness

"It is particularly prominant in consumer electronics software. and for good reason, like you know, the Philips hard disk recorder that I just bought there recently. The user interface on it is absolute rubbish, you see the reviews of it on the net and it is well known that one of the flaws of the product is the user interface and its lack of features. . . And all the features that it is lacking can and could be done on a firmware upgrade. . . They could be done by software changes, so you know you have to say to yourself Philips are probably following some kind of rigid software design methodology and it is actually not getting them what they need at the end of the day."

technical!tool problems!technical determinism

"I suppose it is the fact that it takes so long to run the tools over it, even when they are developing a new algorithm and they are getting going on the RTL test benches or even on the FPGA, they are never happy until eventually they have gate level timing done on the critical things, and verified, you know, that it does meet timing. . . *Laughs*. . . And they don’t know that until often 3-4 months after doing the original design. That now makes them very slow to commit to what you might call off the wall design schemes, or what they might see as off the wall design schemes, and I mean what can you do about that really. I suppose it is just wait for the tools to . . . "

geographical!geographical mitigation, social!technical language barrier, culture!HW vs SW

"Some of the things I thought of that would be good for improving the communciations. . . One thing that has always worked well in Shannon was that we have. . . HW/Applications Guy was co-located with the software team and he is able to speak fluent hardware. . . He can act like a translator."

risk

"A new device in the morning . . . Verification, verification, verification."

culture!HW vs SW!mindset gap, culture!mindset gap

"I think the language is completely different and a big problem in terms of understanding. I see hardware teams in talking with software teams. Software guys would go through bringing concepts down to a very low level, but the majority of hardware guys still completely glaze over. They just don’t understand, it is a completely different mindset, a different way of thinking. Hardware guys are programmed to be logic driven."

validation!system understanding!mindset gap, culture!HW vs SW!SW focus

"But as you go up towards what i think of as the front-end of the product you meet the customer. . . It is actually software guys more and more that. . . "

social!social tools, social!information sharing

"Funnily enough I think there are a couple of things that you can do. One is that you can get into a lot of these collaborative tools like wikis and blogs. . . "

business model!competitiveness, business model!time to market, technical!project inception

"In fact, one of the things I probably would do differently is that I probably wouldn’t move terribly fast at the start - there is definite a tendency at the start of these big projects that you wanna. . . And there has to be a sense of urgency about getting something done, because otherwise you are into analysis paralysis and nothing will actually happen for 3 to 6 months. . . But on the other hand, you definitely want to have a feel for what your competitors are up to. You gotta bear in mind that this project that you are doing, first of all, it in and of itself will take somewhere between. . . Somewhere between 9 months and 2.5 years to actually get done. So you have to try to figure out what’s the state of play going to be at that stage, in 2.5 to 3 years time."

opportunity for HW change, weight of HW risk

"When the chip comes back, there is the psychology of “we can work around it, it works, there is too much risk to tapeout another chip, because that might introduce new bugs.”"

culture!mindset gap

"Okay, I know I got that that far. There may be a bug in there. But there is no point waiting either because I want to start in this and anyway the software guys might solve a few bugs in the meantime. That’s the way the digital hardware guys is thinking. Whereas software guys are thinking sequentially, because the code is sequential. Whereas hardware is actually all concurrent. When you are writing Verilog, it is all about concurrency. Everything starts at the same time, so you have to think about all these interactions."

culture!HW vs SW!friction, business model!market window, business model, culture!HW vs SW

"I think it is certain that because it (a last minute feature request for the software usually) can be done, you have to do it. That is why feature creep happens. We are in a competitive world. If you won’t do it, your competitors will do it. And because the facility is there (in software), it comes with the territory. One of the reasons why software is powerful is because you can change, and because you can, you have to."

risk!changeability of SW, risk!cost of changing HW, culture!HW vs SW, technical!HW vs SW

"So I think this difference in language, where often the specifications in software are often much softer, much more vague than they are in hardware. . . Because there is no opportunity in hardware for. . . Perceived opportunity for changes. Changes are too expensive later in the project timeline, so you really spend a lot more time at requirements, getting agreement on your functional description, than you do in software. Even in hardware where I’ve seen very simple parts, there is often a team based formation - even ad-hoc teams to address and review entities, that I have not seen in software unless it is structured. The complexity in software is such that it often warrants much more time to go in and address a simple component."

complexity, business model!market window, business model, competitiveness

"A necessity to compete means you need to bring in more areas of complexity, and that means you have to optimize across disciplines . . . I should have mentioned the software / hardware thing. Of course, in the old days people used to bung together a hardware platform and deliver it to the software guys and try to go and explain how you go power it up and write some code for it. If those sequential operations took a bit of time, so be it. Nobody else was doing any better. That was just the way it was. There was no point in writing a bit of code for something that still required six people to get it into production."

geographical!face to face

"You always need face to face meetings. You can’t get rid of face to face . . . You can’t get rid of face to face and a whiteboard. Because the old face to face / whiteboard chit chat is such that you always start up on one tangent and you wander and you actualyl move faster and learn quicker. And you actually solve things a lot faster. In a room. With three people. Because your brain is triggered. You don’t trigger your own brain when you are just thinking to yourself. So yeah... "

risk!verification

"So in terms of risk in this type of stuff, that’s where it is. Because they don’t fully understand . . . they might understand the hardware in most respects, but not the application; whereas the software guys vice-versa. There will be a verification hole somewhere and you won’t get caught out until it comes back and somebody will try something and go “Oh Yeah! 10,000 packets and one byte is missing? Right yeah, never did that test, or never put it into that mode.” That’s what I think is the biggest risk."

risk

"You try to dimension a little bit large, you go too large and the product is just a dog, it’s uncompetitive, so trying to hit that is the hardest thing, whether it is square millimeters on a fabless semi chip or whether it is. . . You know. . . Processor choice on a product, that always seems to me the big risk anyway. . . "

risk!HW vs SW, culture!HW vs SW

"the focus on getting it right. So the hardware engineer is not finished until he has proven, usually at simulation, that the algorithm work, that there are no instabilities, that he has covered all of the possible requirements associated with that."


business model!fabless, social!communication!IP underestimates Fabless work

"And by the time they are going into SoC, you know, the people doing the SoC for instance. . . The Sharps of this world who are putting a whole heap of technologies around an ARM processor as media processor, for instance. . . They are very willing to do IP. But they like to see that technology out in the wild for the best part of 10 to 15 years. . . Mass production. They don’t really like dealing with a technology that they are not familiar with. They don’t like risk. "

business model!market window

"It is the old market window problem - you have to get out there with the right type of product at the right time."

risk!SW is changeable

"Whereas in the software, one of the problems is that there isn’t the same formality. In software, we generally teach people to build larger and larger systems. Generally there is a much more creative element to it (software). And the focus isn’t on getting it right, the focus is on building it."

incidental is most important, geographical, social

"But I think what you would then be missing is the ability to have those conversations in the canteen. . . people getting to know each other, the relationships, and also people talking about what they are working on. There would be no reason for two hardware guys working on . . . say for instance something was going wrong with a hardware block, and two hardware guys were working on it for a week, and they are in the canteen talking about. You are having your cup of coffee there, you are now well aware that this is going on. If you subsequently have a software problem in this area, you will be far quicker to ring them and ask about what they were talking about a couple of weeks ago. There is this sort of indicental knowledge which is the key."

specialisation, risk!lack of mixed design skills

"It is still very difficult to get people with a balanced hardware / software profile. It is usually skewed strongly one way or the other."

culture!ambition, technical!schedule, culture!confidence

"The software guys are much more ambitious. But they are also trying to get to an end-point much quicker than the hardware guys are. The hardware guys are often trained that the only way to do it is to use a hardware state machine in this situation, whereas the software guys have a number of tools they can actually use. They may have actually a very, very simple and soft implementation of the state machine, or they may not even see a state machine there whereas the hardware guys are trained, by purely mentoring, that this is the way they should approach it."

risk!schedule!tapeout set by hardware

"The hardware really dominates it at the end of the day. I mean, if the tapeout date is set, or whatever, they’ll go with that."

complexity

"Okay, the mathematics behind it is very complex, but implementing them is usually relatively straightforward, debugging them is usually very straightforward. Where you always find complexity is in the. . . What you might call the control software."

HW vs SW!mindset gap, technical!complexity

"In most of the systems I have seen, there have been significant more hardware state machines where there should have been significantly more software state machines. So often a lot of software is not done in state machine design. In terms of complexity, the software state machines that I have seen are often significantly harder than the hardware state machines - largely as a (result of a) lot of conditioning on the transitions. "

risk!product specification, risk!verification

"From the design perspective, the two major issues. . . the biggest issues I can think of, for the hardware anyway, are that you build something that doesn’t work; or else you build the wrong thing."

technical determinism

"I suppose one big difference is that they (hardware designers) probably write more (code) before they test. They probably implement a lot more before they run a simulation because the simulation step is a much harder step than compile times."

culture!market change, risk!product specification, risk!moving schedule

"It may be felt that the marketing and business development fellows don’t know what they are talking about, and there is a sociopolitical aspect of trying to get these changes adopted and into the development cycle. So that to me is a big risk, not only from a commercial side but also from a project management side."

verification!design modelling

"A lot of those circuits are almost physical implementations of mathematical functions, so you can use Matlab to model out your maths there, or out your mathematical functions before you lay them out, that is very valuable. . . You also have a whole heap of modelling to do on, say, complex digital circuits, like again say you go back to a processor, things like how much RAM to put down, how much MIPS you need, is your bus bandwidth wide enough, how deep do you need to make your FIFOs to keep . . . A lot of those models are simply excel spreadsheets, but they are very important up front."

geographical

"The ideal situation would be small teams on the same site, where the experience of everyone was there; so that both could udnerstand what the other is doing. You only get that when you are right beside each other."

verification!SW models!limitations of SW models!realtime missing from SW model

"You miss your realtime behaviour, and that may or may not be an issue, depending on the design you are doing. . . Lot of designs that’s actually not an issue at all, most of the software doesn’t know anything about the realtime. . . "

risk!complexity!experience

"Yeah, there was a combination of the two. One was a naïvety in the team as to even the existence of such things as corner cases, and there’s always a tendency among younger engineers to ignore the corners and to hope that they won’t actually occur. *Laughs*. A bit of ostrich, and i think it is probably because the design is. . . Something to do with the human mind, the design complexity is too large, it can’t be actually taken into the brain, so it’s simplified. . . Whereas you know more experienced engineers maybe are capable of holding a bigger design."

geographical!importance of face to face

"Which if you are trying to moderate between, it might be customers and the development teams, or it could be between different partners or whatever, if there is a standoff, face-to-face I have seen it generally gets resolved in a couple of hours. . . People compromise much quicker on decisions. I have not seen that in any other of the technologies, including video conferencing. Video conferencing is them and us., so whoever is on each site. Am just after coming from a meeting where face to face . . . it could never have been resolved, I don’t think it would ever have been resolved (without face to face)."

gsd!competitiveness

"I think naturally that teams across multiple sites . . . whether they be software teams distributed across sites, or multi-disciplinary teams . . . I think it does’t really matter whether they have software or hardware or marketing backgrounds . . . it is just human nature that teams compete with teams in other sites. The teams can become clique-y within the sites."

complexity, risk!risk mitigation, culture!HW vs SW

"Is that why software is complex then, because it is left to tie up all the loose ends?"

social!technical language barrier, culture!HW vs SW

"There is a language barrier, there is a language barrier, you know. . . Hardware engineers need to speak a little bit of software, and . . . A classic one that occurs quite a lot is that hardware engineers assume that all loops run in parallel."

geographical

"I think it impacts the schedule - it slows it down."

geographical

"relationships have been built, and personal relationships are very important."

complexity

"while I wouldn’t know an awful lot about it either, I perceive that the most complexity is in the projects I’ve been working on is in the analogue domain, going into to CMOS RF, you know, I think that that’s kind of a black art."

risk!mitigation!geographical mitigation, geographical!informal communication


E.3 Source Code

E.3.1 Coding Macro

The following macro was used to instrument the typeset interview text with codes, as part of data analysis. It was constructed with help from Peter Flynn and Marc van Dongen of the Irish TeX and LaTeX In-Print Community. The helper macro \PhDThesisListIt recursively walks the comma-separated list of codes, writing one CSV record (page, section, code, quotation) and one index entry per code, while \PhDThesisQDACode highlights the quoted text and places the codes in the margin.

    %
    % Usage: \PhDThesisQDACode{code 1, code 2, code 3}{Common Text}
    %
    \newwrite\qdaCodeFile
    \immediate\openout\qdaCodeFile=qda_codes.csv
    \immediate\write\qdaCodeFile{Page Number, Section, Code, Text}

    \makeatletter                          % Allow @ in commands.
    \def\PhDThesisListIt#1[#2,{%
      \write\qdaCodeFile{\thepage, \thesection, #2, "#1"}% Output current item.
      \index{#2}%
      \@ifnextchar]%                       Look ahead one token.
        {\eatthesquarebracket}%            End of list.
        {\PhDThesisListIt{#1}[}%           Process rest of list.
    }
    \def\eatthesquarebracket]{}            % Gobble the square bracket.
    \makeatother                           % Disallow @ in commands.

    \newcommand{\PhDThesisQDAHighlight}[2]{%
      \hl{\protect\ul{#2}}\marginpar{\tiny\hl{#1}}}

    \newcommand{\PhDThesisQDACode}[2]{%
      \PhDThesisQDAHighlight{#1}{#2}%
      \typeout{QDA: Coding "#2" as "#1"}%
      \PhDThesisListIt{#2}[#1,]}

This macro has subsequently been expanded into a qualitative data analysis support package called ulqda, which is now available on the Comprehensive TeX Archive Network (CTAN).

E.3.2 GraphViz Graph Description Input File Generation

The following Perl script was used to convert the CSV codes output to a GraphViz graph description input file, suitable for graphing with the neato tool.

    #!/usr/bin/perl
    # AUTHOR: Ivan Griffin (ivan.griffin@ul.ie)
    # DESCRIPTION: parses CSV file containing QDA codes and
    #              generates GraphViz input files

    use Getopt::Std;
    use Digest::SHA1 qw(sha1 sha1_hex sha1_base64);

    sub Display_Usage()
    {
        print "Usage: $0 \n";
        exit;
    }

    %options = ();
    getopts("ltgGh", \%options) or &Display_Usage();
    &Display_Usage() if $options{'h'};

    if ($options{'l'}) {
        print ...
    }
    ...
        chomp;
        ($page, $section, $code_list, $text) = split(/\,/, $_, 4);
        # print "DEBUG: >> $code_list ..."
        ...
        ... "Node$digest [label=\"\"]\n";
        ...
        $previous_code = $code;
    ...
    print ...
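For completeness, the following is a minimal, self-contained sketch (not the original script) of the CSV-to-neato conversion described in Section E.2.2. The grouping of codes by shared quotation, the font-size scaling, and the file names qda_codes.csv and codes.dot are illustrative assumptions rather than details of the original implementation.

    #!/usr/bin/perl
    #
    # Illustrative sketch only: read qda_codes.csv (Page Number, Section, Code,
    # Text) as written by \PhDThesisQDACode, treat codes applied to the same
    # quotation as connected, and write a GraphViz 'neato' description in which
    # a code's font size grows with its connectivity.
    #
    use strict;
    use warnings;

    my %codes_by_text;   # quotation text   => list of codes applied to it
    my %connections;     # code             => number of distinct codes linked to it
    my %edges;           # "codeA -- codeB" => seen flag

    open(my $csv, '<', 'qda_codes.csv') or die "cannot open qda_codes.csv: $!";
    <$csv>;                                    # skip the header row
    while (my $line = <$csv>) {
        chomp $line;
        my ($page, $section, $code, $text) = split(/,/, $line, 4);
        next unless defined $text;
        $connections{$code} //= 0;             # ensure isolated codes still appear
        push @{ $codes_by_text{$text} }, $code;
    }
    close $csv;

    # Codes attached to the same quotation are considered connected.
    for my $codes (values %codes_by_text) {
        my @c = @$codes;
        for my $i (0 .. $#c) {
            for my $j ($i + 1 .. $#c) {
                my ($x, $y) = sort ($c[$i], $c[$j]);
                next if $edges{"$x -- $y"}++;  # count each pair only once
                $connections{$x}++;
                $connections{$y}++;
            }
        }
    }

    open(my $dot, '>', 'codes.dot') or die "cannot write codes.dot: $!";
    print $dot "graph codes {\n  overlap=false;\n";
    for my $code (sort keys %connections) {
        my $fontsize = 10 + 2 * $connections{$code};   # font size reflects connectivity
        printf $dot "  \"%s\" [fontsize=%d];\n", $code, $fontsize;
    }
    for my $edge (sort keys %edges) {
        printf $dot "  \"%s\" -- \"%s\";\n", split(/ -- /, $edge);
    }
    print $dot "}\n";
    close $dot;

The original script, by contrast, derived its node identifiers from a Digest::SHA1 hash of each code (the Node$digest fragments visible above) and appears to have supported several output modes through its command-line options; those details are omitted from the sketch for brevity.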