Around 1998 Free Software emerged from a happily subterranean and obscure existence stretching back roughly twenty years. At the very pinnacle of the dotcom boom, Free Software suddenly populated the pages of mainstream business journals, entered the strategy and planning discussions of executives, confounded the radar of political leaders and regulators around the globe, and permeated the consciousness of a generation of technophile teenagers growing up in the 1990s wondering how people ever lived without e-mail. Free Software appeared to be something shocking, something that economic history suggested could never exist: a practice of creating software – good software – that was privately owned, but freely and publicly accessible. Free Software, as its ambiguous moniker suggests, is both free from constraints and free of charge. Such characteristics seem to violate economic logic and the principles of private ownership and individual autonomy, yet there are tens of millions of people creating this software and hundreds of millions more using it. Why? Why now? And most important: how?
Free Software is a set of practices for the distributed collaborative creation of software source code that is then made openly and freely available through a clever, unconventional use of copyright law.1 But it is much more: Free Software exemplifies a considerable reorientation of knowledge and power in contemporary society – a reorientation of power with respect to the creation, dissemination, and authorization of knowledge in the era of the Internet. This book is about the cultural significance of Free Software, and by cultural I mean much more than the exotic behavioral or sartorial traits of software programmers, fascinating though they be. By culture, I mean an ongoing experimental system, a space of modification and modulation, of figuring out and testing; culture is an experiment that is hard to keep an eye on, one that changes quickly and sometimes starkly. Culture as an experimental system crosses economies and governments, networked social spheres, and the infrastructure of knowledge and power within which our world functions today – or fails to. Free Software, as a cultural practice, weaves together a surprising range of places, objects, and people; it contains patterns, thresholds, and repetitions that are not simple or immediately obvious, either to the geeks who make Free Software or to those who want to understand it. It is my goal in this book to reveal some of those complex patterns and thresholds, both historically and anthropologically, and to explain not just what Free Software is but also how it has emerged in the recent past and will continue to change in the near future.2
The significance of Free Software extends far beyond the arcane and detailed technical practices of software programmers and "geeks" (as I refer to them herein). Since about 1998, the practices and ideas of Free Software have extended into new realms of life and creativity: from software to music and film to science, engineering, and education; from national politics of intellectual property to global debates about civil society; from UNIX to Mac OS X and Windows; from medical records and databases to international disease monitoring and synthetic biology; from Open Source to open access. Free Software is no longer only about software – it exemplifies a more general reorientation of power and knowledge.
The terms Free Software and Open Source don't quite capture the extent of this reorientation or their own cultural significance. They refer, quite narrowly, to the practice of creating software – an activity many people consider to be quite far from their experience. However, creating Free Software is more than that: it includes a unique combination of more familiar practices that range from creating and policing intellectual property to arguing about the meaning of "openness" to organizing and coordinating people and machines across locales and time zones. Taken together, these practices make Free Software distinct, significant, and meaningful both to those who create it and to those who take the time to understand how it comes into being.
In order to analyze and illustrate the more general cultural significance of Free Software and its consequences, I introduce the concept of a "recursive public." A recursive public is a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives. Free Software is one instance of this concept, both as it has emerged in the recent past and as it undergoes transformation and differentiation in the near future. There are other instances, including those that emerge from the practices of Free Software, such as Creative Commons, the Connexions project, and the Open Access movement in science. These latter instances may or may not be Free Software, or even "software" projects per se, but they are connected through the same practices, and what makes them significant is that they may also be "recursive publics" in the sense I explore in this book. Recursive publics, and publics generally, differ from interest groups, corporations, unions, professions, churches, and other forms of organization because of their focus on the radical technological modifiability of their own terms of existence. In any public there inevitably arises a moment when the question of how things are said, who controls the means of communication, or whether each and everyone is being properly heard becomes an issue. A legitimate public sphere is one that gives outsiders a way in: they may or may not be heard, but they do not have to appeal to any authority (inside or outside the organization) in order to have a voice.3 Such publics are not inherently modifiable, but are made so – and maintained – through the practices of participants.
It is possible for Free Software as we know it to cease to be public, or to become just one more settled form of power, but my focus is on the recent past and near future of something that is (for the time being) public in a radical and novel way.
The concept of a recursive public is not meant to apply to any and every instance of a public – it is not a replacement for the concept of a "public sphere" – but is intended rather to give readers a specific and detailed sense of the non-obvious, but persistent threads that form the warp and weft of Free Software and to analyze similar and related projects that continue to emerge from it as novel and unprecedented forms of publicity and political action.
At first glance, the thread tying these projects together seems to be the Internet. And indeed, the history and cultural significance of Free Software has been intricately mixed up with that of the Internet over the last thirty years. The Internet is a unique platform – an environment or an infrastructure – for Free Software. But the Internet looks the way it does because of Free Software. Free Software and the Internet are related like figure and ground or like system and environment; neither is stable or unchanging in and of itself, and there are a number of practical, technical, and historical places where the two are essentially indistinguishable. The Internet is not itself a recursive public, but it is something vitally important to that public, something about which such publics care deeply and act to preserve. Throughout this book, I will return to these three phenomena: the Internet, a heterogeneous and diverse, though singular, infrastructure of technologies and uses; Free Software, a very specific set of technical, legal, and social practices that now require the Internet; and recursive publics, an analytic concept intended to clarify the relation of the first two.
Both the Internet and Free Software are historically specific, that is, not just any old new media or information technology. But the Internet is many, many specific things to many, many specific people. As one reviewer of an early manuscript version of this book noted, "For most people, the Internet is porn, stock quotes, Al Jazeera clips of executions, Skype, seeing pictures of the grandkids, porn, never having to buy another encyclopedia, MySpace, e-mail, online housing listings, Amazon, Googling potential romantic interests, etc. etc." It is impossible to explain all of these things; the meaning and significance of the proliferation of digital pornography is a very different concern than that of the fall of the print encyclopedia and the rise of Wikipedia. Yet certain underlying practices relate these diverse phenomena to one another and help explain why they have occurred at this time and in this technical, legal, and social context. By looking carefully at Free Software and its modulations, I suggest, one can come to a better understanding of the changes affecting pornography, Wikipedia, stock quotes, and many other wonderful and terrifying things.4
Two Bits has three parts. Part I of this book introduces the reader to the concept of recursive publics by exploring the lives, works, and discussions of an international community of geeks brought together by their shared interest in the Internet. Chapter 1 asks, in an ethnographic voice, "Why do geeks associate with one another?" The answer – told via the story of Napster in 2000 and the standards process at the heart of the Internet – is that they are making a recursive public. Chapter 2 explores the words and attitudes of geeks more closely, focusing on the strange stories they tell (about the Protestant Reformation, about their practical everyday polymathy, about progress and enlightenment), stories that make sense of contemporary political economy in sometimes surprising ways. Central to part I is an explication of the ways in which geeks argue about technology but also argue with and through it, by building, modifying, and maintaining the very software, networks, and legal tools within which and by which they associate with one another. It is meant to give the reader a kind of visceral sense of why certain arrangements of technology, organization, and law – specifically that of the Internet and Free Software – are so vitally important to these geeks.
Part II takes a step back from ethnographic engagement to ask, "What is Free Software and why has it emerged at this point in history?" Part II is a historically detailed portrait of the emergence of Free Software beginning in 1998–99 and stretching back in time as far as the late 1950s; it recapitulates part I by examining Free Software as an exemplar of a recursive public. The five chapters in part II tell a coherent historical story, but each is focused on a separate component of Free Software. The stories in these chapters help distinguish the figure of Free Software from the ground of the Internet. The diversity of technical practices, economic concerns, information technologies, and legal and organizational practices is huge, and these five chapters distinguish and describe the specific practices in their historical contexts and settings: practices of proselytizing and arguing, of sharing, porting, and forking source code, of conceptualizing openness and open systems, of creating Free Software copyright, and of coordinating people and source code.
Part III returns to ethnographic engagement, analyzing two related projects inspired by Free Software which modulate one or more of the five components discussed in part II, that is, which take the practices as developed in Free Software and experiment with making something new and different. The two projects are Creative Commons, a nonprofit organization that creates copyright licenses, and Connexions, a project to develop an online scholarly textbook commons. By tracing the modulations of practices in detail, I ask, "Are these projects still Free Software?" and "Are these projects still recursive publics?" The answer to the first question reveals how Free Software's flexible practices are influencing specific forms of practice far from software programming, while the answer to the second question helps explain how Free Software, Creative Commons, Connexions, and projects like them are all related, strategic responses to the reorientation of power and knowledge. The conclusion raises a series of questions intended to help scholars looking at related phenomena.
Recursive Publics and the Reorientation of Power and Knowledge
Governance and control of the creation and dissemination of knowledge have changed considerably in the context of the Internet over the last thirty years. Nearly all kinds of media are easier to produce, publish, circulate, modify, mash-up, remix, or reuse. The number of such creations, circulations, and borrowings has exploded, and the tools of knowledge creation and circulation – software and networks – have also become more and more pervasively available. The results have also been explosive and include anxieties about validity, quality, ownership and control, moral panics galore, and new concerns about the shape and legitimacy of global "intellectual property" systems. All of these concerns amount to a reorientation of knowledge and power that is incomplete and emergent, and whose implications reach directly into the heart of the legitimacy, certainty, reliability, and especially the finality and temporality of the knowledge and infrastructures we collectively create. It is a reorientation at once more specific and more general than the grand diagnostic claims of an "information" or "network" society, or the rise of knowledge work or knowledge-based economies; it is more specific because it concerns precise and detailed technical and legal practices, more general because it is a cultural reorientation, not only an economic or legal one.
Free Software exemplifies this reorientation; it is not simply a technical pursuit but also the creation of a "public," a collective that asserts itself as a check on other constituted forms of power – like states, the church, and corporations – but which remains independent of these domains of power.5 Free Software is a response to this reorientation that has resulted in a novel form of democratic political action, a means by which publics can be created and maintained in forms not at all familiar to us from the past. Free Software is a public of a particular kind: a recursive public. Recursive publics are publics concerned with the ability to build, control, modify, and maintain the infrastructure that allows them to come into being in the first place and which, in turn, constitutes their everyday practical commitments and the identities of the participants as creative and autonomous individuals. In the cases explored herein, that specific infrastructure includes the creation of the Internet itself, as well as its associated tools and structures, such as Usenet, e-mail, the World Wide Web (www), UNIX and UNIX-derived operating systems, protocols, standards, and standards processes. For the last thirty years, the Internet has been the subject of a contest in which Free Software has been both a central combatant and an important architect.
By calling Free Software a recursive public, I am doing two things: first, I am drawing attention to the democratic and political significance of Free Software and the Internet; and second, I am suggesting that our current understanding (both academic and colloquial) of what counts as a self-governing public, or even as "the public," is radically inadequate to understanding the contemporary reorientation of knowledge and power. The first case is easy to make: it is obvious that there is something political about Free Software, but most casual observers assume, erroneously, that it is simply an ideological stance and that it is anti-intellectual property or technolibertarian. I hope to show how geeks do not start with ideologies, but instead come to them through their involvement in the practices of creating Free Software and its derivatives. To be sure, there are ideologues aplenty, but there are far more people who start out thinking of themselves as libertarians or liberators, but who become something quite different through their participation in Free Software.
The second case is more complex: why another contribution to the debate about the public and public spheres? There are two reasons I have found it necessary to invent, and to attempt to make precise, the concept of a recursive public: the first is to signal the need to include within the spectrum of political activity the creation, modification, and maintenance of software, networks, and legal documents. Coding, hacking, patching, sharing, compiling, and modifying of software are forms of political action that now routinely accompany familiar political forms of expression like free speech, assembly, petition, and a free press. Such activities are expressive in ways that conventional political theory and social science do not recognize: they can both express and "implement" ideas about the social and moral order of society. Software and networks can express ideas in the conventional written sense as well as create (express) infrastructures that allow ideas to circulate in novel and unexpected ways. At an analytic level, the concept of a recursive public is a way of insisting on the importance to public debate of the unruly technical materiality of a political order, not just the embodied discourse (however material) about that order. Throughout this book, I raise the question of how Free Software and the Internet are themselves a public, as well as what that public actually makes, builds, and maintains.
The second reason I use the concept of a recursive public is that conventional publics have been described as "self-grounding," as constituted only through discourse in the conventional sense of speech, writing, and assembly.6 Recursive publics are "recursive" not only because of the "self-grounding" of commitments and identities but also because they are concerned with the depth or strata of this self-grounding: the layers of technical and legal infrastructure which are necessary for, say, the Internet to exist as the infrastructure of a public. Every act of self-grounding that constitutes a public relies in turn on the existence of a medium or ground through which communication is possible – whether face-to-face speech, epistolary communication, or net-based assembly – and recursive publics relentlessly question the status of these media, suggesting that they, too, must be independent for a public to be authentic. At each of these layers, technical and legal and organizational decisions can affect whether or not the infrastructure will allow, or even ensure, the continued existence of the recursive publics that are concerned with it. Recursive publics' independence from power is not absolute; it is provisional and structured in response to the historically constituted layering of power and control within the infrastructures of computing and communication.
For instance, a very important aspect of the contemporary Internet, and one that has been fiercely disputed (recently under the banner of "net neutrality"), is its singularity: there is only one Internet. This was not an inevitable or a technically determined outcome, but the result of a contest in which a series of decisions were made about layers ranging from the very basic physical configuration of the Internet (packet-switched networks and routing systems indifferent to data types), to the standards and protocols that make it work (e.g., TCP/IP or DNS), to the applications that run on it (e-mail, www, ssh). The outcome of these decisions has been to privilege the singularity of the Internet and to champion its standardization, rather than to promote its fragmentation into multiple incompatible networks. These same kinds of decisions are routinely discussed, weighed, and programmed in the activity of various Free Software projects, as well as its derivatives. They are, I claim, decisions embedded in imaginations of order that are simultaneously moral and technical.
By contrast, governments, corporations, nongovernmental organizations (NGOs), and other institutions have plenty of reasons – profit, security, control – to seek to fragment the Internet. But it is the check on this power provided by recursive publics and especially the practices that now make up Free Software that has kept the Internet whole to date. It is a check on power that is by no means absolute, but is nonetheless rigorously and technically concerned with its legitimacy and independence not only from state-based forms of power and control, but from corporate, commercial, and nongovernmental power as well. To the extent that the Internet is public and extensible (including the capability of creating private subnetworks), it is because of the practices discussed herein and their culmination in a recursive public.
Recursive publics respond to governance by directly engaging in, maintaining, and often modifying the infrastructure they seek, as a public, to inhabit and extend – and not only by offering opinions or protesting decisions, as conventional publics do (in most theories of the public sphere). Recursive publics seek to create what might be understood, enigmatically, as a constantly "self-leveling" level playing field. And it is in the attempt to make the playing field self-leveling that they confront and resist forms of power and control that seek to level it to the advantage of one or another large constituency: state, government, corporation, profession. It is important to understand that geeks do not simply want to level the playing field to their advantage – they have no affinity or identity as such. Instead, they wish to devise ways to give the playing field a certain kind of agency, effected through the agency of many different humans, but checked by its technical and legal structure and openness. Geeks do not wish to compete qua capitalists or entrepreneurs unless they can assure themselves that (qua public actors) they can compete fairly. It is an ethic of justice shot through with an aesthetic of technical elegance and legal cleverness.
The fact that recursive publics respond in this way – through direct engagement and modification – is a key aspect of the reorientation of power and knowledge that Free Software exemplifies. They are reconstituting the relationship between liberty and knowledge in a technically and historically specific context. Geeks create and modify and argue about licenses and source code and protocols and standards and revision control and ideologies of freedom and pragmatism not simply because these things are inherently or universally important, but because they concern the relationship of governance to the freedom of expression and nature of consent. Source code and copyright licenses, revision control and mailing lists are the pamphlets, coffeehouses, and salons of the twenty-first century: Tischgesellschaften become Schreibtischgesellschaften.7
The "reorientation of power and knowledge" has two key aspects that are part of the concept of recursive publics: availability and modifiability (or adaptability). Availability is a broad, diffuse, and familiar issue. It includes things like transparency, open governance or transparent organization, secrecy and freedom of information, and open access in science. Availability includes the business-school theories of "disintermediation" and "transparency and accountability" and the spread of "audit culture" and so-called neoliberal regimes of governance; it is just as often the subject of suspicion as it is a kind of moral mandate, as in the case of open access to scientific results and publications.8 All of these issues are certainly touched on in detailed and practical ways in the creation of Free Software. Debates about the mode of availability of information made possible in the era of the Internet range from digital-rights management and copy protection, to national security and corporate espionage, to scientific progress and open societies.
However, it is modifiability that is the most fascinating, and unnerving, aspect of the reorientation of power and knowledge. Modifiability includes the ability not only to access – that is, to reuse in the trivial sense of using something without restrictions – but to transform it for use in new contexts, to different ends, or in order to participate directly in its improvement and to redistribute or recirculate those improvements within the same infrastructures while securing the same rights for everyone else. In fact, the core practice of Free Software is the practice of reuse and modification of software source code. Reuse and modification are also the key ideas that projects modeled on Free Software (such as Connexions and Creative Commons) see as their goal. Creative Commons has as its motto "Culture always builds on the past," and they intend that to mean "through legal appropriation and modification." Connexions, which allows authors to create online bits and pieces of textbooks, explicitly encourages authors to reuse work by other people, to modify it, and to make it their own. Modifiability therefore raises a very specific and important question about finality. When is something (software, a film, music, culture) finished? How long does it remain finished? Who decides? Or more generally, what does its temporality look like, and how does that temporality restructure political relationships? Such issues are generally familiar only to historians and literary scholars who understand the transformation of canons, the interplay of imitation and originality, and the theoretical questions raised, for instance, in textual scholarship. But the contemporary meaning of modification includes both a vast increase in the speed and scope of modifiability and a certain automation of the practice that was unfamiliar before the advent of sophisticated, distributed forms of software.
Modifiability is an oft-claimed advantage of Free Software. It can be updated, modified, extended, or changed to deal with other changing environments: new hardware, new operating systems, unforeseen technologies, or new laws and practices. At an infrastructural level, such modifiability makes sense: it is a response to and an alternative to technocratic forms of planning. It is a way of planning in the ability to plan out; an effort to continuously secure the ability to deal with surprise and unexpected outcomes; a way of making flexible, modifiable infrastructures like the Internet as safe as permanent, inflexible ones like roads and bridges.
But what is the cultural significance of modifiability? What does it mean to plan in modifiability to culture, to music, to education and science? At a clerical level, such a question is obvious whenever a scholar cannot recover a document written in WordPerfect 2.0 or on a disk for which there are no longer disk drives, or when a library archive considers saving both the media and the machines that read that media. Modifiability is an imperative for building infrastructures that can last longer. However, it is not only a solution to a clerical problem: it creates new possibilities and new problems for long-settled practices like publication, or the goals and structure of intellectual-property systems, or the definition of the finality, lifetime, monumentality, and especially, the identity of a work. Long-settled, seemingly unassailable practices – like the authority of published books or the power of governments to control information – are suddenly confounded and denaturalized by the techniques of modifiability.
Over the last ten to fifteen years, as the Internet has spread exponentially and insinuated itself into the most intimate practices of all kinds of people, the issues of availability and modifiability and the reorientation of knowledge and power they signify have become commonplace. As this has happened, the significance and practices associated with Free Software have also spread – and been modulated in the process. These practices provide a material and meaningful starting point for an array of recursive publics who play with, modulate, and transform them as they debate and build new ways to share, create, license, and control their respective productions. They do not all share the same goals, immediate or long-term, but by engaging in the technical, legal, and social practices pioneered in Free Software, they do in fact share a "social imaginary" that defines a particular relationship between technology, organs of governance (whether state, corporate, or nongovernmental), and the Internet. Scientists in a lab or musicians in a band; scholars creating a textbook or social movements contemplating modes of organization and protest; government bureaucrats issuing data or journalists investigating corruption; corporations that manage personal data or co-ops that monitor community development – all these groups and others may find themselves adopting, modulating, rejecting, or refining the practices that have made up Free Software in the recent past and will do so in the near future.
Experiment and Modulation
What exactly is Free Software? This question is, perhaps surprisingly, an incredibly common one in geek life. Debates about definition and discussions and denunciations are ubiquitous. As an anthropologist, I have routinely participated in such discussions and debates, and it is through my immediate participation that Two Bits opens. In part I, I tell stories about geeks, stories that are meant to give the reader that classic anthropological sense of being thrown into another world. The stories reveal several general aspects of what geeks talk about and how they do so, without getting into what Free Software is in detail. I start in this way because my project started this way. I did not initially intend to study Free Software, but it was impossible to ignore its emergence and manifest centrality to geeks. The debates about the definition of Free Software that I participated in online and in the field eventually led me away from studying geeks per se and turned me toward the central research concern of this book: what is the cultural significance of Free Software?
In part II what I offer is not a definition of Free Software, but a history of how it came to be. The story begins in 1998, with an important announcement by Netscape that it would give away the source code to its main product, Netscape Navigator, and works backward from this announcement into the stories of the UNIX operating system, "open systems," copyright law, the Internet, and tools for coordinating people and code. Together, these five stories constitute a description of how Free Software works as a practice. As a cultural analysis, these stories highlight just how experimental the practices are, and how individuals keep track of and modulate the practices along the way.
Netscape's decision came at an important point in the life of Free Software. It was at just this moment that Free Software was becoming aware of itself as a coherent movement and not just a diverse amalgamation of projects, tools, or practices. Ironically, this recognition also betokened a split: certain parties started to insist that the movement be called "Open Source" software instead, to highlight the practical over the ideological commitments of the movement. The proposal itself unleashed an enormous public discussion about what defined Free Software (or Open Source). This enigmatic event, in which a movement became aware of itself at the same time that it began to question its mission, is the subject of chapter 3. I use the term movement to designate one of the five core components of Free Software: the practices of argument and disagreement about the meaning of Free Software. Through these practices of discussion and critique, the other four practices start to come into relief, and participants in both Free Software and Open Source come to realize something surprising: for all the ideological distinctions at the level of discourse, they are doing exactly the same thing at the level of practice. The affect-laden histrionics with which geeks argue about the definition of what makes Free Software free or Open Source open can be matched only by the sober specificity of the detailed practices they share.
The second component of Free Software is just such a mundane activity: sharing source code (chapter 4). It is an essential and fundamentally routine practice, but one with a history that reveals the goals of software portability, the interactions of commercial and academic software development, and the centrality of source code (and not only of abstract concepts) in pedagogical settings. The details of "sharing" source code also form the story of the rise and proliferation of the UNIX operating system and its myriad derivatives.
The third component, conceptualizing openness (chapter 5), is about the specific technical and "moral" meanings of openness, especially as it emerged in the 1980s in the computer industry's debates over "open systems." These debates concerned the creation of a particular infrastructure, including both technical standards and protocols (a standard UNIX and protocols for networks), and an ideal market infrastructure that would allow open systems to flourish. Chapter 5 is the story of the failure to achieve a market infrastructure for open systems, in part due to a significant blind spot: the role of intellectual property.
The fourth component, applying copyright (and copyleft) licenses (chapter 6), involves the problem of intellectual property as it faced programmers and geeks in the late 1970s and early 1980s. In this chapter I detail the story of the first Free Software license—the GNU General Public License (GPL)—which emerged out of a controversy around a very famous piece of software called EMACS. The controversy is coincident with changing laws (in 1976 and 1980) and changing practices in the software industry—a general drift from trade secret to copyright protection—and it is also a story about the vaunted "hacker ethic" that reveals it in its native practical setting, rather than as a rarefied list of rules.
The fifth component, the practice of coordination and collaboration (chapter 7), is the most talked about: the idea of tens or hundreds of thousands of people volunteering their time to contribute to the creation of complex software. In this chapter I show how novel forms of coordination developed in the 1990s and how they worked in the canonical cases of Apache and Linux; I also highlight how coordination facilitates the commitment to adaptability (or modifiability) over against planning and hierarchy, and how this commitment resolves the tension between individual virtuosity and the need for collective control.
Taken together, these five components make up Free Software—but they are not a definition. Within each of these five practices, many similar and dissimilar activities might reasonably be included. The point of such a redescription of the practices of Free Software is to conceptualize them as a kind of collective technical experimental system. Within each component are a range of differences in practice, from conventional to experimental. At the center, so to speak, are the most common and accepted versions of a practice; at the edges are more unusual or controversial versions. Together, the components make up an experimental system whose infrastructure is the Internet and whose "hypotheses" concern the reorientation of knowledge and power.
For example, one can hardly have Free Software without source code, but it need not be written in C (though the vast majority of it is); it can be written in Java or perl or TeX. However, if one stretches the meaning of source code to include music (sheet music as source and performance as binary), what happens? Is this still Free Software? What happens when both the sheet and the performance are "born digital"? Or, to take a different example, Free Software requires Free Software licenses, but the terms of these licenses are often changed and often heatedly discussed and vigilantly policed by geeks. What degree of change removes a license from the realm of Free Software and why? How much flexibility is allowed?
Conceived this way, Free Software is a system of thresholds, not of classification; the excitement that participants and observers sense comes from the modulation (experimentation) of each of these practices and the subsequent discovery of where the thresholds are. Many, many people have written their own "Free Software" copyright licenses, but only some of them remain within the threshold of the practice as defined by the system. Modulations happen whenever someone learns how some component of Free Software works and asks, "Can I try these practices out in some other domain?"
The reality of constant modulation means that these five practices do not define Free Software once and for all; they define it with respect to its constitution in the contemporary. It is a set of practices defined "around the point" 1998–99, an intensive coordinate space that allows one to explore Free Software's components prospectively and retrospectively: into the near future and the recent past. Free Software is a machine for charting the (re)emergence of a problematic of power and knowledge as it is filtered through the technical realities of the Internet and the political and economic configuration of the contemporary. Each of these practices has its own temporality of development and emergence, but they have recently come together into this full house called either Free Software or Open Source.9
Viewing Free Software as an experimental system has a strategic purpose in Two Bits. It sets the stage for part III, wherein I ask what kinds of modulations might no longer qualify as Free Software per se, but still qualify as recursive publics. It was around 2000 that talk of "commons" began to percolate out of discussions about Free Software: commons in educational materials, commons in biodiversity materials, commons in music, text, and video, commons in medical data, commons in scientific results and data.10 On the one hand, it was continuous with interest in creating "digital archives" or "online collections" or "digital libraries"; on the other hand, it was a conjugation of the digital collection with the problems and practices of intellectual property. The very term commons—at once a new name and a theoretical object of investigation—was meant to suggest something more than simply a collection, whether of digital objects or anything else; it was meant to signal the public interest, collective management, and legal status of the collection.11
In part III, I look in detail at two "commons" understood as modulations of the component practices of Free Software. Rather than treating commons projects simply as metaphorical or inspirational uses of Free Software, I treat them as modulations, which allows me to remain directly connected to the changing practices involved. The goal of part III is to understand how commons projects like Connexions and Creative Commons breach the thresholds of these practices and yet maintain something of the same orientation. What changes, for instance, have made it possible to imagine new forms of free content, free culture, open source music, or a science commons? What happens as new communities of people adopt and modulate the five component practices? Do they also become recursive publics, concerned with the maintenance and expansion of the infrastructures that allow them to come into being in the first place? Are they concerned with the implications of availability and modifiability that continue to unfold, continue to be figured out, in the realms of education, music, film, science, and writing?
The answers in part III make clear that, so far, these concerns are alive and well in the modulations of Free Software: Creative Commons and Connexions each struggle to come to terms with new ways of creating, sharing, and reusing content in the contemporary legal environment, with the Internet as infrastructure. Chapters 8 and 9 provide a detailed analysis of a technical and legal experiment: a modulation that begins with source code, but quickly requires modulations in licensing arrangements and forms of coordination. It is here that Two Bits provides the most detailed story of figuring out set against the background of the reorientation of knowledge and power. This story is, in particular, one of reuse, of modifiability and the problems that emerge in the attempt to build it into the everyday practices of pedagogical writing and cultural production of myriad forms. Doing so leads the actors involved directly to the question of the existence and ontology of norms: norms of scholarly production, borrowing, reuse, citation, reputation, and ownership. These last chapters open up questions about the stability of modern knowledge, not as an archival or a legal problem, but as a social and normative one; they raise questions about the invention and control of norms, and the forms of life that may emerge from these practices. Recursive publics come to exist where it is clear that such invention and control need to be widely shared, openly examined, and carefully monitored.
Three Ways of Looking at Two Bits
Two Bits makes three kinds of scholarly contributions: empirical, methodological, and theoretical. Because it is based largely on fieldwork (which includes historical and archival work), these three contributions are often mixed up with each other. Fieldwork, especially in cultural and social anthropology in the last thirty years, has come to be understood less and less as one particular tool in a methodological toolbox, and more and more as a distinctive mode of epistemological encounter.12 The questions I began with emerged out of science and technology studies, but they might end up making sense to a variety of fields, ranging from legal studies to computer science.
Empirically speaking, the actors in my stories are figuring something out, something unfamiliar, troubling, imprecise, and occasionally shocking to everyone involved at different times and to differing extents.13 There are two kinds of figuring-out stories: the contemporary ones in which I have been an active participant (those of Connexions and Creative Commons), and the historical ones conducted through "archival" research and rereading of certain kinds of texts, discussions, and analyses-at-the-time (those of UNIX, EMACS, Linux, Apache, and Open Systems). Some are stories of technical figuring out, but most are stories of figuring out a problem that appears to have emerged. Some of these stories involve callow and earnest actors, some involve scheming and strategy, but in all of them the figuring out is presented "in the making" and not as something that can be conveniently narrated as obvious and uncontested with the benefit of hindsight. Throughout this book, I tell stories that illustrate what geeks are like in some respects, but, more important, that show them in the midst of figuring things out—a practice that can happen both in discussion and in the course of designing, planning, executing, writing, debugging, hacking, and fixing.
There are also myriad ways in which geeks narrate their own actions to themselves and others, as they figure things out. Indeed, there is no crisis of representing the other here: geeks are vocal, loud, persistent, and loquacious. The superalterns can speak for themselves. However, such representations should not necessarily be taken as evidence that geeks provide adequate analytic or critical explanations of their own actions. Some of the available writing provides excellent description, but distracting analysis. Eric Raymond's work is an example of such a combination.14 Over the course of my fieldwork, Raymond's work has always been present as an excellent guide to the practices and questions that plague geeks—much like a classic "principal informant" in anthropology. And yet his analyses, which many geeks subscribe to, are distracting. They are fanciful, occasionally enjoyable and enlightening—but they are not about the cultural significance of Free Software. As such I am less interested in treating geeks as natives to be explained and more interested in arguing with them: the people in Two Bits are a sine qua non of the ethnography, but they are not the objects of its analysis.15
Because the stories I tell here are in fact recent by the standards of historical scholarship, there is not much by way of comparison in terms of the empirical material. I rely on a number of books and articles on the history of the early Internet, especially Janet Abbate's scholarship and the single historical work on UNIX, Peter Salus's A Quarter Century of Unix.16 There are also a couple of excellent journalistic works, such as Glyn Moody's Rebel Code: Inside Linux and the Open Source Revolution (which, like Two Bits, relies heavily on the novel accessibility of detailed discussions carried out on public mailing lists). Similarly, the scholarship on Free Software and its history is just starting to establish itself around a coherent set of questions.17
Methodologically, Two Bits provides an example of how to study distributed phenomena ethnographically. Free Software and the Internet are objects that do not have a single geographic site at which they can be studied. Hence, this work is multisited in the simple sense of having multiple sites at which these objects were investigated: Boston, Bangalore, Berlin, Houston. It was conducted among particular people, projects, and companies and at conferences and online gatherings too numerous to list, but it has not been a study of a single Free Software project distributed around the globe. In all of these places and projects the geeks I worked with were randomly and loosely affiliated people with diverse lives and histories. Some identified as Free Software hackers, but most did not. Some had never met each other in real life, and some had. They represented multiple corporations and institutions, and came from diverse nations, but they nonetheless shared a certain set of ideas and idioms that made it possible for me to travel from Boston to Berlin to Bangalore and pick up an ongoing conversation with different people, in very different places, without missing a beat.
The study of distributed phenomena does not necessarily imply the detailed, local study of each instance of a phenomenon, nor does it necessitate visiting every relevant geographical site—indeed, such a project is not only extremely difficult, but confuses map and territory. As Max Weber put it, "It is not the 'actual' inter-connection of 'things' but the conceptual inter-connection of problems that define the scope of the various sciences."18 The decisions about where to go, whom to study, and how to think about Free Software are arbitrary in the precise sense that because the phenomena are so widely distributed, it is possible to make any given node into a source of rich and detailed knowledge about the distributed phenomena itself, not only about the local site. Thus, for instance, the Connexions project would probably have remained largely unknown to me had I not taken a job in Houston, but it nevertheless possesses precise, identifiable connections to the other sites and sets of people that I have studied, and is therefore recognizable as part of this distributed phenomena, rather than some other. I was actively looking for something like Connexions in order to ask questions about what was becoming of Free Software and how it was transforming. Had there been no Connexions in my back yard, another similar field site would have served instead.
It is in this sense that the ethnographic object of this study is not geeks and not any particular project or place or set of people, but Free Software and the Internet. Even more precisely, the ethnographic object of this study is "recursive publics"—except that this concept is also the work of the ethnography, not its preliminary object. I could not have identified "recursive publics" as the object of the ethnography at the outset, and this is nice proof that ethnographic work is a particular kind of epistemological encounter, an encounter that requires considerable conceptual work during and after the material labor of fieldwork, and throughout the material labor of writing and rewriting, in order to make sense of and reorient it into a question that will have looked deliberate and answerable in hindsight. Ethnography of this sort requires a long-term commitment and an ability to see past the obvious surface of rapid transformation to a more obscure and slower temporality of cultural significance, yet still pose questions and refine debates about the near future.19 Historically speaking, the chapters of part II can be understood as a contribution to a history of scientific infrastructure—or perhaps to an understanding of large-scale, collective experimentation.20 The Internet and Free Software are each an important practical transformation that will have effects on the practice of science and a kind of complex technical practice for which there are few existing models of study.
A methodological note about the peculiarity of my subject is also in order. The Attentive Reader will note that there are very few fragments of conventional ethnographic material (i.e., interviews or notes) transcribed herein. Where they do appear, they tend to be "publicly available"—which is to say, accessible via the Internet—and are cited as such, with as much detail as necessary to allow the reader to recover them. Conventional wisdom in both anthropology and history has it that what makes a study interesting, in part, is the work a researcher has put into gathering that which is not already available, that is, primary sources as opposed to secondary sources. In some cases I provide that primary access (specifically in chapters 2, 8, and 9), but in many others it is now literally impossible: nearly everything is archived. Discussions, fights, collaborations, talks, papers, software, articles, news stories, history, old software, old software manuals, reminiscences, notes, and drawings—it is all saved by someone, somewhere, and, more important, often made instantly available by those who collect it. The range of conversations and interactions that count as private (either in the sense of disappearing from written memory or of being accessible only to the parties involved) has shrunk demonstrably since about 1981.
Such obsessive archiving means that ethnographic research is stratified in time. Questions that would otherwise have required "being there" are much easier to research after the fact, and this is most evident in my reconstruction from sources on USENET and mailing lists in chapters 1, 6, and 7. The overwhelming availability of quasi-archival materials is something I refer to, in a play on the EMACS text editor, as "self-documenting history." That is to say, one of the activities that geeks love to participate in, and encourage, is the creation, analysis, and archiving of their own roles in the development of the Internet. No matter how obscure or arcane, it seems most geeks have a well-developed sense of possibility—their contribution could turn out to have been transformative, important, originary. What geeks may lack in social adroitness, they make up for in archival hubris.
Finally, the theoretical contribution of Two Bits consists of a refinement of debates about publics, public spheres, and social imaginaries that appear troubled in the context of the Internet and Free Software. Terms such as virtual community, online community, cyberspace, network society, or information society are generally not theoretical constructs, but ways of designating a subgenre of disciplinary research having to do with electronic networks. The need for a more precise analysis of the kinds of association that take place on and through information technology is clear; the first step is to make precise which information technologies and which specific practices make a difference.
There is a relatively large and growing literature on the Internet as a public sphere, but such literature is generally less concerned with refining the concept through research and more concerned with pronouncing whether or not the Internet fits Habermas's definition of the bourgeois public sphere, a definition primarily conceived to account for the eighteenth century in Britain, not the twenty-first-century Internet.21 The facts of technical and human life, as they unfold through the Internet and around the practices of Free Software, are not easy to cram into Habermas's definition. The goal of Two Bits is not to do so, but to offer conceptual clarity based in ethnographic fieldwork.
The key texts for understanding the concept of recursive publics are the works of Habermas, Charles Taylor's Modern Social Imaginaries, and Michael Warner's The Letters of the Republic and Publics and Counterpublics. Secondary texts that refine these notions are John Dewey's The Public and Its Problems and Hannah Arendt's The Human Condition. Here it is not the public sphere per se that is the center of analysis, but the "ideas of modern moral and social order" and the terminology of "modern social imaginaries."22 I find these concepts to be useful as starting points for a very specific reason: to distinguish the meaning of moral order from the meaning of moral and technical order that I explore with respect to geeks. I do not seek to test the concept of social imaginary here, but to build something on top of it.
If recursive public is a useful concept, it is because it helps elaborate the general question of the "reorientation of knowledge and power." In particular it is meant to bring into relief the ways in which the Internet and Free Software are related to the political economy of modern society through the creation not only of new knowledge, but of new infrastructures for circulating, maintaining, and modifying it. Just as Warner's book The Letters of the Republic was concerned with the emergence of the discourse of republicanism and the simultaneous development of an American republic of letters, or as Habermas's analysis was concerned with the relationship of the bourgeois public sphere to the democratic revolutions of the eighteenth century, this book asks a similar series of questions: how are the emergent practices of recursive publics related to emerging relations of political and technical life in a world that submits to the Internet and its forms of circulation? Is there still a role for a republic of letters, much less a species of public that can seriously claim independence and autonomy from other constituted forms of power? Are Habermas's pessimistic critiques of the bankruptcy of the public sphere in the twentieth century equally applicable to the structures of the twenty-first century? Or is it possible that recursive publics represent a reemergence of strong, authentic publics in a world shot through with cynicism and suspicion about mass media, verifiable knowledge, and enlightenment rationality?
Posted by Christopher Kelty on May 8, 2008