The Connexions project was an experiment in modulating the practices of Free Software. It was not so much inspired by Free Software as it was based on a kind of template drawn from the experience of people, myself included, who had worked with Free Software. But how exactly do such templates get used? What is traced and what is changed? In terms of the cultural significance of Free Software, what are the implications of these changes? Do they maintain the orientation of a recursive public, or are they attempts to apply Free Software to other, private concerns? And if they are successful, what are the implications for the domains they affect: education, scholarship, scientific knowledge, and cultural production? What effects do these changes have on the norms of work and on the meaning and shape of knowledge in these domains?
In this chapter I explore in ethnographic detail how the modulations of Free Software undertaken by Connexions and Creative Commons are related to the problems of reuse, modification, and the norms of scholarly production. I present these two projects as responses to the contemporary reorientation of knowledge and power; they are recursive publics just as Free Software is, but they expand the domain of practice in new directions, that is, into the scholarly world of textbooks and research and into the legal domains of cultural production more generally.
In the course of "figuring out" what they are doing, these two projects encounter a surprising phenomenon: the changing meaning of the finality of a scholarly or creative work. Finality is not certainty. While certainty is a problematic that is well and often studied in the philosophy of science and in science studies, finality is less so. What makes a work stay a work? What makes a fact stay a fact? How does something, certain or not, achieve stability and identity? Such finality, the very paradigm of which is the published book, implies stability. But Connexions and Creative Commons, through their experiments with Free Software, confront the problem of how to stabilize a work in an unstable context: that of shareable source code, an open Internet, copyleft licenses, and new forms of coordination and collaboration.1 The meaning of finality will have important effects on the ability to constitute a politics around any given work, whether a work of art or a work of scholarship and science. The actors in Creative Commons and Connexions realize this, and they therefore form yet another instance of a recursive public, precisely because they seek ways to define the meaning of finality publicly and openly, and to make modifiability an irreversible aspect of the process of stabilizing knowledge.
The modulations of Free Software performed by Connexions and Creative Commons reveal two significant issues. The first is the troublesome matter of the meaning of reuse, as in the reuse of concepts, ideas, writings, articles, papers, books, and so on for the creation of new objects of knowledge. Just as software source code can be shared, ported, and forked to create new versions with new functions, and just as software and people can be coordinated in new ways using the Internet, so too can scholarly and scientific content. I explore the implications of this comparison in this chapter. The central gambit of both Connexions and Creative Commons (and much of scientific practice generally) is that new work builds on previous work. In the sciences the notion that science is cumulative is not at issue, but exactly how scientific knowledge accumulates is far from clear. Even if "standing on the shoulders of giants" can be revealed to hide machinations, secret dealings, and Machiavellian maneuvering of the most craven sort, the very concept of cumulative knowledge is sound. Building a fact, a result, a machine, or a theory out of other, previous works: this kind of reuse as progress is not in question. But the actual material practice of writing, publication, and the reuse of other results and works is something that, until very recently, has been hidden from view, or has been so naturalized that the norms of practice are nearly invisible to practitioners themselves.
This raises the other central concern of this chapter: that of the existence or nonexistence of norms. For an anthropologist to query whether or not norms exist might seem to theorize oneself out of a job; one definition of anthropology is, after all, the making explicit of cultural norms. But the turn to "practices" in anthropology and science studies has in part been a turn away from "norms" in their classic sociological and specifically Mertonian fashion. Robert Merton's suggestion that science has been governed by norms (disinterestedness, communalism, organized skepticism, objectivity) has been repeatedly and roundly criticized by a generation of scholars in the sociology of scientific knowledge who note that even if such norms are asserted by actors, they are often subverted in the doing.2 But a striking thing has happened recently: those Mertonian norms of science have in fact become the more or less explicit goals in practice of scientists, engineers, and geeks in the wake of Free Software. If Mertonian norms do not exist, then they are being invented. This, of course, raises novel questions: can one create norms? What exactly would this mean? How are norms different from culture or from legal and technical constraints? Both Connexions and Creative Commons explicitly pose this question and search for ways to identify, change, or work with norms as they understand them, in the context of reuse.
Whiteboards: What Was Publication?
More than once, I have found myself in a room with Rich Baraniuk and Brent Hendricks and any number of other employees of the Connexions project, staring at a whiteboard on which a number of issues and notes have been scrawled. Usually, the notes have a kind of palimpsestic quality, on account of the array of previous conversations that are already there, rewritten in tiny precise script in a corner, or just barely erased beneath our discussion. These conversations are often precipitated by a series of questions that Brent, Ross Reedstrom, and the development team have encountered as they build and refine the system. They are never simple questions. A visitor staring at the whiteboard might catch a glimpse of the peculiar madness that afflicts the project: a mixture of legal terms, technical terms, and terms like scholarly culture or DSP communities. I'm consulted whenever this mixture of terms starts to worry the developers with respect to legality, culture, or the relationship between the two. I'm generally put in the position of speaking either as a lawyer (which, legally speaking, I am not supposed to do) or as an anthropologist (which I do mainly by virtue of holding a position in an anthropology department). Rarely are the things I say met with assent: Brent and Ross, like most hackers, are insanely well versed in the details of intellectual-property law, and they routinely correct me when I make bold but not-quite-true assertions about it. Nonetheless, they rarely feel well versed enough to make decisions about legal issues on their own, and so I have often been called in: on again as a thoughtful sounding board, and off again as intermediary with Creative Commons.
This process, I have come to realize, is about figuring something out. It is not just a question of solving technical problems to which I might bring some specific domain knowledge. Figuring out is modulation; it is template-work. When Free Software functions as a template for projects like Connexions, it does so literally, by allowing us to trace a known form of practice (Free Software) onto a less well known, seemingly chaotic background and to see where the forms match up and where they do not. One very good way to understand what this means in a particular case (that is, to see more clearly the modulations that Connexions has performed) is to consider the practice and institution of scholarly publication through the template of Free Software.
Consider the ways scholars have understood the meaning and significance of print and publication in the past, prior to the Internet and the contemporary reorientation of knowledge and power. The list of ambitious historians and theorists of the relationship of media to knowledge is long: Lucien Febvre, Walter Ong, Marshall McLuhan, Jack Goody, Roger Chartier, Friedrich Kittler, Elizabeth Eisenstein, Adrian Johns, to name a few.3 With the exception of Johns, however, the history of publication does not start with the conventional, legal, and formal practices of publication so much as it does with the material practices and structure of the media themselves, which is to say the mechanics and technology of the printed book.4 Ong's theories of literacy and orality, Kittler's re-theorization of the structure of media evolution, Goody's anthropology of the media of accounting and writing: all are focused on the tangible media as the dependent variable of change. By contrast, Johns's The Nature of the Book uncovers the contours of the massive endeavor involved in making the book a reliable and robust form for the circulation of knowledge in the seventeenth century and after.
Prior to Johns's work, arguments about the relationship of print and power fell primarily into two camps: one could overestimate the role of print and the printing press by suggesting that the "fixity" of a text and the creation of multiple copies led automatically to the spread of ideas and the rise of enlightenment. Alternately, one could underestimate the role of the book by suggesting that it was merely a transparent media form with no more or less effect on the circulation or evaluation of ideas than manuscripts or television. Johns notes in particular the influence of Elizabeth Eisenstein's scholarship on the printing press (and Bruno Latour's dependence on this in turn), which very strongly identified the characteristics of the printed work with the cultural changes seen to follow, including the success of the scientific revolution and the experimental method.5 For example, Eisenstein argued that fixity (the fact that a set of printed books can be exact copies of each other) implied various transformations in knowledge. Johns, however, is at pains to show just how unreliable texts are often perceived to be. From which sources do they come? Are they legitimate? Do they have the backing or support of scholars or the crown? In short, fixity can imply sound knowledge only if there is a system of evaluation already in place. Johns suggests a reversal of this now common-sense notion: "We may consider fixity not as an inherent quality, but as a transitive one. . . . We may adopt the principle that fixity exists only inasmuch as it is recognized and acted upon by people—and not otherwise. The consequence of this change in perspective is that print culture itself is immediately laid open to analysis. It becomes a result of manifold representations, practices and conflicts, rather than just the manifold cause with which we are often presented. In contrast to talk of a 'print logic' imposed on humanity, this approach allows us to recover the construction of different print cultures in particular historical circumstances."6
Johns's work focuses on the elaborate and difficult cultural, social, and economic work involved, in the sixteenth and seventeenth centuries, in transforming the European book into the kind of authority it is taken to be across the globe today. The creation and standardization not just of books but of a publishing infrastructure involved the kind of careful social engineering, reputation management, and skills of distinction, exclusion, and consensus that science studies has effectively explored in science and engineering. Hence, Johns focuses on "print-in-the-making" and the relationship of the print culture of that period to the reliability of knowledge. Instead of making broad claims for the transformation of knowledge by print (eerily similar in many respects to the broad claims made for the Internet), Johns explores the clash of representations and practices necessary to create the sense, in the twentieth century, that there really is or was only one print culture.
The problem of publication that Connexions confronts is thus not simply caused by the invention or spread of the Internet, much less that of Free Software. Rather, it is a confrontation with the problems of producing stability and finality under very different technical, legal, and social conditions: a problem more complex even than the "different print cultures in particular historical circumstances" that Johns speaks of in regard to the book. Connexions faces two challenges: that of figuring out the difference that today introduces with respect to yesterday, and that of creating or modifying an infrastructure in order to satisfy the demands of a properly authoritative knowledge. Connexions textbooks of necessity look different from conventional textbooks; they consist of digital documents, or "modules," that are strung together and made available through the Web, under a Creative Commons license that allows for free use, reuse, and modification. This version of "publication" clearly has implications for the meaning of authorship, ownership, stewardship, editing, validation, collaboration, and verification.
The conventional appearance of a book (in bookstores, through mail order, in book clubs, libraries, or universities) was an event that signified, as the name suggests, its official public appearance in the world. Prior to this event, the text circulated only privately, which is to say only among the relatively small network of people who could make copies of it or who were involved in its writing, editing, proofreading, reviewing, typesetting, and so on. With the Internet, the same text can be made instantly available at each of these stages to just as many or more potential readers. It effectively turns the event of publication into a notional event, the click of a button, rather than a highly organized, material event. Although it is clear that the practice of publication has become denaturalized or destabilized by the appearance of new information technologies, this hardly implies that the work of stabilizing the meaning of publication, and producing authoritative knowledge as a result, has ceased. The tricky part comes in understanding how Free Software is used as a template by which the authority of publication in the Gutenberg Galaxy is being transformed into the authority of publication in the Turing Universe.
Publication in Connexions
In the case of Connexions there are roughly three stages to the creation of content. The first, temporally speaking, is whatever happens before Connexions is involved, that is, the familiar practices of what I would call composition, rather than simply writing. Some project must be already under way, perhaps started under the constraints of and in the era of the book, perhaps conceived as a digital textbook or an online textbook, but still, as of yet, written on paper or saved in a Word document or in LaTeX, on a scholar's desktop. It could be an individual project, as in the case of Rich's initial plan to write a DSP textbook, or it could be a large collaborative project to write a textbook.
The second stage is the one in which the document or set of documents is translated ("Connexified") into the mark-up system used by Connexions. Connexions uses the eXtensible Mark-up Language (XML), in particular a subset of tags that are appropriate to textbooks. These "semantic" tags (e.g., <term>) refer only to the meaning of the text they enclose, not to the "presentation" or syntactic look of what they enclose; they give the document the structure it needs to be transformed in a number of creative ways. Because XML is related only to content, and not to presentation (it is sometimes referred to as "agnostic"), the same document in Connexions can automatically be made to look a number of different ways: as an onscreen presentation in a browser, as a pdf document, or as an on-demand published work that can be printed out as a book, complete with continuous page numbering, footnotes (instead of links), front and back matter, and an index. Therein lies much of Connexions's technical wizardry.
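The separation of content from presentation can be sketched in miniature. The following is a hypothetical illustration, not the actual Connexions (CNXML) schema or rendering pipeline: a module marked up with a semantic <term> tag is mechanically rendered as HTML, and the same source could just as easily be rendered as print-ready output.

```python
import xml.etree.ElementTree as ET

# A hypothetical module with semantic markup; the real CNXML schema
# differs, but the principle is the same: tags name what the text *is*,
# not how it should look.
MODULE = """
<module>
  <title>Discrete-Time Signals</title>
  <para>A <term>signal</term> is a function that conveys information.</para>
</module>
"""

def to_html(xml_text):
    """Render the same content as HTML, one of many possible presentations."""
    root = ET.fromstring(xml_text)
    out = [f"<h1>{root.findtext('title')}</h1>"]
    for para in root.iter("para"):
        text = ET.tostring(para, encoding="unicode").strip()
        # This rendering chooses to present each semantic <term> as italics;
        # a print stylesheet might instead index it or set it in bold.
        text = text.replace("<term>", "<em>").replace("</term>", "</em>")
        text = text.replace("<para>", "<p>").replace("</para>", "</p>")
        out.append(text)
    return "\n".join(out)

print(to_html(MODULE))
```

The point of the sketch is that the decision "what does a term look like?" lives in the converter, not the document, which is what allows one source to become a browser page, a pdf, or a printed book.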
During the second stage, that of being marked up in XML, the document is not quite public, although it is on the Internet; it is in what is called a workgroup, where only those people with access to the particular workgroup (those who have been invited to collaborate) can see the document. It is only when the document is finished, ready to be distributed, that it will enter the third, "published" stage: the stage at which anyone on the Internet can ask for the XML document and the software will display it, using style sheets or software converters, as an HTML page, as a pdf document for printing, or as a section of a larger course. However, publication does not here signify finality; indeed, one of the core advantages of Connexions is that the document is rendered less stable than the book-object it mimics: it can be updated, changed, corrected, deleted, copied, and so on, all without any of the rigmarole associated with changing a published book or article. Indeed, the very powerful notion of fixity theorized by McLuhan and Eisenstein is rendered moot here. The fact that a document has been printed (and printed as a book) no longer means that all copies will be the same; indeed, the document may well change from hour to hour, depending on how many people contribute (as in the case of Free Software, which can go through revisions and updates as fast as, or faster than, one can download and install new versions). With Wikipedia entries that are extremely politicized or active, for example, a "final" text is impossible, although the dynamics of revision and counter-revision do suggest outlines for the emergence of some kinds of stability. But Connexions differs from Wikipedia with respect to this finality as well, because of the insertion of the second stage, during which a self-defined group of people can work on a nonpublic text before committing changes that a public can see.
It should be clear, given the example of Connexions, or any similar project such as Wikipedia, that the changing meaning of "publication" in the era of the Internet has significant implications, both practical (they affect the way people can both write and publish their works) and legal (they fit uneasily into the categories established for previous media). The tangibility of a textbook is quite obviously transformed by these changes, but so too is the cultural significance of the practice of writing a textbook. And if textbooks are written differently, using new forms of collaboration and allowing novel kinds of transformation, then the validation, certification, and structure of authority of textbooks also change, inviting new forms of open and democratic participation in writing, teaching, and learning. No longer are all of the settled practices of authorship, collaboration, and publication configured around the same institutional and temporal scheme (e.g., the book and its publishing infrastructure). In a colloquial sense, this is obvious, for instance, to any musician today: recording and releasing a song to potentially millions of listeners is now technically possible for anyone, but how that fact changes the cultural significance of music creation is not yet clear. For most musicians, creating music hasn't changed much with the introduction of digital tools, since new recording and composition technologies largely mimic the recording practices that preceded them (for example, a program like Garage Band literally looks like a four-track recorder on the screen). Similarly, much of the practice of digital publication has been concerned with recreating something that looks like traditional publication.7
Perhaps unsurprisingly, the Connexions team spent a great deal of time at the outset of the project creating a pdf-document-creation system that would essentially mimic the creation of a conventional textbook, with the push of a button.8 But even this process causes a subtle transformation: the concept of "edition" becomes much harder to track. While a conventional textbook is a stable entity that goes through a series of printings and editions, each of which is marked on its publication page, a Connexions document can go through as many versions as an author wants to make changes, all the while without necessarily changing editions. In this respect, the modulation of the concept of source code translates the practices of updating and "versioning" into the realm of textbook writing. Recall the cases ranging from the "continuum" of UNIX versions discussed by Ken Thompson to the complex struggles over version control in the Linux and Apache projects. In the case of writing source code, exactitude demands that the change of even a single character be tracked and labeled as a version change, whereas a conventional-textbook spelling correction or errata issuance would hardly create the need for a new edition.
In the Connexions repository all changes to a text are tracked and noted, but the identity of the module does not change. "Editions" have thus become "versions," whereas a substantially revised or changed module might require not reissuance but a forking of that module to create one with a new identity. Editions in publishing are not a feature of the medium per se; they are necessitated by the temporal and spatial practices of publication as an event, though this process is obviously made visible only in the book itself. In the same way, versioning is now used to manage a process, but it results in a very different configuration of the medium and the material available in that medium. Connexions traces the template of software production (sharing, porting, and forking and the norms and forms of coordination in Free Software) directly onto older forms of publication. Where the practices match, no change occurs, and where they don't, it is the reorientation of knowledge and power and the emergence of recursive publics that serves as a guide to the development of the system.
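The contrast between versions and forks might be sketched, very loosely, as follows; the class and its fields are hypothetical illustrations of the distinction, not the Connexions repository's actual data model: edits create new versions of the same module, while a substantial revision forks the module under a new identity.

```python
import itertools
from dataclasses import dataclass, field

# A global counter stands in for the repository's identity-assignment.
_ids = itertools.count(1)

@dataclass
class Module:
    content: str
    module_id: int = field(default_factory=lambda: next(_ids))
    version: int = 1

    def revise(self, new_content):
        """An update: same identity, incremented version (an 'edition' would not change)."""
        self.content = new_content
        self.version += 1

    def fork(self, new_content):
        """A fork: a new identity, with its version history starting over."""
        return Module(content=new_content)

m = Module("Introduction to filters.")
m.revise("Introduction to filters (typo corrected).")   # still module 1, now version 2
f = m.fork("A substantially rewritten introduction.")   # a new module, version 1
print(m.module_id, m.version, f.module_id, f.version)
```

The small spelling correction that would never justify a new edition of a printed book is, here, simply version 2 of the same module, while the rewritten text acquires an identity of its own.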
Legally speaking, the change from editions to versions and forks raises troubling questions about the boundaries and status of a copyrighted work. It is a peculiar feature of copyright law that it needs to be updated each time the media change, in order to bring certain old practices into line with new possibilities. Scattered throughout the copyright statutes is evidence of old new media: gramophones, jukeboxes, cable TV, photocopiers, peer-to-peer file-sharing programs, and so on. Each new form of communication shifts the assumptions of past media enough to require a reevaluation of the putative underlying balance of the constitutional mandate that gives (U.S.) intellectual-property law its inertia. Each new device needs to be understood in terms of creation, storage, distribution, production, consumption, and tangibility, in order to assess the dangers it poses to the rights of inventors and artists.
Because copyright law "hard codes" particular media into the statutes, it is comfortable with, for example, book editions or musical recordings. But in Connexions, new questions arise: how much change constitutes a new work, and thus demands a new copyright license? If a licensee receives one copy of a work, to which versions will he or she retain rights after changes? Because of the complexity of the software involved, there are also questions that the law simply cannot deal with (just as it had not been able to do in the late 1970s with respect to the definition of software): is the XML document equivalent to the viewable document, or must the style sheet also be included? Where does the "content" begin and the "software" end? Until the statutes either incorporate these new technologies or are changed to govern a more general process, rather than a particular medium, these questions will continue to emerge as part of the practice of writing.
This denaturalization of the notion of "publication" is responsible for much of the surprise and concern that greets Connexions and projects like it. Often, when I have shown the system to scholars, they have displayed boredom mixed with fear and frustration: "It can never replace the book." On the one hand, Connexions has made an enormous effort to make its output look as much like conventional books as possible; on the other hand, the anxiety evinced is justified, because what Connexions seeks to replace is not the book, which is merely ink and paper, but the entire publishing process. The fact that it is replacing not the book per se but the entire process whereby manuscripts are made into stable and tangible objects called books is too overwhelming for most scholars to contemplate, especially scholars who have already mastered the existing process of book writing and creation. The fact that the legal system is built to safeguard something prior to and not fully continuous with the practice of Connexions only adds to the concern that such a transformation is immodest and risky, that it endangers a practice with centuries of stability behind it. Connexions, however, is not the cause of destabilization; rather, it is a response to or recognition of a problem. It is not a new problem, but one that periodically reemerges: a reorientation of knowledge and power that includes questions of enlightenment and rationality, democracy and self-governance, liberal values and problems of the authority and validation of knowledge. The salient moments of correlation are not the invention of the printing press and the Internet, but the struggle to make published books into a source of authoritative knowledge in the seventeenth and eighteenth centuries and the struggle to find ways to do the same with the Internet today.9
Connexions is, in many ways, understood by its practitioners to be both a response to the changing relations of knowledge and power, one that reaffirms the fundamental values of academic freedom and the circulation of knowledge, and also an experiment with, even a radicalization of, the ideals of both Free Software and Mertonian science. The transformation of the meaning of publication implies a fundamental shift in the status, in the finality, of knowledge. It seeks to make of knowledge (knowledge in print, not in minds) something living and constantly changing, as opposed to something static and final. The fact that publication no longer signifies finality (that is, no longer signifies a state of fixity that is assumed in theory, and frequently in practice, to account for a text's reliability) has implications for how the text is used, reused, interpreted, valued, and trusted.10 Whereas the traditional form of the book is the same across all printed versions or else follows an explicit practice of appearing in editions (complete with new prefaces and forewords), a Connexions document might very well look different from week to week or year to year.11 While a textbook might also change significantly to reflect the changing state of knowledge in a given field, it is an explicit goal of Connexions to allow this to happen "in real time," which is to say, to allow educators to update textbooks as fast as they do scientific knowledge.12
These implications are not lost on the Connexions team, but neither are they understood as goals or as having simple solutions. There is a certain immodest, perhaps even reckless, enthusiasm surrounding these implications, an enthusiasm that can take both polymath and transhumanist forms. For instance, the destabilization of the contemporary textbook-publishing system that Connexions represents also allows (according to Rich) a more accurate representation of the connections between concepts than a linear textbook format affords. Connexions thus represents a use of technology as an intervention into an existing context of practice. The fact that Connexions could also render the reliability or trustworthiness of scholarly knowledge unstable is sometimes discussed as an inevitable outcome of technical change, something that the world at large, not Connexions, must learn to deal with.
To put it differently, the "goal" of Connexions was never to destroy publishing, but it has been structured by the same kind of imaginations of moral and technical order that pervade Free Software and the construction of the Internet. In this sense Rich, Brent, and others are geeks in the same sense as Free Software geeks: they share a recursive public devoted to achieving a moral and technical order in which openness and modifiability are core values ("If we are successful, we will disappear"). The implication is that the existing model and infrastructure for the publication of textbooks is of a different moral and technical order, and thus that Connexions needs to innovate not only the technology (the source code or the openness of the system) or the legal arrangements (licenses) but also the very norms and forms of textbook writing itself (coordination and, eventually, a movement). If publication once implied the appearance of reliable, final texts (even if the knowledge therein could be routinely contested by writing more texts and reviews and critiques), Connexions implies the denaturalization not of knowledge per se, but of the process whereby that knowledge is stabilized and rendered reliable, trustworthy.
A keyword for the transformation of textbook writing is community, as in the tagline of the Connexions project: "Sharing Knowledge and Building Communities." Building implies that such communities do not yet exist and that the technology will enable them; however, Connexions began with the assumption that there exist standard academic practices and norms of creating teaching materials. As a result, Connexions both enables these practices and norms, by facilitating a digital version of the textbook, and intervenes in them, by creating a different process for creating a textbook. Communities are both assumed and desired. Sometimes they are real (a group of DSP engineers, networked around Rich and others who work in his subspecialty), and sometimes they are imagined (as when in the process of grant writing we claim that the most important component of the success of the project is the "seeding" of scholarly communities). Communities, furthermore, are not audiences or consumers, and sometimes not even students or learners. They are imagined to be active, creative producers and users of teaching materials, whether for teaching or for the further creation of such materials. The structure of the community has little to do with issues of governance, solidarity, or pedagogy, and much more to do with the set of relationships that might obtain with respect to the creation of teaching materials: a community of collaborative production or collaborative debugging, as in Free Software, with those forms of coordination modulated to include the activity of creating teaching materials.
Agency and Structure in Connexions
One of the most animated whiteboard conversations I remember having with Brent and Ross concerned the difference between the possible "roles" that a Connexions user might occupy and the implications this could have for both the technical features of the system and the social norms that Connexions attempts to maintain and replicate. Most software systems are content to designate only "users," a generic name-and-password account that can be given a set of permissions (and which has behind it a long and robust tradition in computer-operating-system and security research). Users are users, even if they may have access to different programs and files. What Connexions needed was a way to designate that the same person might have two different exogenous roles: a user might be the author, but not the owner, of the content, and vice versa. For instance, perhaps Rice University maintains the copyright for a work, but the author is credited for its creation. Such a situation, known in legal terms as "work for hire," is routine in some universities and most corporations. So while the author is generally given the freedom and authority to create and modify the text as he or she sees fit, the university asserts copyright ownership in order to retain the right to commercially exploit the work. Such a situation is far from settled and is, of course, politically fraught, but the Connexions system, in order to be useful at all to anyone, needed to accommodate this fact. Taking an oppositional political stand would render the system useless in too many cases or cause it to become precisely the kind of authorless, creditless system that Wikipedia is, a route not desired by many academics. In a perfectly open world the author and owner of every Connexions module might be one and the same, but pragmatism demands that the two roles be kept separate.
Furthermore, there are many people involved every day in the creation of academic work who are neither the author nor the owner: graduate students and undergraduates, research scientists, technicians, and others in the grand, contested, complex academic ecology. In some disciplines, all contributors may get authorship credit, and some of them may even share ownership, but often many of those who do the work get mentioned only in acknowledgments, or not at all. Again, although the impulse of the creators of Connexions might be to level the playing field and allow only one kind of user, the fact of the matter is that academics simply would not use such a system.13 The need for a role such as "maintainer" (which might also include "editor"), distinct from both author and owner, thus also presented itself.
As Brent, Ross, and I stared at the whiteboard, the discovery of the need for multiple exogenous roles hit all of us in a kind of slow-motion shockwave. It was not simply that the content needed different labels attached to it to keep track of these people in a database; something deeper was at work: the law and the practice of authorship actually dictated, to a certain extent, what the software itself should look like. All of a sudden, the questions were preformatted, so to speak, by the law and by certain kinds of practices that had been normalized and thus were nearly invisible: who should have permission to change what? Who will have permission to add or drop authors? Who will be allowed to make what changes, and who will have the legal right to do so, and who the moral or customary right? What implications follow from the choices the designers make and the choices we present to authors or maintainers?
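The separation of roles that surfaced on the whiteboard can be sketched as a small data model. This is a hypothetical illustration, not the actual Connexions implementation; the class, field names, and example users are all invented for the sketch:

```python
from dataclasses import dataclass, field

@dataclass
class Module:
    """A unit of content with separated exogenous roles.

    Authors are credited for creation; owners hold the copyright;
    maintainers may edit the text. The three sets need not coincide.
    """
    title: str
    authors: set = field(default_factory=set)
    owners: set = field(default_factory=set)
    maintainers: set = field(default_factory=set)

    def can_edit(self, user: str) -> bool:
        # Only maintainers may change the content itself.
        return user in self.maintainers

    def must_credit(self) -> set:
        # Attribution goes to authors, even when they own nothing.
        return set(self.authors)

# A "work for hire": the university owns what the professor wrote,
# yet the professor remains the credited author and a maintainer.
m = Module(title="Signals and Systems",
           authors={"rich"},
           owners={"rice-university"},
           maintainers={"rich", "brent"})
print(m.can_edit("rich"), m.can_edit("rice-university"))  # True False
```

The point of the sketch is only that the database schema and permission checks follow directly from the legal distinction between authorship, ownership, and maintenance, which is what made the whiteboard discovery feel preformatted by law.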
The Creative Commons licenses were key to revealing many of these questions. The licenses were in themselves modulations of Free Software licenses, but created with people like artists, musicians, scholars, and filmmakers in mind. Without them, the content in Connexions would be unlicensed, perhaps intended to be in the public domain, but ultimately governed by copyright statutes that provided no clear answers to any of these questions, as those statutes were designed to deal with older media and a different publication process. Using the Creative Commons licenses, on the other hand, meant that the situation of the content in Connexions became well-defined enough, in a legal sense, to be used as a constraint in defining the structure of the software system. The license itself provided the map of the territory by setting parameters for things such as distribution, modification, attribution, and even display, reading, or copying.
For instance, when the author and owner are different, it is not at all obvious who should be given credit. Authors, especially academic authors, expect to be given credit (which is often all they get) for an article or a textbook they have written, yet universities often retain ownership of those textbooks, and ownership would seem to imply a legal right to be identified as both owner and author (e.g., Forrester Research reports or UNESCO reports, which hide the identity of authors). In the absence of any licenses, such a scenario has no obvious solution, or depends entirely on the specific context. However, the Creative Commons licenses specified the meaning of attribution and the requirement to maintain the copyright notice, thus outlining a procedure that gave the Connexions designers fixed constraints against which to measure how they would implement their system.
A positive result of such constraints is that they allow for a kind of institutional flexibility that would not otherwise be possible. Whether a university insists on expropriating copyright or allows scholars to keep their copyrights, both can use Connexions. Connexions is more "open" than traditional textbook publishing because it allows a greater number of heterogeneous contributors to participate, but it is also more "open" than something like Wikipedia, which is ideologically committed to a single definition of authorship and ownership (anonymous, reciprocally licensed collaborative creation by authors who are also the owners of their work). Because Wikipedia makes such an ideological commitment, it cannot be used by institutions that have made the decision to operate as expropriators of content, or even in cases wherein authors willingly allow someone else to take credit. If authors and owners must be identical, then either the author is identified as the owner, which is illegal in some cases, or the owner is identified as the author, a situation no academic is willing to submit to.
The need for multiple roles also revealed other peculiar and troubling problems, such as the issue of giving an "identity" to long-dead authors whose works are out of copyright. For instance, a piece by A. E. Housman was included as a module for a class, and while it is clear that Housman is the author, the work is no longer under copyright, so Housman is no longer the copyright holder (nor is the society which published it in 1921). Yet Connexions requires that a copyright be attached to each module to allow it to be licensed openly. This particular case, of a dead author, necessitated two interesting interventions: someone had to create an account for Housman, and someone had to issue the work as an "edition," or derivative, under a new copyright. In this case, the two other authors are Scott McGill and Christopher Kelty. A curious question arose in this context: should we be listed both as authors and owners (and maintainers), or only as owners and maintainers? And if someone uses the module in a new context (as they have the right to do, under the license), will they be required to give attribution only to Housman, or to McGill and Kelty as well? What rights to ownership do McGill and Kelty have over the digital version of the public-domain text by Housman?14
The discussion of roles circulated fluidly across concepts like law (and legal licenses), norms, community, and identity. Brent and Ross and others involved had developed sophisticated imaginations of how Connexions would fit into the existing ecology of academia, constrained all the while both by standard goals, like usability and efficiency, and by novel legal licenses and concerns about the changing practices of authors and scholars. The question, for instance, of how a module can be used (technically, legally) is often confused with, or difficult to disentangle from, how a module should be used (technically, legally, or, more generally, "socially," with usage shaped by the community who uses it). In order to make sense of this, Connexions programmers and participants like myself are prone to using the language of custom and norm, and the figure of community, as in "the customary norms of a scholarly community."
From Law and Technology to Norm
The meaning of publication in Connexions and the questions about roles and their proper legal status emerged from the core concern with reuse, which is the primary modulation of Free Software that Connexions carries out: the modulation of the meaning of source code to include textbook writing. What makes source code such a central component of Free Software is the manner in which it is shared and transformed, not the technical features of any particular language or program. So the modulation of source code to include textbooks is not just an attempt to make textbooks exact, algorithmic, or digital, but an experiment in sharing textbook writing in a similar fashion.
This modulation also affects the other components: it creates a demand for openness in textbook creation and circulation; it demands new kinds of copyright licenses (the Creative Commons licenses); and it affects the meaning of coordination among scholars, ranging from explicit forms of collaboration and co-creation to the entire spectrum of uses and reuses that scholars normally make of their peers' works. It is this modulation of coordination that leads to the second core concern of Connexions: that of the existence of "norms" of scholarly creation, use, reuse, publication, and circulation.
Since software programmers and engineers are prone to thinking about things in concrete, practical, and detailed ways, discussions of creation, use, and circulation are rarely conducted at the level of philosophical abstraction. They are carried out on whiteboards, using diagrams.
The whiteboard diagram transcribed in figure 8 was precipitated by a fairly precise question: "When is the reuse of something in a module (or of an entire module) governed by 'academic norms' and when is it subject to the legal constraints of the licenses?" For someone to quote a piece of text from one module in another is considered normal practice and thus shouldn't involve concerns about legal rights and duties to fork the module (create a new modified version, perhaps containing only the section cited, which is something the legal licenses explicitly allow). But what if someone borrows, say, all of the equations in a module about information theory and uses them to illustrate a very different point in a different module? Does he or she have either a normal or a legal right to do so? Should the equations be cited? What should that citation look like? What if the equations are particularly hard to mark up in the MathML language and therefore represent a significant investment of time on the part of the original author? Should the law govern this activity, or should norms?
There is a natural tendency among geeks to answer these questions solely with respect to the law; it is, after all, highly codified and seemingly authoritative on such issues. However, there is often no need to engage the law, because of the presumed consensus ("academic norms") about how to proceed, even if those norms conflict with the law. But these norms are nowhere codified, and this makes geeks (and, increasingly, academics themselves) uneasy. As in the case of a requirement of attribution, the constraints of a written license are perceived to be much more stable and reliable than those of culture, precisely because culture is what remains contested and contestable. So the idea of creating a new "version" of a text is easier to understand when it is clearly circumscribed as a legally defined "derivative work." The Connexions software was therefore implemented in such a way that the legal right to create a derivative work (to fork a module) could be exercised with the press of a button: a distinct module is automatically created, and it retains the name of the original author and the original owner, but now also includes the new author's name as author and maintainer. That new author can proceed to make any number of changes.
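The one-button fork just described can be sketched as a small function. This is a hypothetical illustration of the behavior the text describes, not the actual Connexions code; the dictionary layout and names are invented:

```python
import copy

def fork_module(original: dict, new_author: str) -> dict:
    """Create a legally distinct derivative ("fork") of a module.

    As described in the text: the derived module keeps the original
    author and owner for attribution, and adds the forking user as
    an additional author and as the maintainer of the new module.
    """
    derived = copy.deepcopy(original)
    derived["derived_from"] = original["id"]
    derived["id"] = original["id"] + "-fork"   # a new, distinct identity
    derived["authors"] = original["authors"] + [new_author]
    derived["maintainers"] = [new_author]      # the new author now maintains
    # "owners" carries over unchanged: copyright in the original persists.
    return derived

# Example: forking a module whose credited author is long dead.
housman_module = {"id": "m1921", "authors": ["housman"],
                  "owners": ["mcgill", "kelty"], "maintainers": ["kelty"]}
derived = fork_module(housman_module, "newuser")
```

The design choice worth noting is that the fork never erases prior attribution; the license's attribution requirement is enforced structurally, by copying the author list forward rather than trusting the new author to preserve it.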
But is forking always necessary? What if the derivative work contains only a few spelling corrections and slightly updated information? Why not change the existing module (where such changes would be more akin to issuing a new edition), rather than create a legally defined derivative work? Why not simply suggest the changes to the original author? Why not collaborate? While a legal license gives people the right to do all of these things without ever consulting the person who licensed it, there may well be occasions when it makes much more sense to ignore those rights in favor of other norms. The answers to these questions depend a great deal on the kind and the intent of the reuse. A refined version of the whiteboard diagram, depicted in figure 9, attempts to capture the various kinds of reuse and their intersection with laws, norms, and technologies.
The center of the diagram contains a list of different kinds of imaginable reuses, arrayed from least interventionist at the top to most interventionist at the bottom, and it implies that as the intended transformations become more drastic, the likelihood of collaboration with the original author decreases. The arrow on the left indicates the legal path from cultural norms to protected fair uses; the arrow on the right indicates the technical path from built-in legal constraints based on the licenses to software tools that make collaboration (according to presumed scholarly norms) easier than the alternative (exercising the legal right to make a derivative work). With the benefit of hindsight, it seems that the arrows on either side should actually form a circle that connects laws, technologies, and norms in a chain of influence and constraint, since it is clear in retrospect that the norms of authorial practice have actually changed (or at least have been made explicit) based on the existence of licenses and the types of tools available (such as blogs and Wikipedia).
The diagram can best be understood as a way of representing, to Connexions itself (and its funders), the experiment under way with the components of Free Software. By modulating source code to include the writing of scholarly textbooks, Connexions made visible the need for new copyright licenses appropriate to this content; by making the system Internet-based and relying on open standards such as XML and Open Source components, Connexions also modulated the concept of openness to include textbook publication; and by making the system possible as an open repository of freely licensed textbook modules, Connexions made visible the changed conditions of coordination, not just between two collaborating authors, but within the entire system of publication, citation, use, reuse, borrowing, building on, plagiarizing, copying, emulating, and so on. Such changes to coordination may or may not take hold. For many scholars, they pose an immodest challenge to a working system that has developed over centuries, but for others they represent the removal of arbitrary constraints that prevent novel and innovative forms of knowledge creation and association rendered possible in the last thirty to forty years (and especially in the last ten). For some, these modulations might form the basis for a final modulation, a Free Textbooks movement, but as yet no such movement exists.
In the case of shared software source code, one of the principal reasons for sharing it was to reuse it: to build on it, to link to it, to employ it in ways that made building more complex objects an easier task. The design philosophy of UNIX articulates well the necessity of modularity and reuse, and the idea is no less powerful in other areas, such as textbooks. But just as the reuse of software is not simply a feature of software's technical characteristics, the idea of "reusing" scholarly materials implies all kinds of questions that are not simply questions of recombining texts. The ability to share source code, and the ability to create complex software based on it, requires modulations of both the legal meaning of software, as in the case of EMACS, and the organizational form, as in the emergence of Free Software projects other than the Free Software Foundation (the Linux kernel, Perl, Apache, etc.).
In the case of textbook reuse (but only after Free Software), the technical and the legal problems that Connexions addresses are relatively well specified: what software to use, whether to use XML, the need for an excellent user interface, and so on. However, the organizational, cultural, or practical meaning of reuse is not yet entirely clear (a point made by figures 8 and 9). In many ways, the recognition that there are cultural norms among academics mirrors the (re)discovery of norms and ethics among Free Software hackers.15 But the label "cultural norms" is a mere catch-all for a problem that is probably better understood as a mixture of concrete technical, organizational, and legal questions and of more or less abstract social imaginaries through which a particular kind of material order is understood and pursued: the creation of a recursive public. How do programmers, lawyers, engineers, and Free Software advocates (and anthropologists) "figure out" how norms work? How do they figure out ways to operationalize or make use of them? How do they figure out how to change them? How do they figure out how to create new norms? They do so through the modulations of existing practices, guided by imaginaries of moral and technical order. Connexions does not tend toward becoming Free Software, but it does tend toward becoming a recursive public with respect to textbooks, education, and the publication of pedagogical techniques and knowledge. The problematic of creating an independent, autonomous public is thus the subterranean ground of both Free Software and Connexions.
To some extent, then, the matter of reuse raises a host of questions about the borders and boundaries in and of academia. Brent, Ross, and I assumed at the outset that communities have both borders and norms, and that the two are related. But, as it turns out, this is not a safe assumption. At neither the technical nor the legal level is the use of the software restricted to academics (indeed, there is no feasible way to do that and still offer it on the Internet), nor does anyone involved wish it to be so restricted. However, there is an implicit sense that the people who will contribute content will primarily be academics and educators (just as Free Software participants are expected, but not required, to be programmers). As figure 9 makes clear, there may well be tremendous variation in the kinds of reuse that people wish to make, even within academia. Scholars in the humanities, for instance, are loath even to imagine others creating derivative works from articles they have written and can envision their work being used only in the conventional manner of being read, cited, and critiqued. Scholars in engineering, biology, or computer science, on the other hand, may well take pleasure in the idea or act of reuse, if it is adequately understood to be a "scientific result" or a suitably stable concept on which to build.16 Reuse can thus have a range of different meanings, differing not only between scholars and academics but within that heterogeneous group itself.
The Connexions software does not, however, enforce disciplinary differences. If anything it makes very strong and troubling claims that knowledge is knowledge and that disciplinary constraints are arbitrary. Thus, for instance, if a biologist wishes to transform a literary scholarâ€™s article on Darwinâ€™s tropes to make it reflect current evolutionary theory, he or she could do so; it is entirely possible, both legally and technically. The literary scholar could react in a number of ways, including outrage that the biologist has misread or misunderstood the work or pleasure in seeing the work refined. Connexions adheres rigorously to its ideas of openness in this regard; it neither encourages nor censures such behavior.
By contrast, as figure 9 suggests, the relationship between these two scholars can be governed either by the legal specification of rights contained in the licenses (a privately ordered legal regime dependent on a national-cum-global statutory regime) or by the customary means of collaboration enabled, perhaps enhanced, by software tools. The former is the domain of the state, the legal profession, and a moral and technical order that, for lack of a better word, might be called modernity. The latter, however, is the domain of the cultural, the informal, the practical, the interpersonal; it is the domain of ethics (prior to its modernization, perhaps) and of tradition.
If figure 9 is a recapitulation of modernity and tradition (what better role for an anthropologist to play!), then the presumptive boundaries around "communities" define which groups possess which norms. But the very design of Connexions, its technical and legal exactitude, immediately brings a potentially huge variety of traditions into conflict with one another. Can the biologist and the literary scholar be expected to occupy the same universe of norms? Does the fact of being academics, employees of a university, or readers of Darwin ensure this sharing of norms? How are the boundaries policed and the norms communicated and reinforced?
The problem of reuse therefore raises a much broader and more complex question: do norms actually exist? In particular, do they exist independent of the particular technical, legal, or organizational practice in which groups of people exist, outside the coordinated infrastructure of scholarship and science? And if Connexions raises this question, can the same question not also be asked of the elaborate system of professions, disciplines, and organizations that coordinate the scholarship of different communities? Are these norms, or are they "technical" and "legal" practices? What difference does formalization make? What difference does bureaucratization make?17
The question can also be posed this way: should norms be understood as historically changing constructs or as natural features of human behavior (regular patterns, or conventions, which emerge inevitably wherever human beings interact)? Are they a feature of changing institutions, laws, and technologies, or do they form and persist in the same way wherever people congregate? Are norms features of a "calculative agency," as Michel Callon puts it, or are they features of the evolved human mind, as Marc Hauser argues?18
The answer that my informants give in practice, concerning the mode of existence of cultural norms, is neither. On the one hand, in the Connexions project the question of the mode of existence of academic norms is unanswered; the basic assumption is that certain actions are captured and constrained neither by legal rules nor by technical barriers, and that it takes people who know or study "communities" (i.e., nonlegal and nontechnical constraints) to figure out what those actions may be. On some days, the project is modestly understood to enable academics to do what they do faster and better, but without fundamentally changing anything about the practice, institutions, or legal relations; on other days, however, it is a radically transformative project, changing how people think about creating scholarly work, a project that requires educating people and potentially "changing the culture" of scholarly work, including its technology, its legal relations, and its practices.
In stark contrast (despite the very large degree of simpatico), the principal members of Creative Commons answer the question of the existence of norms quite differently than do those in Connexions: they assert that norms not only change but are manipulated and/or channeled by the modulation of technical and legal practices (this is the novel version of law and economics on which Creative Commons is founded). Such an assertion leaves very little room for norms or for culture; there may be a deep evolutionary role for rule following or for choosing socially sanctioned behavior over socially unacceptable behavior, but the real action happens in the legal and technical domains. In Creative Commons the question of the existence of norms is answered firmly in the phrase coined by Glenn Brown: "punt to culture." For Creative Commons, norms are a prelegal and pretechnical substrate upon which the licenses they create operate. Norms must exist for the strategy employed in the licenses to make sense, as the following story illustrates.
On the Nonexistence of Norms in the Culture of No Culture
More than once, I have found myself on the telephone with Glenn Brown, staring at notes, a diagram, or some inscrutable collection of legalese. Usually, the conversations wander from fine legal points to music and Texas politics to Glenn's travels around the globe. They are often precipitated by some previous conversation and by Glenn's need to remind himself (and me) what we are in the middle of creating. Or destroying. His are never simple questions. While the Connexions project started with a repository of scholarly content in need of a license, Creative Commons started with licenses in need of particular kinds of content. But both projects required participants to delve into the details of both licenses and the structure of digital content, which qualified me, for both projects, as the intermediary who could help explore these intersections. My phone conversations with Glenn, then, were much like the whiteboard conversations at Connexions: filled with a mix of technical and legal terminology, and conducted largely in order to give Glenn the sense that he had cross-checked his plans with someone presumed to know better. I can't count the number of times I have hung up the phone or left the conference room wondering, "Have I just sanctioned something mad?" Yet rarely have I felt that my interventions served to do more than confirm suspicions or derail already unstable arguments.
In one particular conversation, the "punt to culture" conversation, I found myself bewildered by a sudden understanding of the process of writing legal licenses and of the particular assumptions about human behavior that need to be present in order to imagine creating these licenses or ensuring that they will be beneficial to the people who will use them.
These discussions (which often included other lawyers) happened in a kind of hypothetical space of legal imagination, a space highly structured by legal concepts, statutes, and precedents, and one extraordinarily carefully attuned to the fine details of semantics. A core aspect of operating within this imagination is the distinction between law as an abstract semantic entity and law as a practical fact that people may or may not deal with. To be sure, not all lawyers operate this way, but the warrant for thinking this way comes from no less eminent an authority than Oliver Wendell Holmes, for whom the "path of the law" was always from practice to abstract rule, and not the reverse.19 The opposition is unstable, but I highlight it here because it was frequently used as a strategy for constructing precise legal language. The ability to imagine the difference between an abstract rule designating legality and a rule encountered in practice was a first step toward seeing how the language of the rule should be constructed.
I helped write, read, and think about the first of the Creative Commons licenses, and it was through this experience that I came to understand how the crafting of legal language works, and in particular how the mode of existence of cultural or social norms relates to the crafting of legal language. Creative Commons licenses are not a familiar legal entity, however. They are modulations of the Free Software license, but they differ in important ways.
The Creative Commons licenses allow authors to grant the use of their work in about a dozen different ways; that is, the license itself comes in versions. One can, for instance, require attribution, prohibit commercial exploitation, allow derivative or modified works to be made and circulated, or some combination of all these. These different combinations actually create different licenses, each of which grants intellectual-property rights under slightly different conditions. For example, say Marshall Sahlins decides to write a paper about how the Internet is cultural; he copyrights the paper ("© 2004 Marshall Sahlins"), he requires that any use of it or any copies of it maintain the copyright notice and the attribution of authorship (these can be different), and he furthermore allows for commercial use of the paper. It would then be legal for a publishing house to take the paper off Sahlins's Linux-based Web server and publish it in a collection without having to ask permission, as long as the paper remains unchanged and he is clearly and unambiguously listed as author of the paper. The publishing house would not get any rights to the work, and Sahlins would not get any royalties. If he had specified noncommercial use, the publisher would instead have needed to contact him and arrange for a separate license (Creative Commons licenses are nonexclusive), under which he could demand some share of revenue and his name on the cover of the book.20 But say he was, instead, a young scholar seeking only peer recognition and approbation; then royalties would be secondary to maximum circulation. Creative Commons allows authors to assert, as its members put it, "some rights reserved" or even "no rights reserved."
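The "about a dozen" versions arise combinatorially from a few independent choices. A toy enumeration, under simplifying assumptions (that attribution was an optional element in the early licenses, that noncommercial is an independent toggle, and that derivatives are either allowed, allowed with share-alike, or forbidden), might look like this; the naming scheme is only loosely modeled on the familiar "by-nc-sa" abbreviations:

```python
from itertools import product

# Toy enumeration of license variants from three independent choices:
# attribution (by), noncommercial (nc), and the derivative-works rule.
variants = []
for by, nc, deriv in product([False, True], [False, True],
                             ["allowed", "sa", "nd"]):
    parts = [name for name, chosen in (("by", by), ("nc", nc)) if chosen]
    if deriv != "allowed":
        parts.append(deriv)   # "sa" (share-alike) or "nd" (no derivatives)
    variants.append("-".join(parts) or "none")

print(len(variants))  # 2 * 2 * 3 = 12 combinations, about a dozen
```

Each string names a distinct license, granting rights under slightly different conditions, which is the point of the passage: a small set of legal switches yields a family of licenses rather than a single one.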
But what if Sahlins had chosen a license that allowed modification of his work? This would mean that I, Christopher Kelty, whether in agreement with or in objection to his work, could download the paper, rewrite large sections of it, add in my own baroque and idiosyncratic scholarship, and write a section that purports to debunk (or, what could amount to the same thing, augment) Sahlins's arguments. I would then be legally entitled to re-release the paper as "© 2004 Marshall Sahlins, with modifications © 2007 Christopher Kelty," so long as Sahlins is identified as the author of the paper. The nature or extent of the modifications is not legally restricted, but both the original and the modified version would be legally attributed to Sahlins (even though he would own only the first paper).
In the course of a number of e-mails, chat sessions, and phone conversations with Glenn, I raised this example and proposed that the licenses needed a way to account for it, since it seemed to me entirely possible that were I to produce a modified work that so distorted Sahlins's original argument that he did not want to be associated with the modified paper, then he should have the right also to repudiate his identification as author. Sahlins should, legally speaking, be able to ask me to remove his name from all subsequent versions of my misrepresentation, thus clearing his good name and providing me the freedom to continue sullying mine into obscurity. After hashing it out with the expensive Palo Alto legal firm that was officially drafting the licenses, we came up with text that said: "If You create a Derivative Work, upon notice from any Licensor You must, to the extent practicable, remove from the Derivative Work any reference to such Licensor or the Original Author, as requested."
The bulk of our discussion centered on the need for the phrase "to the extent practicable." Glenn asked me, "How is the original author supposed to monitor all the possible uses of her name? How will she enforce this clause? Isn't it going to be difficult to remove the name from every copy?" Glenn was imagining a situation of strict adherence, one in which the presence of the name on the paper was the same as the reputation of the individual, regardless of who actually read it. On this theory, until all traces of the author's name were expunged from each of these teratomata circulating in the world, there could be no peace, and no rest for the wronged.
I paused, then gave the kind of sigh meant to imply that I had come to my hard-won understandings of culture through arduous dissertation research: "It probably won't need to be strictly enforced in all cases, only in the significant ones. Scholars tend to respond to each other only in very circumscribed cases, by writing letters to the editor or by sending responses or rebuttals to the journal that published the work. It takes a lot of work to really police a reputation, and it differs from discipline to discipline. Sometimes drastic action might be needed, usually not. There is so much misuse and abuse of people's arguments and work going on all the time that people only react when they are directly confronted with serious abuses. And even so, it is only in cases of negative criticism or misuse that people need respond. When a scholar uses someone's work approvingly, but incorrectly, it is usually considered petulant (at best) to correct them publicly."
"In short," I said, leaning back in my chair and acting the part of expert, "it's like, you know, c'mon—it isn't all law, there are a bunch of, you know, informal rules of civility and stuff that govern that sort of thing."
Then Glenn said, "Oh, okay, well that's when we punt to culture."
When I heard this phrase, I leaned too far back and fell over, joyfully stunned. Glenn had managed to capture what no amount of fieldwork, with however many subjects, could have. Some combination of American football, a twist of Hobbes or Holmes, and a lived understanding of what exactly these copyright licenses are meant to achieve gave this phrase a luminosity I usually associate only with Balinese cock-fights. It encapsulated, almost as a slogan, a very precise explanation of what Creative Commons had undertaken. It was not a theory Glenn proposed with this phrase, but a strategy in which a particular, if vague, theory of culture played a role.
For those unfamiliar, a bit of background on U.S. football may help. When two teams square off on the football field, the offensive team gets four attempts, called "downs," to move the ball either ten yards forward or into the end zone for a score. The first three downs usually involve one of two strategies: run or pass, run or pass. On the fourth down, however, the offensive team must either "go for it" (run or pass), kick a field goal (if close enough to the end zone), or "punt" the ball to the other team. Punting is a somewhat disappointing option, because it means giving up possession of the ball to the other team, but it has the advantage of putting the other team as far back on the playing field as possible, thus decreasing its likelihood of scoring.
To "punt to culture," then, suggests that copyright licenses try three times to legally restrict what a user or consumer of a work can make of it. By using the existing federal intellectual-property laws and the rules of license and contract writing, copyright licenses articulate to people what they can and cannot do with that work according to law. While the licenses do not (they cannot) force people, in any tangible sense, to do one thing or another, they can use the language of law and contract to warn people and, perhaps obliquely, to threaten them. If the licenses end up silent on a point—if there is no "score," to continue the analogy—then it's time to punt to culture. Rather than make more law, or call in the police, the license strategy relies on culture to fill in the gaps with people's own understandings of what is right and wrong, beyond the law. It operationalizes a theory of culture, a theory that emphasizes the sovereignty of nonstate customs and the diversity of systems of cultural norms. Creative Commons would prefer that its licenses remain legally minimalist. It would much prefer to assume—indeed, the licenses implicitly require—the robust, powerful existence of this multifarious, hetero-physiognomic, and formidable opponent to the law with neither uniform nor mascot, hunched at the far end of the field, preparing to, so to speak, clean law's clock.
Creative Commons's "culture" thus seems to be a somewhat vague mixture of many familiar theories. Culture is an unspecified but finely articulated set of given, evolved, designed, informal, practiced, habitual, local, social, civil, or historical norms that are expected to govern the behavior of individuals in the absence of a state, a court, a king, or a police force, at one of any number of scales. It is not monolithic (indeed, my self-assured explanation concerned only the norms of "academia"), but assumes a diversity beyond enumeration. It employs elements of relativism—any culture should be able to trump the legal rules. It is not a hereditary biological theory, but one that assumes historical contingency and arbitrary structures.
Certainly, whatever culture is, it is separate from law. Law is, to borrow Sharon Traweek's famous phrase, "a culture of no culture" in this sense. It is not the cultural and normative practices of legal scholars, judges, lawyers, legislators, and lobbyists that determine what laws will look like, but their careful, expert, noncultural ratiocination. In this sense, punting to culture implies that laws are the result of human design, whereas culture is the result of human action, but not of human design. Law is systematic and tractable; culture may have a deep structure, but it is intractable to human design. It can, however, be channeled and tracked, nudged or guided, by law.
Thus, Lawrence Lessig, one of the founders of Creative Commons, has written extensively about the "regulation of social meaning," using cases such as those involving the use or nonuse of seatbelts or whether or not to allow smoking in public places. The decision not to wear a seatbelt, for instance, may have much more to do with the contextual meaning of putting on a seatbelt (don't you trust the cab driver?) than with either the existence of the seatbelt (or automatic seatbelts, for that matter) or with laws demanding their use. According to Lessig, the best law can do in the face of custom is to change the meaning of wearing the seatbelt: to give the refusal a dishonorable rather than an honorable meaning. Creative Commons licenses are based on a similar assumption: the law is relatively powerless in the face of entrenched academic or artistic customs, and so the best the licenses can do is channel the meaning of sharing and reuse, of copyright control or infringement. As Glenn explained in the context of a discussion about a license that would allow music sampling:
We anticipate that the phrase "as appropriate to the medium, genre, and market niche" might prompt some anxiety, as it leaves things relatively undefined. But there's more method here than you might expect: The definition of "sampling" or "collage" varies across different media. Rather than try to define all possible scenarios (including ones that haven't happened yet)—which would have the effect of restricting the types of re-uses to a limited set—we took the more laissez faire approach.
This sort of deference to community values—think of it as "punting to culture"—is very common in everyday business and contract law. The idea is that when lawyers have trouble defining the specialized terms of certain subcultures, they should get out of the way and let those subcultures work them out. It's probably not a surprise Creative Commons likes this sort of notion a lot.21
As in the case of reuse in Connexions, sampling in the music world can imply a number of different, perhaps overlapping, customary meanings of what is acceptable and what is not. For Connexions, the trick was to differentiate the cases wherein collaboration should be encouraged from the cases wherein the legal right to "sample"—to fork or to create a derived work—was the appropriate course of action. For Creative Commons, the very structure of the licenses attempts to capture this distinction as such and to allow for individuals to make determinations about the meaning of sampling themselves.22
At stake, then, is the construction of both technologies and legal licenses that, as Brent and Rich would assert, "make it easy for users to do the right thing." The "right thing," however, is precisely what goes unstated: the moral and technical order that guides the design of both licenses and tools. Connexions users are given tools that facilitate citation, acknowledgment, attribution, and certain kinds of reuse instead of tools that privilege anonymity or facilitate proliferation or encourage nonreciprocal collaborations. By the same token, Creative Commons licenses, while legally binding, are created with the aim of changing norms: they promote attribution and citation; they promote fair use and clearly designated uses; they are written to give users flexibility to decide what kinds of things should be allowed and what kinds shouldn't. Without a doubt, the "right thing" is right for some people and not for others—and it is thus political. But the criteria for what is right are not merely political; the criteria are what constitute the affinity of these geeks in the first place, what makes them a recursive public. They see in these instruments the possibility for the creation of authentic publics whose role is to stand outside power, outside markets, and to participate in sovereignty, and through this participation to produce liberty without sacrificing stability.
What happens when geeks modulate the practices that make up Free Software? What is the intuition or the cultural significance of Free Software that makes people want to emulate and modulate it? Creative Commons and Connexions modulate the practices of Free Software and extend them in new ways. They change the meaning of shared source code to include shared nonsoftware, and they try to apply the practices of license writing, coordination, and openness to new domains. At one level, such an activity is fascinating simply because of what it reveals: in the case of Connexions, it reveals the problem of determining the finality of a work. How should the authority, stability, and reliability of knowledge be assessed when work can be rendered permanently modifiable? It is an activity that reveals the complexity of the system of authorization and evaluation that has been built in the past.
The intuition that Connexions and Creative Commons draw from Free Software is an intuition about the authority of knowledge, about a reorientation of knowledge and power that demands a response. That response needs to be technical and legal, to be sure, but it also needs to be public—a response that defines the meaning of finality publicly and openly and makes modifiability an irreversible aspect of the process of stabilizing knowledge. Such a commitment is incompatible with the provision of stable knowledge by unaccountable private parties, whether individuals or corporations or governments, or by technical fiat. There must always remain the possibility that someone can question, change, reuse, and modify according to their needs.
Posted by Christopher Kelty on May 8, 2008