On October 25, the Lemelson Center will hold its annual New Perspectives on Invention and Innovation symposium, and this year’s theme is ripped from the headlines: inventing the surveillance society. We knew we had a hot topic on our hands when we began planning early in 2013, but we had no idea it would soon ignite when Edward Snowden leaked information about the National Security Agency’s (NSA) spying at home and abroad. To say that this has raised the stakes for us at the Lemelson Center would be a huge understatement. While there are myriad aspects to the ongoing controversy, our symposium will focus on the technology of surveillance and the related issues of social and ethical responsibility.
If history has taught us anything, it is that technology and invention can often escape our control, however good our intentions. If we wait to address social problems downstream after they arise, it is usually too late. It then becomes mostly a futile game of catch-up. The ideal approach is to try to anticipate such problems from the start of major projects, building in front-end attention to the social and ethical impacts of emergent technologies. I say “ideal,” because there are major obstacles to doing so. Mostly it’s a matter of money, but second-guessing an emerging technology in this way may also be criticized and dismissed as a brake on innovation.
Occasionally, though, such foresight is evident. Consider the government’s Human Genome Project, in which the National Institutes of Health (NIH) and the Department of Energy (DOE) were major players (the latter because of concern over health issues related to radiation from atomic testing and chemical exposure). Both the DOE and the NIH genome programs set aside fully three to five percent of their annual budgets for risk assessment and for the investigation of Ethical, Legal, and Social Implications (ELSI). While such efforts may not have fully anticipated, much less solved, future problems, at least they were a move in the right direction.
That privacy issues were deeply implicated with the genome project was recognized early on. There was a real danger that personal genetic information could get into the wrong hands or be used inappropriately, with truly scary consequences for individuals, including denial of employment and health insurance. One might reasonably ask why the same care has not been taken with information and communications technologies that have allowed the NSA to do what it does. Why were we taken by surprise by both government and commercial abuse of digital innovations? Perhaps it’s because they emerged over a relatively long period of time and from a disparate set of players, whereas the Human Genome Project(s) had much more the flavor of a crash program like the Manhattan Project, with its reliable funding, central management, and tightly controlled access.
Yet, as with the genome project, government- and particularly military-sponsored R&D played a critical role in launching today’s breakthrough digital technologies, and it still does. The Internet owed its birth to the Defense Department’s Advanced Research Projects Agency, which developed ARPANET. With national security as the overarching motivation and justification, keeping the new technologies under control and out of the wrong hands had to be a major concern. Because of the veil of secrecy, I can’t say whether there was actually an attempt to establish regulations with respect to the privacy issues that bedevil us today. Perhaps there was, but clearly the events of September 11 have fundamentally changed the rules of the game.
A major factor in the whole problem of management and control was the privatizing of government R&D, resulting in hybrid organizations combining the private and government sectors (see Kevin R. Kosar, “The Quasi Government: Hybrid Organizations with Both Government and Private Sector Legal Characteristics,” Congressional Research Service, June 22, 2011). The pattern was established after the Second World War with the creation of Federally Funded Research and Development Centers (FFRDCs)—so-called GO-CO (government owned, contractor operated) organizations. The first of these was the Air Force’s RAND Corporation, established in 1947 in Santa Monica, CA. Government atomic weapons labs like Oak Ridge and Los Alamos national laboratories soon followed suit. Such quasi-government arrangements allowed for much more flexibility in spending, procurement, hiring, and personnel adjustments, and for more rapid technology transfer from basic research to application.
There were clear advantages to this model, but it has come under increasing scrutiny and criticism over the last decade because of the potential for corruption, the lack of accountability and oversight, and the loss of government control of research. In particular, the FFRDCs greatly complicated the problem of regulation. More than a decade ago, public policy expert Ann Markusen argued persuasively against privatizing national security. She pointed out that government outsourcing requires strong management, but “such capacity is undercut by the unpopularity of regulation and unwillingness to spend on it” (“The Case Against Privatizing National Security,” June 2001).
As I write, the question of governmental oversight of the National Security Agency’s data-mining, monitoring, and outright spying is being hotly debated. Perhaps the NSA was and is indeed working within its own regime of regulation and accountability. But the cozy relationship today between government agencies like the NSA and the companies they outsource to makes it far too easy for classified government innovation and information to flow into the commercial sector, where there is little if any incentive for regulation. The Snowden case is a prime example.
Today, I often hear it argued that no one should have been surprised by the revelations of government spying. After all, social media users, not to mention online shoppers, have already willingly ceded much of their personal privacy to corporations, with little if any apparent concern for the consequences. As my colleague Jeff Brodie noted (with tongue firmly in cheek), “we want our cake, we want the icing, and we want to eat it without gaining weight.” (A penetrating satire on this incredibly self-destructive social behavior is Dave Eggers’s recent novel on the ultimate perils of Big Data, The Circle.)
Invention and innovation, however, can also be powerful forces for democracy and the public good. Recent history has shown that cell phones and social media have made it far more difficult for dictators to control information; such technology was clearly crucial to the Arab Spring, for example. But it is also a double-edged sword, one that ill-intentioned regimes can use to undermine democracy in unprecedented ways. With mounting concerns for national security, surveillance technologies are not going away. But is it too late to bring them back under at least some semblance of democratic control?