The Journal of Things We Like (Lots)
Select Page

How to Unlock the Voting Block

Gavin Rynard, How to Unlock the Voting Block, U. Miami Int'l & Comp. L. Rev. (2017).

A harsh, contentious election cycle is bound to create apathy in a vast swath of the United States population. For many Americans, apathy ends with the convenient excuse heard around water coolers the day after election day: “I forgot to register.” On March 15, 2015, Oregon became the first state in the country to take that excuse away from its residents with its new Motor Voter Act.

Oregon’s Motor Voter Act automatically registers Oregonians to vote whenever they visit the Department of Motor Vehicles (“DMV”) to apply for, renew, or replace an Oregon driver’s license, ID card, or permit. Oregon’s goal was to increase voter turnout. Voter registration correlates strongly with voter turnout in the United States: while only 53.6% of the voting-age population voted in the 2012 presidential election, 84.3% of registered voters did. Furthermore, approximately one quarter of eligible American voters—51 million Americans—are not registered to vote. Oregon therefore saw an opportunity, and its plan was simple: eliminate the registration barrier to increase turnout.

This article will discuss the details and merits of Oregon’s automatic voter registration program. Essentially, this article poses and answers one basic question: How far should a government go to encourage its citizens to vote? It will also prescribe a solution for improving voter turnout at both the state and federal levels: encouraging Americans to lobby and write to their state and federal legislatures, and to push for ballot initiatives at the state level.

Cite as: Gavin Rynard, How to Unlock the Voting Block, JOTWELL (February 14, 2017) (reviewing Gavin Rynard, How to Unlock the Voting Block, U. Miami Int'l & Comp. L. Rev. (2017)), https://zetasec4.jotwell.com/unlock-voting-block/.

Calling Bull**** in the Age of Big Data

Instructors: Carl T. Bergstrom and Jevin West

Prerequisites: None, though Professor Sanjay Srivastava’s PSY 607 may provide useful background.

Synopsis and learning objectives: Our world is saturated with bullshit. Learn how to detect it and defuse it. Full details on our about page.

Disclaimer: We have developed this seminar to meet what we see as a major need in higher education nationwide, and we have proposed to teach it at the University of Washington in the near future. At present, however, Calling Bullshit is not an official course, nor is it otherwise endorsed by the University of Washington.

Each of the lectures will explore one specific facet of bullshit. For each week, a set of required readings is assigned. For some weeks, supplementary readings are also provided for those who wish to delve deeper.

Lectures

  1. Introduction to bullshit
  2. Spotting bullshit
  3. The natural ecology of bullshit
  4. Causality
  5. Statistical traps
  6. Data visualization
  7. Big data
  8. Publication bias
  9. Predatory publishing and scientific misconduct
  10. The ethics of calling bullshit
  11. Fake news

Week 1. Introduction to bullshit. What is bullshit? Concepts and categories of bullshit. The art, science, and moral imperative of calling bullshit. Brandolini’s Bullshit Asymmetry Principle.

  • Harry Frankfurt (1986) On Bullshit. Raritan Quarterly Review 6(2)
  • G. A. Cohen (2002) Deeper into Bullshit. Buss and Overton, eds., Contours of Agency: Themes from the Philosophy of Harry Frankfurt. Cambridge, Massachusetts: MIT Press.

Week 2. Spotting bullshit. Truth, like liberty, requires eternal vigilance. How do you spot bullshit in the wild? Effect sizes, dimensions, Fermi estimation, and checks on plausibility. Claims and the interests of those who make them.

  • Carl Sagan (1996) The Fine Art of Baloney Detection. Chapter 12 in The Demon-Haunted World
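
A minimal sketch of the Fermi-estimation idea from this week’s description (every number below is an illustrative assumption, not course material), using the classic “how many piano tuners are in Chicago?” exercise:

```python
# Fermi estimation: chain rough order-of-magnitude factors and see whether
# the result lands in a plausible range. Every input here is a guess.
population = 3_000_000            # people in Chicago (rough)
people_per_household = 2          # rough
piano_share = 1 / 20              # fraction of households with a piano (guess)
tunings_per_piano_per_year = 1
tunings_per_tuner_per_day = 4
working_days_per_year = 250

pianos = population / people_per_household * piano_share
tunings_needed = pianos * tunings_per_piano_per_year
tuner_capacity = tunings_per_tuner_per_day * working_days_per_year
print(f"Estimated tuners: {tunings_needed / tuner_capacity:.0f}")  # ~75
```

The exact answer is beside the point; chaining rough factors gets you within an order of magnitude, which is often enough to judge whether a circulating claim is plausible.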


Week 3. The natural ecology of bullshit.

Where do we find bullshit? Why news media provide bullshit. TED talks and the marketplace for upscale bullshit. Why social media provide ideal conditions for the growth and spread of bullshit.


Week 4. Causality. One common source of bullshit data analysis arises when people ignore, deliberately or otherwise, the fact that correlation is not causation. The consequences can be hilarious, but this confusion can also be used to mislead. Regression to the mean pitched as a treatment effect. Selection masked as transformation.

Supplementary reading

  • Karl Pearson (1897) On a Form of Spurious Correlation which may arise when Indices are used in the Measurement of Organs. Proceedings of the Royal Society of London 60: 489–498. For context see also Aldrich (1995).
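
Regression to the mean pitched as a treatment effect is easy to demonstrate by simulation. The sketch below (my illustration, not drawn from the readings) selects the worst scorers on a noisy test, applies no treatment whatsoever, and shows their average score “improving” on retest:

```python
import random

random.seed(1)
N = 10_000
ability = [random.gauss(0, 1) for _ in range(N)]
test1 = [a + random.gauss(0, 1) for a in ability]  # noisy measurement
test2 = [a + random.gauss(0, 1) for a in ability]  # same ability, fresh noise

# "Enroll" the bottom decile of test1 scorers in a sham intervention.
cutoff = sorted(test1)[N // 10]
enrolled = [i for i in range(N) if test1[i] <= cutoff]

before = sum(test1[i] for i in enrolled) / len(enrolled)
after = sum(test2[i] for i in enrolled) / len(enrolled)
print(f"mean before: {before:.2f}, mean after: {after:.2f}")
# Scores rise substantially with zero treatment effect: pure regression
# to the mean, easily dressed up as a successful intervention.
```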

Week 5. Statistical traps. Base-rate fallacy / prosecutor’s fallacy. Simpson’s paradox. Data censoring. Will Rogers effect, lead-time bias, and length-time bias. Means versus medians. Importance of higher moments.

  • Simpson’s paradox: an interactive data visualization from VUDlab at UC Berkeley.
  • Alvan Feinstein et al. (1985) The Will Rogers Phenomenon — Stage Migration and New Diagnostic Techniques as a Source of Misleading Statistics for Survival in Cancer. New England Journal of Medicine 312:1604-1608.
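
Simpson’s paradox itself fits in a few lines. The sketch below uses the treatment-success counts commonly reported from the kidney-stone study (Charig et al. 1986); stratum names aside, nothing here depends on the medical details:

```python
# Each within-stratum comparison reverses when the strata are pooled, because
# treatment A was tried mostly on the harder (large-stone) cases.
groups = {
    # stratum: (A successes, A total, B successes, B total)
    "small stones": (81, 87, 234, 270),
    "large stones": (192, 263, 55, 80),
}
suc_a = tot_a = suc_b = tot_b = 0
for name, (sa, na, sb, nb) in groups.items():
    print(f"{name}: A {sa/na:.0%} vs B {sb/nb:.0%}")  # A wins in both strata
    suc_a += sa
    tot_a += na
    suc_b += sb
    tot_b += nb
print(f"pooled:       A {suc_a/tot_a:.0%} vs B {suc_b/tot_b:.0%}")  # B wins pooled
```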

Week 6. Data visualization. Data graphics can be powerful tools for understanding information, but they can also be powerful tools for misleading audiences. We explore the many ways that data graphics can steer viewers toward misleading conclusions.

  • Edward Tufte (1983) The Visual Display of Quantitative Information. Chapter 5: Chartjunk: vibrations, grids, and ducks.

Week 7. Big data. When does any old algorithm work given enough data, and when is it garbage in, garbage out? Use and abuse of machine learning. Misleading metrics. Goodhart’s law.

  • Jevin West (2014) How to improve the use of metrics: learn from game theory. Nature 465:871-872

Week 8. Publication bias. Even a community of competent scientists all acting in good faith can generate a misleading scholarly record when — as is the case in the current publishing environment — journals prefer to publish positive results over negative ones. In a provocative and hugely influential 2005 paper, epidemiologist John Ioannidis went so far as to argue that this publication bias has created a situation in which most published scientific results are probably false. As a result, it’s not clear that one can safely rely on the results of some random study reported in the scientific literature, let alone on Buzzfeed.

Supplementary Reading

    • Erick Turner et al. (2008) Selective Publication of Antidepressant Trials and Its Influence on Apparent Efficacy. New England Journal of Medicine 358:252-260
    • Nissen et al. (2016) Publication bias and the canonization of false facts. eLife 5:e21451
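
The core of Ioannidis’s argument is back-of-envelope arithmetic on the positive predictive value of a “significant” finding. The sketch below is my simplification of the 2005 paper’s setup (one team, no bias term), with the conventional alpha and power as assumed inputs:

```python
def ppv(prior, alpha=0.05, power=0.8):
    """P(hypothesis true | statistically significant result)."""
    true_pos = power * prior          # true hypotheses correctly detected
    false_pos = alpha * (1 - prior)   # false hypotheses passing by chance
    return true_pos / (true_pos + false_pos)

for prior in (0.5, 0.1, 0.01):
    print(f"prior {prior:>4}: PPV = {ppv(prior):.2f}")
# 0.5 -> 0.94, 0.1 -> 0.64, 0.01 -> 0.14: when few tested hypotheses are
# true, most "positive" results are false even before journals
# preferentially publish them.
```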


Week 9. Predatory publishing and scientific misconduct. Predatory publishing. Beall’s list and his anti-Open Access agenda. Publishing economics. Pathologies of publish-or-perish culture.

  • New York Times Dec. 29, 2016.


Week 10. The ethics of calling bullshit. Where is the line between deserved criticism and targeted harassment? Is it, as one prominent scholar argued, “methodological terrorism” to call bullshit on a colleague’s analysis? What if you use social media instead of a peer-reviewed journal to do so? How about calling bullshit on a whole field that you know almost nothing about? Principles for the ethical calling of bullshit. Differences between being a hard-minded skeptic and being a domineering jerk.


Week 11. Fake news. Fifteen years ago, nascent social media platforms offered the promise of a more democratic press through decentralized broadcasting and a decoupling of publishing from advertising revenue. Instead, we got sectarian echo chambers and, lately, a serious assault on the very notion of fact. Not only did fake news play a substantive role in the November 2016 US elections, but recently a fake news story actually provoked nuclear threats issued via Twitter.

  • New York Times Nov. 25, 2016

Cite as: A. Michael Froomkin, Calling Bull**** in the Age of Big Data, JOTWELL (January 12, 2017) (reviewing Instructors: Carl T. Bergstrom and Jevin West), https://zetasec4.jotwell.com/calling-bull-age-big-data/.

Title Size and Italics Test

A. Michael Froomkin, Towards Identity Bankruptcy, (unpbl. MSS 2017).

As computer-mediated communications displace voice and phone, not to mention pen and paper, what were formerly ephemera (such as messages now in emails rather than postcards) and formerly limited in circulation (e.g., old photos) have the potential to linger forever and to be accessible to anyone with an internet connection and the ability to type or say “google.com”. Social media has encouraged explosive growth in both self-surveillance and surveillance of one’s friends and geographically coterminous strangers. The prevalence of CCTV and other public sensors only adds to the data collected about all of us. Given the persistence and searchability of all this data, employers, financial institutions, and other economic actors have turned to the Internet to seek information about potential employees and clients. This is of course only part of a larger trend in which everyone from social acquaintances to possible romantic partners increasingly looks to the Internet to find out what was formerly private and inaccessible information. Consequently, enormous quantities of both false data and true but unfairly prejudicial data become indelibly associated with a person’s electronic and legal identities.

Many aspects of this trend towards universal transparency could be said to be beneficial. But some clearly are harmful. In response to those, we should create a system of “Identity Bankruptcy” by which people could ensure that certain classes of online personal information could not be used to discriminate against them in employment, housing, access to credit, and other similar market relationships. This essay does not purport to provide a fully worked out account of all the details of how this might work. Rather, the goal here is to provide at most a preliminary sketch of its outlines.

The basic idea behind Identity Bankruptcy is to mitigate some of the harms of the Internet’s permanence by providing a formal judicial or administrative procedure, inspired by existing personal bankruptcy law, that would allow some persons, upon good cause shown, to make a fresh start as to their electronic identity on social media and the internet more generally. Identity Bankruptcy is not primarily addressed to the issue of the persistence and visibility of old court and other official records, although it could easily be expanded to cover those records if there were public demand for that extension.

A declaration of Identity Bankruptcy would forbid employers, lenders, landlords, and other market participants from making decisions based in any part on the matters covered by the declaration. Persons who successfully apply for a declaration of ID Bankruptcy would have a chance to make a fresh start in one part of their lives. By banning certain commercial uses of the information—much as we currently ban discrimination based on age, disability, race, religion, genetic information, or national origin—Identity Bankruptcy would allow persons to partly disentangle their ongoing legal and personal identity from many (but not all) electronic records of their past.

Cite as: A. Michael Froomkin, Title Size and Italics Test, JOTWELL (January 5, 2017) (reviewing A. Michael Froomkin, Towards Identity Bankruptcy, (unpbl. MSS 2017)), https://zetasec4.jotwell.com/title-size-italics-test/.

Title © symbol + Italics and — emdash – endash (brackets) [square brackets]  slashes /\ exclaim! at@ hash# dollar$ percent% ampersand& star* parens() question ??? an…elipsis X2 <– superscript strike  (Revised)

A. Lucifer, The Wages of Hell, 1 Demonic Times 1 (0000).

Need to test the bullet points:

  • First item
  • Second item. Privatization is a phenomenon that legal theorists and legal philosophers have begun to notice and to stake out positions on, for and against. Privatization is defined with reference to the (too?) familiar distinction1 between public and private actors. Privatization happens when a good, service, or a function that is typically supplied by state government, through the efforts of its officials and personnel, comes to be provided by private actors, perhaps still at state expense. In a pair of recent articles, Avihay Dorfman and Alon Harel have singled out private prisons and mercenary armies as paradigm examples of privatized public goods. Dorfman and Harel lament the fact that both advocates and opponents of privatization conceive the normative issue in purely “instrumentalist” terms. Which type of actor, public or private, can provide a given good or service more efficiently?
  • Third item. While we must constantly remind ourselves that each case we analyze or teach involves real individuals with real disputes that affected real lives, there is a certain fictional quality to these stories precisely because the judicial opinion is the lead character. Judicial opinions can never be more than an abstract, a description of events that then becomes the accepted narrative. Paul Robert Cohen’s expletive-bearing jacket was expression serving an “emotive function,” according to the Court, not an “absurd and immature antic,” as the dissent would have it, and that made all the difference. Opinions have authors, and authors are necessarily engaged in a project of crafting narratives with a result in mind.

And the numbered lists too:

  1. First item
  2. Second item. Privatization is a phenomenon that legal theorists and legal philosophers have begun to notice and to stake out positions on, for and against. Privatization is defined with reference to the (too?) familiar distinction between public and private actors. Privatization happens when a good, service, or a function that is typically supplied by state government, through the efforts of its officials and personnel, comes to be provided by private actors, perhaps still at state expense. In a pair of recent articles, Avihay Dorfman and Alon Harel have singled out private prisons and mercenary armies as paradigm examples of privatized public goods. Dorfman and Harel lament the fact that both advocates and opponents of privatization conceive the normative issue in purely “instrumentalist” terms. Which type of actor, public or private, can provide a given good or service more efficiently?
  3. Third item. While we must constantly remind ourselves that each case we analyze or teach involves real individuals with real disputes that affected real lives, there is a certain fictional quality to these stories precisely because the judicial opinion is the lead character. Judicial opinions can never be more than an abstract, a description of events that then becomes the accepted narrative. Paul Robert Cohen’s expletive-bearing jacket was expression serving an “emotive function,” according to the Court, not an “absurd and immature antic,” as the dissent would have it, and that made all the difference. Opinions have authors, and authors are necessarily engaged in a project of crafting narratives with a result in mind.

Yet more text, with a blockquote in the paragraph:

But the disagreements among constitutional theorists run deeper than the question of how to decide cases; scholars also disagree about how to evaluate the merits of a given decision-making approach. One cannot defend one’s preferred method of constitutional adjudication without identifying reasons why that method is preferable to others. And to identify these reasons, one must have an account of what a successful approach to constitutional adjudication achieves. Should we value methodologies that consistently produce substantively desirable judicial outcomes? Should we value methodologies that best reflect the Constitution’s status as written law ratified by “We the People”? Should we value methodologies that constrain the power of unelected judges? Should we value methodologies that adhere to conventional understandings of “what the law is”? And so on. Different approaches to constitutional decision-making will look more or less attractive depending on the criteria against which we evaluate them. And different people favor different approaches in part because they disagree as to what those criteria should be.2

Andrew Coan’s illuminating new article is about this second set of questions—questions that go to what Coan3 calls the “normative foundations” 4 of constitutional theory.5 These questions, as Coan readily concedes,6 are by no means unfamiliar to constitutional lawyers; scholars routinely identify criteria for evaluating a decision-making methodology and, in the course of doing so, have very often set out to defend the relevance of the criteria they use. But what Coan’s article aims to provide is a systematic examination of the competing sets of “first principles” from which different theories of constitutional decision-making begin. Coan’s7 goal, in other words, is to survey the existing landscape of normative constitutional theory with an eye toward describing and evaluating the various types of reasons and arguments that constitutional theorists regard as relevant to the choice among…

Carter’s initial point is that both scholarly commentary and legal analysis of premarital agreements are based on unsupported empirical claims that premarital agreements generally involve richer would-be husbands imposing exploitative one-sided terms on poorer would-be wives. Like Carter, I do not know of any reliable data regarding how many people enter premarital agreements, what their motivations are, and how frequently one-sided terms are included in those agreements. However, the view of premarital agreements as instruments of oppression is not entirely mythical: it comes from reading the published opinions involving them (where this scenario is in fact common). But why should we assume that the reported cases accurately reflect the general practice of premarital contracting? Perhaps only the unconscionable agreements get litigated (and appealed). Agreements that are entered in good faith and are substantively fair are unlikely to be challenged, and if challenged, they will probably not raise the sort of issues that result in reported decisions.

Then back to regular text.

  1. This is a footnote. []
  2. This is a footnote. []
  3. This is a footnote. []
  4. This is a footnote. []
  5. This is a footnote. []
  6. This is a footnote. []
  7. Another footnote. []
Cite as: A. Michael Froomkin, Title © symbol + Italics and — emdash – endash (brackets) [square brackets]  slashes /\ exclaim! at@ hash# dollar$ percent% ampersand& star* parens() question ??? an…elipsis X2 <– superscript strike  (Revised), JOTWELL (January 3, 2017) (reviewing A. Lucifer, The Wages of Hell, 1 Demonic Times 1 (0000)), https://zetasec4.jotwell.com/title-symbol-italics-emdash-endash-brackets-square-brackets-slashes-exclaim-hash-dollar-percent-ampersand-star-parens-question-elipsis/.

Police Force (Test Version)

Works mentioned in this review:

  • Elizabeth E. Joh & Thomas W. Joo, Sting Victims: Third-Party Harms in Undercover Police Operations
  • Elizabeth E. Joh, Bait, Mask, and Ruse: Technology and Police Deception
  • Stephanie K. Pell & Christopher Soghoian, A Lot More Than a Pen Register, and Less Than a Wiretap: What the StingRay Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities
  • Steven M. Bellovin, Matt Blaze, Sandy Clark & Susan Landau, Lawful Hacking
  • Jonathan Mayer, Constitutional Malware
  • Brian Owsley, Beware of Government Agents Bearing Trojan Horses
  • Ahmed Ghappour, Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web

Police carry weapons, and sometimes they use them. When they do, people can die: the unarmed like Walter Scott and Tamir Rice, and bystanders like Akai Gurley and Bettie Jones. Since disarming police is a non-starter in our gun-saturated society, the next-best option is oversight. Laws and departmental policies tell officers when they can and can’t shoot; use-of-force review boards and juries hold officers accountable (or are supposed to) if they shoot without good reason. There are even some weapons police shouldn’t have at all.

Online police carry weapons, too, because preventing and prosecuting new twists on old crimes often requires new investigative tools. The San Bernardino shooters left behind a locked iPhone. Child pornographers gather on hidden websites. Drug deals are done in Bitcoins. Hacker gangs hold hospitals’ computer systems for ransom. Modern law enforcement doesn’t just passively listen in: it breaks security, exploits software vulnerabilities, installs malware, sets up fake cell phone towers, and hacks its way onto all manner of devices and services. These new weapons are dangerous; they need new rules of engagement, oversight, and accountability. The articles discussed in this review help start the conversation about how to guard against police abuse of these new tools.

In one recent case, the FBI seized control of a child pornography website. For two weeks, the FBI operated the website itself, sending a “Network Investigative Technique” — or, to call things by their proper names, a piece of spyware — to the computers of people who visited the website. The spyware then phoned home, giving the FBI the information it needed (IP addresses) to start identifying the users so they could be investigated and prosecuted on child pornography charges.

There’s something troubling about police operation of a spyware-spewing website; that’s something we normally expect from shady grey-market advertisers, not sworn officers of the law. For one thing, it involves pervasive deception. As Elizabeth E. Joh and Thomas W. Joo explain in Sting Victims: Third-Party Harms in Undercover Police Operations, this is hardly a new problem. Police have been using fake names and fake businesses for a long time. Joh and Joo’s article singles out the underappreciated way in which these ruses can harm third parties other than the targets of the investigation. In child abuse cases, for example, the further distribution of images of children being sexually abused “cause[s] new injury to the child’s reputation and emotional well-being.”

Often, the biggest victims of police impersonation are the specific people or entities being impersonated. Joh and Joo give a particularly cogent critique of this law enforcement “identity theft.” The resulting harm to trust is especially serious online, where other indicia of identity are weak to begin with. The Justice Department paid $143,000 to settle a civil case brought by a woman whose name and intimate photographs were used by the DEA to set up a fake Facebook account and send a friend request to a fugitive.

Again, deception by police is not new. But in a related essay, Bait, Mask, and Ruse: Technology and Police Deception, Joh nicely explains how “technology has made deceptive policing easier and more pervasive.” A good example, discussed in detail by Stephanie K. Pell and Christopher Soghoian in their article, A Lot More Than a Pen Register, and Less Than a Wiretap: What the StingRay Teaches Us About How Congress Should Approach the Reform of Law Enforcement Surveillance Authorities, is IMSI catchers, or StingRays. These portable electronic devices pretend to be cell phone towers, forcing nearby cellular devices to communicate with them, exposing some metadata in the process. This is a kind of lie, and not necessarily a harmless one. Tricking phones into talking to fake cell towers hinders their communications with real ones, which can raise power consumption and hurt connectivity.

In an investigative context, StingRays are commonly used to locate specific cell phones without the assistance of the phone company, or to obtain a list of all cell phones near the StingRay. Pell and Soghoian convincingly argue that StingRays successfully slipped through holes in the institutional oversight of surveillance technology. On the one hand, law enforcement has at times argued that the differences between StingRays and traditional pen registers meant that they were subject to no statutory restrictions at all; on the other, it has argued that they are sufficiently similar to pen registers that no special disclosure of the fact that a StingRay is to be used is necessary when a boilerplate pen register order is presented to a magistrate. Pell and Soghoian’s argument is not that StingRays are good or bad, but rather that an oversight regime regulating and legitimizing police use of dangerous technologies breaks down if the judges who oversee it cannot count on police candor.

In a broader sense, Joh and Joo and Pell and Soghoian are all concerned about police abuse of trust. Trust is tricky to establish online, but it is also essential to many technologies. This is one reason why so many security experts objected to the FBI’s now-withdrawn request for Apple to use its code signing keys to vouch for a modified and security-weakened custom version of iOS. Compelling the use of private keys in this way makes it harder to rely on digital signatures as a security measure.

The FBI’s drive-by spyware downloads are troubling in yet another way. A coding mistake can easily destroy data rather than merely observing it, and installing one piece of unauthorized software on a computer makes it easier for others to install more. Lawful Hacking, by Steven M. Bellovin, Matt Blaze, Sandy Clark, and Susan Landau, thinks through some of these risks, along with more systemic ones. In order to get spyware on a computer, law enforcement frequently needs to take advantage of an existing unpatched vulnerability in the software on that computer. But when law enforcement pays third parties for information about those vulnerabilities, it helps incentivize the creation of more such information, and the next sale might not be to the FBI. Even if the government finds a vulnerability itself, keeping that vulnerability secret undercuts security for Internet users, because someone else might find and exploit that same vulnerability independently. The estimated $1.3 million that the FBI paid for the exploit it employed in the San Bernardino case — along with the FBI’s insistence on keeping the details secret — sends a powerful signal that the FBI is more interested in breaking into computers than in securing them, and that that is where the money is.

The authors of Lawful Hacking are technologists, and their article is a good illustration of why lawyers need to listen to technologists more. The technical issues — including not just how software works but how the security ecosystem works — are the foundation for the legal and policy issues. Legislating security without understanding the technology is like building a castle on a swamp.

Fortunately, legal scholars who do understand the technical issues — because they are techies themselves or know how to listen to them — are also starting to think through the policy issues. Jonathan Mayer’s Constitutional Malware is a cogent analysis of the Fourth Amendment implications of putting software on people’s computers without their knowledge, let alone their consent. Mayer’s first goal is to refute what he calls the “data-centric” theory of Fourth Amendment searches, under which, so long as government spyware is configured to disclose only unprotected information, it is irrelevant how the software was installed or used. The article then thinks through many of the practicalities involved with using search warrants to regulate spyware, such as anticipatory warrants, particularity, and notice. It concludes with an argument that spyware is sufficiently dangerous that it should be subject to the same kind of “super-warrant” procedural protections as wiretaps. Given that spyware can easily extract the contents of a person’s communications from their devices at any time, the parallel with wiretaps is nearly perfect. Indeed, on any reasonable measure, spyware is worse, and police and courts ought to give it closer oversight. To similar effect is former federal magistrate judge Brian Owsley’s Beware of Government Agents Bearing Trojan Horses, which includes a useful discursive survey of cases in which law enforcement has sought judicial approval of spyware.

Unfortunately, oversight by and over online law enforcement is complicated by the fact that a suspect’s device could often be anywhere in the world. This reality of life online raises problems of jurisdiction: jurisdiction for police to act and jurisdiction for courts to hold them accountable. Ahmed Ghappour’s Searching Places Unknown: Law Enforcement Jurisdiction on the Dark Web points out that when a suspect connects through a proxy-based routing service such as Tor, mapping a device’s location may be nearly impossible. Observing foreigners abroad is one thing; hacking their computers is quite another. Other countries can and do regard such investigations as violations of their sovereignty. Searching Places Unknown offers a best-practices guide for avoiding diplomatic blowback and the risk that police will open themselves up to foreign prosecution. One of the most important suggestions is minimization: Ghappour recommends that investigators proceed in two stages. First, they should attempt to determine the device’s actual IP address and no more; with that information, they can make a better guess at where the device is and a better-informed decision about whether and how to proceed.

This, in the end, is what tainted the evidence in the Tor child pornography investigation. Federal Rule of Criminal Procedure 41 does not give a magistrate judge in Alexandria, Virginia, the authority to authorize the search of a computer in Norwood, Massachusetts. This NIT-picky detail in the Federal Rules may not be an issue much longer. The Supreme Court has voted — in the face of substantial objection from tech companies and privacy activists — to approve a revision to Rule 41 giving greater authority to magistrates to issue warrants for “remote access” searches. But since many of these unknown computers will be not just in another district but abroad, the diplomatic issues Ghappour flags would remain relevant even under a revised Rule 41. So would Owsley’s and Mayer’s recommendations for careful oversight.

Reading these articles together highlights the ways in which the problems of online investigations are both very new and very old. The technologies at issue — spyware, cryptographic authentication, onion routing, cellular networks, and encryption — were not designed with much concern for the Federal Rules or law enforcement budgeting processes. Sometimes they bedevil police; sometimes they hand over private data on a silver platter. But the themes are familiar: abuse of trust and positions of authority, the exploitation of existing vulnerabilities and the creation of new ones. Oversight is a crucial part of the solution, but at the moment it is piecemeal and inconsistently applied. The future of policing has already happened. It’s just not evenly distributed.

Cite as: James Grimmelmann, Police Force, JOTWELL (July 4, 2016) (reviewing seven works), http://cyber.jotwell.com/police-force/.

Testing Passing 4 Paragraphs

Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN.

The concept of “cyberspace” has fascinated legal scholars for roughly 20 years, beginning with Usenet, Bulletin Board Systems, the World Wide Web and other public aspects of the Internet. Cyberspace may be defined as the semantic embodiment of the Internet, but to legal scholars the word “cyberspace” itself initially reified the paradox that the Internet both seemed to be free of law and constituted law, simultaneously. The explorers of cyberspace were like the advance guard of the United Federation of Planets, boldly exploring open, uncharted territory and domesticating it in the interest of the public good. The result was to be both order (of a sort) without law, to paraphrase and re-purpose Robert Ellickson’s work, and law (of a different sort), to distill Lawrence Lessig’s famous exchange with Judge Frank Easterbrook.1 For the last 20 years, more or less, legal scholars have intermittently pursued the resulting project of defining, exploring, and analyzing cyberlaw, but without really resolving this tension, that is, without really identifying the “there” there. Perhaps the best, most engaged, and certainly most optimistic embrace of that point of view is David Post’s In Search of Jefferson’s Moose.

Less speculative and less adventurous cyberlaw scholars, which is to say, most of them, quickly adapted to the seeming hollowness of their project by aligning themselves with existing literatures on governance, a rich and potentially fruitful field of inquiry derived largely from research and policymaking in the modern regulatory state. That material was made both relevant and useful in the Internet context via the emergence of global regulatory systems that speak to the administration of networks, particularly the Domain Name System and ICANN, the institution that was invented to govern it. The essential question of cyberlaw became, and remains: What is Internet governance, and what do we learn about governance in general from our observations and experiences with Internet governance? As an intervention in that ongoing discussion, Between Coordination and Regulation: Finding the Governance in Internet Governance is an especially welcome and clarifying contribution, all the more so because of its relative brevity.

The lead author is the head of the Humboldt Institute for Internet and Society and a veteran observer of and participant in Internet governance dialogues at ICANN and the World Summit on the Information Society (WSIS). She and two colleagues at the Humboldt Institute have produced a useful review of relevant Internet governance literature and a new framework for further research and analysis that is eclectic in its reference to and reliance on existing material and therefore independent of the influence of any single prior theorist or thinker. The resulting framework is novel yet recognizably derivative of, and continuous with, earlier work in the field. This is not a work primarily of legal scholarship by legal scholars, but properly understood, it should contribute in important ways to sustaining the ongoing project of cyberlaw. Internet governance is conceptualized here in ways that make clear its relevance and utility to questions of governance generally.

The paper introduces its subject with an overview of the definitional problems associated with the term “governance” and especially the phrase “Internet governance.” In phenomenal terms, the concept often refers to combinations of three things: one, rulemaking and enforcement and associated coordinating behaviors that implicate state actors acting in accordance with established political hierarchies; two, formal and informal non-state actors acting in less coordinated or un-coordinated “bottom up” ways, including through the formation and evolution of social norms; and three, technical protocols and interventions that have human actors as their designers but that exercise a sort of independent technical agency in enabling and constraining behaviors.

The authors note that many researchers seeking to define and understand relevant combinations equate “governance” with “regulation,” which leads to the implication that governance, like regulation, should be purposive with respect to its domain and that its goals should be evaluated accordingly. They reject that equation, observing that the experience of Internet institutions and other actors, of both legal and socio-technical character, suggests that such a purposive framing of the phenomenon of governance is unhelpfully underinclusive. A large amount of relevant behavior and consequences cannot be traced in purposive terms or in functional terms to planned interventions.

Also rejected, this time on overinclusiveness grounds, is the idea that governance can and should be equated with coordination among actors in a social space, as such. The authors correctly note that if governance is coordination of actors in social life, then virtually any and every social phenomenon is governance, and the concept loses any distinct analytic potential.

  1. See Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 201; Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999). []
Cite as: Michael Madison, Testing Passing 4 Paragraphs, JOTWELL (January 1, 2017) (reviewing Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN), https://zetasec4.jotwell.com/testing-passing-4-paragraphs/.

Testing Paras Passed to Zeta With Blockquotes

Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN.

The concept of “cyberspace” has fascinated legal scholars for roughly 20 years, beginning with Usenet, Bulletin Board Systems, the World Wide Web and other public aspects of the Internet. Cyberspace may be defined as the semantic embodiment of the Internet, but to legal scholars the word “cyberspace” itself initially reified the paradox that the Internet both seemed to be free of law and constituted law, simultaneously. The explorers of cyberspace were like the advance guard of the United Federation of Planets, boldly exploring open, uncharted territory and domesticating it in the interest of the public good. The result was to be both order (of a sort) without law, to paraphrase and re-purpose Robert Ellickson’s work, and law (of a different sort), to distill Lawrence Lessig’s famous exchange with Judge Frank Easterbrook.1 For the last 20 years, more or less, legal scholars have intermittently pursued the resulting project of defining, exploring, and analyzing cyberlaw, but without really resolving this tension, that is, without really identifying the “there” there. Perhaps the best, most engaged, and certainly most optimistic embrace of that point of view is David Post’s In Search of Jefferson’s Moose.

Less speculative and less adventurous cyberlaw scholars, which is to say, most of them, quickly adapted to the seeming hollowness of their project by aligning themselves with existing literatures on governance, a rich and potentially fruitful field of inquiry derived largely from research and policymaking in the modern regulatory state. That material was made both relevant and useful in the Internet context via the emergence of global regulatory systems that speak to the administration of networks, particularly the Domain Name System and ICANN, the institution that was invented to govern it. The essential question of cyberlaw became, and remains: What is Internet governance, and what do we learn about governance in general from our observations and experiences with Internet governance? As an intervention in that ongoing discussion, Between Coordination and Regulation: Finding the Governance in Internet Governance is an especially welcome and clarifying contribution, all the more so because of its relative brevity.

The lead author is the head of the Humboldt Institute for Internet and Society and a veteran observer of and participant in Internet governance dialogues at ICANN and the World Summit on the Information Society (WSIS). She and two colleagues at the Humboldt Institute have produced a useful review of relevant Internet governance literature and a new framework for further research and analysis that is eclectic in its reference to and reliance on existing material and therefore independent of the influence of any single prior theorist or thinker. The resulting framework is novel yet recognizably derivative of, and continuous with, earlier work in the field. This is not a work primarily of legal scholarship by legal scholars, but properly understood, it should contribute in important ways to sustaining the ongoing project of cyberlaw. Internet governance is conceptualized here in ways that make clear its relevance and utility to questions of governance generally.

The paper introduces its subject with an overview of the definitional problems associated with the term “governance” and especially the phrase “Internet governance.” In phenomenal terms, the concept often refers to combinations of three things: one, rulemaking and enforcement and associated coordinating behaviors that implicate state actors acting in accordance with established political hierarchies; two, formal and informal non-state actors acting in less coordinated or un-coordinated “bottom up” ways, including through the formation and evolution of social norms; and three, technical protocols and interventions that have human actors as their designers but that exercise a sort of independent technical agency in enabling and constraining behaviors.

The authors note that many researchers seeking to define and understand relevant combinations equate “governance” with “regulation,” which leads to the implication that governance, like regulation, should be purposive with respect to its domain and that its goals should be evaluated accordingly. They reject that equation, observing that the experience of Internet institutions and other actors, of both legal and socio-technical character, suggests that such a purposive framing of the phenomenon of governance is unhelpfully underinclusive. A large amount of relevant behavior and consequences cannot be traced in purposive terms or in functional terms to planned interventions.

Also rejected, this time on overinclusiveness grounds, is the idea that governance can and should be equated with coordination among actors in a social space, as such. The authors correctly note that if governance is coordination of actors in social life, then virtually any and every social phenomenon is governance, and the concept loses any distinct analytic potential.

In between these two poles of the spectrum—that governance is regulation, or that governance is coordination—the authors settle on the argument that governance is and should be characterized as “reflexive coordination.” They define this concept as follows:

Critical situations occur when different criteria of evaluation and performance come together and actors start redefining the situation in question. Routines are contested, adapted or displaced through practices of articulation and justification. Understanding governance as reflexive coordination elucidates the heterogeneity of sources and means that drive the emergence of ordering structures. (P. 20.)

This approach preserves the role of heterogeneous assemblages of actors, conventions, technologies, purposes, and accidents, while calling additional attention to moments and instances of conflict and dispute, where “routine coordination fails, when the (implicit) expectations of the actors involved collide and contradictory interests or evaluations become visible.” The authors’ point is that this concept, which they refer to as reflexive coordination, or more clearly stated, these processes of reflexive coordination, are specifically aligned with the concept of Internet governance in particular and with governance in general. The reflexivity in question consists of practices and processes of contestation, conflict, reflection, and resolution that sometimes accompany more ordinary or typical practices and processes of institutional and technical design and activity. Those ordinary or typical practices and processes constitute questions of coordination and/or regulation, broadly conceived. Those are appropriately directed to the Internet, but not under the governance rubric.

The authors acknowledge their debt to a variety of social science research approaches, including those of Bruno Latour, John Law, Elinor Ostrom, Douglas North, and Oliver Williamson, and to American scholars of law and public policy, notably Michael Froomkin, Milton Mueller, Joel Reidenberg, and Lawrence Lessig, but without resting their case specifically on any one of them or on any particular work. As a student of the subject, I was struck not by the identities of the researchers whose work is cited, but rather by the conceptual affinity between the authors’ concept of “reflexive coordination” and an uncited concept. Recently, in a parallel literature on the anthropology (and dare I say, governance) of open source computer software, Christopher Kelty, now a researcher at UCLA, coined the phrase “recursive public” to describe the attributes of an open source software development collective.2 Kelty writes:

A recursive public is a public that is vitally concerned with the material and practical maintenance and modification of the technical, legal, practical, and conceptual means of its own existence as a public; it is a collective independent of other forms of constituted power and is capable of speaking to existing forms of power through the production of actually existing alternatives. Free Software is one instance of this concept, both as it has emerged in the recent past and as it undergoes transformation and differentiation in the near future.…In any public there inevitably arises a moment when the question of how things are said, who controls the means of communication, or whether each and everyone is being properly heard becomes an issue.… Such publics are not inherently modifiable, but are made so—and maintained—through the practices of participants.3

The extended quotation is offered to suggest that processes of reflexive coordination already resonate in governance domains beyond those associated with the Internet itself. To the extent that reflexive coordination needs affirmation as a generalized model of governance, Kelty’s research on recursive publics offers some useful evidence that the model is useful. Open source software development collectives seem to fit the model of governance quite readily, despite the fact that the concepts of “reflexive coordination” and the “recursive public” arise in different intellectual traditions and for different purposes. The challenges of understanding and practicing Internet governance speak to the challenges of understanding and practicing governance generally. “Between coordination and regulation: Finding the governance in Internet governance” offers a helpful and important step forward in that broader project.

  1. See Frank H. Easterbrook, Cyberspace and the Law of the Horse, 1996 U. Chi. Legal F. 201; Lawrence Lessig, The Law of the Horse: What Cyberlaw Might Teach, 113 Harv. L. Rev. 501 (1999). []
  2. Christopher M. Kelty, Two Bits: The Cultural Significance of Free Software (2008). []
  3. Id. at 3. []
Cite as: Michael Madison, Testing Paras Passed to Zeta With Blockquotes, JOTWELL (January 1, 2017) (reviewing Jeanette Hofmann, Christian Katzenbach & Kirsten Gollatz, Between Coordination and Regulation: Finding the Governance in Internet Governance, New Media & Society (2016), available at SSRN), https://zetasec4.jotwell.com/testing-paras-passed-zeta-blockquotes/.

Testing Number of Paragraphs Passed to Zeta

Kent Barnett & Christopher J. Walker, Chevron in the Circuit Courts, 115 Mich. L. Rev. (forthcoming 2017), available at SSRN.

This test post has several paragraphs, but NONE of them should be passed to zeta because the introparagraph limit is zero. Kent Barnett and Chris Walker begin this fascinating article by describing the Chevron doctrine and its history. In its landmark 1984 opinion in Chevron v. NRDC, the Supreme Court announced a new, seemingly more deferential doctrine that it instructed lower courts to apply when they review agency interpretations of the statutes they administer. The Chevron opinion is one of the most cited opinions in history. It has been cited in “nearly 15,000 judicial decisions and in over 17,000 law review articles and other secondary sources.” (P. 2.)

Barnett and Walker agree with most scholars that the Supreme Court’s “choice to apply Chevron deference, as opposed to a less-deferential doctrine or no deference at all, does not seem to affect the outcome of the case.” (P. 4.) They note that the Supreme Court did not even mention Chevron in three-quarters of the cases in which it reviewed agency statutory interpretations during the twenty-two-year period immediately after it issued its opinion in Chevron. They then report the findings of their study—the largest empirical study of circuit court applications of Chevron ever undertaken. As they characterize the results of their study, what they call Chevron Regular seems quite different from Chevron Supreme.

Barnett and Walker read, analyzed, and coded 1330 opinions issued by circuit courts between 2003 and 2013. Their dozens of findings are surprising in many ways. I will discuss just the five that I found most surprising. First, “agency statutory interpretations were significantly more likely to prevail under Chevron deference (77.3%) than Skidmore deference (56.0%) or, especially, de novo review (38.5%).” (P. 5) (footnote omitted). Second, circuit courts upheld agency interpretations more frequently when they applied Chevron to interpretations adopted through informal means (78.4%) than to interpretations adopted in notice and comment rulemakings (74.2%). Third, when circuit courts applied Chevron, they upheld longstanding interpretations far more often (87.6%) than recent interpretations (74.5%) or changed interpretations (65.6%). Fourth, circuit courts varied greatly with respect to the proportion of cases in which they applied Chevron to agency statutory interpretations—from a high of 88.9% for the D.C. Circuit to a low of 60.7% for the Sixth Circuit. Fifth, circuit courts also varied greatly with respect to the proportion of cases in which they upheld agency statutory interpretations, albeit not with a high correlation between their rates of outcomes and their rates of invocation of Chevron. The First Circuit upheld interpretations most frequently (83.1%); the Ninth Circuit upheld interpretations least frequently (65.5%), while the D.C. Circuit was around the middle (72.6%).

Barnett and Walker are appropriately cautious in drawing inferences from their findings. Their findings raise far more questions than they answer. Here are just a few.

First, the findings are a major disappointment to those of us who initially saw in Chevron the potential for greater consistency and predictability in the process of judicial review of agency statutory interpretations. We have long been disappointed with the massive inconsistencies in the Supreme Court’s approach to Chevron, but many of us believed (or at least hoped) that circuit courts were applying Chevron in a relatively consistent and predictable way. We were wrong. Circuit court applications of Chevron are at least as inconsistent, unpredictable, and incoherent as Supreme Court applications of Chevron. Those findings raise the question of whether Chevron can, or should, continue to exist as a review doctrine.

Second, whatever Chevron means in circuit courts, the circuit court version differs from the Supreme Court version in many ways. The glaring inconsistencies between the Supreme Court’s approach to Chevron and the approach (more accurately the approaches) of the circuit courts raise the question whether a doctrine can, or should, survive in circuit courts when it bears no relation to the version of the doctrine that exists in the Supreme Court.

Cite as: Christopher Walker, Testing Number of Paragraphs Passed to Zeta, JOTWELL (January 1, 2017) (reviewing Kent Barnett & Christopher J. Walker, Chevron in the Circuit Courts, 115 Mich. L. Rev. (forthcoming 2017), available at SSRN), https://zetasec4.jotwell.com/testing-number-paragraphs-passed-zeta/.

Recognizing Disgust, Repudiating Exile

Sara K. Rankin, The Influence of Exile, 76 Md. L. Rev. (forthcoming 2016), available at SSRN.

The discourse of poverty law in the United States is on the rise. Following the Great Recession of December 2007 to June 2009, the odd yet telling disparagement of “law and poverty” by the late Antonin Scalia in September 2008, and the Occupy Wall Street protests that erupted into public consciousness in September 2011, poverty law scholars have published three new casebooks, organized a new series of conferences hosted by law schools in California, Washington, and Washington, D.C., contributed to the theme for other ongoing conferences such as ClassCrits (Toward A Critical Legal Analysis of Economic Inequality), and assembled in well-attended panels at the annual meeting of the Association of American Law Schools.

In The Influence of Exile, Sara K. Rankin, associate professor of law and director of the Homeless Rights Advocacy Project of the Fred T. Korematsu Center for Law and Equality at the Seattle University School of Law, contributes to that discourse by theorizing “the influence of exile”—the well-documented drive to exclude disfavored groups of people by restricting their rights to access and occupy public space. (Pp. 1-2.) The influence of exile has taken myriad forms throughout United States history (e.g., Slave Codes, Black Codes, anti-miscegenation laws, and Jim Crow regimes; Asian exclusion laws, Mexican “repatriation” campaigns, and Anti-Okie laws; redlining regulations, policies, and practices; and “Sundown Town” policies and practices), but Rankin argues persuasively that the influence of exile perseverates today in a distinctive “social-spatial segregation [that] further entrenches stereotyping, misunderstanding, and the stigmatization of marginalized groups.” (P. 11.) Her article abounds with insights into these matters. Here I discuss three of them—the visible poor; sociolegal control of public space; and disgust, affect, and ideology.

Rankin critiques the official definition of homelessness and urges a reconceptualization by way of Joel Blau’s notion of the “visible poor.” (Pp. 2 n.8, 3 n.11.) The visible poor includes not only people whom the United States Department of Housing and Urban Development officially counts as homeless but also a substantially larger part of the forty-three-plus million people whose poverty combines “with housing instability, mental illness, or other psychological or socio-economic challenges that deprive them of reasonable alternatives to spending all or the majority of their time in public.” (P. 2.) Like the move urged in 2014 by the ClassCrits group, to contextualize poverty and inequality in relation to precarity and work, Rankin’s rhetorical shift from the homeless to the visible poor promises a better approach to analyzing and intervening against the contemporary “criminalization” laws and policies that target such people. (Pp. 43-44, 48-49, 52.) For example, implicating a larger part of the forty-three-plus million poor people in the United States—over thirteen percent of the populace—helps to move the proliferation of laws that criminalize the visible poor from the margins and may help to organize more effective counters to the influence of exile.

Rankin characterizes the past twenty years as a period in which “the combination of economic conditions, broken window ideologies, and the human drive to exile created a perfect storm for the increasing enactment of laws that purge signs of visible poverty from public space.” (P. 42.) Drawing on interdisciplinary urban studies, she argues that the privatization, commercialization, festivalization, and sanitization of public space all contribute to the problem. (Pp. 39-41.) In particular, business improvement districts, which cities have increasingly imposed on their traditional downtown areas, exemplify these sociolegal processes and political projects. (Pp. 41-42.) For Rankin this situation amounts to one that sociologist Talmadge Wright has conceptualized in terms of “battles for ‘tactical control’ of public space.” (P. 3.) Rankin argues persuasively that, “in this context, the mere existence of homeless [and visibly poor] people in public space is an act of resistance.” (P. 56.) In her view, sociolegal controversies over the visibly poor express the ideological and material class relations that construct, naturalize, and ultimately control “public space.” (Pp. 9, 57-58.) In particular, “visible poverty as a form of protest challenges the American conscience to grapple with its own complicity in creating the circumstances within which homelessness and poverty can thrive.” (P. 57.)

In the longest part of her article, Rankin synthesizes studies from psychology, social psychology, social neuroscience, and sociology, which explain the group and individual dynamics that animate people to differentially identify with and include others, or instead to misrecognize, exclude, marginalize, and ultimately exile strangers. (Pp. 5-24.) In particular, social neuroscience findings confirm “that today, society tends to regard homeless and visibly poor people with disgust and rejection at higher rates than most any other perceived status.” (P. 12 n.46.) Though some people may find the claim controversial, Rankin explains that “Studies show visible poverty elicits higher rates of disgust than nearly any other commonly marginalized trait, including racial or ethnic indicia.” (P. 15 n.60.) She acknowledges that people whose ethnicities are racialized into a minority group status tend to be poorer in income, own less wealth, and be otherwise worse off than people whose ethnicities are racialized into the majority white group status. (Pp. 5-7, 12-15.) However, she hypothesizes that the stigmatization of poverty may have become a sanitized way to express otherwise disfavored forms of prejudice. (P. 16 n.63.) Instead of “overt expressions of bias . . . with respect to race and gender, and perhaps increasingly, with respect to sexual orientation and identity . . . the American conscience may be sanitizing many forms of discrimination to appear as something less objectionable or actionable: judgments about social worthiness.” (Pp. 18-20.)

Thus, the influence of exile troubles Rankin in at least two ways: first, it feeds on the disgust that individuals, who are ostensibly not poor (or at least perceive themselves not to be poor), feel when confronted with visibly poor people: they perceive these “strangers” not only as unsightly and dirty blemishes in public space but also as dangers who symbolize human “broken windows.” (Pp. 22-23, 25-26, 36-38, 59.) Second, United States society and culture have evolved an ideology to legitimate and reinforce the disgust that (some, many, most?) individuals feel when confronted with visibly poor people. Instead of allowing this feeling to be identified as invidious discrimination, this ideology cloaks individuals’ feelings of disgust beneath the mantle of a sober judgment about blameworthiness and just deserts. (Pp. 20-21.)

The influence of exile, Rankin argues, thus degrades not only the visibly poor themselves, but also the legislators, judges, and citizens who accede to popular animus against them. Indeed, the influence of exile degrades all of us who allow ourselves to become complicit in the sanitization (privatization, commercialization, festivalization) of public space, and the criminalization of the visibly poor—in a word, exile.

Shakespeare’s Prince Escalus, the ruler of fair Verona, declared at the end of Romeo and Juliet:

See what a scourge is laid upon your hate,
That heaven finds means to kill your joys with love.
And I for winking at your discords too
Have lost a brace of kinsmen: all are punish’d.

According to Rankin, under the influence of exile, here too “all are punish’d.” Thus, she argues for the law—legislators, judges, and the polis—to recognize its invidious influence, to confront its pernicious effects, and ultimately to protect “the rights of all people to exist in public space or, more fundamentally, to exist at all.” (P. 59.)

Cite as: Marc-Tizoc González, Recognizing Disgust, Repudiating Exile, JOTWELL (December 31, 2016) (reviewing Sara K. Rankin, The Influence of Exile, 76 Md. L. Rev. (forthcoming 2016), available at SSRN), https://zetasec4.jotwell.com/recognizing-disgust-repudiating-exile/.

Chevron’s Origin Story

Aditya Bamzai, The Origins of Judicial Deference to Executive Interpretation, 126 Yale L.J. (forthcoming 2017), available at SSRN.

In his concurrence in Perez v. Mortgage Bankers, Justice Scalia reiterated his historical justification for Chevron deference (first articulated in his Mead dissent): “the rule of Chevron, if it did not comport with the [Administrative Procedure Act], at least was in conformity with the long history of judicial review of executive action, where ‘[s]tatutory ambiguities . . . were left to reasonable resolution by the Executive.’” In a must-read article forthcoming in the Yale Law Journal, Aditya Bamzai casts serious doubt on Justice Scalia’s (and many others’) understanding of Chevron’s origin story.1

There is so much to like about this article, and one should really read it in full. But I’ll highlight four main takeaways.

First and foremost, Bamzai exhaustively rebuts the historical argument that the case law and doctrine prior to the Twentieth Century support the type of deference now being applied to agency statutory interpretations under Chevron. Instead, as documented in Part II of the article, the interpretive approach was traditionally to defer to executive interpretations of law that are longstanding and contemporaneous. Such “respect” or deference had nothing to do with agency expertise, congressional delegation, national uniformity in the law, or political accountability—the primary rationales invoked today to support Chevron deference. Instead, courts respected longstanding and contemporaneous executive interpretations because, under the traditional canons of statutory interpretation, courts respected longstanding and contemporaneous interpretations in general.

Second, Bamzai rejects the Chevron origin story based on Nineteenth Century mandamus doctrine and practice. Indeed, his review of the cases suggests the opposite: “Those cases distinguished between, on the one hand, the standard for obtaining the writ and, on the other, the appropriate interpretive methodology that would be applied in cases not brought using the writ.” (P. 31.) This finding has particular significance, as it suggests that Justice Scalia may well have been mistaken in relying on the mandamus cases as historical justification for Chevron deference in his Mead dissent and Mortgage Bankers concurrence.

Third, as detailed in Part III of the article, Chevron’s origin story doesn’t even really begin until the 1940s. To be sure, James Landis and others advocated for judicial deference to administrative interpretations of law before the 1940s. The Supreme Court, however, did not embrace such deference until the 1940s, in cases with which administrative law professors are quite familiar (Gray v. Powell, NLRB v. Hearst, Skidmore v. Swift & Co.). (The one wrinkle to Bamzai’s Chevron origin story may be Bates & Guild Co. v. Payne, 194 U.S. 106, 110 (1904), in which the Supreme Court concluded that “even upon mixed questions of law and fact, or of law alone, [an agency’s] action will carry with it a strong presumption of correctness.” Bamzai explains why the opinion had limited immediate impact and did not upset the contemporary and customary canons that had predominated in statutory interpretation generally during that era.)

Finally, Bamzai adds his take on what Section 706 of the Administrative Procedure Act (APA) intended to accomplish. Perhaps not surprisingly, Bamzai concludes that in passing the APA Congress sought to remove the deference the Supreme Court had just given to federal agency statutory interpretations earlier in the 1940s. Although many scholars have weighed in on this debate, Bamzai’s novel contribution is to read the APA against the historical development of judicial deference to agency statutory interpretations. As Bamzai explains:

The most natural reading of section 706—one that has, to my knowledge, heretofore escaped scholarly or judicial attention—is that the APA’s judicial-review provision adopted the traditional interpretive methodology that had prevailed from the beginning of the Republic until the 1940s and, thereby, incorporated the customary-and-contemporary canons of constructions. In other words, when Congress enacted the APA, it did in fact incorporate traditional background rules of statutory interpretation. It did not, however, incorporate the rule that came to be known as Chevron deference, because that was not (at the time) the traditional background rule of statutory construction. Under the incorporated approach, a court would “respect”—or, to use the modern parlance, “defer to”—an agency’s interpretation of a statute if and only if that interpretation reflected a customary or contemporaneous practice under the statute. (P. 61.)

I could spend much more time discussing in greater detail Bamzai’s rigorous examination of Chevron’s origin story and underscoring how his account should make many of us reconsider the historical foundation for Chevron deference. But I hope this brief summary encourages you to download and digest the full paper from SSRN.

In concluding, however, I cannot resist speculating a bit about the article’s origin story. After all, Bamzai clerked for Justice Scalia before joining the Justice Department and now (as of this Fall) the University of Virginia School of Law. In recent years Justice Scalia had begun to doubt the constitutionality of Auer deference—the deference owed to agency interpretations of their own regulations—but he had not (at least publicly) shared similar concerns about Chevron deference. As he noted in his Mortgage Bankers concurrence (and Mead dissent), Chevron deference “at least was in conformity with the long history of judicial review of executive action.”

I wonder if we can trace Bamzai’s interest in exploring the historical foundations of Chevron deference to his clerkship experience with a justice whose comfort with the doctrine may have been based on a historical misunderstanding. I further wonder whether this article would have changed Justice Scalia’s mind. That we will never know. What I do know, though, is that I very much look forward to reading more of Bamzai’s administrative law scholarship. This is just Bamzai’s first article in what I expect to be a series of articles that make us rethink the foundations of the modern administrative state.

  1. This paper was one of a half dozen presented at the Rethinking Judicial Deference Policy Conference, which was sponsored by George Mason University’s new Center for the Study of the Administrative State under the direction of Neomi Rao.
Cite as: Christopher Walker, Chevron’s Origin Story, JOTWELL (December 31, 2016) (reviewing Aditya Bamzai, The Origins of Judicial Deference to Executive Interpretation, 126 Yale L.J. (forthcoming 2017), available at SSRN), https://zetasec4.jotwell.com/chevrons-origin-story/.