Theorists usually agree that the rule of law requires that the law be public, that is, available so that those whose conduct it governs may discover what it requires of them. In the regulatory state, however, every regulation is likely to be lost in a mountain of other regulations. This would not violate the publicity condition if regulated conduct invariably carried telltale signs that it is likely to be subject to legal requirements; but in a technological society even this weaker condition may not be met, because a great deal of seemingly innocuous conduct may be regulated. The present paper examines the publicity problem raised by regulation in a technological society. It does so by examining the U.S. Supreme Court's mens rea jurisprudence for regulatory crimes, on the question of whether knowledge that one is violating a regulation is an element of the offense. The chief cases are Balint, Freed, International Minerals, Liparota, Staples, Ratzlaf, and Cheek, along with the Ninth Circuit's Weitzenhoff decision. As I analyze them, the first four of these cases set out a "signaling theory" of publicity, according to which knowledge that one is violating a regulation need not be proven if the conduct "signals" its own regulability. Selling narcotics, possessing hand grenades, and shipping sulfuric acid are all the sort of thing one should anticipate is regulable (thus Balint, Freed, and International Minerals); trafficking in food stamps is not (thus Liparota). However, in Staples the Supreme Court held that possessing a machine gun is not signaling behavior, because so many Americans regard gun ownership as innocent activity. Similarly, Ratzlaf remarks that structuring financial transactions to evade federal reporting requirements can be innocent behavior, apparently because so many Americans regard regulatory dodging as a legitimate pastime. These decisions pose a complication for the signaling theory: our intuitions about what sort of conduct is intrinsically regulable are affected by the expectations of those around us. Thus, the phenomenon of social cognition creates the possibility of a cultural excuse for regulatory crime. Furthermore, it seems clear that many defendants do not recognize the non-innocence of their behavior because, like the tax resister Cheek, they do not want to recognize it, in which case the cultural excuse becomes more like a cultural veto, by which those who dislike a regulation may nullify it (at least once). The paper canvasses several possible solutions to this difficulty, concluding that the needs of legitimate regulation outweigh the requirement of publicity.
Philosophy and Public Policy Quarterly, Dec 1, 1995
Begin with a fundamental clash between theory and political reality. Suppose that there are compelling reasons, prudential, aesthetic, and moral, for rich countries to limit their consumption, and for developing countries not to emulate current consumption patterns in the developed world. Suppose, second, that reducing rich-society consumption implies slowing economic growth within these societies, and that stable levels of consumption imply a steady-state economy. Place these suppositions next to the view of every democratic politician in the world: that economies must grow robustly, that an economy growing at "merely" 2 or 3 percent annually is underperforming, that correcting its underperformance counts as the chief problem facing government, and that failure to solve this problem invariably turns a political leader into an ex-leader. The contrast could hardly be more stark. It raises the first question of this essay: do existing political systems, and in particular liberal-democratic political systems associated with largely capitalist (free enterprise) economies, have the capacity to reduce existing levels of consumption to realize the no-growth/slow-growth model developed two decades ago by writers such as Kenneth Boulding and Paul Ehrlich? These writers assumed that global stewardship requires more-or-less static levels of consumption. But what are the prospects for creating a steady-state economy? The second question of this essay concerns the prospects for redirecting consumption, thereby achieving what I shall call a safe-growth model of economic life. This model implies rapid technological advances in environmentally safe directions, coupled with the cultural changes implicit in transformed consumption habits. Would such an approach be politically viable in a society such as ours?
This paper was written as a keynote address for a conference on Michael Walzer’s Just and Unjust Wars on the 40th anniversary of its publication. It discusses the significance of the book, and examines the updating prefaces Walzer wrote to the five editions of the book and his methodological postscript to the fifth edition. The paper contrasts Walzer’s philosophical method with that of analytic just war theory, arguing that Walzer’s use of historical cases and the analytic use of imaginary “toy” cases serve different philosophical ends. Noting that Just and Unjust Wars appeared the same year as the Additional Protocols to the Geneva Conventions, I examine the parallels between Walzer’s views and those in AP I, especially between Walzer’s reformulation of the doctrine of double effect and AP I’s requirement that militaries take all feasible steps to ensure that attacks do not inflict excessive unintended harm on civilians. Next I examine the role that human rights plays in the course ...
Theorists skeptical of rational choice theories often base their doubts on the claim that values are plural and incommensurable. According to the value-pluralists, including Finnis, Pildes and Anderson, and Kronman, incommensurability implies that no common measure exists that would permit direct comparison of different value-packages, and hence rational choice among them. I call this the "Incommensurability-Undecidability Thesis" (IUT): Because values are incommensurable, rational choice among packages of values is often impossible. Value pluralists also maintain the "Incommensurability-Intransitivity Thesis" (IIT), which states that if values are incommensurable, transitive decision among packages of values is often impossible, because along one dimension of value package 1 may be preferable to package 2, and package 2 preferable to package 3, without package 1 being preferable to package 3, because 3 is preferable to 1 along an orthogonal value-dimension. I deny both the IUT and the IIT, and argue that because these theses fail, the issue of value incommensurability is largely beside the point in debates over the usefulness of rational choice techniques. Provided that each of the incommensurable values may be quantitatively measured (although of course the measures cannot be compared between values), a small set of plausible assumptions permits rational choice among value packages, and also induces a transitive ordering of value packages. These assumptions are Nash's axioms for bargaining theory. Consider the following thought experiment: Suppose that each of the incommensurable values is represented by a fanatic fiduciary, a rational bargainer whose sole aim is to maximize the value he or she is assigned to represent. Then the choice situation among value packages may be described as an arbitration game among the fanatic fiduciaries, and Nash's bargaining theorem implies that a unique solution exists, namely that point in the choice set where the product of the fiduciaries' marginal gains over the status quo (the "Nash product") attains a maximum. The Nash solution is rationally preferable to all other value packages without being superior to the alternatives in any evaluative sense except the trivial one: it is superior to them in respect of rational preferability. In particular, it is rationally preferable despite the absence of a common measure. The IUT fails. Let the status quo point in the k-dimensional choice space be the origin. I argue that there is a plausible sense in which all points with the same Nash product (these points all lie on a hyperbolic surface) are rationally indifferent to each other, because there is no arbitration game whose Pareto frontier contains one point on the surface such that any other point on the surface is its Nash solution. Thus, hyperbolic surfaces consisting of points with the same Nash product are indifference curves. This argument justifies constructing a "Nash preference relation" in which point x is preferable to point y if the product of x's coordinates is greater than the product of y's. This is a transitive ordering of the positive part of the choice space, and the IIT fails. The failure of the IIT and the IUT does not by itself justify rational choice techniques. The argument against these two theses assumes that all values may be quantitatively measured, and in many contexts that assumption has little to recommend it.
However, the argument shows one important fact: the real limitation on rational choice is not the incommensurability of values, but rather their immensurability.
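To make the ordering concrete, here is a minimal Python sketch of the Nash-product comparison described above; it is an illustration of mine, not the paper's. It assumes each incommensurable value already has a quantitative measure, takes the status quo as the origin of the choice space, and ranks a few hypothetical value packages by the product of their marginal gains. The package names and numbers are invented for the example.

```python
# Minimal sketch (illustrative, not from the paper): ranking value packages
# by their Nash product, i.e. the product of marginal gains over the status quo.
from math import prod  # requires Python 3.8+

# Status quo is taken to be the origin of the k-dimensional choice space.
STATUS_QUO = (0.0, 0.0, 0.0)

# Hypothetical packages, each scored on three incommensurable values.
# Scores on different dimensions are never compared with one another.
packages = {
    "package_1": (4.0, 1.0, 2.0),
    "package_2": (2.0, 3.0, 1.0),
    "package_3": (1.5, 2.0, 3.0),
}

def nash_product(point, origin=STATUS_QUO):
    """Product of each value's gain over the status quo along its own dimension."""
    return prod(x - o for x, o in zip(point, origin))

# The "Nash preference relation": x is preferable to y iff nash_product(x) > nash_product(y).
# Because this is just a comparison of real numbers, the induced ordering of the
# positive part of the choice space is transitive, contrary to the IIT.
for name in sorted(packages, key=lambda n: nash_product(packages[n]), reverse=True):
    print(f"{name}: Nash product = {nash_product(packages[name]):.2f}")
```

Packages whose coordinates yield equal products would sit on the same hyperbolic surface, which is the geometric picture behind the indifference-curve argument sketched in the abstract.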