Friday, December 30, 2016

Grand Hotel Abyss


In 1962 Georg Lukács used the colorful image of a fictional "Grand Hotel Abyss" to express his disappointment in the theorists of the Frankfurt School. Here is the passage in which the idea is described, from the "Preface to the Theory of the Novel" (link):
A considerable part of the leading German intelligentsia, including Adorno, have taken up residence in the ‘Grand Hotel Abyss’ which I described in connection with my critique of Schopenhauer as ‘a beautiful hotel, equipped with every comfort, on the edge of an abyss, of nothingness, of absurdity. And the daily contemplation of the abyss between excellent meals or artistic entertainments, can only heighten the enjoyment of the subtle comforts offered.’ (The fact that Ernst Bloch continued undeterred to cling to his synthesis of ‘left’ ethics and ‘right’ epistemology (e.g. cf. Frankfurt 1961) does honour to his strength of character but cannot modify the outdated nature of his theoretical position. To the extent that an authentic, fruitful and progressive opposition is really stirring in the Western world (including the Federal Republic), this opposition no longer has anything to do with the coupling of ‘left’ ethics with ‘right’ epistemology.)
The thinkers of the Frankfurt School -- Adorno, Horkheimer, Habermas, Benjamin, Wellmer, Marcuse -- were, for Lukács, too much devoted to theorizing capitalism and barbarism and too little to changing them. They were like the imagined world-weary residents of the Grand Hotel Abyss, observing the unfolding catastrophe but doing nothing to intervene to stop it. They were about theory, not praxis.

Stuart Jeffries uses this trope as the organizing theme of his group biography of these figures, Grand Hotel Abyss: The Lives of the Frankfurt School, and, in short, he finds that Lukács's critique is unfounded.

The book is detailed and insightful. Jeffries emphasizes the common social and cultural origins of almost all of these boundary-breaking critics of capitalism -- German, Jewish, bourgeois, affluent -- and the common threads of their criticism of the capitalism and consumerism that surrounded them in the early and middle twentieth century. The central question of how it came to pass that ordinary people in cultured, philosophically rich Germany came to support the Nazi rise to power was of vital concern to all of them. But consumerism, authoritarianism, and the suffering both created by and hidden by capitalism also take center stage.

The book is primarily about ideas and debates, not the particulars of personal biography. Jeffries does an impressive job of walking readers through the debates that swirled within and across the Frankfurt School: Is capitalism doomed? Are workers inherently revolutionary? Is art part of the support system for capitalism? Is Marxism scientific or dialectical? He tells this complex story of intellectual history and social criticism in an exceptional and fascinating way.

A particularly important innovation within the intellectual tradition of critical theory was the pointed critique these theorists offered of mass culture. Unlike orthodox Marxists who gave primary emphasis to the workings of the forces and relations of production -- economics -- the critical theorists took very seriously the powerful role played within advanced capitalism by mass culture, film, media, and television. (The publication in 1932 of Marx's Economic and Philosophic Manuscripts of 1844 appears to have been an important impetus to much of the theorizing of the Frankfurt School.) Here is one example of the social criticism of Hollywood offered by Adorno and Horkheimer in Dialectic of Enlightenment:
Consider, for instance, Donald Duck. Once, such cartoon characters were ‘exponents of fantasy as opposed to rationalism’, wrote Adorno and Horkheimer. Now they had become instruments of social domination. ‘They hammer into every brain the old lesson that continuous friction, the breaking down of all individual resistance, is the condition of life in this society. Donald Duck in the cartoons and the unfortunate in real life get their thrashing so that the audience can learn to take their own punishment.’ (225)
So what is a more progressive role for works of art and culture to play in a society embodying serious social exploitation and domination? One work that was an important point of consideration for several theorists was the Brecht and Weill opera, Rise and Fall of the City of Mahagonny. Adorno and others regarded this work as one that gave appropriate and unblinking attention to the suffering of the modern social order.
Brecht’s libretto, too, sought to make it clear that the bourgeois world was absurd and anarchic. ‘In order to represent this convincingly’, wrote Adorno of the dramatisation of the bourgeois world as absurd and anarchic, ‘it is necessary to transcend the closed world of bourgeois consciousness which considers bourgeois social reality to be immutable. Outside of this framework, however, there is no position to take – at least for the German consciousness, there is no site which is non-capitalist.’ This was to become one great theme of critical theory: there is no outside, not in today’s utterly rationalised, totally reified, commodity-fetishising world. When Marx wrote Capital in the mid nineteenth century, the more primitive capitalist system he was diagnosing made commodity fetishism merely episodic; now it was everywhere, poisoning everything. ‘Paradoxically, therefore’, Adorno added, ‘transcendence must take place within the framework of that which is.’ Brecht’s assault on capitalist society in Mahagonny was then paradoxically both from within and from without at the same time, both immanent and transcendent. (132)
Jeffries also provides a fascinating and extended discussion of the deep interactions that occurred between Thomas Mann and Adorno in Los Angeles as Mann worked at completing Doctor Faustus. Mann wanted Adorno's expert advice about modern music, and Adorno obliged. Jeffries argues that Adorno had a substantive effect on the novel:
Arguably, the finished novel reflects Adorno’s melancholic philosophy more profoundly than Mann’s. This is not to suggest plagiarism: as Adorno wrote in 1957, the insinuation that Mann made illegitimate use of his ‘intellectual property’ is absurd. The underlying aesthetic philosophy of the novel goes beyond the binary opposition between the Apollonian and Dionysian, between the orderly and the ecstatic, that Nietzsche set out in The Birth of Tragedy and to which Mann repeatedly appealed in his fiction... During the collaboration with Adorno, however, Mann set aside his original, Dionysian conception of the composer and as a result Leverkühn became something much more interesting –a figure who dramatised something of the Frankfurt School’s, and in particular Adorno’s, distinctive contribution to the philosophy of art. (243)
And what about fascism? This was a central thrust of Frankfurt School research, and opinion among the Frankfurt School theorists was divided about the causes of the rise of Nazism in Germany. But here is an interpretation that seems particularly relevant in 2016 in the United States, given the pageantry of political rallies and the slogans about making America great again:
Fascism was, as a result, a paradox, being both ancient and modern: more precisely it was a system that used a tradition hostile to capitalism for the preservation of capitalism. For Bloch, as for Walter Benjamin, fascism was a cultural synthesis that contained both anti-capitalist and utopian aspects. The Frankfurt School failed to emphasise in its analysis of fascism what Benjamin called the ‘aestheticisation of politics’. It fell to Benjamin, Bloch and Siegfried Kracauer to reflect on the Nazi deployment of myths, symbols, parades and demonstrations to command support. (250)
The chapter on Habermas is also very good and can be read separately as an introduction to Habermas's leading ideas (chapter 17). It is significant that this final voice of the Frankfurt School should be one that provides a basis for greater optimism about the prospects for modern democracy than what emerges from the Dialectic of Enlightenment.

The perspectives of the Frankfurt School were developed in the context of crises of capitalism, fascism, and anti-semitism in the 1930s. But these theories are once again deeply relevant in the context of the politics of 2016. A xenophobic, divisive candidate and party have assumed the reins of power in a populous democracy. The issues of propaganda and unapologetic political lies are before us once again. The politics of hate and intolerance have taken center stage. And the role of culture, media, and now the internet needs to be examined carefully for its dependence upon the corporate order as well as its possible potency as a mechanism of resistance. The Frankfurt School thinkers had important insights into virtually all these questions. Jeffries' very interesting intellectual history of the movement is timely.

Jeffries quotes from a letter from Adorno to Mann on the aftermath of Nazism in Germany with observations that may be relevant to us today as well:
The inarticulate character of apolitical conviction, the readiness to submit to every manifestation of actual powers, the instant accommodation to whatever new situation emerges, all this is merely an aspect of the same regression. If it is true that the manipulative control of the masses always brings about a regressive formation of humanity, and if Hitler’s drive for power essentially involved the relationship of this development ‘at a single stroke’, we can only say that he, and the collapse that followed, has succeeded in producing the required infantilisation. (273)
These are words that may be important in the coming years, if the incoming government succeeds in carrying out many of its hateful promises. And how will the institutions of media and culture respond? Let us not be infantilized in the years to come when it comes to the fundamental values of democracy.

Thursday, December 29, 2016

Critical points in history and social media


Recent posts have grappled with the interesting topic of phase transitions in physics (link, link, link). One reason for being interested in this topic is its possible relevance to the social world, where abrupt changes of state in the social plenum are rare but known occurrences. The eruption of protest in numerous countries across the Middle East and North Africa during the Arab Spring is one example. Essentially we can describe these incidents as moments when ordinary citizens are transformed from quiescent members of civil society, pursuing their private lives as best they can, to engaged activists assembling at great risk in large demonstrations. Is this an example of a phase transition? And are there observable indicators that might allow researchers to explain and sometimes anticipate such critical points?

There is a great deal of interesting research underway on these topics in the field of complex systems and communications theory. The processes and phenomena that researchers are identifying appear to have a great deal of importance both for understanding current social dynamics and potentially for changing undesirable outcomes.

Researchers on the dynamics of mass social media have addressed the question of critical transitions. Kuehn, Martens, and Romero (2014) provide an interesting approach in their article, "Critical transitions in social network activity" (link). Also of interest is Daniel Romero's "An epidemiological approach to the spread of political third parties", co-authored with Christopher Kribs-Zaleta, Anuj Mubayi, and Clara Orbe (link).

Here is the abstract for "Critical transitions":
A large variety of complex systems in ecology, climate science, biomedicine and engineering have been observed to exhibit tipping points, where the dynamical state of the system abruptly changes. For example, such critical transitions may result in the sudden change of ecological environments and climate conditions. Data and models suggest that detectable warning signs may precede some of these drastic events. This view is also corroborated by abstract mathematical theory for generic bifurcations in stochastic multi-scale systems. Whether such stochastic scaling laws used as warning signs for a priori unknown events in society are present in social networks is an exciting open problem, to which at present only highly speculative answers can be given. Here, we instead provide a first step towards tackling a simpler question by focusing on a priori known events and analyse a social media data set with a focus on classical variance and autocorrelation warning signs. Our results thus pertain to one absolutely fundamental question: Can the stochastic warning signs known from other areas also be detected in large-scale social media data? We answer this question affirmatively as we find that several a priori known events are preceded by variance and autocorrelation growth. Our findings thus clearly establish the necessary starting point to further investigate the relationship between abstract mathematical theory and various classes of critical transitions in social networks.
They use the idea of a tipping point rather than a phase transition, but there seems to be an important parallel between the two ideas. (Here are a few prior posts on continuity and tipping points; link, link.) Here is how they define the idea of a critical transition: "A critical transition may informally be defined as a rapid and drastic change of a time-dependent dynamical system" (2). The warning signs they consider are formal and statistical rather than substantive: increasing variance and rising auto-correlation:
Two of the most classical warning signs are rising variance and rising auto-correlation before a critical transition [10,28]. The theory behind these warning signs is described in more detail in Appendix A. The basic idea is that if a drastic change is induced by a critical (bifurcation) point, then the underlying deterministic dynamics becomes less stable. Hence, the noisy fluctuations become more dominant as the decay rate decreases close to the critical transition. As a result, (a) the variance in the signal increases, due to the stronger fluctuations and (b) the system’s state memory (i.e., auto-correlation) increases, due to smaller deterministic contraction onto a single state [10,11]. It can be shown that both warning signs are related via a suitable fluctuation–dissipation relation [29]. (2)
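To make these two indicators concrete, here is a minimal Python sketch -- not the authors' code; the AR(1) test series, window size, and variable names are my own illustrative assumptions -- that computes rolling variance and lag-1 autocorrelation on a time series whose deterministic stability weakens toward a transition:

```python
import numpy as np

def warning_signs(x, window=100):
    """Rolling variance and lag-1 autocorrelation over a sliding window --
    the two classical early-warning indicators described in the quoted
    passage. Illustrative sketch only, not the authors' pipeline."""
    V, R = [], []
    for t in range(window, len(x) + 1):
        w = x[t - window:t]
        d = w - w.mean()                 # detrend the window by its mean
        V.append(d.var())
        R.append(np.corrcoef(d[:-1], d[1:])[0, 1])
    return np.array(V), np.array(R)

# Synthetic AR(1) series whose contraction rate a(t) drifts toward the
# critical value 1, mimicking the loss of stability before a transition.
rng = np.random.default_rng(0)
n = 2000
x = np.zeros(n)
for t in range(1, n):
    a = 0.2 + 0.79 * t / n               # deterministic decay weakens over time
    x[t] = a * x[t - 1] + rng.normal()

V, R = warning_signs(x)
print(V[-1] > V[0], R[-1] > R[0])        # both indicators should trend upward
```

In this well-behaved synthetic case both indicators rise as the critical point approaches, which is exactly the signature the authors look for in the hashtag-frequency data below.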
Below are the data they present showing statistical associations of hashtag frequencies for impending known events -- Halloween, Thanksgiving, and Christmas. The X panels represent the word frequency of the hashtag; the V panels represent the variance; and the R panels represent autocorrelation on the time series of word frequency.


It is plain from the graphs of these variables that the frequency, variance, and autocorrelation statistics for the relevant hashtags demonstrate a rising trend as they approach the event and fall off steeply following the event; so these statistics post-dict the event effectively. But of course there is no value in predicting the occurrence of Halloween based on the frequency of #halloween earlier in October; we know that October 31 will soon occur. The difficult research question posed here is whether it is possible to identify warning signs for unknown impending events. The authors do not yet have an answer to this question, but they offer a provocative hypothesis: "These time series illustrate that there is a variety of potentially novel dynamical behaviors in large-scale social networks near large spikes that deserve to be investigated in their own right." (4). This suggests several questions for future investigation:
  • How do we define when a critical transition occurs in the data for an a priori unknown event? 
  • For a priori unknown events, is there a possibility to identify hashtags or other aspects of the message which allow us to determine the best warning sign? 
  • Can we link warning signs in social networks to a priori unknown critical transitions outside a social network? 
  • Which models of social networks can re-produce critical transitions observed in data? 
Also of interest for issues raised previously in Understanding Society is Romero, Kribs-Zaleta, Mubayi, and Orbe's "An epidemiological approach to the spread of political third parties" (link). This paper is relevant to the topic of the role of organizations in the spread of social unrest considered earlier (link, link). Their paper uses the example of Green Party activism as an empirical case. Here is their abstract:
Abstract. Third political parties are influential in shaping American politics. In this work we study the spread of a third party ideology in a voting population where we assume that party members/activists are more influential in recruiting new third party voters than non-member third party voters. The study uses an epidemiological metaphor to develop a theoretical model with nonlinear ordinary differential equations as applied to a case study, the Green Party. Considering long-term behavior, we identify three threshold parameters in our model that describe the different possible scenarios for the political party and its spread. We also apply the model to the study of the Green Party’s growth using voting and registration data in six states and the District of Columbia to identify and explain trends over the past decade. Our system produces a backward bifurcation that helps identify conditions under which a sufficiently dedicated activist core can enable a third party to thrive, under conditions which would not normally allow it to arise. Our results explain the critical role activists play in sustaining grassroots movements under adverse conditions.
And here is the basic intuition underlying the analysis of this paper:
We use an epidemiological paradigm to translate third party emergence from a political phenomenon to a mathematical one where we assume that third parties grow in a similar manner as epidemics in a population. We take this approach following in the steps of previous theoretical studies that model social issues via such methods. The epidemiological metaphor is suggested by the assumption that individuals’ decisions are influenced by the collective peer pressure generated by others’ behavior; the “contacts” between these two groups’ ideas are analogous to the contact processes that drive the spread of infectious diseases. (2)
Their approach makes use of a system of differential equations to describe the behavior of the population as a whole based on specific assumptions. It would seem that the problem could be approached using an agent-based model as well. This paper is relevant to the general topic of critical points in social behavior as well, since it attempts to discover the conditions under which a social movement like third-party mobilization will accelerate rather than decay.
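For readers who want to see the general shape of such a system, here is a minimal sketch of an epidemic-style compartment model in the spirit of their approach. The compartments, parameter names (beta_m, beta_v, gamma, delta), and values are hypothetical illustrations, not the actual equations from the Romero et al. paper:

```python
from scipy.integrate import solve_ivp

def third_party(t, y, beta_m, beta_v, gamma, delta):
    """Toy compartment model: susceptible voters S, third-party voters V,
    party members/activists M. Members recruit more effectively than
    ordinary third-party voters (beta_m > beta_v). Illustrative sketch
    only; not the system from the paper."""
    S, V, M = y
    recruit = (beta_m * M + beta_v * V) * S   # contact-driven recruitment
    S_dot = -recruit + gamma * V              # some voters drift back to S
    V_dot = recruit - gamma * V - delta * V   # a fraction of voters join up
    M_dot = delta * V                         # voters become members/activists
    return [S_dot, V_dot, M_dot]

y0 = [0.98, 0.015, 0.005]                     # initial population fractions
sol = solve_ivp(third_party, (0, 100), y0, args=(0.8, 0.2, 0.1, 0.05))
print(sol.y[:, -1])                           # long-run fractions S, V, M
```

The key qualitative assumption -- members recruiting more effectively than ordinary voters (beta_m > beta_v) -- is what generates the possibility, highlighted in the abstract, of a dedicated activist core sustaining the party under otherwise adverse conditions.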

Also of interest to the topic of large dynamic social processes and social media is R. Kelly Garrett and Paul Resnick, "Resisting political fragmentation on the Internet" (link). Here is their abstract:
Abstract: Must the Internet promote political fragmentation? Although this is a possible outcome of personalized online news, we argue that other futures are possible and that thoughtful design could promote more socially desirable behavior. Research has shown that individuals crave opinion reinforcement more than they avoid exposure to diverse viewpoints and that, in many situations, hearing the other side is desirable. We suggest that, equipped with this knowledge, software designers ought to create tools that encourage and facilitate consumption of diverse news streams, making users, and society, better off. We propose several techniques to help achieve this goal. One approach focuses on making useful or intriguing opinion-challenges more accessible. The other centers on nudging people toward diversity by creating environments that accentuate its benefits. Advancing research in this area is critical in the face of increasingly partisan news media, and we believe these strategies can help.
This research too is highly relevant to the dynamic social processes through which large-scale social changes occur, and particularly so in the current climate of fake news and deliberate political polarization.

(It is interesting that social media and the Internet come into this story in several different ways. Google employee and Egyptian activist Wael Ghonim played a central role in the early stages of activation of the uprisings in Cairo in 2011. His book, Revolution 2.0: The Power of the People Is Greater Than the People in Power: A Memoir, is a fascinating account of some of the details of these events, and the short book Wael Ghonim... Facebook and The Uprising in Egypt by Dhananjay Bijale specifically addresses the role that Ghonim and Facebook played in the mobilization of ordinary young Egyptians.)

Monday, December 19, 2016

Menon and Callender on the physics of phase transitions


In an earlier post I considered the topic of phase transitions as a possible source of emergent phenomena (link). I argued there that phase transitions are indeed interesting, but don't raise a serious problem of strong emergence. Tarun Menon considers this issue in substantial detail in the chapter he co-authored with Craig Callender in The Oxford Handbook of Philosophy of Physics, "Turn and face the strange ... ch-ch-changes: Philosophical questions raised by phase transitions" (link). Menon and Callender provide a very careful and logical account of three ways of approaching the physics of phase transitions and three versions of emergence (conceptual, explanatory, ontological). The piece is technical but very interesting, with a somewhat deflating conclusion (if you are a fan of emergence):
We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. 
Menon and Callender review three approaches to the phenomenon of phase transition offered by physics: classical thermodynamics, statistical mechanics, and renormalization group theory. Thermodynamics describes the behavior of materials (gases, liquids, and solids) at the macro level; statistical mechanics and renormalization group theory are theories of the micro states of materials intended to allow derivation of the macro behavior of the materials from statistical properties of the micro states. They describe this relationship in these terms:
Statistical mechanics is the theory that applies probability theory to the microscopic degrees of freedom of a system in order to explain its macroscopic behavior. The tools of statistical mechanics have been extremely successful in explaining a number of thermodynamic phenomena, but it turned out to be particularly difficult to apply the theory to the study of phase transitions. (193)
Here is the mathematical definition of phase transition that they provide:
Mathematically, phase transitions are represented by nonanalyticities or singularities in a thermodynamic potential. A singularity is a point at which the potential is not infinitely differentiable, so at a phase transition some derivative of the thermodynamic potential changes discontinuously. (191)
And they offer this definition:

(Def 1) An equilibrium phase transition is a nonanalyticity in the free energy. (194)
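To see why this definition creates trouble for the micro-level theory (a standard textbook observation, paraphrased here rather than quoted from Menon and Callender): for a system with a finite number of configurations, the canonical free energy is built from the partition function, a finite sum of exponentials,

$$ Z_N(\beta) = \sum_{i} e^{-\beta E_i}, \qquad F_N(\beta) = -\frac{1}{\beta} \ln Z_N(\beta), $$

and since a finite sum of exponentials is analytic and strictly positive, $F_N$ is infinitely differentiable in $\beta$. By (Def 1), then, no finite system ever undergoes an equilibrium phase transition in the strict sense -- the point developed in the paragraphs below.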

Here is their description of how the renormalization group theory works:
To explain the method, we return to our stalwart Ising model. Suppose we coarse-grain a 2-D Ising model by replacing 3 × 3 blocks of spins with a single spin pointing in the same direction as the majority in the original block. This gives us a new Ising system with a longer distance between lattice sites, and possibly a different coupling strength. You could look at this coarse-graining procedure as a transformation in the Hamiltonian describing the system. Since the Hamiltonian is characterized by the coupling strength, we can also describe the coarse-graining as a transformation in the coupling parameter. Let K be the coupling strength of the original system and R be the relevant transformation. The new coupling strength is K′ = RK. This coarse-graining procedure could be iterated, producing a sequence of coupling parameters, each related to the previous one by the transformation R. The transformation defines a flow on parameter space. (195)
Renormalization group theory, then, is essentially the mathematical basis of coarse-graining analysis (link).
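Here is a minimal sketch of the majority-rule step described in the quoted passage; the lattice size and the use of NumPy are my own illustrative choices:

```python
import numpy as np

def block_majority(spins, b=3):
    """Majority-rule coarse-graining of a 2-D Ising configuration:
    replace each b x b block of +/-1 spins with the sign of its sum.
    A sketch of the single renormalization step Menon and Callender
    describe; an odd b avoids tied blocks."""
    L = spins.shape[0] - spins.shape[0] % b        # trim to a multiple of b
    s = spins[:L, :L]
    blocks = s.reshape(L // b, b, L // b, b).sum(axis=(1, 3))
    return np.sign(blocks).astype(int)

rng = np.random.default_rng(1)
spins = rng.choice([-1, 1], size=(81, 81))         # random high-temperature lattice
coarse = block_majority(spins)                     # 27 x 27 renormalized lattice
print(spins.shape, '->', coarse.shape)
```

Iterating block_majority and asking how the effective coupling changes from one iteration to the next is, in essence, the flow R on parameter space described above.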

The key difficulty that has been used to ground arguments about strong emergence of phase transitions is now apparent: there seems to be a logical disjunction between the resources of statistical mechanics and the findings of thermodynamics. In theory physicists would like to hold that statistical mechanics provides the micro-level representation of the phenomena described by thermodynamics; or in other words, that thermodynamic facts can be reduced to derivations from statistical mechanics. However, the definition of a phase transition above specifies that the phenomena display "nonanalyticities" -- instantaneous and discontinuous changes of state. It is easily demonstrated that the equations used in statistical mechanics do not display nonanalyticities; change may be abrupt, but it is not discontinuous, and the equations are infinitely differentiable. So if phase transitions are points of nonanalyticity, and statistical mechanics does not admit of nonanalytic equations, then it would appear that thermodynamics is not derivable from statistical mechanics. Similar reasoning applies to renormalization group theory.

This problem was solved within statistical mechanics by admitting infinitely many bodies within the system that is represented (or alternatively, admitting infinitely compressed volumes of bodies); but neither of these assumptions of infinity is a realistic description of the material world.

So are phase transitions "emergent" phenomena in either a weak sense or a strong sense, relative to the micro-states of the material in question? The strongest sense of emergence is what Menon and Callender call ontological irreducibility.
Ontological irreducibility involves a very strong failure of reduction, and if any phenomenon deserves to be called emergent, it is one whose description is ontologically irreducible to any theory of its parts. Batterman argues that phase transitions are emergent in this sense (Batterman 2005). It is not just that we do not know of an adequate statistical mechanical account of them, we cannot construct such an account. Phase transitions, according to this view, are cases of genuine physical discontinuities. (215)
The possibility that phase transitions are ontologically emergent at the level of thermodynamics is raised by the point about the mathematical characteristics of the equations that constitute the statistical mechanics description of the micro level -- the infinite differentiability of those equations. But Menon and Callender give a compelling reason for thinking this is misleading. They believe that phase transitions constitute a conceptual novelty with respect to the resources of statistical mechanics -- phase transitions do not correspond to natural kinds at the level of the micro-constitution of the material. But they argue that this does not establish that the phenomena cannot be explained or derived from a micro-level description. So phase transitions are not emergent according to the explanatory or ontological understandings of that idea.

The nub of the issue comes down to how we construe the idealization of statistical mechanics that assumes that a material consists of an infinite number of elements. This is plainly untrue of any real system (gas, liquid, or solid). The fact that real systems have boundaries implies that important thermodynamic properties are not strictly "extensive" with volume (extensivity being the requirement that twice the volume yields twice the entropy). But the finitude of a volume of material affects its behavior through novel behaviors at the edges of the volume. And in many instances these edge effects are small relative to the behavior of the whole, if the volume is large enough.
Does this fact imply that there is a great mystery about extensivity, that extensivity is truly emergent, that thermodynamics does not reduce to finite N statistical mechanics? We suggest that on any reasonably uncontentious way of defining these terms, the answer is no. We know exactly what is happening here. Just as the second law of thermodynamics is no longer strict when we go to the microlevel, neither is the concept of extensivity. (201-202)
There is an important idealization in the thermodynamic description as well -- the notion that several specific kinds of changes are instantaneous or discontinuous. But this assumption too can be seen as an idealization, corresponding to a physical system that is undergoing changes at different rates under different environmental conditions. What thermodynamics describes as an instantaneous change from liquid to gas may be better understood as a rapid process of change at the molar level that can be traced through in a continuous way.

(The fact that some systems are coarse-grained has an interesting implication for this set of issues (link). The interesting implication is that while it is generally true that the micro states in such a system entail the macro states, the reverse is not true: we cannot infer from a given macro state to the exact underlying micro state. Rather, many possible micro states correspond to a given macro state.)
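To make the parenthetical point concrete with a standard combinatorial example (my own illustration, not from the chapter): for a lattice of $N$ binary spins, the macrostate "total magnetization $M$" is realized by $\binom{N}{(N+M)/2}$ distinct microstates. With $N = 4$ and $M = 0$ there are already $\binom{4}{2} = 6$ of them, and the degeneracy grows combinatorially with $N$.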

The conclusion they reach is worth quoting:
Phase transitions are an important instance of putatively emergent behavior. Unlike many things claimed emergent by philosophers (e.g., tables and chairs), the alleged emergence of phase transitions stems from both philosophical and scientific arguments. Here we have focused on the case for emergence built from physics. We have found that when one clarifies concepts and digs into the details, with respect to standard textbook statistical mechanics, phase transitions are best thought of as conceptually novel, but not ontologically or explanatorily irreducible. And if one goes past textbook statistical mechanics, then an argument can be made that they are not even conceptually novel. In the case of renormalization group theory, consideration of infinite systems and their singular behavior provides a central theoretical tool, but this is compatible with an explanatory reduction. Phase transitions may be “emergent” in some sense of this protean term, but not in a sense that is incompatible with the reductionist project broadly construed. (222)
Or in other words, Menon and Callender refute one of the most technically compelling interpretations of ontological emergence in physical systems. They show that the phenomena of phase transitions as described by classical thermodynamics are compatible with being reduced to the dynamics of individual elements at the micro-level, so phase transitions are not ontologically emergent.

Are these arguments relevant in any way to debates about emergence in social system dynamics? The direct relevance is limited, since these arguments depend entirely on the mathematical properties of the ways in which the micro-level of physical systems are characterized (statistical mechanics). But the more general lesson does in fact seem relevant: rather than simply postulating that certain social characteristics are ontologically emergent relative to the actors that make them up, we would be better advised to look for the local-level processes that act to bring about surprising transitions at critical points (for example, the shift in a flock of birds from random flight to a swarm in a few seconds).

Monday, December 12, 2016

More on cephalopod minds


When I first posted on cephalopod intelligence a year or so ago, I assumed it would be a one-off diversion into the deep blue sea (link). But now I've read the fascinating recent book by Peter Godfrey-Smith, Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness, and it is interesting enough to justify a second deep dive. Godfrey-Smith is a philosopher, but he is also a scuba diver, and his interest in cephalopods derives from his experiences under water. This original stimulus has led to two very different lines of inquiry. What is the nature of the mental capacities of an octopus? And how did "intelligence" happen to evolve twice on earth through such different pathways? Why is a complex nervous system an evolutionary advantage for a descendant of a clam?

Both questions are of philosophical interest. The nature of consciousness, intelligence, and reasoning has been of great concern to philosophers in the study of the philosophy of mind. The questions that arise bring forth a mixture of difficult conceptual, empirical, and theoretical issues: how does consciousness relate to behavioral capacity? Are intelligence and consciousness interchangeable? What evidence would permit us to conclude that a given species of animal has consciousness and reasoning ability?

The evolutionary question is also of interest to philosophers. The discipline of the philosophy of biology focuses much of its attention on the issues raised by evolutionary theory. Elliott Sober's work illustrates this form of philosophical thinking -- for example, The Nature of Selection: Evolutionary Theory in Philosophical Focus, Evidence and Evolution: The Logic Behind the Science. Godfrey-Smith tells an expert's story of the long evolution of mollusks, in and out of their shells, with emerging functions and organs well suited to the opportunities available in their oceanic environments. One of the evolutionary puzzles to be considered is the short lifespan of octopuses and squid -- just a few years (160). Why would the organism invest so heavily in a cognitive system that supported its life for such a short time?

A major part of the explanation that G-S favors involves the fact that octopuses are hunters, and a complex nervous system is more of an advantage for a predator than for prey. (Wolves are more intelligent than elk, after all!) Having a nervous system that supports anticipation, planning, and problem solving turns out to be an excellent preparation for being a predator. Here is a good example of how that cognitive advantage plays out for the octopus:
David Scheel, who works mostly with the giant Pacific octopus, feeds his animals whole clams, but as his local animals in Prince William Sound do not routinely eat clams, he has to teach them about the new food source. So he partly smashes a clam and gives it to the octopus. Later, when he gives the octopus an intact clam, the octopus knows that it’s food, but does not know how to get at the meat. The octopus will try all sorts of methods, drilling the shell and chipping the edges with its beak, manipulating it in every way possible … and then eventually it learns that its sheer strength is sufficient: if it tries hard enough, it can simply pull the shell apart. (70)
Exploration, curiosity, experimentation, and play are crucial components of the kind of flexibility that organisms with big nervous systems bring to earning their living.

G-S brings up a genuinely novel aspect of the organismic value of a complex nervous system: not just problem-solving applied to the external environment, but coordination of the body itself. Intelligence evolves to handle the problem of coordinating the motions of the parts of the body.
The cephalopod body, and especially the octopus body, is a unique object with respect to these demands. When part of the molluscan “foot” differentiated into a mass of tentacles, with no joints or shell, the result was a very unwieldy organ to control. The result was also an enormously useful thing, if it could be controlled. The octopus’s loss of almost all hard parts compounded both the challenge and the opportunities. A vast range of movements became possible, but they had to be organized, had to be made coherent. Octopuses have not dealt with this challenge by imposing centralized governance on the body; rather, they have fashioned a mixture of local and central control. One might say the octopus has turned each arm into an intermediate-scale actor. But it also imposes order, top-down, on the huge and complex system that is the octopus body. (71)
In this picture, neurons first multiply because of the demands of the body, and then sometime later, an octopus wakes up with a brain that can do more. (72)
This is a genuinely novel and intriguing idea about the creation of a new organism over geological time. It is as if a plastic self-replicating and self-modifying artifact bootstrapped itself from primitive capabilities into a directed and cunning predator. Or perhaps it is a preview of the transition that artificial intelligence systems embodying adaptable learning processes and expanding linkages to the control systems of the physical world may undergo in the next fifty years.

What about the evolutionary part of the story? Here is a short passage where Godfrey-Smith considers the long evolutionary period that created both vertebrates and mollusks:
The history of large brains has, very roughly, the shape of a letter Y. At the branching center of the Y is the last common ancestor of vertebrates and mollusks. From here, many paths run forward, but I single out two of them, one leading to us and one to cephalopods. What features were present at that early stage, available to be carried forward down both paths? The ancestor at the center of the Y certainly had neurons. It was probably a worm-like creature with a simple nervous system, though. It may have had simple eyes. Its neurons may have been partly bunched together at its front, but there wouldn’t have been much of a brain there. From that stage the evolution of nervous systems proceeds independently in many lines, including two that led to large brains of different design. (65)
The primary difference that G-S highlights here is the nature of the neural architecture that each line eventually favors: a central cord connecting periphery to a central brain; and a decentralized network of neurons distributed over the whole body.
Further, much of a cephalopod’s nervous system is not found within the brain at all, but spread throughout the body. In an octopus, the majority of neurons are in the arms themselves— nearly twice as many as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch, but also the capacity to sense chemicals— to smell, or taste. Each sucker on an octopus’s arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, like reaching and grasping. (67)
So what about the "alien intelligence" part of G-S's story? G-S emphasizes the fact that octopus mentality is about as alien to human experience and evolution as it could be.
Cephalopods are an island of mental complexity in the sea of invertebrate animals. Because our most recent common ancestor was so simple and lies so far back, cephalopods are an independent experiment in the evolution of large brains and complex behavior. If we can make contact with cephalopods as sentient beings, it is not because of a shared history, not because of kinship, but because evolution built minds twice over. This is probably the closest we will come to meeting an intelligent alien. (9)
This too is intriguing. G-S is right: the evolutionary story he works through here gives great encouragement for the idea that an organism in a complex environment and a few bits of neuronal material can evolve in wildly different pathways, leading to cognitive capabilities and features of awareness that are dramatically different from human intelligence. Life is plastic and evolutionary time is long. The ideas of the unity of consciousness and the unified self don't have any particular primacy or uniqueness. For example: 
The octopus may be in a sort of hybrid situation. For an octopus, its arms are partly self—they can be directed and used to manipulate things. But from the central brain’s perspective, they are partly non-self too, partly agents of their own. (103)
So there is nothing inherently unique about human intelligence, and no good reason to assume that all intelligent creatures would find a basis for mutual understanding and communication. Sorry, Captain Kirk, the universe is stranger than you ever imagined!

Thursday, December 8, 2016

French sociology


Is sociology as a discipline different in France than in Germany or Britain? Or do common facts about the social world entail that sociology is everywhere the same?

The social sciences feel different from physics or mathematics, in that their development seems much more path-dependent and contingent. The problems selected, the theoretical resources deployed, the modes of evidence considered most relevant -- all these considerations have to be specified; and they have been specified differently in different times and places. An earlier post considered the arc of sociology in France (link).

Johan Heilbron's French Sociology has now appeared, and it is a serious effort to make sense of the tradition of sociology as it developed in France. (Jean-Louis Fabiani's Qu'est-ce qu'un philosophe français? provides a similar treatment of philosophy in France; link.) Heilbron approaches this topic from the point of view of historical sociology; he wants to write a historical sociology of the discipline of sociology.
For this historical-sociological view I have adopted a long-term perspective in order to uncover patterns of continuity and change that would have otherwise remained hidden. Several aspects of contemporary French sociology—its position in the Faculty of Letters, for example—can be understood only by going back in time much further than is commonly done. (2)
Understanding ideas is not merely about concepts, theories, and assumptions—however important they are—it simultaneously raises issues about how such ideas come into being, how they are mobilized in research and other intellectual enterprises, and how they have, or have not, spread beyond the immediate circle of producers. Understanding intellectual products, to put it simply and straightforwardly, cannot be divorced from understanding their producers and the conditions of production. (3)
Heilbron traces the roots of sociological thinking to the Enlightenment in France, with the intellectual ethos that any question could be considered scientifically and rationally.
If the Enlightenment has been seen as a formative period for the social sciences, it was fundamentally because a secular intelligentsia now explicitly claimed and effectively exercised the right to analyze any subject matter, however controversial, independently of official doctrines. (13)
This gives an intellectual framework to the development of sociology; but for Heilbron the specifics of institutions and networks are key for understanding the particular pathway that the discipline underwent. Heilbron identifies the establishment after the Revolution of national academies for natural science, human science, and literature as an important moment in the development of the social sciences: "The national Académie des sciences morales et politiques (1832) became the official center for moral and political studies under the constitutional regime of the July monarchy" (14). In fact, Heilbron argues that the disciplines of the social sciences in France took shape as a result of a dynamic competition between the Academy and the universities. Much of the work of the Academy during mid-nineteenth century was directed towards social policy and the "social question" -- the impoverished conditions of the lower classes and the attendant risk of social unrest. There was the idea that the emerging social sciences could guide the formation of intelligent and effective policies by the state (20).

Another major impetus to the growth of the social sciences was the French defeat in the Franco-Prussian War in 1870. This national trauma gave a stimulus to the enhancement of university-based disciplines. The case was made (by Emile Zola, for example) that France was defeated because Prussia had the advantage in science and education; therefore France needed to reform and expand its educational system and research universities.
Disciplinary social science now became the predominant mode of teaching, research, and publishing. University-based disciplines gained a greater degree of autonomy not only with respect to the national Academy but also vis-à-vis governmental agencies and lay audiences. Establishing professional autonomy in its different guises—conceptually, socially, and institutionally—was the main preoccupation of the representatives of the university-based disciplines. (30)
Heilbron pays attention to the scientific institutions through which the social sciences developed in the early twentieth century. Durkheim's success in providing orientation to the development of sociology during its formative period rested in some part on his ability to create and sustain some of those institutions, including especially L'Année sociologique. Here is Heilbron's summary of this fact:
Because the Durkheimian program eclipsed that of its competitors and obtained considerable intellectual recognition, sociology in France did not enter the university as a science of “leftovers,” as Albion Small said about American sociology. Durkheimian sociology, quite the contrary, represented a challenging and rigorous program to scientifically study crucial questions about morality, religion, and other collective representations, their historical evolution and institutional underpinnings. (90)
Here is a graph of the relationships among a number of the primary contributors to L'Année sociologique during 1898-1912:



But Heilbron notes that this influence in the institutions of publication in the discipline of sociology did not translate directly or immediately into a primary location for the Durkheimians within the developing university system.

Heilbron's narrative confirms a break in the development of sociology at the end of World War II. And in fact, it seems to be true that sociology became a different discipline in France after 1950. Here is how Heilbron characterizes the intellectual field:
Sociological work after 1945 was caught up in a constellation that was defined by two antagonistic poles: an intellectual pole represented by existentialist philosophers who dominated the intellectual and much of the academic field and a policy-related research pole in state institutes for statistical, economic, and demographic studies. (123-124)
An important part of the growth of sociology in France in this period was stimulated by practical needs of policy reform and economic reorganization.
It was in part because of a lack of intellectual status that the demand for applied research came to fulfill a new function for the social sciences. The growth of applied social science research was produced by the needs of economic recovery and the new role of the state in that respect. (129)
But academic sociology did not progress rapidly:
In the postwar academic structure, sociology was still a rather marginal phenomenon, a discipline with little prestige that was institutionally no more than a minor for philosophy undergraduates. The leading academics were the two professors at the Sorbonne, Georges Davy and Georges Gurvitch, each of whom presided over his own journal. Davy had succeeded Halbwachs in 1944 and resumed the publication of the Année sociologique, assisted by the last survivors of the Durkheimian network. (130)
Assessing the situation in 1955, Alain Touraine observed a near-total separation between university sociology and empirical research. Researchers were isolated, he wrote, and they lacked solid training, research experience, and professional prospects. Their working conditions, furthermore, were poor. The CES had only three study rooms for almost forty researchers and neither the CES nor the CNRS provided research funding. (139)
On Heilbron's account, the large changes in sociology began to accelerate in the 1970s. Figures like Touraine, Bourdieu, Crozier, and Boudon brought substantially new thinking to both theoretical ideas and research problems for sociology. In a later post I will consider his treatment of this period in the development of the discipline.

(Here is an earlier post discussing Gabriel Abend's ideas about differences in the discipline of sociology across the world; link.)

Thursday, December 1, 2016

Processual sociology


Andrew Abbott is one of the thinkers within sociology who is not dependent upon a school of thought -- not structuralism, not positivism, not ethnomethodology, not even the Chicago School. He approaches the problems that interest him with a fresh eye and therefore represents a source of innovation and new ideas within sociological theory. He also presents some very compelling intuitions about the social world when it comes to social ontology. He thinks that many social scientists bring unfortunate assumptions with them about the fixity of the social world -- assumptions about entities and properties, assumptions about causation, assumptions about laws. And he shows in many places how misleading these assumptions are -- not least in his study of the professions The System of Professions: An Essay on the Division of Expert Labor (Institutions), but in his history of the Chicago School of sociology as well (Department and Discipline: Chicago Sociology at One Hundred). Processual Sociology presents his current thinking about some of those important ideas.

The central organizing idea of Processual Sociology is one that finds expression in much of Abbott's work, the notion that we should think of the social world as a set of ongoing processes rather than a collection of social entities and structures. He sometimes refers to this as a relational view of the actor and the social environment. Here is how he describes the basic ontological idea of a processual social world:
By a processual approach, I mean an approach that presumes that everything in the social world is continuously in the process of making, remaking, and unmaking itself (and other things), instant by instant. The social world does not consist of atomic units whose interactions obey various rules, as in the thought of the economists. Nor does it consist of grand social entities that shape and determine the little lives of individuals, as in the sociology of Durkheim and his followers. (preface)
This isn't a wholly unfamiliar idea in sociological theory; for example, Norbert Elias advocated something like it with his idea of "figurational sociology" (link). But Abbott's adherence to the approach and his sustained efforts to develop sociological ideas in light of it are distinctive. 

Abbott offers the idea of a "career" as an example of what he means by a processual social reality. A person's career is not a static thing that exists in a confined period of time; rather, it is a series of choices, developments, outcomes, and plans that accumulate over time in ways that in turn affect the individual's mentality. Or consider his orienting statements about professions in The System of Professions:
The professions, that is, make up an interdependent system. In this system, each profession has its activities under various kinds of jurisdiction. Sometimes it has full control, sometimes control subordinate to another group. Jurisdictional boundaries are perpetually in dispute, both in local practice and in national claims. It is the history of jurisdictional disputes that is the real, the determining history of the professions. Jurisdictional claims furnish the impetus and the pattern to organizational developments. Thus an effective historical sociology of professions must begin with case studies of jurisdictions and jurisdiction disputes. It must then place these disputes in a larger context, considering the system of professions as a whole. (kl 208)
His comments about the discipline of sociology itself in Department and Discipline have a similar fluidity. Rather than thinking of sociology as a settled "thing" within the intellectual firmament, we are better advised to look at the twists and turns that various sociologists, departments, journals, conferences, and debates have imparted to the discipline's configuration over time.

These examples have to do with the nature of social things -- institutions and organizations, for example. But Abbott extends the processual view to the actors themselves. He argues that we should look to the flow of actions rather than the actor (again, a parallel with Elias); so actions are as much the result of shifting circumstances as they are the reflective choices of unitary actors. Moreover, the individual himself or herself continues to change throughout life and throughout a series of actions. Memories change, desires change, and social relationships change. Individuals are "historical" -- they are embedded in concrete circumstances and relationships that contribute to their actions and movements at each moment. (This is the thrust of the first chapter of the volume.) Abbott extends this idea of the "processual individual" by reconsidering the concept of human nature (chapter 2).
For a processual view that begins with problematizing that continuity, an important first step is to address the concept of human nature, asking what kind of concept of human nature is possible under processual assumptions. (16)
Here is something like a summary of the view that he develops in this chapter:
Human nature, first, is rooted in the three modes of historicality—corporeal, memorial, and recorded—and the complex of substantive historicality that they enable. It concerns the means by which those modes interact and shape the developing lineage that is a person or social entity. It is also rooted in what we might call optativity, the human capacity to envision alternative futures and indeed alternative future units to the social process. (31-32)
Ecological thinking plays a large role in Abbott's conception of the social realm. Social and human arrangements are not to be thought of in isolation; instead, Abbott advocates that we should consider them in a field of ecological interdependence. A research library does not exist uniquely by itself; rather, it exists in a field of funding, institutional control, user demands, legal regulations, and public opinion. Its custodians make decisions about the purchase of materials based on the needs and advocacy of various stakeholders, and the operation and holdings of the research library are a joint product of these contextual influences. In an innovative move, Abbott argues that the ecology within which an institution like a library sits is actually a linked set of ecologies, each exercising influence over the others. So the library influences the publisher in the same activities through which the publisher influences the library. Here is a brief description of the idea of linked ecologies:
I here answer this critique with the concept of linked ecologies. Instead of envisioning a particular ecology as having a set of fixed surrounds, I reconceptualize the social world in terms of linked ecologies, each of which acts as a flexible surround for others. The overall conception is thus fully general. For expository convenience, however, it is easiest to develop the argument around a particular ecology. I shall here use that of the professions. (35)
The central topic for a sociologist in a processual framework is the problem of stability: given the permanent fact of change, how does continuity emerge and persist? This is the problem of order.
I am concerned to envision what kinds of concepts of order might be appropriate under a different set of social premises: those of processualism. As the first two chapters of this book have argued, the processual ontology does not start with independent individuals trying to create a society. It starts with events. Social entities and individuals are made out of that ongoing flow of events. The question therefore arises of what concept of order would be necessary if we started out not with the usual state-of-nature ontology, but with this quite different processual one. (200)
Here Abbott's thinking converges with that of several other sociologists and theorists whose work offers insights into the persistence of social entities, institutions, or assemblages. Abbott, Kathleen Thelen, and Manuel DeLanda (link, link) agree on a fundamental point: we must investigate the mechanisms and circumstances that permit social institutions, rules, or arrangements to persist in the face of the stochastic pressures of change induced by actors and circumstances.




Wednesday, November 30, 2016

DeLanda on historical ontology


A primary reason for thinking that assemblage theory is important is the fact that it offers new ways of thinking about social ontology. Instead of thinking of the social world as consisting of fixed entities and properties, we are invited to think of it as consisting of fluid agglomerations of diverse and heterogeneous processes. Manuel DeLanda's recent book Assemblage Theory sheds new light on some of the complexities of this theory.

Particularly important is the question of how to think about the reality of large historical structures and conditions. What is "capitalism" or "the modern state" or "the corporation"? Are these temporally extended but unified things? Or should they be understood in different terms altogether? Assemblage theory suggests a very different approach. Here is an astute description by DeLanda of historical ontology with respect to the historical imagination of Fernand Braudel:
Braudel's is a multi-scaled social reality in which each level of scale has its own relative autonomy and, hence, its own history. Historical narratives cease to be constituted by a single temporal flow -- the short timescale at which personal agency operates or the longer timescales at which social structure changes -- and becomes a multiplicity of flows, each with its own variable rates of change, its own accelerations and decelerations. (14)
DeLanda extends this idea by suggesting that the theory of assemblage is an antidote to essentialism and reification of social concepts:
Thus, both 'the Market' and 'the State' can be eliminated from a realist ontology by a nested set of individual emergent wholes operating at different scales. (16)
I understand this to mean that "Market" is a high-level reification; it does not exist in and of itself. Rather, the things we want to encompass within the rubric of market activity and institutions are an agglomeration of lower-level concrete practices and structures which are contingent in their operation and variable across social space. And this is true of other high-level concepts -- capitalism, IBM, or the modern state.

DeLanda's reconsideration of Foucault's ideas about prisons is illustrative of this approach. After noting that institutions of discipline can be represented as assemblages, he asks the further question: what are the components that make up these assemblages?
The components of these assemblages ... must be specified more clearly. In particular, in addition to the people that are confined -- the prisoners processed by prisons, the students processed by schools, the patients processed by hospitals, the workers processed by factories -- the people that staff those organizations must also be considered part of the assemblage: not just guards, teachers, doctors, nurses, but the entire administrative staff. These other persons are also subject to discipline and surveillance, even if to a lesser degree. (39)
So how do assemblages come into being? And what mechanisms and forces serve to stabilize them over time? This is a topic where DeLanda's approach shares a fair amount with historical institutionalists like Kathleen Thelen (link, link): the insight that institutions and social entities are created and maintained by the individuals who interface with them, and that both parts of this observation need explanation. It is not necessarily the case that the same incentives or circumstances that led to the establishment of an institution also serve to elicit the forms of coherent behavior that sustain it. So creation and maintenance need to be treated independently. Here is how DeLanda puts this point:
So we need to include in a realist ontology not only the processes that produce the identity of a given social whole when it is born, but also the processes that maintain its identity through time. And we must also include the downward causal influence that wholes, once constituted, can exert on their parts. (18)
Here DeLanda links the compositional causal point (what we might call the microfoundational point) with the additional idea that higher-level social entities exert downward causal influence on lower-level structures and individuals. This is part of his advocacy of emergence; but it is controversial, because it might be maintained that the causal powers of the higher-level structure are simultaneously real and derivative upon the actions and powers of the components of the structure (link). (This is the reason I prefer to use the concept of relative explanatory autonomy rather than emergence; link.)

DeLanda summarizes several fundamental ideas about assemblages in these terms:
  1. "Assemblages have a fully contingent historical identity, and each of them is therefore an individual entity: an individual person, an individual community, an individual organization, an individual city." 
  2. "Assemblages are always composed of heterogeneous components." 
  3. "Assemblages can become component parts of larger assemblages. Communities can form alliances or coalitions to become a larger assemblage."
  4. "Assemblages emerge from the interactions between their parts, but once an assemblage is in place it immediately starts acting as a source of limitations and opportunities for its components (downward causality)." (19-21)
There is also the suggestion that persons themselves should be construed as assemblages:
Personal identity ... has not only a private aspect but also a public one, the public persona that we present to others when interacting with them in a variety of social encounters. Some of these social encounters, like ordinary conversations, are sufficiently ritualized that they themselves may be treated as assemblages. (27)
Here DeLanda cites the writings of Erving Goffman, which focus on the public scripts that serve to constitute many kinds of social interaction (link); equally one might refer to Andrew Abbott's processual and relational view of the social world and individual actors (link).

The most compelling example that DeLanda offers here and elsewhere of complex social entities construed as assemblages is perhaps the most complex and heterogeneous product of the modern world -- cities.
Cities possess a variety of material and expressive components. On the material side, we must list for each neighbourhood the different buildings in which the daily activities and rituals of the residents are performed and staged (the pub and the church, the shops, the houses, and the local square) as well as the streets connecting these places. In the nineteenth century new material components were added, water and sewage pipes, conduits for the gas that powered early street lighting, and later on electricity and telephone wires. Some of these components simply add up to a larger whole, but citywide systems of mechanical transportation and communication can form very complex networks with properties of their own, some of which affect the material form of an urban centre and its surroundings. (33)
(William Cronon's social and material history of Chicago in Nature's Metropolis: Chicago and the Great West is a very compelling illustration of this additive, compositional character of the modern city; link. Contingency and conjunctural causation play a very large role in Cronon's analysis. Here is a post that draws out some of the consequences of the lack of systematicity associated with this approach, titled "What parts of the social world admit of explanation?"; link.)



Sunday, November 27, 2016

What is the role of character in action?


I've been seriously interested in the question of character since being invited to contribute to a volume on the subject a few years ago. That volume, Questions of Character, has now appeared in print, and it is an excellent and engaging contribution. Iskra Fileva directed the project and edited the volume, and she did an excellent job in selecting topics and authors. She also wrote an introduction to the volume and introductions to all five parts of the collection. Taken together, Fileva's introductions could stand on their own as a very short book on character.

So what is "character"? To start, it is a concept of the actor that draws our attention to enduring moral and practical propensities, rather than to the moment of choice and the criteria ethicists recommend for making choices. Second, it is an idea largely associated with the "virtue" ethics of Aristotle. The other large traditions in the history of ethics -- utilitarianism and Kantian ethics, or consequentialist and deontological theories -- have relatively little to say about character, focusing instead on action, rules, and moral reasoning. And third, it is distinguished from other moral ideas by its close affinity to psychology as well as philosophy. It has to do with explaining the behavior of ordinary people, not just with philosophical ideas about how people ought to behave.

This is a fundamentally important question for anyone interested in formulating a theory of the actor. To hold that human beings sometimes have "character" is to say that they have enduring features of agency that sometimes drive their actions in ways that override the immediate calculation of costs and benefits, or the immediate satisfaction of preferences. For example, a person might have the virtues of honesty, courage, or fidelity -- leading him or her to tell the truth, resist adversity, or keep commitments and promises, even when there is an advantage to be gained by doing the contrary. Or conceivably a person might have vices -- dishonesty, cruelty, egotism -- that lead him or her to act accordingly -- sometimes against personal advantage. 

Questions of Character is organized into five major sets of topics: ethical considerations, moral psychology, empirical psychology, social and historical considerations, and art and taste. Fileva has done an excellent job of soliciting provocative essays and situating them within a broader context. Part I includes innovative discussions of how the concept of character plays out in Aristotle, Hume, Kant, and Nietzsche. Part II considers different aspects of the problem of self-control and autonomy. Part III examines the experimental literature on behavior in challenging situations (for example, the Milgram experiment), and whether these results demonstrate that human actors are not guided by enduring virtues. Part IV examines the intersection between character and large social settings, including history, the market, and the justice system. And Part V considers the role of character in literature and the arts, including the interesting notion that characters in novels become emblems of the character traits they display.

The most fundamental question raised in this volume is this: what is the role of character in human action? How, if at all, do embodied traits, virtues and vices, or personal commitments influence the actions that we take in ordinary and extraordinary circumstances? And the most intriguing challenge raised here is one that casts doubt on the very notion of character: "there are no enduring behavioral dispositions inside a person that warrant the label 'character'." Instead, all action is opportunistic and in the moment. Action is "situational" (John Doris, Lack of Character: Personality and Moral Behavior; Ross and Nisbett, The Person and the Situation). On this approach, what we call "character" and "virtue" is epiphenomenal; action is guided by factors more fundamental than these.

My own contribution focuses on the ways in which character may be shaped by historical circumstances. Fundamentally I argue that growing up during the Great Depression, the Jim Crow South, or the Chinese Revolution potentially cultivates fairly specific features of mentality in the people who had these formative experiences. The cohort itself has a common (though not universal) character that differs from that of people in other historical periods. As a consequence, people in those cohorts commonly behave differently from people in other cohorts when confronted with roughly similar action situations. So character is both historically shaped and historically important. Much of my argument was worked out in a series of posts here in Understanding Society.

This project is successful in its own terms; the contributors have created a body of very interesting discussion and commentary on an important element of human conduct. The volume is distinctly different from other collections in moral psychology or the field of morality and action. But the project is successful in another way as well. Fileva and her colleagues succeeded in drawing together a novel intellectual configuration of scholars from numerous disciplines to engage in a genuinely trans-disciplinary research collaboration. Through several academic conferences (one of which I participated in), through excellent curatorial and editorial work by Fileva herself, and through the openness of all the collaborators to listen with understanding to the perspectives of researchers in other disciplines, the project succeeded in demonstrating the power of interdisciplinary collaboration in shedding light on an important topic. I believe we understand better the intriguing complexities of actors and action as a result of the work presented in Questions of Character.

(Here is a series of posts on the topic of character; link.)

Thursday, November 24, 2016

Coarse-graining of complex systems


The question of the relationship between micro-level and macro-level is just as important in physics as it is in sociology. Is it possible to derive the macro-states of a system from information about the micro-states of the system? It turns out that the relationship between micro and macro in physical systems displays some surprising features. The mathematical technique of "coarse-graining" represents an interesting wrinkle on this question. So what is coarse-graining? Fundamentally it is the idea that we can replace micro-level specifics with local averages without reducing our ability to calculate the macro-level dynamics of the system.

A 2004 article by Israeli and Goldenfeld, "Coarse-graining of cellular automata, emergence, and the predictability of complex systems" (link) provides a brief description of the method of coarse-graining. (Here is a Wolfram demonstration of the way that coarse graining works in the field of cellular automata; link.) Israeli and Goldenfeld also provide physical examples of phenomena with what they refer to as emergent characteristics. Let's see what this approach adds to the topic of emergence and reduction. Here is the abstract of their paper:
We study the predictability of emergent phenomena in complex systems. Using nearest neighbor, one-dimensional Cellular Automata (CA) as an example, we show how to construct local coarse-grained descriptions of CA in all classes of Wolfram's classification. The resulting coarse-grained CA that we construct are capable of emulating the large-scale behavior of the original systems without accounting for small-scale details. Several CA that can be coarse-grained by this construction are known to be universal Turing machines; they can emulate any CA or other computing devices and are therefore undecidable. We thus show that because in practice one only seeks coarse-grained information, complex physical systems can be predictable and even decidable at some level of description. The renormalization group flows that we construct induce a hierarchy of CA rules. This hierarchy agrees well with apparent rule complexity and is therefore a good candidate for a complexity measure and a classification method. Finally we argue that the large scale dynamics of CA can be very simple, at least when measured by the Kolmogorov complexity of the large scale update rule, and moreover exhibits a novel scaling law. We show that because of this large-scale simplicity, the probability of finding a coarse-grained description of CA approaches unity as one goes to increasingly coarser scales. We interpret this large scale simplicity as a pattern formation mechanism in which large scale patterns are forced upon the system by the simplicity of the rules that govern the large scale dynamics.
This paragraph involves several interesting ideas. One is that the micro-level details do not matter to the macro outcome -- the coarse-grained CA emulate "the large-scale behavior of the original systems without accounting for small-scale details." Another related idea is that macro-level patterns are (sometimes) forced by the "rules that govern the large scale dynamics" -- rather than by the micro-level states.

Coarse-graining methodology is a family of computational techniques that permits "averaging" of values (intensities) from the micro-level to a higher level of organization. The computational models developed here were primarily applied to the properties of heterogeneous materials, large molecules, and other physical systems. For example, consider a two-dimensional array of iron atoms as a grid with randomly distributed magnetic orientations (up, down). A coarse-grained description of this system would be constructed by taking each 3x3 square of the grid and assigning it the up-down value corresponding to the majority of the nine atoms in that square. Now the information about nine atoms has been reduced to a single piece of information for the 3x3 block. Analogously, we might consider a city of Democrats and Republicans. Suppose we know the affiliation of each household on every street. We might "coarse-grain" this information by replacing the household-level data with the majority affiliation of each 3x3 block of households. We might take another step of aggregation by considering 3x3 blocks of blocks, representing the larger composite by the majority value of the component blocks.
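To make the averaging step concrete, here is a minimal Python sketch of majority-vote coarse-graining (my own illustration, not code from Israeli and Goldenfeld; the 3x3 block size and the +1/-1 encoding of the two orientations are assumptions for concreteness):

```python
import numpy as np

def coarse_grain(grid, block=3):
    """Replace each block x block square of +1/-1 values with the
    majority value of that square."""
    rows, cols = grid.shape
    rows -= rows % block  # trim so the grid divides evenly into blocks
    cols -= cols % block
    trimmed = grid[:rows, :cols]
    # Reshape so each block occupies its own pair of axes, then sum.
    blocks = trimmed.reshape(rows // block, block, cols // block, block)
    block_sums = blocks.sum(axis=(1, 3))
    # Majority vote: an odd block (3x3 = 9 cells) can never tie.
    return np.where(block_sums > 0, 1, -1)

rng = np.random.default_rng(0)
micro = rng.choice([-1, 1], size=(9, 9))  # 81 atoms (or households)
macro = coarse_grain(micro)               # reduced to 9 majority values
print(macro)
```

Iterating coarse_grain on its own output implements the second step of aggregation described above: blocks of blocks.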

How does the methodology of coarse-graining interact with other inter-level questions we have considered elsewhere in Understanding Society (emergence, generativity, supervenience)? Israeli and Goldenfeld connect their work to the idea of emergence in complex systems. Here is how they describe emergence:
Emergent properties are those which arise spontaneously from the collective dynamics of a large assemblage of interacting parts. A basic question one asks in this context is how to derive and predict the emergent properties from the behavior of the individual parts. In other words, the central issue is how to extract large-scale, global properties from the underlying or microscopic degrees of freedom. (1)
Note that this is the weak form of emergence (link); Israeli and Goldenfeld explicitly postulate that the higher-level properties can be derived ("extracted") from the micro-level properties of the system. So the calculations associated with coarse-graining do not imply that there are system-level properties that are non-derivable from the micro-level of the system; or in other words, the success of coarse-graining methods does not support the idea that physical systems possess strongly emergent properties.

Does the success of coarse-graining for some systems have implications for supervenience? If the states of S can be derived from a coarse-grained description C of M (the underlying micro-level), does this imply that S does not supervene upon M? It does not. A coarse-grained description corresponds to multiple distinct micro-states, so there is a many-one relationship between M and C. But this is consistent with the fundamental requirement of supervenience: no difference at the higher level without some difference at the micro level. So supervenience is consistent with the facts of successful coarse-graining of complex systems.
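The many-one relationship is easy to exhibit with the majority-vote sketch above: flipping a single spin inside a block changes the micro-state M but leaves the coarse description C untouched, while any difference in C necessarily requires some difference in M:

```python
import numpy as np

m1 = np.ones((3, 3), dtype=int)  # micro-state: all nine spins up
m2 = m1.copy()
m2[0, 0] = -1                    # a distinct micro-state: one spin down

# Both map to the same coarse value: the block majority is still "up".
print(int(np.sign(m1.sum())), int(np.sign(m2.sum())))  # 1 1

# Supervenience requires only the converse: for two blocks to differ
# in their majority value, at least one underlying spin must differ.
```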

What coarse-graining is inconsistent with is the idea that we need exact information about M in order to explain or predict S. Instead, we can eliminate a lot of information about M by replacing M with C, and still do a perfectly satisfactory job of explaining and predicting S.

There is an intellectual wrinkle in the Israeli and Goldenfeld article that I haven't yet addressed here. This is their connection between complex physical systems and cellular automata. A cellular automaton is a simulation in which a simple algorithm governs the behavior of each cell. The Game of Life is an example of a cellular automaton (link). Here is what they say about the connection between physical systems and their simulations as a system of algorithms:
The problem of predicting emergent properties is most severe in systems which are modelled or described by undecidable mathematical algorithms[1, 2]. For such systems there exists no computationally efficient way of predicting their long time evolution. In order to know the system’s state after (e.g.) one million time steps one must evolve the system a million time steps or perform a computation of equivalent complexity. Wolfram has termed such systems computationally irreducible and suggested that their existence in nature is at the root of our apparent inability to model and understand complex systems [1, 3, 4, 5]. (1)
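To make "evolving the system" concrete, here is a minimal Python sketch of an elementary (nearest-neighbor, one-dimensional, two-state) cellular automaton of the kind the paper studies. I use Rule 110, one of the rules known to be computationally universal; the lattice width, the periodic boundary, and the single-cell seed are illustrative choices of mine:

```python
def step(cells, rule=110):
    """Advance a one-dimensional binary CA one time step. Each cell's
    new value is looked up in the 8-bit rule table indexed by the
    (left, center, right) neighborhood, with periodic boundaries."""
    n = len(cells)
    new = []
    for i in range(n):
        left, center, right = cells[i - 1], cells[i], cells[(i + 1) % n]
        neighborhood = (left << 2) | (center << 1) | right  # 0..7
        new.append((rule >> neighborhood) & 1)
    return new

# Computational irreducibility in practice: to know the state after a
# million steps we must, in general, perform all million updates --
# unless a coarse-grained description happens to exist.
cells = [0] * 40
cells[20] = 1  # a single live cell as the initial condition
for t in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```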
Suppose we are interested in simulating the physical process through which a pot of water on the stove undergoes sudden turbulence shortly before reaching 100 degrees C (the transition point between water and steam). There seem to be two large alternatives raised by Israeli and Goldenfeld: there may be a set of thermodynamic processes that permit derivation of the turbulence directly from the macro-level physical parameters present during the short interval of time; or it may be that the only way of deriving the turbulence phenomenon is to run a molecule-level simulation based on the fundamental laws (algorithms) that govern the molecules. If the latter is the case, the process is computationally irreducible in Wolfram's sense: predicting it requires a computation as complex as the process itself, and no practical shortcut exists.

Here is an extension of this approach in an article by Krzysztof Magiera and Witold Dzwinel, "Novel Algorithm for Coarse-Graining of Cellular Automata" (link). They describe "coarse-graining" in their abstract in these terms:
The coarse-graining is an approximation procedure widely used for simplification of mathematical and numerical models of multiscale systems. It reduces superfluous – microscopic – degrees of freedom. Israeli and Goldenfeld demonstrated in [1,2] that the coarse-graining can be employed for elementary cellular automata (CA), producing interesting interdependences between them. However, extending their investigation on more complex CA rules appeared to be impossible due to the high computational complexity of the coarse-graining algorithm. We demonstrate here that this complexity can be substantially decreased. It allows for scrutinizing much broader class of cellular automata in terms of their coarse graining. By using our algorithm we found out that the ratio of the numbers of elementary CAs having coarse grained representation to “degenerate” – irreducible – cellular automata, strongly increases with increasing the “grain” size of the approximation procedure. This rises principal questions about the formal limits in modeling of realistic multiscale systems.
Here Magiera and Dzwinel seem to be expressing the view that the approach to coarse-graining as a technique for simplifying the expected behavior of a complex system offered by Israeli and Goldenfeld will run up against formal limits in the case of more extensive and complex systems (perhaps including the pre-boil turbulence example mentioned above).

I am not sure whether these debates have relevance for the modeling of social phenomena. Recall my earlier discussion of the modeling of rebellion using agent-based modeling simulations (link, link, link). These models work from the unit level -- the level of the individuals who interact with each other. A coarse-graining approach would perhaps replace the individual-level description with a set of groups with homogeneous properties, and then attempt to model the likelihood of an outbreak of rebellion based on the coarse-grained level of description. Would this be feasible?
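For concreteness, here is a purely hypothetical sketch of such a coarse-grained description, loosely patterned on the grievance rule of Epstein-style civil violence models (grievance = hardship x (1 - legitimacy)). The group structure, parameter values, and the all-or-nothing threshold rule are my own illustrative assumptions, not a model from the literature:

```python
from dataclasses import dataclass

@dataclass
class Group:
    """A coarse-grained 'cell': many individuals summarized by
    average hardship and average perceived regime legitimacy."""
    size: int
    mean_hardship: float    # in [0, 1], averaged over members
    mean_legitimacy: float  # in [0, 1], averaged over members

def expected_rebels(groups, arrest_risk, threshold=0.1):
    """Predict the scale of rebellion using only group-level averages
    in place of individual-level attributes."""
    total = 0
    for g in groups:
        grievance = g.mean_hardship * (1.0 - g.mean_legitimacy)
        if grievance - arrest_risk > threshold:
            total += g.size  # the whole group tips at once
    return total

groups = [
    Group(size=500, mean_hardship=0.8, mean_legitimacy=0.3),
    Group(size=800, mean_hardship=0.4, mean_legitimacy=0.7),
]
print(expected_rebels(groups, arrest_risk=0.2))  # -> 500
```

Framed this way, the feasibility question is Israeli and Goldenfeld's question transposed to the social realm: does the group-averaged rule emulate the large-scale behavior of the individual-level model, or do the agents near the threshold -- precisely the heterogeneity that the averages erase -- drive the cascades through which rebellions break out?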