Climate, civilization and humanity

“The major problems in the world,” Gregory Bateson said, “are the result of the difference between how nature works and the way people think.”

More and more, I’m seeing people who are grappling with the climate challenge pursue this kind of Batesonian inquiry.

Roy Scranton, in “Learning How to Die in the Anthropocene,” published a few days ago on the NYT’s The Stone blog:

The biggest problem climate change poses isn’t how the Department of Defense should plan for resource wars, or how we should put up sea walls to protect Alphabet City, or when we should evacuate Hoboken. It won’t be addressed by buying a Prius, signing a treaty, or turning off the air-conditioning. The biggest problem we face is a philosophical one: understanding that this civilization is already dead. The sooner we confront this problem, and the sooner we realize there’s nothing we can do to save ourselves, the sooner we can get down to the hard work of adapting, with mortal humility, to our new reality.

Paul Hawken, in conversation with Joel Makower at last month’s GreenBiz event (video):

We have to ask ourselves — and I really mean this — is climate change happening to us or for us? Because if it’s happening to us, then we’re victims. And if we’re victims, that means that some other part of humanity is “other,” and we’re cut off.

And that is not what life teaches us. What life teaches us is that our destiny — and, literally, who we are — is absolutely inseparable from all living beings. And in there is the grace and the beauty and the insight with which we can create transformation.

It doesn’t mean you ignore the data, it doesn’t mean you whistle past the graveyard of the science. It just means, as Wendell Berry says, “Be joyful though you know the facts.”

Also worth a listen and a watch: Roy Scranton interviewed by Terry Gross, and Paul Hawken on video at the June 2013 Transformation in a Changing Climate conference, speaking on “The Reimagination of Carbon.”

Political entrenchment as path dependence

When Russell Brand says, “Voting is tacit complicity with the system” — in last week’s BBC interview, viewed 7 million times and counting — he describes political entrenchment as a type of path dependence, and advocates for tactics to delegitimize the system.

Here are a couple of other sources for seeing politics through a systems lens.

From “Increasing Returns, Path Dependence, and the Study of Politics,” by Paul Pierson (2000):

[When] path dependent processes are at work, political life is likely to be marked by four features:

  • Multiple equilibria: Under a set of initial conditions conducive to increasing returns, a number of outcomes — perhaps a wide range — are generally possible.
  • Contingency: Relatively small events, if they occur at the right moment, can have large and enduring consequences.
  • A critical role for timing and sequencing: In increasing returns processes, when an event occurs may be crucial. Because earlier parts of a sequence matter much more than later parts, an event that happens “too late” may have no effect, although it might have been of great consequence if the timing had been different.
  • Inertia: Once an increasing returns process is established, positive feedback may lead to a single equilibrium. This equilibrium will in turn be resistant to change.
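
Pierson’s “increasing returns” mechanism is often illustrated with a Pólya urn, in which early chance draws are amplified into durable outcomes. Here is a minimal simulation sketch, my own illustration rather than anything from Pierson’s paper:

```python
import random

def polya_urn(steps=10_000, seed=None):
    """Pólya urn: draw a ball at random, return it along with one more
    of the same color. Early chance draws get locked in by positive
    feedback -- a toy model of increasing returns."""
    rng = random.Random(seed)
    red, blue = 1, 1  # identical initial conditions on every run
    for _ in range(steps):
        if rng.random() < red / (red + blue):
            red += 1
        else:
            blue += 1
    return red / (red + blue)

# Same starting point, yet each run settles at its own stable red share:
print([round(polya_urn(seed=s), 2) for s in range(5)])
```

Every run starts from the same 50/50 urn, yet each locks into a different long-run share: multiple equilibria, contingency, and inertia in a dozen lines.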

From “Path Dependence in Historical Sociology,” by James Mahoney (2000), a typology of path-dependent explanations of institutional reproduction (adapted from the original):

[Image: typology of path dependence]

Amidst uncertainty, perceiving risk

[Image: Risky Business]

Numerous climate commentators — from economist Nicholas Stern to social psychologist Nick Pidgeon and communications researcher Matthew Nisbet — have emphasized a risk-based understanding of the climate challenge.

This year, a couple of intriguing new climate initiatives take risk-based approaches. One is the global C40 Cities Climate Risk Assessment Network, which aims to develop a C40 Risk Assessment Framework for use by municipalities around the world. Another is the Risky Business project, pictured above, which, carrying the imprimatur of co-chairs New York City Mayor Michael Bloomberg, former U.S. Treasury Secretary Hank Paulson, and Farallon Capital founder Tom Steyer, aims to assess U.S. financial risks and engage leaders in key sectors.

More power to them. Still, if we step back and ask — What is this thing called “risk”? — it turns out to be a tricky question.

A glance around my bookshelf reveals numerous perspectives: the landmark toxicology study Generations at Risk: Reproductive Health and the Environment, the cultural theory of risk perception described by Mary Douglas and Aaron Wildavsky, the modernity-as-risk theorized by Ulrich Beck, the analytical-deliberative approach to risk-informed public processes recommended by the U.S. National Research Council, odds-making on humanity’s future by Martin Rees, and so on.

In the terminology of Knightian uncertainty, a distinction drawn in 1921 by economist Frank Knight, risk is distinguished from uncertainty. Knight defined risk as “measurable uncertainty”: situations in which one can calculate the odds of the various potential outcomes.
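
To make the distinction concrete, here is a toy sketch of my own. With a die, the odds are known and the “measurable uncertainty” can be priced; with a genuine unknown, no distribution exists to sum over:

```python
# Knightian risk: the probability distribution is known, so the
# "measurable uncertainty" can be priced as an expected value.
payoffs = {1: -10, 2: -10, 3: 0, 4: 0, 5: 0, 6: 30}  # payoff per die face
expected = sum(p / 6 for p in payoffs.values())  # each face has odds 1/6
print(expected)  # ~1.67: calculable, and therefore insurable

# Knightian uncertainty: the distribution itself is unknown (what are
# the odds of a given climate tipping point?), so no such sum exists.
```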

The 1995 book Risk by John Adams took a phenomenological approach to the Knightian distinction. Call it: amidst uncertainty, perceiving risk. Uncertainty is what’s inescapably out there. Risk is how we account for it. Adams:

The development of our expertise in coping with uncertainty begins in infancy. The trial and error processes by which we first learn to crawl, and then walk and talk, involve decision-making in the face of uncertainty. In our development to maturity we progressively refine our risk-taking skills; we learn how to handle sharp things and hot things, how to ride a bicycle and cross the street, how to communicate our needs and wants, how to read the moods of others, how to stay out of trouble. How to stay out of trouble? This is one skill we never master completely. It appears to be a skill that we do not want to master completely.

He calls his model “the risk thermostat”:

[Image: risk thermostat]

The model postulates that:

  • Everyone has a propensity to take risks;
  • This propensity varies from one individual to another;
  • This propensity is influenced by the potential rewards of risk-taking;
  • Perceptions of risk are influenced by experience of accident losses — one’s own and others’;
  • Individual risk-taking decisions represent a balancing act in which perceptions of risk are weighed against propensity to take risk;
  • Accident losses are, by definition, a consequence of taking risks; the more risks an individual takes, the greater, on average, will be both the rewards and losses he or she incurs.
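
Adams presents the thermostat as a diagram rather than an equation, but its balancing-feedback structure is easy to caricature in code. A minimal sketch, with the variable names and numbers entirely my own invention:

```python
def risk_thermostat(propensity=0.5, steps=20, gain=0.3):
    """Caricature of Adams' risk thermostat: a balancing feedback loop in
    which behavior adjusts until perceived risk matches one's propensity."""
    perceived_risk = 0.0
    behavior = 0.0  # how far risk-taking behavior is dialed up
    for _ in range(steps):
        # below the set-point, take more risk; above it, take less
        behavior += gain * (propensity - perceived_risk)
        perceived_risk = behavior  # experience of losses feeds perception
    return behavior

print(round(risk_thermostat(propensity=0.5), 3))  # settles near 0.5
print(round(risk_thermostat(propensity=0.8), 3))  # higher set-point, more risk
```

The dynamic captures Adams’ central claim about risk compensation: interventions that lower perceived risk tend to get consumed as additional risk-taking, unless they also change the set-point.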


Carbon budgets — a new climate target?

Say that all of humanity – past, present and foreseeable future – has a dollar to spend on carbon-fueled economic growth. Those of us who have reaped the industrial world’s benefits doled out four bits or so from 1750 to 2008, and some of those investments paid off handsomely. Standards of health, education and material living rose. And the global digital network emerged.

Fair to say, these expenditures largely preceded any broad realization that the carbon go-go days might be a passing phase. But that is no longer the case. The total bank – set, not by resource limits, but by the planet’s capacity for waste absorption – has been counted. One need not embrace central planning to wonder how a pragmatic and just CFO might eye the remaining balance.

This is an excerpt from my June 2009 article, “The Story of the Trillion Tons of Carbon” (i.e., metric tons or, alternatively, tonnes). At the time, two papers published in Nature offered the first detailed examination of the relationship between cumulative carbon emissions and global temperature increases. The gist is that if human-engendered temperature increases are to be kept under the internationally agreed-upon target of 2°C (3.6°F), then cumulative carbon emissions must be no more than a trillion metric tons. Actually, less than a trillion, if one accounts for other greenhouse gases. And half the budget has already been spent.
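
A note on units, since budgets are quoted both in tonnes of carbon and in tonnes of CO2: burning a tonne of carbon yields 44/12, roughly 3.67, tonnes of CO2. A quick arithmetic check of the nominal numbers (which, again, ignore the other greenhouse gases that shrink the usable budget):

```python
C_TO_CO2 = 44.0 / 12.0  # tonnes of CO2 per tonne of carbon burned

budget_GtC = 1000  # the trillion-tonne budget, in gigatonnes of carbon
spent_GtC = 500    # roughly half already emitted

print(round(budget_GtC * C_TO_CO2))                # ~3667 GtCO2 in total
print(round((budget_GtC - spent_GtC) * C_TO_CO2))  # ~1833 GtCO2 nominally left
```

CO2-denominated budgets quoted elsewhere, like the New Scientist figure below, come out smaller once probability levels and other gases are factored in.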

In the summer of 2012, Bill McKibben called this realization global warming’s “terrifying new math.”

This year, carbon calculations are in the spotlight again, having made it into the Intergovernmental Panel on Climate Change (IPCC) final draft report (on pages 63-4 of the 2,216-page Complete Underlying Scientific/Technical Assessment) and Summary for Policymakers (pdf).

Last week’s New Scientist (“IPCC digested: Just leave the fossil fuels underground,” by Michael Le Page):

Merely reducing emissions is not enough. It will slow climate change, but in the end how much the planet warms depends on the overall amount of CO2 we pump out. To have any chance of limiting the global temperature rise to 2 °C, we have to limit future emissions to about 500 gigatonnes of CO2. Burning known fossil fuel reserves would release nearly 3000 gigatonnes, and energy companies are currently spending $600 billion trying to find more.

The implications of the numbers are staggering. The value of these companies depends on their reserves. If at some point in the future the world gets serious about tackling climate change, these reserves will become worthless. About $4 trillion worth of shares would be wiped out, according to the non-profit Carbon Tracker Initiative. For most of us, that’s our pension funds at risk.

The NYT has the first piece I’ve seen on the IPCC’s backroom political negotiations (“How to Slice a Global Carbon Pie?,” by Justin Gillis):

The scientists had wanted to specify a carbon budget that gave the best chance of keeping temperatures at the 3.6 degree target or below. But many countries felt the question was related to risk — and that the issue of how much risk to take was political, not scientific. The American delegation suggested that the scientists lay out a range of probabilities for staying below the 3.6-degree target, not a single budget, and that is what they finally did.

The original budget is in there. But the adopted language gives countries the possibility of a much larger carbon pie, if they are willing to tolerate a greater risk of exceeding the temperature target.

It’s worth asking: Since there are already targets of 2°C and of 350 (parts per million CO2 in the atmosphere) — the latter serving as the centerpiece of a global campaign — what’s the meaning of another target?

One difference is that the trillion-tonne target raises inescapable questions of climate justice — difficult questions. After all, who has benefited from the first half-trillion tonnes? What individuals, what nations, and what generations? And who will benefit from the next half? These questions are not as transparent in the other two targets.

Back in 2009, my trillion tonne story highlighted Henry Shue’s writings on “distributive justice.” And as all eyes turned to the upcoming Copenhagen Climate Conference, I edited and published a series of articles on climate justice by Peter Singer, Dale Jamieson, Steve Vanderheiden, Kathleen Dean Moore, Paul Baer, and Stephen Gardiner.

Now that the trillion-tonne budget has been recognized by the IPCC, these voices are more critical than ever.

At the same time, it’s also worth thinking carefully about this word “target.” Nearly fifty years ago, Martin Luther King assured us of a long bend toward justice. I might be mistaken, but I don’t recall Dr. King ever describing a target-based management approach for getting there.

See also: critiques of target-based management, in other contexts, by Buzz Holling and Ray Ison.

Systemic governance and knowledge cultures


“When done well,” emphasizes Thomas Dietz in this talk from last year’s U.S. National Academy of Sciences (NAS) colloquium on The Science of Science Communication, “public participation improves the quality and legitimacy of decisions and builds the capacity of all involved to engage in the policy process.”

I’ve written about last year’s colloquium talks by Daniel Kahneman and Arthur Lupia. A second colloquium in this NAS series begins Monday morning and will be webcast here.

Dietz co-edited the 2008 National Research Council (NRC) publication, Public Participation in Environmental Assessment and Decision Making, which surveyed and synthesized the literature on design and evaluation of public processes applicable to environmental issues like coastal and marine planning, smart growth planning, hazardous waste siting, and so on. In a nutshell, the NRC framework describes three public process goals: legitimacy, capacity, and decision quality.

Dietz highlights the challenge of developing “institutional forms — the organizations and the norms or rules — that we would use to make sure that these insights are used routinely.” I’ve similarly begun to think that the NRC findings might be included in a broader model for “systemic governance” — a phrase used by Ray Ison and Janet McIntyre-Mills.

Meanwhile, although Dietz insists that the NRC findings cannot be summarized in a single diagram, I’ve been using this one below.

[Image: goals of systemic governance]

One of Dietz’s slides that caught my attention was this one below on forms of expertise. Recognizing multiple forms of expertise or rationality or “ways of knowing” is critical to navigating the fact-value entanglements that are at the heart of governance issues.

[Image: Dietz’s forms of expertise]

For comparison, one model that comes to mind is Valerie Brown’s excellent visualization of nested knowledge cultures, adapted below from the book, Tackling Wicked Problems: Through the Transdisciplinary Imagination, which I reviewed in the journal Ecopsychology.

[Image: Brown’s knowledge cultures]


Edgar Morin: Parts and wholes

[Image: Edgar Morin, part and whole]

“The whole exists for and by means of the parts, and the parts for and by means of the whole,” write Giuseppe Longo, Maël Montévil, and Stuart Kauffman in 2012’s “No entailing laws, but enablement in the evolution of the biosphere,” which I wrote about here.

I was reminded of this piece from the enablement paper when reading Edgar Morin’s On Complexity, a 2008 book comprising essays from the 1970s through the ’90s, with a fascinating foreword by Alfonso Montuori, available online here.

Morin:

Systems theory reacted to reductionism with its idea of the whole, but believing it had surpassed reductionism, its “holism” merely brought about a reduction to the whole, from which arose not only blindness to the parts as parts but its myopia with respect to organization as organization, and its ignorance of the complexity at the heart of any global unity.

In either case, reductionistic or holist explanation seeks to simplify the problem of complex unity. The one reduces explanation of the whole to the properties of the parts conceived in isolation. The other reduces the properties of the parts to the properties of the whole, also conceived in isolation. These two mutually repelling explanations each arose out of the same paradigm.

The conception that is revealed here places us at once beyond reductionism and holism, and summons a principle of intelligibility that integrates the portion of truth included in each; there should be neither annihilation of the whole by the parts nor of the parts by the whole. It is essential, therefore, to clarify the relations between parts and whole, where each term refers back to the other: “I consider it as impossible,” said Pascal, “to know the parts without knowing the whole, as to know the whole without precise knowledge of the parts.” In the twentieth century, reductionist and holist ideas still do not measure up to the level of such a formulation.

The truth of the matter is that, even more than mutually referring to one another, the interrelation that links explanation of the parts to that of the whole, and vice versa, is an invitation to recursive description and explanation; that is, description (explanation) of the parts depends upon that of the whole, which depends upon that of the parts, and it is in the circuit that the description (explanation) constitutes itself.

This signifies that neither one of the two terms is reducible to the other. Thus, if the parts must be conceived in function to the whole, they must also be conceived in isolation: a part has its proper irreducibility in relation to the system. It is necessary, moreover, to know the qualities or properties of the parts that are inhibited, virtualized, and, therefore, invisible at the heart of the system, not only to correctly understand the parts, but also to better understand the constraints, inhibitions, and transformations effected by the organization of the whole.

[Image: Fish Stock Sustainability Index]

The U.S. National Oceanic and Atmospheric Administration’s Fish Stock Sustainability Index (FSSI), which tracks 227 fish stocks that account for 90 percent of the total U.S. catch, shows continuous improvement in this graph from the September 2013 National Research Council (NRC) report, “Evaluating the Effectiveness of Fish Stock Rebuilding Plans in the United States.”

The most recent FSSI quarterly update shows the index up again at 613.5 and describes the scoring methodology (pdf).
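
As I read the quarterly update, the FSSI is a simple sum: each of the 227 tracked stocks earns points against a handful of status criteria, and the index totals the points across all stocks. Here is a sketch of that style of scoring; the criterion names and weights are my own placeholders, not NOAA’s exact methodology (see the pdf for that):

```python
# Hypothetical FSSI-style scoring. Criterion names and point values are
# illustrative placeholders; NOAA's actual methodology is in the pdf.
CRITERIA = {
    "overfishing_status_known": 0.5,
    "overfished_status_known": 0.5,
    "overfishing_not_occurring": 1.0,
    "not_overfished": 1.0,
    "rebuilt_or_near_target_biomass": 1.0,  # up to 4.0 points per stock
}

def stock_score(flags):
    """Sum the points for every criterion a stock satisfies."""
    return sum(pts for name, pts in CRITERIA.items() if flags.get(name))

def fssi(stocks):
    """The index is just the total across all tracked stocks."""
    return sum(stock_score(flags) for flags in stocks)

example = [{"overfishing_status_known": True,
            "overfished_status_known": True,
            "overfishing_not_occurring": True,
            "not_overfished": True}]
print(fssi(example))  # 3.0 for this single hypothetical stock
```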

The news is celebrated in the NRC press release: “Many, But Not All, Depleted Fish Populations Show Signs of Recovery Under Rebuilding Plans That Reduce Fish Harvest.”

The NRC report also summarizes recent research. “Scientific understanding of [marine] ecosystem dynamics is advancing rapidly.”

Pieces that caught my eye:

  • “There is growing evidence of nonlinear dynamics in fish populations (Dixon et al., 1999; Glaser et al., 2011), as well as a growing consensus that ecosystem and multi-species effects are important.”
  • “Regime shifts in marine fish stocks appear to be quite common (Vert Pre et al., 2013).”
  • “While it is common to attribute population declines exclusively to fishing or exclusively to the environment, in most cases, the observed stock dynamics are probably a combination of fishing and the environment. Furthermore, fishing and environmental effects may interact in ways that are non-additive (Hsieh et al., 2008; Deyle et al., 2013).”
  • “Truncating the age structure may reduce the ability of populations to cope with sequences of poor conditions. A possible consequence of fishing-induced change in age structure may be an increased coupling between recruitment and environmental conditions.”
  • “When fish stocks are depleted, their prey species are consumed by other predators that may increase in abundance, thereby limiting availability of the common prey. It may then be difficult to simultaneously rebuild all overfished species to their single-species BMSY levels without reductions in other consumer species.”
  • “[I]n most cases, scientific understanding of ecosystem dynamics is insufficient to confidently predict the future state or to achieve desired tradeoffs among species (even if there were agreement on which tradeoffs are desirable). These cases depend on having pragmatic, operational management strategies that acknowledge this kind of uncertainty.”

Koestler: creativity in humour, science, art

The “logic of laughter,” according to Arthur Koestler in 1964’s The Act of Creation, is in “the clash of two mutually incompatible codes, or associative contexts, which explodes the [narrative] tension.”

Here is a joke he attributes to John von Neumann, who like Koestler was Jewish and from Budapest, Hungary. Koestler was born on this date in 1905; von Neumann was two years the elder.

Two women meet while shopping at the supermarket in the Bronx. One looks cheerful, the other depressed. The cheerful one inquires:

‘What’s eating you?’

‘Nothing’s eating me.’

‘Death in the family?’

‘No, God forbid!’

‘Worried about money?’

‘No … nothing like that.’

‘Trouble with the kids?’

‘Well, if you must know, it’s my little Jimmy.’

‘What’s wrong with him, then?’

‘Nothing is wrong. His teacher said he must see a psychiatrist.’

Pause. ‘Well, well, what’s wrong with seeing a psychiatrist?’

‘Nothing is wrong. The psychiatrist said he’s got an Oedipus complex.’

Pause. ‘Well, well, Oedipus or Shmoedipus, I wouldn’t worry so long as he’s a good boy and loves his mamma.’

Here, the “clash” is between the two “codes” of (perceived) scientific knowing and familial identity. There’s a notable parallel in theories of institutional logics, which describe how individuals and organizations create change through the transposition of symbols and practices among institutional orders.

For Koestler, these “bisociative” clashes were the source of creativity — whether in humor, science, or art.

[Image: Koestler’s triptych of Humour, Discovery, and Art]

The three panels of the rounded triptych shown on the frontispiece [and adapted above] indicate three domains of creativity which shade into each other without sharp boundaries: Humour, Discovery, and Art. …

Each horizontal line across the triptych stands for a pattern of creative activity which is represented on all three panels. … The logical pattern of the creative process is the same in all three cases; it consists in the discovery of hidden similarities. …

I shall try to show that all patterns of creative activity are tri-valent: they can enter the service of humour, discovery, or art; and also, that as we travel across the triptych from left to right, the emotional climate changes by gradual transitions from aggressive to neutral to sympathetic and identificatory — or, to put it another way, from an absurd through an abstract to a tragic or lyric view of existence.

[Image: the nature of science]

“[S]cience is fundamentally a social enterprise,” write the authors of the U.S. National Research Council 2011 Framework for K-12 Science Education.

As I described last week, the framework is the basis for the Next Generation Science Standards (NGSS), and both are bolstered with a good measure of systems thinking — including this type of reflection on the role of science itself. In the lingo adopted by the framework and standards, such science-and-society reflections are called understandings about “the nature of science.”

The NGSS nature of science matrix includes eight understandings or themes, along with school-level learning objectives for each of the eight (in Appendix H, pdf). To illustrate, I created the table at top with two of the themes and their associated high school learning objectives. These understandings constitute, in effect, an attempt to delineate the boundaries of scientific ways of doing and knowing.

Whether these learning objectives well and sufficiently characterize the nature of science is, of course, a matter of opinion. Some critics have emerged, but I’m quite impressed with the inclusion of these understandings.

My questions are more about the lack of a clear relationship between the nature of science as a topic area and the three primary topic areas, or dimensions: practices, disciplinary core ideas, and crosscutting concepts. These three are represented in the NGSS logo and icon, which present a triangular weaving more or less like the diagram on the left below, only more colorful and design-y. In this icon, the nature of science is not depicted.

I’m not the only one left wondering what happened. From the public feedback, as described in the framework’s Appendix A: “Many of those who provided comments thought that the ‘nature of science’ needed to be made an explicit topic or idea.”

Suppose that, following this recommendation, the nature of science were considered more explicitly: how might it be positioned in relation to the three dimensions? Below on the right is one suggestion for re-conceiving the icon and clarifying the nature of science as the boundary.

Here’s my rationale, in sum: There is something called science. Science includes ways of both doing and knowing. Scientific ways of doing and knowing, versus other ways, are delineated by something called here the nature of science. Different people will inevitably have different opinions on how to characterize this boundary. But not clarifying that the nature of science in fact constitutes the boundary seems like a missed opportunity.

[Image: the nature of science in the Next Generation Science Standards]

A call for systems thinking in K-12 schools

The U.S. K-12 Next Generation Science Standards were published in April this year, with the goal of “provid[ing] all students an internationally benchmarked science education.” As described in the FAQ, the standards were developed separately from the more frequently discussed Common Core State Standards, and they are based on the National Research Council’s (NRC) 2011 Framework for K-12 Science Education.

This NRC framework draws liberally from systems thinking — as evident in the “crosscutting concepts,” one of the framework’s three dimensions:

In this chapter, we describe concepts that bridge disciplinary boundaries, having explanatory value throughout much of science and engineering. … These concepts help provide students with an organizational framework for connecting knowledge from the various disciplines into a coherent and scientifically based view of the world. …

The committee identified seven crosscutting scientific and engineering concepts:

  1. Patterns — Observed patterns of forms and events guide organization and classification, and they prompt questions about relationships and the factors that influence them.
  2. Cause and effect: Mechanism and explanation — Events have causes, sometimes simple, sometimes multifaceted. A major activity of science is investigating and explaining causal relationships and the mechanisms by which they are mediated. Such mechanisms can then be tested across given contexts and used to predict and explain events in new contexts.
  3. Scale, proportion, and quantity — In considering phenomena, it is critical to recognize what is relevant at different measures of size, time, and energy and to recognize how changes in scale, proportion, or quantity affect a system’s structure or performance.
  4. Systems and system models — Defining the system under study—specifying its boundaries and making explicit a model of that system—provides tools for understanding and testing ideas that are applicable throughout science and engineering.
  5. Energy and matter: Flows, cycles, and conservation — Tracking fluxes of energy and matter into, out of, and within systems helps one understand the systems’ possibilities and limitations.
  6. Structure and function — The way in which an object or living thing is shaped and its substructure determine many of its properties and functions.
  7. Stability and change — For natural and built systems alike, conditions of stability and determinants of rates of change or evolution of a system are critical elements of study.
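
To make concepts 4, 5, and 7 concrete, here is the kind of boundary-explicit stock-and-flow model a classroom exercise might build, a bathtub with a faucet and a drain. The sketch is my own, not an example from the framework:

```python
def bathtub(inflow=2.0, drain_rate=0.1, level=0.0, steps=100):
    """Minimal stock-and-flow system model (crosscutting concepts 4, 5, 7).
    Boundary: the tub. Stock: water level. Flows: faucet in, drain out."""
    for _ in range(steps):
        outflow = drain_rate * level  # the drain flow scales with the stock
        level += inflow - outflow     # conservation: only flows change the stock
    return level

# Stability and change: the level approaches inflow / drain_rate = 20.0,
# a stable equilibrium where the two flows balance.
print(round(bathtub(), 2))
```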

Exciting stuff — as far as it goes. And easier to outline than to implement. I’ll be writing more about the framework and standards in days to come.