Autonomous vehicles: policies and ethics

by Howard Silverman on 14 Apr 2014

Robin Chase on policies to ensure that autonomous vehicles serve the public good (“Will a World of Driverless Cars Be Heaven or Hell?”):

So policy makers, taxpayers, road warriors, city lovers: Which path forward will we choose? Our future hinges on two things. First, will the cost for autonomous vehicles be high enough that each vehicle will need to be used well? If so, the economic imperative to share the cars will set us down the efficient-use path. Second, will we add a per-mile fee for human-free passenger vehicles? We will need that to temper our insatiable desire to send machines out to do our bidding.

Tom Chatfield on the laws of robotics-type dilemmas that arise (“Automated ethics”):

If my self-driving car is prepared to sacrifice my life in order to save multiple others, this principle should be made clear in advance together with its exact parameters. …

As agency passes out of the hands of individual human beings, in the name of various efficiencies, the losses outside these boxes don’t simply evaporate into non-existence. If our destiny is a new kind of existential insulation – a world in which machine gatekeepers render certain harms impossible and certain goods automatic – this won’t be because we will have triumphed over history and time, but because we will have delegated engagement to something beyond ourselves.

Churchman, Bayes, and the systems approach

by Howard Silverman on 10 Apr 2014

Per my post on climate and the Bayesian brain, as well as the ever-present challenge of effective inquiry regarding the elephants among us, there’s this prescient piece from C. West Churchman, c.1979:

I was amazed that Kuhn’s book, The Structure of Scientific Revolutions, or the advent of Bayesian statistics, could have created such a stir among the intellectuals! In the systems approach, all methods of inquiry, all designs of inquiring systems are options of the inquirer; there is no a priori set of standards that dictate the preferable ones.

No a priori standards, indeed. To guide individual inquiry and individual action into something more collective, we have social institutions. Not a priori either, they’re ours for the shaping. Once shaped, though, path dependence sets in.

Climate change and the Bayesian brain

by Howard Silverman on 8 Apr 2014

What do you do when your world has changed? Hold on to the past? Adapt to the new? According to Bayesian cognitive theory, we’re always doing a bit of both: holding onto prior beliefs, while adapting to incoming signals.

My interest here is in the Bayesian model of cognition — the “Bayesian brain” — and in what it might mean for understanding climate change. Inextricably coupled with the theory of the Bayesian brain are the statistical methods of Bayesian inference. While the statistical theorems date back centuries, the cognitive science is recent. The methods are a hot topic, the model not so much, yet.

Champion of Bayesian statistics Nate Silver discussed climate in a chapter of his 2012 book, The Signal and the Noise: Why So Many Predictions Fail — but Some Don’t, to which climatologist Michael Mann responded with a friendly-yet-scathing critique.

The exchange left me a little puzzled. Sure, there were oddities and errors in Silver’s chapter, the ones that Mann detailed and more. Still, I wondered about Mann’s seemingly categorical dismissal of Silver’s Bayesian approach (see update below), and I noted that Dan Kahan, known for his research on cultural cognition, had a similar reaction. At the same time, I also noticed that neither Silver nor Kahan referenced the recent Bayesian cognitive research.

The relaunch of Silver’s FiveThirtyEight, which I wrote about last time, has me looking at this topic again. In this post, I’ll test out some pattern recognition and seek your feedback.

Bayesian Brain
I’ll begin with expert introductions to the cognitive field by Chris Frith and Stanislas Dehaene; later on, I’ll also reference a couple of talks by Karl Friston.

“[O]ur brain is a Bayesian machine,” wrote Frith in 2007’s Making Up the Mind: How the Brain Creates our Mental World — a lucid account of the research narrative.

[O]ur brain is a Bayesian machine that discovers what is in the world by making predictions and searching for the causes of sensations. … Our brains build models of the world and continuously modify these models on the basis of the signals that reach our senses.

Dehaene gave a concise summary of the cognitive research in response to the 2008 Edge question (“The Brain’s Schrödinger’s Equation”):

For many theoretical neuroscientists, it all started twenty five years ago, when John Hopfield made us realize that a network of neurons could operate as an attractor network, driven to optimize an overall energy function which could be designed to accomplish object recognition or memory completion. Then came Geoff Hinton’s Boltzmann machine — again, the brain was seen as an optimizing machine that could solve complex probabilistic inferences. Yet both proposals were frameworks rather than laws. Each individual network realization still required the set-up of thousands of ad-hoc connection weights.

Karl Friston, from UCL in London, has presented two extraordinarily ambitious and demanding papers in which he presents “a theory of cortical responses”. Friston’s theory rests on a single, amazingly compact premise: the brain optimizes a free energy function. This function measures how closely the brain’s internal representation of the world approximates the true state of the real world. From this simple postulate, Friston spins off an enormous variety of predictions: the multiple layers of cortex, the hierarchical organization of cortical areas, their reciprocal connection with distinct feedforward and feedback properties, the existence of adaptation and repetition suppression… even the type of learning rule — Hebb’s rule, or the more sophisticated spike-timing dependent plasticity — can be deduced, no longer postulated, from this single overarching law.

The theory fits easily within what has become a major area of research — the Bayesian Brain, or the extent to which brains perform optimal inferences and take optimal decisions based on the rules of probabilistic logic. Alex Pouget, for instance, recently showed how neurons might encode probability distributions of parameters of the outside world, a mechanism that could be usefully harnessed by Fristonian optimization. And the physiologist Mike Shadlen has discovered that some neurons closely approximate the log-likelihood ratio in favor of a motor decision, a key element of Bayesian decision making. My colleagues and I have shown that the resulting random-walk decision process nicely accounts for the duration of a central decision stage, present in all human cognitive tasks, which might correspond to the slow, serial phase in which we consciously commit to a single decision. During non-conscious processing, my proposal is that we also perform Bayesian accumulation of evidence, but without attaining the final commitment stage. Thus, Bayesian theory is bringing us increasingly closer to the holy grail of neuroscience — a theory of consciousness.

Dehaene, in his 2014 book: “The hypothesis that the brain acts as a Bayesian statistician is one of the hottest and most debated areas of contemporary neuroscience.”

Climate Change
Turning to the topic of climate, it’s illustrative to compare two examples of Bayesian inferential reasoning: one from Nate Silver and the other from physicist Sean Carroll.

Silver, in the climate chapter of Signal and Noise:

Suppose that in 2001, you had started out with a strong prior belief in the hypothesis that industrial carbon emissions would continue to cause a temperature rise. … But then you observe some new evidence: over the next decade, from 2001 through 2011, global temperatures do not rise. … Under Bayes’s theorem, you should revise your estimate of the probability of the global warming hypothesis downward; the question is by how much.
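Silver’s question of “by how much” is exactly what Bayes’s theorem answers. Here is a minimal sketch of his scenario; the likelihood numbers are my illustrative assumptions, not Silver’s:

```python
# Hypothetical Bayesian update for Silver's scenario.
# All probabilities are made up for illustration.

prior = 0.95  # strong prior belief in the warming hypothesis

# How likely is a flat decade of surface temperatures...
p_flat_given_warming = 0.15     # ...if warming is real (e.g. ocean heat uptake)?
p_flat_given_no_warming = 0.50  # ...if the hypothesis is false?

# Bayes's theorem: P(H | E) = P(E | H) P(H) / P(E)
evidence = (p_flat_given_warming * prior
            + p_flat_given_no_warming * (1 - prior))
posterior = p_flat_given_warming * prior / evidence

print(round(posterior, 3))  # revised downward, but only modestly
```

With these numbers the belief drops from 0.95 to roughly 0.85: downward, as Silver says, but how far depends entirely on the likelihoods one assigns to a flat decade under each hypothesis.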

Carroll, in the Q&A to this 2013 talk (~1:01:50):

Q: Can you apply what you know about physics to what we might know about the future and climate?

Sean Carroll: If I were responsible, I would just say ‘no.’

Climate is very, very complicated. The question we should think about in terms of climate change is not a physics question. It’s a Bayesian probability question.

That is to say, I am not an expert on climate. My knowledge of how electrons and gravity work is of absolutely no use in understanding how the climate works. I know how the greenhouse effect works. But I also appreciate that there’s a lot more to the climate than just the greenhouse effect, so I wouldn’t trust my own judgement.

The reason I say Bayesian probability is because the Reverend Thomas Bayes gave us a way of assigning the probability that a certain theory is correct, in the presence of certain data, certain pieces of information.

Now I might a priori say: ‘I don’t know what’s happening with the climate.’ Then someone shows me the graph of temperature that goes like this [up steeply], and someone shows me the other graph of the carbon dioxide that goes like this [also up steeply], and my meager physicist’s mind would say: ‘You know I bet putting carbon dioxide into the atmosphere is warming the Earth.’

Then, I look at all the climate scientists in the world, and they tell me that putting carbon dioxide into the atmosphere is warming the Earth. That’s another piece of data.

And it’s overwhelmingly clear at this point in time, from all the information that we have — not because I’m a scientist, but because I’m a human being — that we human beings are making the Earth much, much warmer. And it’s potentially very disastrous, and we should stop doing it.

Needless to say, I agree with Carroll on this last point.

Pattern Recognition
Nate Silver and Sean Carroll started with similar questions, basically: Are humans causing climate change? But they selected different signals and came to different conclusions. Silver followed year-by-year global surface temperatures, while Carroll referenced climate science research findings and the scientific consensus. The problem with Silver’s choice of signals, Michael Mann wrote, is that yearly temperatures are not the same as climate. Scientific explanations for recent global surface temperatures point elsewhere, notably to the ocean’s absorption of heat. Plus there is the issue of what dates one uses for such an analysis.

At the same time, Silver’s and Carroll’s approaches have much in common. Both advocate for Bayesian reasoning. Silver sees it as “essential to scientific progress.” Likewise, Karl Friston describes his own Bayesian reasoning as “hypothesis testing in a Popperian sense” (~7:45). Silver and Carroll each seek to help their audiences get better at making sense of the world. I aspire to no less.

Still, if the Bayesian model of brain function is accurate, advocating for Bayesian reasoning becomes somewhat paradoxical. One might advocate for statistical quantification (Silver) or for a particular set of signals (Silver and Carroll). But according to the cognitive theory, Bayesian reasoning itself is simply what we do — inescapably what we all do.

Moreover, Bayesian reasoning hardly guarantees scientific conclusions. As other cognitive research shows, people have plenty of biases. One might hold Bayesian prior beliefs that are like one of the worldviews in Dan Kahan’s cultural theory-type research. Or one’s Bayesian signal selection might depend on the types of credibility calculations described in Arthur Lupia’s research.
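That point can be made concrete with a toy model: two observers apply the same Bayes rule to the same signal but differ in their priors and in how much credibility they grant the source. All the numbers, and the credibility-discounting device itself, are my illustrative assumptions, not anything drawn from Kahan’s or Lupia’s work:

```python
def update(prior, p_e_given_h, p_e_given_not_h, credibility=1.0):
    """One Bayesian update, with the likelihood ratio discounted
    toward 1 for sources the observer deems less credible.
    (Credibility discounting is an illustrative device here,
    not a standard formula.)"""
    lr = p_e_given_h / p_e_given_not_h
    lr_eff = lr ** credibility  # credibility=0 means the signal is ignored
    odds = prior / (1 - prior) * lr_eff
    return odds / (1 + odds)

# One signal, e.g. "the scientific consensus says warming is real":
signal = (0.9, 0.1)

# Same signal, different worldviews:
trusting = update(0.5, *signal, credibility=1.0)
skeptical = update(0.2, *signal, credibility=0.2)

print(round(trusting, 2), round(skeptical, 2))  # trusting ≈ 0.9, skeptical ≈ 0.28
```

The trusting observer ends near 0.9; the skeptical one barely moves from 0.2. Same arithmetic, divergent conclusions.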

In the end, Michael Mann emphasized, the greenhouse effect and other laws of physics are “true whether or not you choose to believe them.” Meanwhile, how individuals come to believe a truer understanding of the physical world — or fail to do so — is exactly what a Bayesian model of cognition, or something like it, might be used to explain.

To my mind, this bridge from subjective to objective is the power of the Bayesian approach. Metaphorically, it’s a feature not a bug. “What we’re doing, in quite a fundamental way,” declared Karl Friston (~30:30), “is coupling the immaterial to the material.”

In recent years, there has been a lot of discussion about climate-related cognition and communication: ways that people understand climate and ways of engaging others in conversations about climate. One notable venue has been the Sackler Colloquia on the Science of Science Communication (I and II). A critical piece that’s been missing from these discussions, if I’m not mistaken, is a cognitive model of how each of us updates his or her understandings over time. That’s what researchers on the Bayesian brain seek to provide. And that would be a very valuable thing.


UPDATE: response from Michael Mann


Of foxes and hedgehogs: experts on expertise

by Howard Silverman on 26 Mar 2014

Last week’s relaunch of Nate Silver’s FiveThirtyEight has been a mess (see: ThinkProgress, Lawyers, Guns & Money, Krugman, Noahpinion).

Doubling down on a theme from his book The Signal and the Noise, Silver branded himself with the sign of the fox — the fox that knows many things, and spurns the hedgehog’s one big thing.

I’d complexify the fox-hedgehog metaphor by taking each animal to represent an endpoint for a spectrum of views on the role of expertise in society. Whether you accept Silver’s foxy pretensions, or perhaps see him more hedgehoggy than he’d like to admit, the result has been a round of critical examination on the nature of expertise. And that’s a good thing.

“For complex systems, we want foxes rather than hedgehogs,” declared Harold Linstone in 1994’s The Challenge of the 21st Century. In the intervening years, many have expressed similar sentiments. But the challenges of engaging in complex systems aren’t so clear cut. Expertise still matters, as many wrote in response to Silver. Meanwhile, on public policy questions — inevitably contested — expertise provides no easy answers. This is the lesson of the 1973 “wicked problems” paper by Rittel and Webber (pdf).

Much discussion has naturally focused on expertise in journalism, on journalists with varying degrees of foxiness or hedgehogginess, and particularly on the view-from-nowhere journalism, with its pretense of objectivity, that Silver seems to espouse.

My own thoughts turn to climate and, broadly, to scientific expertise.

There are of course people who’ve examined and written about this topic, about the role of science in society. But for all the recent punditry on the value of expertise, I don’t think I’ve seen any cited.

Needless to say, the topic of science-in-society gets complex pretty quickly. Easier to just point to ThinkProgress, where climate scientists rebutted the FiveThirtyEight foray into climate science. But the science represents only the tame side of this story. The wicked side is where science meets interpretation: In the face of such challenges, what might constitute effective action? (And might such challenges help us better understand the nature of humanity itself?)

Here are a few voices on science-in-society, experts on expertise.

Naomi Oreskes, from a talk at University of Rhode Island (and my post of August 2010):

We think that science provides certainty. And if we lack certainty, we think that something is wrong with the science. This view that science could provide certainty is an old one, but it was most clearly articulated by the late 19th-century positivists, who held out a dream of a positive knowledge, in the familiar sense of absolutely, positively true. But if we’ve learned anything since the 19th century, it’s that the positivists’ dream was exactly that, a dream.

History shows us clearly that science does not provide certainty. It does not provide truth. What it provides us with is the consensus of experts, based on the organized accumulation and scrutiny of evidence.

Brian Wynne, from “Strange Weather, Again: Climate Science as Political Art” (and my post of July 2010):

After the previous political impasse for a decade or more over the very acceptance of the established and increasingly urgent IPCC scientific knowledge of anthropogenic climate change, it has become more sharply evident that there are many other profound and ill-understood obstacles to relating scientific knowledge, and abstract belief in principle, to real grounded practice consistent with that scientific knowledge. I will suggest here that the usual understanding of this as a problem of ‘translation’ of that knowledge is itself a key part of the problem. …

[I]t becomes important to ask what kind of knowledge we understand ourselves to have … [and] whether the intensely scientific primary framing of the issue, combined as this is with an intensely economistic imagination and framing of the appropriate responses, may engender profound alienation of ordinary human subjects around the globe from ‘owning the issue’ and thus from taking responsibility for it.

Sheila Jasanoff, from “A New Climate for Society” (and my posts of June 2010, July 2010, Dec 2010):

Climate change … is problematic because it tends to separate the epistemic from the normative, divorcing is from ought. Crudely put, it detaches global fact from local value, projecting a new, totalizing image of the world as it is, without regard for the layered investments that societies have made in worlds as they wish them to be. It therefore destabilizes knowledge at the same time as it seeks to stabilize it. To know climate change as science wishes it to be known, societies must let go of their familiar, comfortable modes of living with nature.

Climate change confronts us with facts that matter crucially to the universal human destiny but that have not passed through complex processes of social accreditation on a global scale. The institutions through which climate knowledge is produced and validated (most notably, the IPCC) have operated in largely uncharted territory, in accordance with no shared, pre-articulated commitments about the right ways to interpret or act upon nature. The resulting representations of the climate have become decoupled from most modern systems of experience and understanding. …

[T]he interpretive social sciences have a very particular role to play in relation to climate change. It is to restore to public view, and offer a framework in which to think about, the human and the social in a climate that renders obsolete important prior categories of solidarity and experience. It is to make us more aware, less comfortable, and hence more reflective about how we intervene, in word or deed, in the changing order of things.

Anthony Giddens, from a talk at the International Institute for European Affairs (and my post of August 2010):

We’ve got to find a way back to the politics of the long term. … You’re talking, to me, about a return to planning. Planning, of course, went out of vogue. … Planning was not effective in Soviet-style situations, and not very effective in this country either. But you can’t have a 20-30 year perspective on politics without planning in some sense. Therefore, you’ve got to find a way of producing effective policy over the long term, which will somehow cope with the fact that technological innovation is not predictable, by and large.

Harry Collins and Robert Evans, from Rethinking Expertise (and my post of April 2010):

[W]e return to the larger problem that we began with: Who should contribute to which aspects of technological debate in the public domain? At the start of the twenty-first century it is well established that the public should contribute to some aspects of these debates. The public have the political right to contribute, and without their contribution technological developments will be distrusted and perhaps resisted. This is what we called the ‘Problem of Legitimacy.’ Our complaint is that the social sciences of the last decades have concentrated too hard on the Problem of Legitimacy to the exclusion of other questions. As explained, our principal aim is to offer some way into what we call the ‘Problem of Extension.’ The Problem of Extension is concerned with how we set boundaries around the legitimate contribution of the general public to the technical part of technical debates.

Reading Rittel: Research on, in, and for design

by Howard Silverman on 18 Mar 2014

For all the talk about wicked problems, I find that few have heard of or read Horst Rittel, who coined the term in 1967 and was lead author on the wicked 1973 paper, “Dilemmas in a General Theory of Planning” (pdf).

Rittel was on faculty at the Ulm School of Design and a leader in the design methods movement. In general, there’s much of value in the design literature, especially for anyone working with approaches to action research, transdisciplinarity, international development, organizational development, social entrepreneurship, social practice, community organizing, and so on. Unfortunately, popular writers on design thinking sometimes neglect to cite their own history — as Cameron Tonkinwise laments, for example, in “The Grammar of Design Thinking.”

I’ve been reading The Universe of Design: Horst Rittel’s Theories of Design and Planning, a 2010 collection of Rittel’s lecture notes and other unpublished writings, edited by Jean-Pierre Protzen and David Harris.

Rittel’s concept of design, as Protzen and Harris reference in the prologue, was a broadly systemic one: “an activity that aims at the production of a plan, which plan — if implemented — is intended to bring about a situation with specific desired characteristics without creating unforeseen and undesired side and after effects.”

This planning for the future is an essential human activity. From the prologue:

Rittel always wondered why this human ability to plan for the future has not received the same attention as epistemology, that is, the study of the human ability to know, and to know what we know to be true, a field that has preoccupied philosophers since the dawn of time.

In 1987, at a conference on Design Theories and Methods in Boston, Rittel mused that ‘[i]t is one of the mysteries of our civilization that the noble and prominent activity of design has found little scholarly attention until recently.’

This piece from “Seminar 1: Modes of Innovation,” 1964, describes three roles for research in service to design:

  • Research on design … Observing the designer as a biologist observes an animal. How does it work, or behave, or obtain his results?
  • Research in design … Research into the specific knowledge needed for a particular design problem — methods of inquiry, inference, etc. about the particular object under design. One type is the study of the consequences of design. This is almost never attempted. Once a building is completed, unless it collapses the profession is no longer much interested in it. How it serves as a framework for human behavior is almost never investigated.
  • Research for design … Research on generalizable knowledge which the designer can use to control innovation.

“Everybody designs at least some of the time; nobody designs all the time,” the editors write.

If design is about planning for a desired change, then research on, in, and for design is about getting good at change.

See also: “Why Horst W.J. Rittel Matters” at Hugh Dubberly’s website.

Design for organizational learning: US military

by Howard Silverman on 24 Feb 2014

[Image: School of Military Studies, Art of Design]

A few stories I came across recently have me thinking about the challenges of organizational learning — in this case, with respect to the U.S. military.

“I believe the Army is more interested in learning from its experiences than any organization I had ever been in,” pronounced Margaret Wheatley in a 1997 interview with Scott London.

That’s quite a testament from Wheatley, author of the classic Leadership and the New Science. This exchange came halfway through the interview:

London: You’ve done some work with the Army Chief of Staff and his senior staff. What does the Army have to learn from your ideas?

Wheatley: I had a lot to learn from them. That was one of the interesting things. I went into the Army as foreign territory. It had never been part of my belief system or my politics, actually. What I encountered there, when I was willing to just look around, was a lot of paradoxes.

At the positive end of the paradoxes was the fact that I believe the Army is more interested in learning from its experiences than any organization I had ever been in. Many organizations are now trying to walk under the banner of “The Learning Organization,” realizing that knowledge is our most important product and that that gives us our competitive edge. There is a lot of rhetoric now about how we have to create “learning” from our experience. But the only place that I’ve seen it, though, is in the Army. As one colonel said, “We realized a while ago that it’s better to learn than be dead.” So they had this deep imperative for learning that, certainly at the senior levels, frees them to want to learn from experience and see what they might not want to see.

The Army is an incredibly literate organization. They have internal journals that they use to correspond with one another. They study history carefully. They have a center for Army lessons learned. They document everything. And they have this wonderful process of learning from direct experience called “After Action Review,” in which everyone who was involved sits down and the three questions are: What happened? Why do you think it happened? And what can we learn from it?

If you were in a good American organization and were able to get those three questions as part of your process, you could become a learning organization. What I observe in our business organizations — even in our public institutions — is that after a crisis or breakdown, or after something worked really well, we don’t get together and say, “Okay, what do we each think happened, and what can we learn from it?” We either take credit for it, or, if it’s an error, we try to bury it as fast as we can and move on.

We’re not in cultures which support learning; we’re in cultures that give us the message consistently: “Don’t mess up, don’t make mistakes, don’t make the boss look bad, don’t give us any surprises.” So we’re asking for a kind of predictability, control, respect and compliance that has nothing to do with learning.

So I don’t know how any of these large organizations, both public and private, have a prayer to become a true learning organization, until they move away from these cultures of status and protection and fear of one another. That came real clear to me in the Army.

A key concept here — though Wheatley doesn’t mention it by name — is design. Organizational learning must be designed for — that is, afforded and encouraged through the development of a culture of learning.

A couple of 2010 military publications, the Army Field Manual 5-0 (pdf) and The School of Military Studies Art of Design: Student Text 2.0 (pdf), reflect this design turn.

The former offers an informative glimpse into the Army’s operational thinking: “Design is a methodology for applying critical and creative thinking to understand, visualize, and describe complex, ill-structured problems and develop approaches to solve them.” The latter presents a wide-ranging survey of writings on learning, design, systems, and related fields — as, for example, in the figure adapted at top: “the four big ideas of design.”

The story of the Army’s turn to design was recounted by Roger Martin in Design Observer (“Design Thinking Comes to the U.S. Army”), also back in 2010. Martin affirmed that “the Army has gotten design quite right,” while also cautioning that “the struggle to get design well ensconced in Army doctrine was and remains no easy feat.”

No easy feat, indeed — as a report in this week’s On the Media radio/podcast attests. The segment, “Rewriting History,” describes a case in which, against other dynamics, a culture of learning did not prevail in the U.S. military.

It’s challenging stuff — for all organizations — as Meg Wheatley emphasized.

Heinz von Foerster on becoming human

by Howard Silverman on 17 Feb 2014

By Heinz von Foerster — published as the preface to his Festschrift, which is hosted by Alexander Riegler’s Radical Constructivism site at the University of Vienna:

I was always a little bit disturbed that English has no word for what in Latin would be “homo”, in French “l’homme”, or in German “Mensch”. The way English describes this, how it handles the problem, is to talk of a “human being”.

If you are a human being this gives me always a sore taste in my mouth. As if it were already the case that this thing you are talking about were already a human being and therefore can always whatever it does act and say I am acting because I am a human being. It could be really a horror show.

My feeling is a human being has to be developed first in order to become a human being and I think this is only possible through his actions but in my perspective alone one cannot develop anything. We are immersed in our circle of friends, relatives, family and a society. I think we develop a connection with all these friends and other human beings into what may become a “human becoming” and what may become a human being.

For me this existence is always a kind of a dance. And since you have here a group of friends who are writing about me I thought is it because of the dance with these friends, because of the interaction with all these people that I slowly succeeded in becoming a human being. One becomes a human being through one’s interaction, love, and friendships with others.


Scenario planning for a purposeful future

by Howard Silverman on 13 Feb 2014

[Image: Bear Deluxe, Howard Silverman]

We shape the world and the world shapes us. The problem with scenario-planning-as-usual is that it focuses attention on exogenous variables — how the world shapes us — without enough consideration for how we might shape the world.

“To the extent that we mimic scientists in claiming value-free objectivity in our view of the future, we deny the very thing that makes us good human beings and good futurists,” insisted James Ogilvy in Creating Better Futures: Scenario Planning as a Tool for a Better Tomorrow.

I’ve written about this topic in posts on “Objective and subjective scenario planning” and on Adam Kahane’s whole-system-in-the-room approach to scenarios.

Last year, I also penned an article on the iconic scenario tool, the 2×2 matrix, for the Portland-based arts and environment magazine Bear Deluxe. The article is now online.

Michael Quinn Patton: evaluation for innovation

by Howard Silverman on 10 Feb 2014

[Image: Patton comparing evaluation approaches]

Mahatma Gandhi looked up from his spinning wheel as an attendant read aloud from a letter: “Dear Mr Gandhi: We regret we cannot fund your proposal because the link between spinning cloth and the fall of the British Empire was not clear to us.”

That’s a cartoon in Michael Quinn Patton’s 2011 book, Developmental Evaluation: Applying Complexity Concepts to Enhance Innovation and Use (reprinted from folks at Search for Common Ground Indonesia).

Snarky perhaps, but also incisive.

Traditional program evaluation models — accounting for programmatic inputs, activities, outputs, outcomes, and impacts — are invaluable. They are, however and by definition, not developmental: not amenable to ongoing iteration, and not designed for encouraging innovation.

The accountability of developmental evaluation, then, Patton says in the talk linked below, “is that something gets developed — and that something has value, is making a difference, generates learning.”

I’ve adapted and abbreviated the table above from the book’s extensive comparison of the two approaches. Recently, I caught a 2013 webinar by Patton, hosted by Social Innovation Generation (SiG), for which he also shared his slides.

One thing I’ve wondered about, when it comes to types of evaluation, is the contention that the developmental approach merely serves “niche” activities.

This statement is in Patton’s book, and the SiG host reiterated it in the webinar introduction. (“DE is not for every evaluation situation. Indeed, the niche is quite specific.”) It also came up implicitly in the webinar Q+A, when Patton was asked whether the developmental approach is a good fit for existing organizations.

Here’s his response (~53:00):

When I work with organizations, I don’t recommend that all of their evaluation become developmental evaluation. Most organizations are implementing models. Most funding is funding models. We still have a model mentality.

The place to look for developmental evaluation, within an ongoing organization, is wherever the leadership is talking about things like innovation [and] systems change. Not for everything that they do. But in fact most organizations have some area in which, while they do business as usual, they also want to do some innovation.

I find that the business world totally gets this. Most major corporations do R&D work, and R&D — venture capital work, trying new things out, developing things — is what DE is in the nonprofit, voluntary sector. DE is the R&D function. It’s where you try things out.

Even in government, you wouldn’t expect and no one could run a government on DE principles. Nobody would ever get elected saying, ‘I’m not sure where we’re going to go. Trust me. We’ll adapt as we go along.’

But where there are pockets of government — in provinces and communities — where people are saying, ‘We don’t know the solution to this problem. We’ve tried out lots of things. We want to engage people in the community in coming up with their own innovative solutions.’ That’s the place to do DE.

So you look for those pockets of innovation, places where systems change is being talked about. It’s not the whole organization. It’s the place where people are serious about innovation and systems change.

That sounds to me like a pretty big niche. And getting bigger.

Dennis Martinez on kincentric relationships

by Howard Silverman on 4 Feb 2014 · 0 comments

Following my recent posts on relationships to nature, I’ve been looking again at Dennis Martinez’s writings on a “kincentric” perspective. Martinez is chair of the Indigenous Peoples’ Restoration Network, a working group of the Society for Ecological Restoration International.

From the book, Original Instructions: Indigenous Teachings for a Sustainable Future:

In wilderness preservation, in land management, forestry, and resource management of all kinds, Native Peoples offer a kind of model. But it’s not the biocentric model that you’re familiar with from deep ecology or Aldo Leopold’s land ethic. It’s fundamentally different because it is primarily kincentric. That’s the word I have coined to describe a unique Indigenous cosmology and relationship to nature. It’s not in the dictionary. I had to think of something that would work to explain that what this relationship is about in the universe is one of equality. Humans don’t even have the moral authority to extend ethics to the land community, as the Leopold land ethic and deep ecology do.

Traditionally, we work with animals and plants. We are comanagers with animals and plants. We don’t have the right to extend anything. What we have the right to do is to make our case, as human beings, to the natural world. That compact, that kind of contract between animals and human beings, is what has guided Indians’ subsistent livelihoods — hunting and gathering — and Indian agroecology and agriculture in the world for a very, very long time.