Wednesday 26 February 2014

Black Swan workshop 20 February 2014

Petromaks - Black Swan Workshop, Stavanger, 20 February 2014

This workshop was part of a new (?) project that aims to develop concepts, approaches and methods for a proper understanding of risk in Norwegian petroleum activities, with due attention to the knowledge dimension and surprises. Over 100 participants gathered in Stavanger, coming mainly from oil and gas companies and universities, along with some students and a few ‘outsiders’.

Chairman of the day Terje Aven opened the workshop, introducing the research program and the goal of the day: an exchange of ideas about the Black Swan concept and how to meet the challenges such events pose. Knowledge is one of the central elements, because we need to know where to put resources. The aim is not to find solutions right now, but to get guidance on the direction.

Aven defined a Black Swan as “a surprising extreme event relative to present knowledge or beliefs”. This means that a Black Swan can be a surprise for some but not for others: it is knowledge dependent.
He distinguished three types of Black Swans: 1) unknown unknowns, 2) unknown knowns, and 3) events that are known but not believed to be likely because their probability is judged to be too low.

Since knowledge is key, Aven argued that risk assessment may have an important role in dealing with Black Swans - but we have to look beyond current risk assessment practices, since traditional practices may ignore the knowledge dimension and thus uncertainties and surprises.


A couple of approaches for improved risk assessment were presented, including red teaming (playing ‘devil’s advocate’), scenario analysis/system thinking, challenge analysis (providing arguments for events to occur), and anticipatory failure determination, based on the theory of inventive problem solving (TRIZ). Aven also addressed different types of risk problems caused by knowledge uncertainties, which place weight on the use of the cautionary and precautionary principles.

The program before lunch featured four academic speakers. Andrew Stirling from the University of Sussex was the first of these. He warned that, since Black Swans are not objective, one should not try to bury subjectivity under analysis. There followed an argument in favour of precaution, and the interesting observation that defences of scientific rationality (against application of the precautionary principle) are often surprisingly emotional.

According to Stirling, uncertainty requires deliberation about action - you cannot analyze your way out of it. Deliberation will produce more robust knowledge than probabilistic analysis. Several examples illustrated that evidence-based research often carries such large uncertainties that it can be used as an argument for pretty much any decision.

A matrix was presented in which knowledge about likelihood and knowledge about possibilities were set against a problematic/unproblematic scale. As was demonstrated, risk assessment in Knight’s definition is only applicable in one quadrant (good knowledge about both possibilities and likelihood), yet we are forced to use this instrument, among other things through regulations and political forces, in a desire for ‘evidence-based policies’. The other quadrants were labeled ‘uncertainty’ (‘risk’ with poor knowledge of likelihood), ‘ambiguity’ (poor knowledge of possibilities, but good knowledge of likelihood) and ‘ignorance’ (poor knowledge of both). A number of tools were proposed to get out of the “risk corner” and take a wider view; these included the ones that Aven had mentioned before. One way is to be more humble and not get caught up in ‘swan-alysis’…
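To make the quadrants concrete, here is a minimal sketch in Python (my own illustration, not something presented at the workshop) of how the four labels follow from the two knowledge dimensions:

```python
# Illustrative sketch (not from the workshop): Stirling's incertitude matrix.
# Two knowledge dimensions: do we know the possible outcomes, and do we
# know their likelihoods? Classical risk assessment fits only one quadrant.

def incertitude_quadrant(knows_possibilities: bool, knows_likelihoods: bool) -> str:
    """Classify a decision problem into one of the four quadrants."""
    if knows_possibilities and knows_likelihoods:
        return "risk"         # Knightian risk: probabilistic analysis applies
    if knows_possibilities:
        return "uncertainty"  # outcomes known, likelihoods poorly known
    if knows_likelihoods:
        return "ambiguity"    # likelihoods known, outcomes poorly known
    return "ignorance"        # neither known: Black Swan territory

for poss in (True, False):
    for like in (True, False):
        print(f"possibilities={poss}, likelihoods={like} -> {incertitude_quadrant(poss, like)}")
```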


Concluding, Stirling said that the point is not to put things in boxes (e.g. deciding what type of Black Swan we are dealing with) but rather to ask what we are doing these things for. Critical deliberation is more important than analysis. One problem with the Black Swan metaphor may be the suggestion of white (= good, many of those) and black (= bad, only a few - so we’re fine), but things are definitely not that binary!

Ortwin Renn gave an inspiring PowerPoint-free speech about different kinds of Black Swans, the role of (risk) analysis and risk management (methods). Some Black Swans sit in the long tails of the normal distribution; others are about problems in knowledge or knowledge transfer. There is often a disparity of knowledge within or between companies - transfer of knowledge may be beneficial. And then there are the “real” surprises: rare events that have never been seen, or for which no pattern could predict them.

One reason for the popularity of the Black Swan is that we humans experience many random unique events in the course of our lives. Our memory builds on unique events, not on routine. But… risk assessment works the other way… and builds on a very formal approach.

How to deal with these challenges? We cannot say anything about the probability of Black Swan events, but we can say something about the vulnerability of our system! This we can do by analysis, but a different kind of analysis.
Resilience is about the ability of a system to cope with stress without losing its function. Highly efficient systems are usually NOT resilient. There is an unavoidable trade-off between efficiency and resilience. This trade-off is not a mathematical decision. It’s about risk appetite, compensation, politics, our attitude to precaution. It’s an open question we need to deliberate about!

ESRA chairman Enrico Zio had a presentation that reflected his search for answers around the theme of uncertainty. We try to predict by modeling, but there is a difference between ‘real risk’, assumed risk and expected protection. So there is a multitude of risk profiles and various analyses don’t give the same answers. One solution might be to combine deterministic and probabilistic safety assessments.
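As a toy illustration of what combining the two kinds of assessment might look like (my own sketch with invented numbers, not Zio’s actual method), consider a deterministic worst-case check next to a Monte Carlo estimate of failure probability for the same simple load-versus-capacity model:

```python
# Toy sketch (my own, with invented numbers - not Zio's method): a
# deterministic worst-case check combined with a probabilistic (Monte
# Carlo) assessment of the same simple load-vs-capacity model.
import random

CAPACITY = 100.0          # assumed system capacity (arbitrary units)
WORST_CASE_LOAD = 120.0   # postulated deterministic design-basis load

# Deterministic view: does the system survive the postulated worst case?
deterministic_ok = WORST_CASE_LOAD <= CAPACITY

# Probabilistic view: estimate P(load > capacity) under an assumed
# lognormal load distribution (purely illustrative parameters).
random.seed(1)
N = 100_000
failures = sum(1 for _ in range(N) if random.lognormvariate(4.0, 0.5) > CAPACITY)
p_fail = failures / N

print(f"Deterministic worst case survived: {deterministic_ok}")
print(f"Estimated failure probability: {p_fail:.4f}")
# The two views need not agree: a design can fail the deterministic check
# while the estimated failure probability is low, or vice versa - which is
# exactly why looking at both risk profiles together can be informative.
```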


Zio continued by discussing two different approaches to safety, which Hollnagel calls Safety I (looking at things that go wrong) and Safety II (looking at things that go well - resilience thinking). Shifting the balance from Safety I towards Safety II may be one way to decrease Black Swan risks. Observability and controllability are two important issues related to Safety II.

Concluding, Zio warned against trying to solve today’s (complex) problems with the mindset of yesterday.

After a number of critical remarks with regard to knowledge and probability, Seth Guikema from Johns Hopkins University talked about why we cannot discard historical data and expertise. Guikema underlined that risk assessment helps to understand problems, but it does not make the decisions. People do!

Historical data can be useful because they say something about the frequency and extent of events that have happened, but they cannot give information about possible future Black Swans. While you cannot predict the initiating events themselves, you can build models from things that have happened, run them with extreme events and see how the system responds. Guikema illustrated this with examples of the impact of historical hurricanes on power grids in the USA, and of how the recent hurricane that ravaged the Philippines would affect the USA. This variation on ‘red teaming’ proves to be a useful way to assess vulnerability. One ironic comment was that people tend to forget the lessons learned from previous storms.
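A minimal sketch of that stress-test idea (my own illustration with invented data; Guikema’s actual models are far richer): fit a simple outage model on historical storms, then run it with an event well beyond the historical record to see how the system responds:

```python
# Minimal sketch with invented data (Guikema's real models are far richer):
# fit a simple outage model on historical storms, then stress it with an
# event well beyond anything in the record.
import numpy as np

# Invented history: peak wind speed (m/s) vs. customers without power.
wind = np.array([20, 25, 30, 33, 38, 42, 47, 50], dtype=float)
outages = np.array([1e3, 3e3, 8e3, 1.5e4, 4e4, 9e4, 2.2e5, 3.5e5])

# Outages grow roughly exponentially with wind speed, so fit a line in
# log space: log(outages) ~= a * wind + b.
a, b = np.polyfit(wind, np.log(outages), 1)

def predicted_outages(wind_speed: float) -> float:
    return float(np.exp(a * wind_speed + b))

# Stress test: feed the model winds far beyond the fitted range, e.g. a
# Philippines-typhoon-class storm hitting the same grid.
for w in (50, 60, 65, 70):
    print(f"wind {w} m/s -> ~{predicted_outages(w):,.0f} customers out")
# The model says nothing about how likely such a storm is, but it shows
# how the system would respond if one occurred - a data-driven 'red team'.
```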

Again: data-driven models cannot help predict initiating events, but they can be used to assess the impact and more. An iterative, ongoing process was presented, as shown in the picture below.


After lunch, participants had the opportunity to present views or discuss practical examples. Regrettably only a few used this opportunity - despite the rather large number of participants.

First up was ConocoPhillips, who presented two different cases. Kjell Sandve discussed some challenges within risk management, especially the question whether increasing costs were ‘caused’ by HSE requirements. One main problem, according to Sandve, was the difficulty of reducing requirements once they had been applied in one place (even though they may not be necessary everywhere). He also asked whether we actually have the right tools and whether risk assessment supports good decisions. His appeal to the project: don’t make more complex tools and methods - rather the opposite!

Malene Sandøy started by talking about a 1970s decision problem, namely how high the jackets for offshore facilities should be built, and the Black Swan encountered some years later when it turned out that the seabed subsided and platforms were “sinking”. Related Black Swans were stricter regulations with regard to wave height, new design requirements and, not least, a much longer lifetime for the structures than originally anticipated.
After a rather technical discussion of design loads and how these analyses could be used, Sandøy ended with a clear improvement potential: moving from calculations (analyses) that “no one” understands to a broad discussion in the organization of the scenarios to design for.

Third speaker during the audience participation was yours truly, who brought some views from outside the oil & gas sector, breaking through some barriers of domain dependence. The presentation included a retrospective on some Black Swans that have affected Norwegian Railways in the past years, both negative (the 22/7 terror attack) and positive (the national reaction to 22/7, the ash cloud and, interestingly, the financial crisis). Safety-related accidents are rarely Black Swans, but as Stirling said, one shouldn’t be too preoccupied with putting things in boxes, and a more ‘relaxed’ approach to the definition yields several ‘Dark Fowl’ events. One example discussed was the Sjursøya accident of March 2010. A quick assessment of the system where the accident originated (the Alnabru freight terminal) leads to the conclusion that this was a Fragile system. Measures taken after the accident were discussed in relation to the Fragile - Robust - Antifragile triad from Taleb’s latest book.


Ullrika Sahlin from Lund University came from an environmental background, which was interesting because she related Black Swans not so much to events (as safety folks tend to do), but rather to continuous exposure from certain processes. Sahlin presented a couple of thoughts about the subject and her expectations of the project.

Her presentation included discussions of various perspectives on risk (among others traditional statistical and Bayesian), the quality of our knowledge, the processes we use to produce the knowledge behind evidence-based management and, not least, assumptions (to which one member of the audience remarked that “If you assume, you make an ASS of U and ME”).

One piece of advice from Sahlin was that we should communicate our estimates with humility and our uncertainties with confidence.


Igor Kozine from the Technical University of Denmark was the last participant, with a relatively spontaneous short presentation around a 2003 Financial Times article about a Pentagon initiative to have people place bets on anticipated terror attacks as a form of risk assessment. The project was discontinued (at least officially) because of public outrage.

After a coffee break with engaged mingling and discussions there was a concluding session with discussion and reflections. Themes that came up included:
  • In line with Safety I/II: should we focus on failures or on successes? Is focusing on success a way to manage Black Swans? Regulators may have a problem with such approaches… Rather than either/or, one should compare alternative strategies. There aren’t many error recovery studies yet.
  • What about resilience on a personal level? How much must one know? Dilemma between compliance and using your head in situations that are not described in the rules - see Piper Alpha.
  • Taleb’s Antifragility and Resilience: safety people may have a different understanding of Resilience than Taleb, and (according to Rasmus from the Technical University of Denmark) the original literature also differentiates between Robust (bounces back to normal) and Resilient (bounces back to a NEW normal).
  • Stirling summarized that one important dilemma is the question: what is the analysis for? Should it help make a decision, or should it describe knowledge about certain risks? One solution may be to look at the recipient of the analysis, not just at the object to be analyzed. What are the power structures that are working on us?
  • The last question was whether we really need new methods, or rather a good way to use existing ones and their output. Aven concluded that risk assessment, in the broad sense of the term, has a role to play. Knowledge is important, as is the communication of knowledge.