What can we predict and what can we not predict?

Margaret Heffernan is an expert on prediction, whose new book, Uncharted, tells us what we can predict, what we can’t, and how we can tell the difference.

Jason Kingsley 0:01
A lot of what we do on this podcast is speculating about what the future might look like while leaning on the past. Today I’m talking to Margaret Heffernan whose latest book Uncharted is all about the limits on what we can and can’t predict. What is it possible to know about the future, if anything at all? What are the best ways of learning? And who can we trust to guide us through the process? These are all vital questions we want to understand for our imperfect future. Welcome to Future Imperfect.

Margaret, welcome, lovely to meet you. Would you do us the favour of describing yourself and what you do in your particular areas of interest?

Margaret Heffernan 0:52
I think I basically look for good questions to try to find answers to. So if you look at all of the books that I’ve written, they all started with a question of some kind. And in a way, for me, writing is the way that I figure out what my answer is. So if you take a book like Uncharted, it was very much driven by: what’s wrong with the way that people think about the future, and what would be better? If you take A Bigger Prize, it was: gee, competition doesn’t work the way economists tell us it does. How does it really work? And if you take Willful Blindness, it’s: gee, how come, when these terrible things happen, everybody thinks nobody knew, but actually it turns out almost everybody knew?

Jason Kingsley 1:38
So you ask questions, and then expand upon those questions in your book.

Margaret Heffernan 1:43
The books are really ways to answer the question: from my perspective, something that bothers me, that I don’t understand. So doing the research and writing the book is how I try to piece together what’s really going on here.

Jason Kingsley 2:02
So do some of them just start out as ideas that then become books? Or do you start out with the idea of, I want to write a book about this. And here’s the idea or experiment…

Margaret Heffernan 2:12
It absolutely starts with an idea. And I have ideas for books probably every day. And I tend to ignore them. The ones that just keep coming back, and back and back, the ones that won’t be ignored, are the ones that end up being books. I mean, I would say with Willful Blindness that, you know, instantly once I could frame the question, I knew I wanted to write it, because it connected to so many things that I knew about and had always wondered about. But I think with the other ones, you know, I sort of thought about it and let it go and thought about it and let it go. And then finally thought no, there really is something here and I really would like to dig into it. But sometimes it takes a while for the question really to formulate itself.

Jason Kingsley 3:01
Do you sometimes find that as you explore the question, you actually come up with a completely different perspective, when you’ve actually done the research and the thinking? Do you start with a – not preconceived notion – but a sort of an accepted notion that then becomes: Actually I was wrong, I have to rethink that, or it’s not quite right.

Margaret Heffernan 3:20
Definitely, definitely. I mean, I definitely don’t know the answer when I get started. I may have a sense as to where the answer might lie. And I usually have a sense that there are a couple of stories or people or data points that are going to be a good place to start. So for example, with Uncharted, I knew this data that shows that, you know, the very, very most rigorous forecasters can only predict accurately about 400 days out. And for the rest of us who aren’t nearly so fastidious, it’s closer to 150 days. And that had really lodged itself in my brain, because I thought, well, that’s not the way we make decisions about anything on that timescale. So if you take that seriously, and I think you have to, it changes everything. So how does it change everything? And what do we need to do differently?

Jason Kingsley 4:22
Does that cut across all areas of prediction, as it were: economic, weather, well, any area? Is this a sort of general rule overarching prediction, in the broadest sense of the term?

Margaret Heffernan 4:35
Yeah, I mean, weather is slightly different in the sense that you can’t see 400 days out, right? Five days is absolutely the max for really accurate weather forecasting. You’re talking five days, and when newspapers run stories that it’s going to be a barbecue summer, they have no idea what they’re talking about.

Jason Kingsley 5:02
I do sometimes wonder that about the weather. You know, is it selling hope a little bit? Is it saying, oh, we’re going to have a good summer, so everybody feels happy about it, but it makes no difference at all?

Margaret Heffernan 5:12
Sometimes it’s that, and sometimes I think it’s encouraging their advertisers to advertise barbecues and steak and stuff like that. But it’s not based on anything meaningful.

Jason Kingsley 5:25
Hmm, that’s interesting. And then, of course, when you look at, presumably, things like economics, it gets even worse, arguably, because whole societies are predicated on the idea that they can predict something in 10 years, and you’re talking about demographics and, well, environment for one is a huge thing, right?

Margaret Heffernan 5:45
So this is another thing. I’m very drawn to very nerdy data points that feel weird. I think it was in 2016, two academics at the Oxford Martin School, which looks at politics and society, brought out a paper saying that by the year 2035, 47% of jobs would have been replaced by automation. And I looked at this and I thought, ah, 47%…

Jason Kingsley 6:19
Really? That’s a very accurate number.

Margaret Heffernan 6:22
2035: it was 2015, because they were talking about 20 years hence. I thought, now, this is preposterous. But I recognised instantly that if you say 47, instead of approximately half, you have instantly, if spuriously, acquired some kind of authority. It sounds like, well, these people really know. Right? So I went and downloaded the paper. And of course, the first paragraph of the paper basically says: we’re using a brand new model for prediction. Brand new equals untested, which means it might work, and it might not. So things like that start to accumulate around the idea I have. At some point, there’s enough to start thinking, okay, I think this is a book and I think this is what it’s about.

Jason Kingsley 7:17
Do people get upset with you, though, for effectively saying that what you’re doing past 400 days is arguably just made-up nonsense, or pointless? Because the error bars presumably encompass 100% of the potential outcomes, therefore you’re just guessing and wishful thinking. Do they get angry or upset? Have you had anything like that?

Margaret Heffernan 7:43
I definitely do. And that’s fine. I mean, all the books I’ve ever written have made some people angry. But when I was writing Uncharted, the first third of the book is about why the different ways we approach forecasting and prediction don’t work. So I looked at things that we’re talking about, like economic modelling and forecasting. I looked at history as a predictor, this notion that history repeats itself, and I looked at DNA. And in each of those, I said: what we’ve been told is wrong. Now, I knew when I was writing it that this was an uphill battle, because, for reasons I can’t empathise with, people like to believe these sorts of fake truths. But what I didn’t, of course, predict was that the pandemic would come along and just prove my case. So whereas I spent, you know, probably a year writing something that I thought everybody would disagree with, in a matter of weeks everybody thought, wow, how did she know?

Jason Kingsley 8:54
Yes, you become the Nostradamus who got it right.

Margaret Heffernan 8:58
And then I’m the Nostradamus who walks around saying, don’t believe a word I say. Exactly. And what all of this was about, really, was uncertainty, which people didn’t really understand. And I think to some degree they still don’t understand that something can be generally certain but specifically very ambiguous. So we know climate change is a real crisis. But we don’t know which forests are going to catch fire. We don’t know which agricultural crops are going to be ruined this summer. We don’t know where we might suddenly get mass migration from. But we do know that climate change is real. And for exactly the same reason, the Bank of England will say: well, we know there are going to be banking crises in the future. We don’t know when they will be, we don’t know where they will start, and we don’t know what will set them off. But we know that they will happen. It’s kind of the worst of all possible worlds: you know these things are happening, you know they’re in the system, and a pandemic is exactly the same thing. But you don’t have enough information to be able to predict, with any accuracy, the stuff that could be really helpful.

Jason Kingsley 10:20
That’s really fascinating. And the problem also there is that humans like a sense of certainty, even if that certainty is completely fabricated. And people have been selling certainty and magical thinking for as long as people have been in societies, I presume. No doubt there are reasons why people think in certain ways, and it probably gives them comfort. Because if everything is just horribly random, what’s the point of it all? You get down to that sort of naturalistic solipsism, which is: nothing matters, so why does anything matter? Don’t do anything about it. And in many ways, that’s one way of dealing with uncertainty. But I suppose the other way is to do what you can with the data that you have. Do the best you can.

Margaret Heffernan 11:02
Yeah. So while there’s a great deal of uncertainty, there’s also risk, and risk is different from uncertainty, because risk you can quantify. So you can look at the risk endemic in, for example, a particular kind of investment; you can quantify the risk of buying a particular property or asset. So that’s not random.

Jason Kingsley 11:29
So risk and uncertainty are linked, but not the same. They’re different. Okay.

Margaret Heffernan 11:36
And then you can see that in other walks of life. So if you look at personality and DNA, for example, you can see that some aspects are relatively predictable. But they are not cast-iron guarantees, because all sorts of stuff happens in life which changes them. So, for example, I have a big fight in my book with Robert Plomin, who sees DNA as a blueprint: that’s just who you’re going to be. Right? So he’ll say, well, you know, a very significant amount of IQ is heritable. That’s fine. But it doesn’t tell you much about life. So, you know, I might have a high IQ, and my partner might have a high IQ, and our child might be born with a high IQ. But honestly, if we were living in Syria 10 years ago, that would be no guarantee of any particular life outcome.

Jason Kingsley 12:35
Right. So there are external factors that factor into the genetic component. Yes, yes.

Margaret Heffernan 12:41
You know, if we’re all growing up in a place where suddenly there is no school, it does not guarantee that our fabulous high-IQ child is going to have a fabulous academic life or be capable of getting very high-paying jobs. So there’s so much in our life that’s driven by context that, however much we may see part of the picture, because the context is less predictable, we don’t know what the part of the picture that we can see really means. It’s a little bit like having a couple of pieces of a jigsaw puzzle without the box cover to see where they fit.

Jason Kingsley 13:23
Right, yes. I remember having conversations with people about nature versus nurture, and I always felt that was a false dichotomy. I always felt that the two interact: you can have an unfortunate genetic inheritance but live in a fortunate situation, which will improve your position in life and what you can achieve. And we all know there are people out there who have done very well with modest talents, and vice versa: there are people who are highly talented who, through circumstances beyond their control, have ended up not fulfilling those talents, and every spread in between. So this idea that it has to be one or the other always seemed ridiculous to me. How does this affect algorithms, then? Because you’re fascinated with machines. I’m computer-games literate, and I know about technology a little bit, but not to an academic level. And I’m fascinated by how algorithms are starting to make decisions for us about the media we consume, well, in every facet of our online world. That’s an area of particular interest for you as well, isn’t it?

Margaret Heffernan 14:29
Yeah. So algorithms are… well, you know, the mathematician Cathy O’Neil says algorithms are opinions encoded in numbers, and I think she’s right. Algorithms are making assumptions. So, you know, if you’re listening to Spotify, it’s assuming that if you like Bob Dylan, you’ll like Bruce Springsteen. Now, that might be fair enough, and it’s generally safe enough, which is to say it’s probably true for, I don’t know, 85% of Bob Dylan fans. And it’s not an important decision, frankly. But what the algorithm will never figure out is that even though I do like Bob Dylan, and I do like Bruce Springsteen, I also really love Handel. It will never get me to Handel. Right? Because what it’s doing is saying: okay, what other composers or singers are similar to Bob Dylan? There will be an algorithmic profile of Bob Dylan. What’s closest to matching that? Well, by the time you get to Handel, you’re so far away from the Bob Dylan profile as to be meaningless.
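
A minimal sketch of the kind of profile matching she describes, assuming invented feature vectors for each artist (the features and the numbers are illustrative, not Spotify’s actual model): the recommender surfaces whatever sits nearest the seed profile, so an artist as distant as Handel never appears.

```python
import math

# Invented feature vectors per artist: (acoustic, lyric-driven, orchestral).
# Purely illustrative numbers, not real audio features.
profiles = {
    "Bob Dylan":         (0.8, 0.9, 0.1),
    "Bruce Springsteen": (0.6, 0.8, 0.2),
    "Handel":            (0.9, 0.1, 1.0),
}

def cosine(a, b):
    """Cosine similarity: near 1.0 means 'sounds alike', near 0 means unrelated."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

seed = profiles["Bob Dylan"]
for artist, vec in profiles.items():
    if artist != "Bob Dylan":
        print(f"{artist}: {cosine(seed, vec):.2f}")
# Springsteen scores ~0.99 and gets recommended; Handel scores ~0.56,
# so a listener who genuinely loves both is invisible to the matcher.
```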

Jason Kingsley 15:37
So is this a localised-maximum event, as it were? It’s an evolutionary idea that you can reach a hill of maximum fitness, and there is a hill over there that’s even better, but you’ve got to go through a dip of fitness to get to the other hill.

Margaret Heffernan 15:52
I don’t think it’s really that. It’s just that I actually really like Handel and I really like Bruce Springsteen, and not very many people like both. So there simply isn’t very much data about people like me.

Jason Kingsley 16:06
Do you think algorithms are guilty of this rabbit-holing? Once you start in a certain area, it goes: aha, this person likes this! And it serves up more of that, and it gets more and more extreme, but only in little increments, until you’ve travelled a thousand miles down the rabbit hole without quite realising you’re travelling. And therein lies the sort of danger of extremism and radicalisation, perhaps?

Margaret Heffernan 16:36
Well, algorithms aren’t all the same, you know; they’re not all written the same way. So recommendation engines are algorithms which are written in different ways: they may get more extreme, or they may get more bland. I would say that Spotify’s algorithms actually tend to the bland rather than the extreme. So there are lots and lots of difficulties with algorithms. One is they’re taking a huge data set, which is always a historic data set, right? And it’s sort of saying: okay, what are the patterns here, and therefore how do individuals fit into those patterns? The problem is, the data set is not the whole universe, right? And it’s also yesterday’s data set, not today’s data set. And you’re making assumptions about how those patterns map onto individuals. Hiring algorithms are a classic example, making assumptions about what good looks like. So there’s this very famous story of algorithms used in the New York school system, where they were using algorithms to evaluate teachers in terms of who got to keep their job and who got fired. And they were making it on the basis of, you know, how much students improved, for example, and all kinds of other things. And the definition of what good was is an opinion, right? If you actually got three teachers in a room and asked them what good teaching is, I promise you they could not agree. But an algorithm has defined good teaching. And it will spit out certain kinds of teachers and say, keep these, and it’ll spit out other teachers and say, get rid of these. And this was used in the school system for about 10 years, and it kept firing really outstanding teachers. And the difficulty there was that the teachers didn’t know that this was happening on a system-wide basis. So they thought: I guess I must be a bad teacher. And because they were ashamed of that, they didn’t necessarily talk to other teachers, so they didn’t know that actually lots of teachers whom parents and kids adored were being fired. And some were being fired for doing poorly in teaching classes that they had actually never even taught.

Jason Kingsley 19:00
So, bad data as well.

Margaret Heffernan 19:03
You can take, for example, the algorithm that the government tried to use to predict the grades people would have got for A-Levels: you’re making assumptions about how a student is going to improve between the time they last took an exam and the time of the exam. Well, how much somebody is going to improve is a really hard thing to predict. Are they going to be hugely motivated because they’ve got crap grades before? Are they going to just think, oh well, I don’t have a chance, so I won’t bother? Are their parents going to support them or harangue them? Are they going to get measles the week before the exam? I mean, this is all completely unknowable. And you know, what Cathy O’Neil says is that if you want to judge an algorithm, you have to look at who it specifically disadvantages, because that’s where you will see what the bias is. But, you know, they’re always making assumptions based on data which is very often incomplete, and based on heuristics that say: well, if you like Bruce Springsteen, you like Bob Dylan. Which will be true, probably, in many cases, but never in all cases. So there are lots of little mistakes which, on a big scale, can end up being gigantic mistakes.
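
O’Neil’s test can be made concrete: disaggregate the algorithm’s decisions by group and compare the rates. A toy sketch, with invented applicants and an invented grouping; nothing here comes from a real screening system:

```python
from collections import defaultdict

# Invented screening outcomes: (group, passed_screen).
# The groups and numbers are made up; the point is the audit itself.
outcomes = [
    ("disclosed_condition", False), ("disclosed_condition", False),
    ("disclosed_condition", False), ("disclosed_condition", True),
    ("no_disclosure", True), ("no_disclosure", True),
    ("no_disclosure", False), ("no_disclosure", True),
]

totals, passes = defaultdict(int), defaultdict(int)
for group, passed in outcomes:
    totals[group] += 1
    passes[group] += passed  # True counts as 1, False as 0

for group in totals:
    print(f"{group}: {passes[group] / totals[group]:.0%} pass rate")
# A persistent gap between groups is O'Neil's red flag: the bias shows
# up in whom the algorithm specifically disadvantages.
```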

Jason Kingsley 20:32
So these algorithms in many ways normalise decisions. What you’re saying is that if they’re trying to make everybody vanilla, then they’re not looking at the outstanding extremes.

Margaret Heffernan 20:46
They have encoded a profile of excellence against which people are chosen. So, for example, at Google, they don’t use algorithms for hiring. Because while they can feed in all the data of everybody they’ve ever hired, and all the evaluations of everybody they’ve ever hired, and start drawing profiles of, generally, what good Googlers look like, they’re also smart enough to recognise that actually what was good last year might need to be different this year. Maybe the kind of work we’re going to be focusing on this year will require different qualities and capabilities. And so if we rely on historic data, we’re going to find the perfect workforce for 2010, which is absolutely bloody useless.

Jason Kingsley 21:39
Completely. And also, once people know there’s an algorithm there, people can game those algorithms as well. I remember once talking to a teacher who had to assess the children in a sports endeavour on how much they’d improved. The bright kids realised that if they started out excellent and they remained excellent, they got an average grade. So what they had to do was flunk the first tests deliberately, to do appallingly badly. They were judged on how fast they ran, you know, their personal best speeds and all that kind of stuff, and they realised that if they did poorly on the data to begin with, they would suddenly get top grades. And so human beings learn to game the system. Salespeople do the same: they realise they’ve hit this quarter’s sales figures and they’re not going to get any more bonuses, so what do they do? They hold back sales until the next quarter, because there’s literally no point making those sales; there’s no advantage to them.

Margaret Heffernan 22:40
But very often you can’t see how the decision has been made. It’s a trade secret. And because it’s trying to make sometimes hundreds of decisions, actually, you can’t see where the problem is. And that’s why there has been this call for the auditability of algorithms. So in the States, Cathy O’Neil discovered there’s an algorithm for applying for jobs in the fast food industry. The fast food industry, in terms of employees, has a turnover rate on average of around 200%, so they’re replacing their workforce about twice a year. So there are good grounds for automating it, because you have to do so much of it. But what they found in one case was that, for mysterious reasons which were deemed a trade secret, anybody who, in the way they answered the questionnaire, provided data that the algorithm would interpret as their having had, at some point in their lives, some kind of mental illness would never, ever get through. That’s against the law in the United States. I could not interview someone and say: have you had any experience of mental illness? It’s against the Americans with Disabilities Act. But the algorithm was doing it, because it’s a trade secret, and who is the algorithm?

Jason Kingsley 24:08
And it’s not a person discriminating, it’s a machine, which probably should fall under the same legislation, but the law hasn’t caught up with it. Absolutely fascinating. Also, machine learning requires quite a big data set as well, and it only learns from the data set. I remember reading an article about military tank recognition. This might be apocryphal, so I haven’t done any research on it, but they found that the machine learning was fantastic at judging where these hidden tanks were. And then they tested it with a new set of photographs, and it was appalling. Absolutely bad. Then somebody went through it and realised that what it was doing was actually determining whether the sun was shining or not, because the images with hidden tanks had been taken on a sunshiny day. It was so dumb, it didn’t know it was meant to be doing anything to do with tank recognition. It was just the data. So the garbage-in-garbage-out thing, which is a well-known notion in computing, is just writ large with algorithms, and people don’t realise it, I suppose.
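
Apocryphal or not, the failure mode is easy to reproduce in miniature. In this invented simulation, the label correlates with brightness in the training photos but not in the new ones, so a classifier that latches onto brightness looks perfect and then collapses:

```python
import random

random.seed(0)

def photo(tank: bool, sunny: bool) -> tuple:
    """Invented one-feature 'photo': just a brightness value plus the label."""
    brightness = (0.8 if sunny else 0.3) + random.uniform(-0.1, 0.1)
    return brightness, tank

# Training set: by accident, every tank photo was taken on a sunny day.
train = ([photo(True, True) for _ in range(50)] +
         [photo(False, False) for _ in range(50)])
# New photos: sunshine and tanks are independent, as in the real world.
test = [photo(random.random() < 0.5, random.random() < 0.5)
        for _ in range(100)]

THRESHOLD = 0.55  # separates the training set perfectly, purely by brightness

def accuracy(data):
    return sum((brightness > THRESHOLD) == tank
               for brightness, tank in data) / len(data)

print(f"training accuracy:  {accuracy(train):.0%}")  # ~100%: looks brilliant
print(f"new-photo accuracy: {accuracy(test):.0%}")   # ~50%: it learned the sun
```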

Margaret Heffernan 25:14
So there’s an example in Virginia Eubanks’ very wonderful book called Automating Inequality. I think it’s Pennsylvania that tries to start using algorithms to predict which kids might need access to social services. And so they’re trying to build a model of all the families they’ve ever dealt with. But the difficulty with that is that the data set they had covered only the kids who had accessed social services, which omits all of the middle-class and upper-class people who use private services. So in terms of having a valid data set for what is the context from which a need for help might emerge, they had a woefully, woefully inadequate data set that was hopelessly biased. So actually having a really pristine and adequate data set is very much harder than people think. And however much they talk about data being the new oil, right, there’s a lot of gunge and filth in the oil, and it’s quite hard to get that out.
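
The shape of that problem can be simulated in a few lines. Everything below is invented (the incomes, the rates, the cutoff), but it shows how records drawn only from public-service users stop resembling the population a model is supposed to describe:

```python
import random

random.seed(1)

# Invented population: household income (in thousands) and whether the
# family ever needed help. Need is deliberately spread across all incomes.
population = [{"income": random.uniform(10, 120),
               "needed_help": random.random() < 0.2}
              for _ in range(10_000)]

# The agency's records capture only families who used *public* services;
# in this toy model, everyone above the cutoff went private and is unseen.
records = [p for p in population if p["income"] < 40]

def mean(xs):
    return sum(xs) / len(xs)

print(f"records cover {len(records) / len(population):.0%} of families")
print(f"mean income, population: {mean([p['income'] for p in population]):.0f}k")
print(f"mean income, records:    {mean([p['income'] for p in records]):.0f}k")
# A model trained on `records` has never seen a family above the cutoff,
# so whatever pattern it finds describes poor families who used public
# services, not 'families at risk' in general.
```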

Jason Kingsley 26:26
Yes. So I suppose the concept of survivorship bias is hugely relevant: the model is only based on the data you have, not the data you don’t have, and you don’t know what data you don’t have. It may be self-selecting, this whole concept of cognitive biases and self-selection of your data sets. And yeah, that’s absolutely fascinating. It all sounds a bit bleak, though. Where do you think this is going to go in the future? I mean, the future can’t be completely imperfect like this, surely?

Margaret Heffernan 26:58
Well, see, I don’t think a predictable future is at all appealing, for a start. I mean, if you knew, when you were born, every single thing you’d do, why bother doing it?

Jason Kingsley 27:10
Yes, that’s very true. That’s the old conundrum of the higher power knowing what you’re doing. And yes, it’s caused problems for the religious for many thousands of years.

Margaret Heffernan 27:22
For me, uncertainty, while it has this daunting aspect, which is, you know, I don’t know what’s going to happen tomorrow, I don’t know when I go out for a bike ride today whether I’ll get hit by a car or not. I also know that actually a lot of what happens is up to me: I’m very much less likely to get hit by a car if I’m careful; I’m less likely to get hurt if I wear a helmet; if I’m too tired, I probably shouldn’t go out. So the unpredictability is actually what gives me agency in my own life, and I don’t think anybody would like to be without that.

Jason Kingsley 28:00
No, that would be awful, wouldn’t it? Imagine waking up in the morning and knowing exactly what you’re going to do that day, and for the rest of your life. I think it would be pointless, wouldn’t it?

Margaret Heffernan 28:09
It would be definitively pointless. Yeah, it would just be going through your to-do list and crossing everything off until it was done. And then you’re dead. So it’s really important to understand that intrinsic to uncertainty lies agency, and choice, and an opportunity to fix mistakes, and an opportunity to learn, and an opportunity to repair and to imagine and to create. And for me, those are all pretty exciting ideas. So I don’t regard not knowing the future as somehow terrible. I regard it as a blessing. I don’t actually want to know exactly what I’m going to be doing a year hence on this day, because I want to make the decisions that lead to that day, to have had some choice in the matter.

Jason Kingsley 29:00
There’s a whole thing about equality. I’ve always felt that equality of opportunity is what everybody deserves, not equality of outcome. Everybody, no matter what their walk of life, should have opportunities to excel, but if they choose not to, or they don’t want to, or are incapable or not sufficiently motivated, then that’s up to them. You know, that’s the freedom of choice. That’s really interesting.

Margaret Heffernan 29:24
It’s very difficult, because I think it’s inappropriate to say to some kid born to a not-very-happy family in a not-very-happy economic environment, or a not-very-happy time in history, that whether or not they become a Nobel Prize winner is up to them. I think we have to have some humility and understand that our lives are formed by decisions we make, of course, but also by context, and by random things that happen. So I write in my book about two kids who grew up in a very nice, happy middle-class family, where the dad is a tremendous animator and the mother is an educational psychologist. And, you know, there’s lots of data to say, well, these kids are going to do really well, because they’re from a pretty good economic background, and a lot of data to say they’re probably going to end up doing things roughly similar to their parents. Except that the son ended up in the military, and the daughter ended up doing social media for counter-extremism, both of which are completely off the map as far as their parents are concerned, both of whom are pacifists. So I just think this notion that we know is kind of tyrannical, and that actually what we need to embrace is our capacity for exploration and experimentation and imagination and creativity.

Jason Kingsley 30:59
Yes. So I am very optimistic about the future, and I do think computers and algorithms can help. But I think we’re in the very early stages at the moment, and I think we haven’t learned the right language or the right attitude for how to defend ourselves against fake news or nonsense that we read on the internet. A lot of people haven’t. I mean, I’ve read quite a lot about these scams online, and it tends to be an older generation that is being scammed; perhaps they have fewer tools to enable them to be more cynical about the world. But then again, cynicism can be overdone as well: there’s a balance between being suitably cynical about things and too cynical, perhaps.

Margaret Heffernan 31:43
Well, let’s be very clear: a lot of the algorithms that are feeding people fake news and driving them to more extreme positions – so the algorithms that do that on Facebook – are not doing that accidentally, right? They’re not thinking about what’s good for you. They’re thinking about what’s good for Facebook. What’s good for Facebook is for you to spend as long there as humanly possible. So they’re not sending you to extremism because it’s extremist. They’re sending you there because you’ll spend more time there.

Jason Kingsley 32:11
So it’s all about view time; it’s all about keeping people’s eyeballs stuck to the screen. So it’s actually almost by definition amoral. It’s not even immoral; it doesn’t care.

Margaret Heffernan 32:22
It doesn’t care where it’s sending you, as long as you’re becoming addicted to spending more and more and more time there. It’s exactly like computer games, which are designed with staircase algorithms to be addictive, to keep you going and going and going. And when you’ve just about had enough, you know, they will design in a break. So instead of quitting, you have a bit of a break, and then you keep going and you keep going. And this is very well understood by games designers, and, you know, the more ethical of them have real concerns about it. It’s understood that the addictive game is the game that everybody starts talking about. So it’s simply driven by a desire to make more money.

Jason Kingsley 33:04
There is definitely a big debate in the games industry about loot boxes, for example, about whether they’re gambling, which personally I think they clearly are, because I think they create an emotional response in somebody. I think we all know that random rewards reinforce behaviour in all sorts of weird ways. Skinner proved that around, I think it was, the 50s and 60s. It’s very clear that lots of animals have this sort of slightly broken random-rewards response system, and we do as well. A lot of people are making a lot of money out of it, and a lot of people then justify whatever their behaviour is by saying, well, we’re making lots of money. And I think that’s wrong. Personally, I’m quite against loot boxes. And I had people talk to me about Panini stickers, you know, these stickers of footballers that kids can collect, where you buy a pack of them and you don’t know which ones you’re getting. They said, yes, but what about Panini stickers? Aren’t these equivalent? And I said, yeah, they’re a form of gambling as well. It’s absolutely the same thing: you don’t know what you’re getting, and then you want to go and buy another one. I mean, they’re a bit more controlled, but they are absolutely a form of gambling that taps into that part of our psychology.

Margaret Heffernan 34:13
So the problem is not the technology per se, the problem is the business model.

Jason Kingsley 34:18
Hmm. I could go into the computer-games side quite a lot, but I think we should probably stick to your stuff. So, forecasting is obviously a very important and valuable endeavour, but people talk about it also being quite ideologically based: if forecasting is not particularly plausible after 150 to 400 days, surely after that it becomes totally steeped in ideology and politics. Is that what you found?

Margaret Heffernan 34:49
Well, models are always going to be ideological, because a model is a simplified view of the world. That means you have to leave a lot out. So what gets in and what gets left out is fundamentally a value statement: you’re deciding that one thing is more important than another. I mean, Alan Greenspan said this. He said, of course I have an ideology; everybody has an ideology. Nobody can possibly absorb all the information in the world, and so we’re constantly editing on the fly. That’s a function of our personality, our experience, and our beliefs about what matters in the world. Paul Krugman said, you know, I often think that what got left out of my economic models may have been more important than what went into them, recognising that actually, over time, what needs to be in a model might shift. So all models are expressive of a perspective and a point of view. They can’t not be, because if they weren’t, they would be as big as the world and therefore not very helpful. So all models are biased; they cannot be otherwise. Now, that doesn’t make them useless. But it also means that they aren’t some kind of oracle that will tell you exactly what’s going to happen. In the same way that the economic forecasts for 2020, issued in January, did not foresee the pandemic.

Jason Kingsley 36:27
No, I suppose not. So that brings up the concept of black swan events, doesn’t it, which was a big thing some years ago: the unpredictable mega-events that can come along and disrupt things. I suppose a pandemic would fall into that category, partly, although people have been predicting pandemics.

Margaret Heffernan 36:43
Pandemics are exactly like climate change, right? They are always happening; we know they always happen. But as each one is different, we don’t know when the next one will break out, we don’t know where it will start, and we don’t know what the pathogen will be. But we know that they always happen. So they’re very likely, and we know that they have high impact. These are the kinds of events for which you can prepare. You can’t plan, but you can prepare. And you can’t prepare for everything, because that’d be woefully inefficient and wasteful. But where you have high-impact and high-likelihood events, you have to prepare. The argument in my book is that because we have believed too much in prediction, we have stayed very tightly addicted to efficiency. Efficiency only works when you know exactly what you want and what you’re doing. So efficiency is great in an assembly line, right? You want to produce a car; you know exactly what the car looks like, you know exactly what the pieces are, and you know exactly how they come together. Perfect environment for efficiency. Once you have high degrees of uncertainty, what efficiency will do is rob you of any margin to respond or adapt. And we saw this in the National Health Service, which, before the pandemic, ran its ICUs at approximately 89% capacity. On one level, that’s fantastic, because it means you’re not wasting very much. On another level, it means if you had so much as a bad train crash, you’d have no capacity to deal with it. And when you’re hit by a pandemic, you definitely have no capacity to deal with it, which, given that a pandemic is likely, with a huge impact, is careless and irresponsible. And that’s exactly what we saw, which is why the death rate in the first phase of the pandemic in the UK was very much higher than, for example, in Germany, where they were running their ICUs at a capacity of about 60%. So when people started getting sick, there was plenty of capacity to deal with them, and people were very much less likely to die. So one of the lessons about understanding uncertainty is that in situations that are uncertain, efficiency is not your friend, because it will rob you of the capacity, the elasticity, to respond. As long as you know exactly what car you’re building today, efficiency is great.
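
The cost of running that close to capacity has a standard quantitative form in queueing theory: in an M/M/1 queue, the average number in the system is rho/(1 − rho), which explodes as utilisation approaches 100%. The formula is textbook; applying it to ICU beds is an illustrative framing of her point, not a model from the book:

```python
# Average number of patients in an M/M/1 queueing system as a function
# of utilisation rho. Textbook result: L = rho / (1 - rho).
def number_in_system(rho: float) -> float:
    assert 0 <= rho < 1, "utilisation must stay below 100%"
    return rho / (1 - rho)

for rho in (0.60, 0.80, 0.89, 0.95, 0.99):
    print(f"utilisation {rho:.0%}: average in system ~{number_in_system(rho):5.1f}")
# 60% -> ~1.5; 89% -> ~8.1; 99% -> ~99. The last few points of
# 'efficiency' buy an explosion in congestion: exactly the margin that
# vanishes when a shock like a pandemic arrives.
```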

Jason Kingsley 39:20
That’s interesting. So how do we educate politicians? Because these are the category of human beings making these decisions, at a macro level, on what’s spent and what’s not spent. How do we educate politicians, and communicate to them that it isn’t really about opinions, it’s about reality, and that things need to be run inefficiently to give us robustness in the event of something that we know will happen? We just don’t know when and what.

Margaret Heffernan 39:49
Well, so that’s why I write my books, right?

Jason Kingsley 39:51
So getting people to look at your books, and hopefully read them and understand that that’s the case, is vitally important. One of the problems I’ve sometimes had talking with politicians is that a lot of them – and this isn’t a criticism of the arts – are heavily educated in the arts, which are wonderful. But they don’t necessarily have a lot of advanced education in the sciences, or in this sort of analysis. I’m probably getting into difficult water here, but a lot of politicians just don’t do science and don’t really understand what it is at a fundamental level. Have you found that’s been difficult for you?

Margaret Heffernan 40:32
Well, first of all, most politicians are not educated in the arts at all; they’re merely ignorant about the arts and dismissive of their importance. I think there’s a problem that very few of our politicians these days have done anything that you and I might regard as a real job. You have quite a lot of lawyers and quite a lot of accountants, but not much else. You have, you know, professional politicians, PR people, communications people, think-tank people. So I think you have a very shallow bed of lived experience, and you have politicians coming from a very shallow social base. But I think it’s like any walk of life. I mean, first of all, I think it’s up to us to educate politicians as much as for them to educate us. And I think that, you know, it’s a responsibility we all have to understand the world that we live in, and that doesn’t finish when we’re at school. And for people who are in very responsible jobs, it specifically requires that they continue to educate themselves. But I think one of the things that we’ve seen in the pandemic is that you can be a scientist and still not know what the right answer is, because of course the answer is not in the data. The data can inform the decision you make, but it almost never will give you an unequivocal answer, because it depends on what it is you’re seeking to achieve. And that’s not a numerical, data-driven thing; that’s a value statement. That’s the point at which actually having studied psychology, or philosophy, or the arts will be very helpful to you, in terms of understanding other points of view and other perspectives. But the data will not give you the answer.

Jason Kingsley 42:20
So this is all getting quite grim. It’s sort of saying that we’re condemned to a chaotic spiral of randomness in our society. Do you see things getting better or changing?

Margaret Heffernan 42:35
I don’t think it’s grim. I think what it says is: yes, you have to think. Yes, you have to be aware of the world and think about what it’s showing you. Yes, you have to read. Yes, you have to understand how to reach a good decision. But that’s how you have agency in your life. That’s what it is to be human. Now, if you want to be on autopilot, that’s fine; in this world, you may be a lot happier. You know, you can get a job at Amazon; you will not have to think for yourself at all, ever; you will be told exactly what to do; you will be in heaven in an authoritarian universe. And life will be easy, because you’ll never have to make a choice. But don’t you think that humanity has more to offer: actually using the incredible human skills that evolution has presented you with? That’s what human life is about.

Jason Kingsley 43:30
I mean, we have come a long way as a society. In the broadest sense of the word, humanity has progressed immeasurably far in a relatively short space of time. So our brains are imperfect in lots of ways, but quite good at dealing with the world and trying to build things. I sometimes look at ancient civilizations and think there was a lot of organisation there: relatively unsophisticated technological skills, but incredible craft skills, and incredible abilities of human beings to plan and make things happen. And so I am always quite positive about the future. But I am a little worried about the way algorithms might, not take over exactly, but drive people in certain extreme directions, and how we have to correct that.

Margaret Heffernan 44:16
And I’m worried about the people who want to use them that way.

Jason Kingsley 44:21
Well, we’ve got a lot of that in politics, haven’t we: people predicting what will trigger people to behave in a certain way. There was a really interesting paper I read about the way the brain responds to threats, and how the threat response can predict your political position in an interesting way. I haven’t heard anything more about that paper recently, because I think it’s quite uncomfortable saying that it’s actually at a brain level, a brain-structure level. I think people that were more right-wing, in the classical sense, had a higher response to threat, and reacted more strongly to threat than people that were more traditionally on the left of the spectrum. Is that something you’re aware of?

Margaret Heffernan 45:03
I have a sneaking suspicion that if a lion started to attack you, whether you are left wing or right wing wouldn’t make any difference. You’d feel threatened.

Jason Kingsley 45:12
Yes, you’d probably be eaten. But I did wonder whether there are certain brain types that tend in one direction or another, in terms of authoritarianism or whatever the word is. I always have to be careful with political words, because they mean different things in different contexts. But do you feel there are certain personality types that would respond in different ways? I don’t know that it’s as simple as that.

Margaret Heffernan 45:35
I think personality typing in and of itself is authoritarian. All these personality tests that people place such credence in, which have no statistical validity at all, are tools to try to make people believe they are a type, which is a means of taking away their freedom. And once you start thinking about types, then you can start thinking about better types and worse types, and that’s how you end up discriminating, segregating, and becoming capable of incredible crimes. So I think everything about personality typing is unhelpful at best.

Jason Kingsley 46:18
All right. Well, I think we’ve nearly reached the end of our time. That’s absolutely fascinating. But was there anything you wanted to say to our listeners about your work, how it’s available and where they might find it, if they want to read more? Because they are fascinating books.

Margaret Heffernan 46:33
I mean, all of my books are available in, you know, all good bookstores, independent and otherwise: Willful Blindness, A Bigger Prize and Uncharted are all still in print. There is a new edition of Uncharted out, because I updated it when the paperback came out. And there’s a new edition of Willful Blindness, which I sort of rewrote bits of in the light of Grenfell Tower, Rotherham, #MeToo, and instances like that. But they’re very readily available, and some are even in what few libraries we have left.

Jason Kingsley 47:06
Yes, there’s a whole other topic there about libraries and access to information, which, exactly, we haven’t got time to go into. It’s been an absolute pleasure. Thank you very much for a brief skim through some of your thoughts on the subject, and hopefully we can speak again soon.

Margaret Heffernan 47:24
I hope so. Take care.

Jason Kingsley 47:25
Thank you. Bye.
