Elaine Kasket is the author of All The Ghosts In The Machine, an investigation of what happens to our digital possessions and data after we die. She’s also working on a book about the data that is collected about us throughout our lives. Here, she talks to Jason about digital wills, privacy, and what happens when our smart fridge starts talking to our other devices.
Jason Kingsley 0:01
Throughout our lives, we each create a digital footprint: some large, and some small. Elaine Kasket is the world’s foremost expert in what happens to that data when we die: who has access to our social media profiles, our emails, our search histories, and the documents and media we’ve created throughout our lives. We’ve only just begun to confront the thorny issues around the electronic privacy of the dead and what counts as an asset that you can pass on to your descendants, and the law is still lagging behind. She joins me to discuss what data has been collected about us, how it might be used, how this is likely to develop in the future, and quite frankly, whether we should be worried about it. Welcome to Future Imperfect.
Should we start with you explaining your knowledge of what actually does happen to our data when we die? Because this is a fairly new phenomenon, isn’t it? Or maybe it’s not?
Elaine Kasket 1:06
Well, of course, human beings have always generated information about themselves that outlived them, whether that was on cuneiform clay tablets or everything that’s happened since. But we’re in a pretty unprecedented period of time where the sheer volume of what is captured and stored is, of course, like nothing else ever in human history. And we’re also in a period of time where all of that information we store up is largely controlled not by us but by big entities that write the rulebook for what happens to our data when we die. The answer to your question is, by default: it sticks around. There aren’t any good, clear, effective mechanisms for identifying what data belongs to, or is attached to, a person who’s now deceased, and no clear rules for what then ought to happen to that information, which is just sort of piling up and piling up and piling up on the surface of the world and, quite literally, gently heating up the world’s atmosphere with a lot of fairly, I’m sure, irrelevant information. Irrelevant but still potentially monetizable and useful in various ways to the people who hold it. So absolutely, we’re in completely new territory, and I think people are just starting to wake up to the fact that this is a problem, because the citizens of the digital age are going to be starting to die in greater numbers, and it’s going to become an even more pressing dilemma what happens to this data. The data of the deceased can actually have strange connections, which aren’t easy to anticipate, with the privacy and the concerns and interests of the living, because the data of the dead stay in their social networks, all mixed up with the information of the living. So it’s hard to do something with, or access, the data of a deceased person without simultaneously accessing a whole lot of other people’s information as well.
Jason Kingsley 3:18
Well, this is a weird corollary to that, because of the whole thing with social networks reminding you of something that happened five years ago, or ten years ago, and wanting you to repost it, or whatever it might be. Some of the time it’s really interesting; you think, oh, that’s nice, I remember that. And then sometimes you think, I really didn’t want to be reminded of that. I’d happily parked that unpleasant memory, and thank you very much, Mr. Social Media Network, you’ve just brought something up which I’d forgotten and don’t really want to remember. It’s surprisingly intrusive, and it can be good, and it can be bad. So I would imagine that the idea of preserving records for your next of kin is a wonderful thing, but also problematic. Having huge amounts of video of your loved one is a good thing, perhaps, but is there too much data? Do we really want to know all of that?
Elaine Kasket 4:14
It depends completely because, of course, that could always happen before, right? You could always be surprised by a memory, whether that memory is occasioned by walking past the restaurant where you last saw the person you’d lost or whether you come across a scrap of paper or something in a box in your attic. But as far as the records that we’ve historically kept, surprises are less frequent, and we have a bit more control over that: if I don’t want to look at those pictures, I don’t go to that box in the spare room or whatever it is. So yeah, this algorithmic delivery of memories, in a way that you can’t control, absolutely can be good or bad, depending on the person, the bereavement, the moment and all of those things. But the problem is the comprehensive nature of what’s logged and stored. If a person can get access to a deceased person’s device or accounts, that’s often a portal to everything. And within that there might be stuff that’s comforting. There might be stuff that’s downright disturbing. There might be stuff that’s ambiguous and awakens questions that you didn’t have before, that the person is no longer around to address or resolve. So it kind of blows everything wide open. Think about the stuff we post on social media: that’s fairly performative stuff, I guess, even if we might subsequently think, Oh, I wish I hadn’t posted that, as we get older and wiser. At the time, we do that with deliberateness. But there’s a whole bunch of other stuff among the digital traces that we leave behind that isn’t deliberative, that isn’t performative, that isn’t intended for other people’s eyes. If a person doesn’t regularly erase their search history, for example, or the list of all the websites they’ve ever visited, including the ones at 2 or 3 in the morning when they’re anxious about something or feeling some kind of way, that can be an incredibly intimate autobiographical document for somebody who has access to that laptop after the person’s gone, and it can completely change your vision of your relationship with that person, or of who you felt that person to be. That’s a really difficult thing for a lot of bereaved people. But we’re assuming access here, and a big problem is that a lot of people don’t have access. If something’s password protected, two-factor verification, all that stuff that’s really good to have these days when you don’t want your accounts compromised or your identity stolen, those perfectly sensible things are a problem after a person’s death. Both for sentimental information – pictures, all the family pictures; it’s often one person in a family who’s the keeper of certain archives, like photo archives – and for practical stuff, like, where did this person have their bank accounts? You might not even know that, because it might be concealed behind the apps on the locked phone. And that problem is the one you tend to read about more in the press: my husband, or my father, or my mother died, and Apple or Facebook or whoever won’t give us access to these things.
Jason Kingsley 7:23
What are the rules currently? Are they changing all the time? Does it vary by jurisdiction? I imagine it varies by jurisdiction what they’re allowed to do, because a dead person has a different kind of rights than a living person, in English law anyway, as I understand it. So, for example, you can’t defame somebody who’s dead in English law, I think, although there are some privacy rights for public, famous people. Churchill, for example: Churchill’s family can protect certain aspects of his character. But you can say things about Churchill that are lies, and Churchill can’t sue you, because he’s not here anymore.
Elaine Kasket 7:23
Yeah, wealth and privilege and fame and things like that seem to give you a different level of sort of dignitary rights, I suppose, which is a bit different than privacy. But certainly, for example, under GDPR, the General Data Protection Regulation in Europe, which the UK rules mostly conform to, they said dead people don’t have a legal personality: deceased persons aren’t covered by this regulation. And they basically said to all EU member states, figure it out, however it seems right for you. You figure it out! That was a really, really, really bad mistake, I think, partly because dead people’s data is definitely connected to living people’s data in a way that can compromise living people’s privacy. GDPR really needed to make a clear stand on that, because it’s basically left a really hot mess in its wake. Laws about what you can leave in your will, how that works, and what constitutes a legal will – all the rules around that are some of the most variable laws in the world. It isn’t like copyright, which has a sort of global convention; it’s actually really different everywhere, and it can be super local. And that’s a problem, because then the super local meets these really big international, global companies that are in charge of everything. But generally speaking, what covers things when you deal with something on Facebook, or Apple, or Google, or Amazon is contract law. And that contract usually says one account, one user, that’s it. So no, we can’t transfer the ownership of, or the access to, that account to somebody else. Sorry, no, you can’t rock up to the Genius Bar in the Apple Store and say, Hi, here’s a death certificate, here’s this phone, can you help? They won’t do that. So I think that’s what catches a lot of people out. They think, well, I’m the spouse, or I’m the child, I’m the next of kin: why are you not treating this the same as you would this dining room table or whatever else I might be entitled to? People haven’t really made that leap to understand just how different digital stuff is to physical stuff, and just how little control we have over it, because we ceded control of that data to those platforms when we signed up. And so those platforms are largely in charge of what happens to that data when a person dies. People can get a really expensive court order to get hold of an archive of this or that, and there have been successful cases like that. But it’s a real hassle, and it’s an expense.
Jason Kingsley 10:46
And we shouldn’t have to do that. We shouldn’t have to go to court to get things that should be our possessions. I mean, there’s a system in place for pretty much every other aspect of our lives, with probate and death certificates and all this kind of thing, and it seems insane that people haven’t thought about this. But on the other hand, people put operable code in emails thinking it will just be fun and interesting for people, completely ignoring the concept of malicious actors. Absolutely extraordinary. Sometimes tech only thinks of the good it can do and never considers the bad.
Elaine Kasket 11:26
I think it’s very preoccupied with, Oh, what can we do with this in the moment, and things aren’t designed with the end in mind. They’re very much designed for right now. And you said possessions a minute ago, and this is the tricky thing. If you have an iPhone, for example, does the iPhone belong to you? Yes, if you’re not leasing it. Does the account belong to you? Do the account contents that you’ve posted or put in the cloud belong to you? Well, -ish. The problem is, you know the saying, possession is nine tenths of the law; the update of that is, access is nine tenths of the law. If you don’t have the access, if you’re not that person, nine times out of ten, and probably a bit more than that, you are out of luck. The average person, who’s not really dedicated and resourceful, and possibly with deep pockets, is not going to be able to do something about that access. So here we are, eliminating our paper offices and blindly scanning in all the papers on our desks, thinking, Oh, this is great, I can save it in the cloud. And then something happens to you, and the people who are tasked with sorting out the estate might not be able to get hold of anything at all that they need, and might be trotting around every single High Street bank to ask, did so-and-so have an account here? Because it might not be clear. So as you can see, this is a dilemma that has lots of tails to it. The other thing about possessions, as I said earlier, is that so much of our information is connected to that of other people. Here’s this email thread that I have with another person, here’s this Facebook Messenger thing, or this WhatsApp thing, or here’s this exchange on social media: multiple individuals put their input into that. It’s a co-created, co-constructed thing. Person A is deceased and doesn’t have a legal personality anymore. Persons B, C, D, E, F are alive, they do have a contract, they have the rights to privacy that they contracted to on this platform. So, you know, we’re not at the point where we can just unreel the data pertaining to the deceased.
Jason Kingsley 13:41
So you could have a situation where a living person, who has a more complex set of rights than a deceased person, doesn’t want the information to be passed on to a third party, but the third party is the inheritor of the deceased party’s stuff. There’s a conflict, a natural conflict there. There might have been a good reason for it, there might have been a bad reason for it; it could be anything, as humans are complex creatures. So you’ve got a massive tension there between a live human being who sort of inherits the stuff in normal, real-world terms, and somebody who doesn’t actually want to be associated with it.
Elaine Kasket 14:18
Absolutely. One of the people that I talk to about this area a lot is a guy called Albert Gidari, who is retired now but who worked at Stanford; he’s a privacy lawyer. He’s litigated a lot of these things to do with the data of the deceased, with big companies and so forth. He was describing how one of the first cases he did in this area of law, about 16 years ago, involved a young man who was riding his motorcycle back to university and had a crash and died. His family was very keen to get hold of the content of various accounts, and the law said that they couldn’t, but they didn’t understand that and they wanted to litigate over it, and so they went to Albert, and he found a way to help this family that involved going around to every single person that that young man was in communication with, getting permission to be able to access the stuff, and then getting the content. But the sting in the tail is that it all turned out very badly. Because what happened was kind of like I was describing before: the young man wasn’t really who his family thought he was. It exposed a lot of secrets, which then became kind of their last memories of him. It also exposed a lot of other family secrets and problems. The mum found out that the sister actually didn’t like her very much and had some problems with her. And all of these conflicts happened as a result of accessing these things. So, Albert says, the golden rule for him in this area is: be careful what you ask for, because you might not like it if you get it. People have in their heads, when they’re really craving someone that they’ve lost, or they’re really missing something, or perhaps when they want to know something, if there’s something troubling or mysterious about the death or about the relationship, they think, Oh, if I can just get access, I will feel better, something will be resolved, or I’ll answer this question, or I’ll feel closer to the person. And I think oftentimes it ends up being more complex than that.
Jason Kingsley 16:19
It’s almost like retrospective eavesdropping on somebody’s life. They often say that, in listening for information about yourself, you’ll never hear anything good said about you, so you’re better off not doing it. A lot of people use social media as a form of diary. Some people, as you say, use it as a performance art: their life isn’t what they say it is, but it’s part of their persona. Other people seem to use it just as a sort of memoir: here’s a photo of me doing something. So that’s a kind of mundane diary, and everything in between. And this is fascinating, because it isn’t the whole person. It’s a fragment of that person’s personality; in a way, it gets fossilised and frozen. But I was also thinking about historians and my love of history. You know, the Dark Ages used to be called the Dark Ages because there wasn’t that much known about them, and as we’ve uncovered more, we’ve changed the word; it’s not called the Dark Ages any more, because there is quite a lot of information about the period now. In some ways, you could argue that we’re perhaps entering a new Dark Ages here, because the data might not be preserved. Everything is so mundane that it’s thrown away. You won’t have the Samuel Pepys diaries, because nobody’s really bothering in the same way, and, you know, they’re not that interesting. Maybe they are in aggregate. But if they were all switched off and thrown away... You could imagine a situation where social media decide, we’ll just delete all of this, but I don’t think they ever will, because they want the information.
Elaine Kasket 17:47
Well, I think it’s only valuable to them for a set period of time, because one of the things that they can do with the data of the deceased – because the deceased person is out of contract, and they can do whatever they want with it – is mine a lot of information, or draw a lot of inferences, from that deceased person’s data about still-living users. But that’s only going to be useful for the period of time that it can be used to sell those people something, or to kind of keep those people on the hook in the attention economy, right? And so a lot of it might be kept around, but there have to be systems for identifying what’s important to keep and what can be jettisoned. And of course, a lot of values might go into that: the values and the biases that go into these decisions about whose data is important enough to keep and whose data can be jettisoned. That’s really dicey territory there too, I think. And even if the data is still there, if people can’t access it, if all of the parties who might have an interest in it or a benefit from it cannot access it, it might as well not be there. I’ve used that very phrase before, that we could all be living through a kind of early-digital-age Dark Ages, because we have this conceit, this phrase, that Online Is Forever. We kind of use it as a warning, saying be careful what you post up here, because online is forever, that will always be there. It’s almost lulled people, I think, into a false sense of security about the longevity and the accessibility of their data. I think it’s hard for people to imagine their loved ones won’t be able to get hold of those pictures; they just sort of assume that they will be able to, and that they don’t need to plan ahead for that stuff. So yeah, I think that could be very possible. You’ve got these two polarised possibilities. You have absolutely everything, to the extent that the amateur genealogists of the future are just so bored because there’s no chase, there’s no hunt. Oh, you want pictures of your great grandmother? Here’s 76,000 pictures from Instagram or whatever. That’s boring; it doesn’t really have the thrill of the chase that it seems to have for genealogists looking back at the 19th century or whatever. On the other hand, there could be nothing at all. No answer to where you come from or who your ancestors could be: I’ve got no idea, there’s just no record that’s been preserved of them. Because at a certain point in the early 21st century, everybody stopped recording things on paper and printing photographs out and doing other things.
Jason Kingsley 20:22
I was thinking about that the other day while trying to access some data. It was from 10 years ago; it wasn’t that long ago. I went to a specialist and they said, We don’t have the cables for those machines anymore. I said, That’s only 10 years ago, and you’re supposed to be an archive specialist. I would expect that if it were 100 years ago, but apparently it’s just not worth keeping that data around. They could get some cables built – custom cables made for very large amounts of money, which we’re doing because we want the data – but that was an exceptional circumstance. You know, your grandmother’s pictures of her down the same pub that she was drinking in for the last 30 years of her life, which are lovely if that’s your important thing, but basically all the same, more or less, are arguably just going to dissolve away entirely. And there’s not even a sort of chemical record of it, like there is on some manuscripts that are palimpsests, where words have been scraped off and, with certain viewing techniques, you can get the data back a bit.
Elaine Kasket 21:20
Absolutely, the picture underneath the picture, you know.
Jason Kingsley 21:23
Exactly, the Dead Sea Scrolls. But this data is literally gone; the data is gone forever.
Elaine Kasket 21:30
Yeah, you know, carbon dating? Okay, you can carbon date, but you’re right, it doesn’t have that materiality. This seems unimaginable, because we’re in such a data-saturated, data-rich, always-too-much-of-it environment. Again, it’s hard for us to process that it could go. You know, 50 years from now, you want to see a papyrus scroll? No problem. You want to see something from your Instagram account 50 years ago? No, sorry. Stuff from the Old Kingdom in Egypt could still be around; that could be easier than a MySpace account from 2004. In fact, there was a data loss – rather than a data cull – incurred through a data migration at MySpace about a year and a half ago, where they were like, Oh, sorry, we were migrating this data from here to there and we lost all of this information. But you know, data culls are going to be necessary. Ideally, we’d move to a position of more digital minimalism, where less of this data was tracked and captured and kept in the first place. But since we haven’t done that, and we don’t seem to be any closer to it, there are going to have to be decisions made about it, because it feels to me just not okay for all of this surplus data to be stuffing the data centres of the world and getting cooled down by whatever technologies are cooling it down. That really bugs me, that that’s happening. But then you get to the point of, all right, who’s making the decisions about this? How do we decide, oh yeah, you’re important enough to be remembered to history; you guys, not so much, we don’t really care about you. It feels so contrary to, for example, the 1990s vision, Tim Berners-Lee’s vision, of the internet as this democratising, inclusive kind of thing, that we could end up just perpetuating inequalities, because certain people might be able to afford it. It might be a monetary thing eventually, right? Like in the 19th century in Père Lachaise cemetery in Paris or wherever: oh, you’re a big guy, big money, yeah, sure, you can have a massive great monument, but the huddled masses, here’s your unmarked mass grave over here. And we could actually have a digital version of that, because a rich white guy in Silicon Valley can afford to be remembered in perpetuity, because he can pay for it or his estate can pay for it. Somebody else, a person of colour in Central Africa somewhere: we don’t remember you. So that is problematic.
Jason Kingsley 23:53
Maybe Twitter will archive all those tweets from people with blue ticks, but anybody that doesn’t have a blue tick... That’s a sledgehammer solution, isn’t it? If you’re not verified, we don’t care about your data?
Elaine Kasket 24:05
No. And you know, Twitter very recently moved to memorialization, like its colleagues at Facebook and Instagram, and I was really upset about that decision, because – as important as memorialised accounts are for many bereaved people – I really wish that social media platforms, which weren’t designed for this purpose, had never said yes to memorialization in the first place. I feel like that would have returned responsibility to the bereaved people to whom that deceased person was important: to maintain their own archives if that’s important to them, or to remember the person in a way that they decide themselves, rather than it being yet another thing that we depend on social media to do. Hold on to my deceased loved one’s information for me. Be there when I go to it. Design it in a way that controls or anticipates my emotional experience. I feel like shunting that responsibility onto social media companies, and their assuming that responsibility, is a step in the wrong direction. So I wish that they’d pushed back on that. Facebook started memorialising accounts properly after the Virginia Tech massacre. Social media platforms like Facebook used to have a delete-upon-death policy in their terms and conditions, which is the same thing that email accounts and other things had. And then the Virginia Tech massacre happened, at a university campus in the United States in 2007, and the bereaved appealed to Facebook and said, Please don’t delete these accounts; these have become sites we can go to to remember these people. And so Facebook designed this thing where the profile could be frozen. You couldn’t log into it anymore; nobody could get access to it again once the user died. But people could continue to visit it. So people’s in-life social media presences would be converted into a memorial when they died. And over time, with various iterations of policy, this became something that happened essentially by default. Twitter had not done that. And Twitter in late 2019 said, Oh, hey guys, just so you know, we’re going to do a big cull of inactive accounts soon, in December, so if you haven’t logged in for a while and you want your Twitter account, log in. Immediately there was a big outcry from bereaved people saying, you can’t do this, because an inactive-account cull is a delete-upon-death policy with a delay. So within 24 hours, Twitter backed off, saying, Oh no, no, sorry, so sorry, we’ll look into a memorialization policy, which they’ve now done. I just feel like it’s such a step in the wrong direction, because it’s essentially saying, Okay, technology companies, it’s your job to be there for us in our remembrance and grief, and so we want you to design this memorialization thing for us. And the fact that there was such an outcry, people saying it would be wrong for you to delete these accounts – how can you do this, this is immoral, this is terrible, it’s like killing them all over again – I feel unsettled by that level of dependence on these platforms.
Jason Kingsley 27:43
Especially private organisations that are controlled by for-profit motivation. They have shareholders that want to make money; it’s driven by a commercial component. You could argue that if it was at least government-sanctioned, then there would be some diminishment of that for-profit perspective. And sometimes people want to be forgotten. We have this concept in Europe, the right to be forgotten, even when you’re alive. And I could imagine a situation where people actually want the social media of the deceased to be forgotten, got rid of...
Elaine Kasket 28:22
On some social media, like Facebook, you can say: I want my account deleted upon death. That’s an option. You nominate a legacy contact, or you say, I want an archive to be downloadable, or you say, I want to be deleted upon death. The thing is, under current UK law at least, you can tick that box all you want; if challenged in court, it’s actually not legally binding, because ticking something on a social media platform doesn’t hold up in law in the UK. It does in most states in the United States, but it doesn’t here. So people can be merrily ticking away, thinking they’re seizing control over their digital legacy, but if somebody were to challenge it in court, I don’t know what would happen: that hasn’t been tested yet.
Jason Kingsley 29:01
Yeah, I’ve come across a slightly odd thing: the idea that people want you to remove their data from your database, but you have to keep a record of the fact that they’ve requested the data removal. You have to keep a memory of the people who have wanted to be removed. It’s a really weird concept: we have to remind ourselves that you’ve been deleted. How does that work?
Elaine Kasket 29:25
Once you’ve written yourself into the Book of Life – or rather, the digital Book of Life – you can’t unwrite yourself. It’s amazing how hard it is to remove yourself or unwrite yourself, partly because there are all sorts of things that are now capturing data about us that we have no control over. The forward march, the onward march, of facial recognition technology in so many of our physical environments is something that people are, I think, sleepwalking into. The installation of virtual home assistants that are voice-activated and ambient-listening, like Alexa on the Echo and things like that, is happening. All of these things produce data in and of themselves, data that are triangulated. Something that’s really huge in the United States – less so here, but I’m sure it’s coming – is baby surveillance: monitoring the physiological, like pulse oximetry socks for infants, sold as peace of mind. Parental surveillance is being normalised as good parenting. So we’ve got people accumulating that digital reflection, that digital footprint, from way before they have any control over it. All of these data points, taken in aggregate, are identified with that individual by the time the person hits 13, or 16, or whatever the law is in your land. So it’s not just about our deliberate social media stuff. All the things that used to be called surveillance and wiretapping are now being sold to us as fun and convenient, and as safeguarding us, and as something that, if we don’t participate in it, we’re somehow not going to be fully paid-up members of digital society and we’re going to miss out on stuff. Capturing data has been really fetishized and candy-coloured and, you know, encouraged.
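To make Jason’s question above concrete – how can a system remember that you asked to be deleted without keeping your data? – one common pattern is a suppression list that retains only a salted, one-way hash of the erased identifier. Below is a minimal sketch of that idea; the function names, the salt, and the tiny in-memory “database” are all hypothetical illustrations, not any particular platform’s implementation.

```python
import hashlib

# Hypothetical sketch of a GDPR-style suppression list: the personal record
# is deleted, and only a salted, irreversible hash of the identifier is kept,
# so the system can "remember" the erasure without retaining personal data.
SALT = b"example-deployment-secret"  # assumed: a per-system secret, for illustration

def suppression_key(email: str) -> str:
    """Derive an irreversible token from an identifier slated for erasure."""
    return hashlib.sha256(SALT + email.strip().lower().encode()).hexdigest()

users = {"ada@example.com": {"name": "Ada", "history": ["..."]}}
suppressed: set[str] = set()

def erase(email: str) -> None:
    users.pop(email, None)                   # delete the actual personal data
    suppressed.add(suppression_key(email))   # keep only the hashed marker

def may_reenroll(email: str) -> bool:
    # e.g. stop a marketing re-import from silently re-adding an erased person
    return suppression_key(email) not in suppressed

erase("ada@example.com")
print(may_reenroll("ada@example.com"))       # False: the deletion is remembered
```

The design choice is that the hash can confirm a match against a known identifier but can’t be reversed into the identifier itself, which is roughly how a deletion request can be honoured and enforced without re-collecting the data it deleted.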
Jason Kingsley 31:22
I think it kind of comes down to people wanting somebody – you know, this paternal instinct that society sometimes has – to tell people what they should do. Take the Internet of Things idea of your fridge ordering new food for you when it realises the milk is one day older than it should be. Quite frankly, if we have milk that’s one day old, you deal with it: you maybe have slightly off milk, and you make another cup of tea, but who cares? But once your fridge can do that, the next step is stuff suddenly appearing in your fridge that you didn’t know you’d ordered. Then you get the situation where the fridge says, no, I’m not going to order that full-fat milk, because it’s not good for you.
Elaine Kasket 32:08
Yeah, I’ve been talking to your scales, and that’s not allowed this week.
Jason Kingsley 32:15
And the AI says, You can’t: the medical AI that I’m also connected with says that you’re in a certain risk group, therefore this is the list of foods I am willing to order for you.
Elaine Kasket 32:26
Yeah, because your smartwatch has been monitoring various things about your health all week, and then there’s the data from the scale, and then there’s the thing from the refrigerator, and then there’s the connection to your medical records, because you’ve signed off on your watch communicating with your GP or something like that. So this isn’t science fiction. If people sign up to it and say, Oh, yes, please, I’d like that, that’s fine, and all the companies involved in making these technologies will rejoice, because, for example, the baby-monitoring pulse oximetry sock that I was referring to might go for £350, but that’s not where they’re making their money. They’re making their money from selling the aggregate data of these children. We’re sold this idea that technology can eventually eliminate any and all friction from our experience, and that includes grief and bereavement. With Facebook’s last iteration of legacy contact and memorialization, there was a press release in April 2019 saying they were making it easier to honour a loved one after they pass away. They talked about the elimination of all these pain points, as though grief, the sadness at a loss, were something to be ironed away, something that technology could control, that could be reduced by good design, as though that’s what we want. And yeah, fine: instinctively, do we shy away from pain? Yes. Do we go towards pleasure and entertainment? Yes. But I don’t consider it to be a laudable aim for technology to remove sadness or grief or pain or any of those things from the wide spectrum of human experience, because it leaches the meaning out.
Jason Kingsley 34:15
It sanitises life and death. As tragic as death is, it’s also a spur sometimes for people to say, Well, I’d better get on with my life: I’ve got a finite number of years, and there’s a thing I’ve been meaning to get around to doing, and this personal tragedy in my life, as awful as it is, is a springboard to doing something else with my life, and that is a good thing. And to say that death is something to be avoided seems actually like a tragedy. It seems like they’re just selling something, which I suppose is what it actually is. They are just selling something.
Elaine Kasket 34:53
Yeah, your data. But you know, to your point about the reminders that we were talking about a little while ago: one of the things that Facebook did in April 2019 was say, Oh, we’re aware that birthday reminders for a deceased person are probably really painful for people. So even if a profile hasn’t been memorialised yet – if the family hasn’t requested memorialization, or the death hasn’t been detected – we’ll use artificial intelligence to divine from the contents that the person has passed away, so that birthday reminders will stop. They’re making this assumption that it is going to be universally painful for every person to receive a birthday reminder, and that it’s in their gift to control for that reaction and control that pain away. A couple of mothers that I talked to for my book, who’d lost kids in their 20s, said they were devastated when the birthday reminders stopped, because that’s something that they really wanted. Grief is so idiosyncratic, and grievers have agency; they’re not passive. They knit together narratives of the dead person and their relationship with them, or of themselves as bereaved people, that change over time. They use all sorts of things, including but not limited to whatever is available digitally, or the absence of things being available digitally; they weave that into their story. They have agency. They’re not just passive things to have their feelings controlled by anything or anybody. That’s not what grief is.
Jason Kingsley 36:29
So in a way, we’re talking about social media tending to hit this problem with a sledgehammer and make decisions for us all, as opposed to giving us a suite of options which we may or may not want to adopt. Would that be your suggestion for best practice in this area? Or is this something that’s going to continue to develop?
Elaine Kasket 36:49
My suggestion for best practice is that, over time, the social media companies that have assumed responsibility for memorialising things by default walk that back, and say, Okay, after somebody’s death, there’s this fixed period of time in which anybody who has a stake or a wish can download only that content that they could already access – for that to be easy, for it not to be in some kind of weird format, but something that they can get easily. And after that, it’s done: we eliminate that person’s data from our servers. That’s what would be ideal in my book. A sort of medium, satisfactory solution might be something more like giving, as you say, individual users much more fine-grained control over what they want to see and what they don’t, and allowing there to be some flexibility in that. Because one day somebody might think, Oh gosh, I don’t want to see this, but I don’t want to delete it; they don’t want to do something that they can’t undo. So there should be flexibility in that. But I just don’t think the best solution is for something to be available ad infinitum. Now the problem, of course, comes with history. You referenced history, and here we have the largest archive of human interaction and behaviour and events ever kept. It’s an incredible archive, isn’t it? It’s amazing what is being captured about these times that will then be available to the folks of the future. And yet we know how many problems there are with this archive. For one thing, our behaviours are being so expertly nudged and manipulated by these platforms themselves. So, of what is this an archive? It’s an archive of behaviour that the platform helped cause and nudge and provoke in the first place. It’s both the designer and the recorder of our behaviours, individually and collectively. You might know about BJ Fogg’s lab at Stanford: it used to be called the Persuasive Technology Lab, and now I think it’s called the Behavior Design Lab. Both fairly sinister names for laboratories, perhaps. A lot of the people involved in these technologies in Silicon Valley are graduates of, or at least took a couple of modules from, this guy. I think we underestimate so much the extent to which our behaviours, individually and collectively, are being shaped, and one result is that surveillance has been so normalised, and that we have come to expect and value it, or rely on it and depend on it. That’s happened in an extraordinarily short period of time, with technologies that once upon a time we would have found quite suspicious. So we’ve been sort of snookered into this in a short period of time. So I’m thinking, okay, this is an amazing historical archive, but a lot about the archive is untrustworthy. It’s going to be as hard for the historians of the future as it is for us now to discern fact from manipulated fiction and propaganda. So it’s a complicated archive.
Jason Kingsley 39:57
I mean, that’s always been an issue for historians, and the further back you go, the more you have to interpret things, and often you interpret them with the gloss of your own experiences as well. 20th- or 21st-century historians interpret history differently than 19th-century historians did, particularly thinking of that sort of Victorian age of wealthy gentlemen plundering, going around the world collecting things for their museums. It was a particular moment in time when certain things were done which are now being undone in quite a large way. I wonder how our society will be interpreted in another 300 or 400 years by people in the future, looking back on us and going, They were insane to do that – utterly, utterly insane. Literally, they paid for recording devices to be put in every room of their houses; literally, they carried around surveillance devices that weren’t even government-sanctioned. They paid big money for it, and proudly showed everybody what they were doing, and yet big business was monetizing them and getting them to spend money on things. Or will they go, Wow, what an amazing depth of knowledge we now have. We don’t have to analyse history; we’ve got so much data we can tell you exactly what was going on with any one person on a Tuesday, and it’s very boring and mundane. That’s the other problem: a lot of history is quite mundane, just people surviving. Or, as you said, will there be this complete absence? We just don’t know. We have roads, we have buildings, we have a few paper records, but not many. But weirdly, there was a vast explosion of data, and it’s all gone.
Elaine Kasket 41:46
Yeah. Or some solar flare, some kind of white-out. Absolutely. And you know, about that possible insanity and the judgments of the historians of the future: Tristan Harris, who was the first person at Google to have the Design Ethicist title there, went to Stanford and encountered the BJ Fogg laboratory, and then went on to found the Centre for Humane Technology. He’s the one who does the podcast with his colleague, Your Undivided Attention, which is an excellent, excellent podcast; everything the Centre for Humane Technology does is fantastic. But when he was a guest on another podcast, he was saying that for the last 10 years we have been the subjects, the willing participants, of the greatest, vastest psychological experiment or manipulation ever. And it’s still ongoing, and it doesn’t show any signs of slowing; if the continued uptake of Internet of Things technologies is a guide, we’re still lapping it up. I think that all of these companies have become really expert at manipulating us, using discourses of convenience and frictionlessness and entertainment and safety and freedom from all sorts of things that we fear, to sway us, to influence us into buying this stuff and ticking those boxes that say, Yes, please share my data. I know that Internet of Things means surveillance devices in your home that are going to sell your information in all sorts of ways that could disadvantage you, that could make you poorer, that could influence your child’s future on the behavioural futures markets – data that can be re-identified with them. I know all that. And now and again, I’ll make a decision that goes against that knowledge, even though I care deeply about this stuff. And every privacy person and tech person that I know says the same thing.
Jason Kingsley
I was wondering whether the lack of data is a problem, because to get credit, you often need to have a credit history, and without that you cannot get money. If you don’t have the track record, if they can’t see that you borrowed money and paid it back, you usually fall into this sort of weird loophole: you may be good for the money, but we have no record of you, so you can’t have it. Could you imagine a situation where: you haven’t had the surveillance devices in your home, we don’t know anything about you, therefore we won’t give you any of these things? Because we don’t know anything about you, we can’t trust you. The default is, unless we have data on you, we can’t trust you. Which is really quite worrying, because it means that if you don’t embrace the surveillance technology, the surveillance world, you will be disadvantaged. Not just because nobody has the data and so they won’t give you anything; effectively, you don’t exist.
Elaine Kasket
100%, the answer to that is yes, and it already happens. It already happens in another way, which is that it’s very standard practice for a potential employer to do a search on applicants. I know how I feel if I search for somebody and that person doesn’t come up, and I can’t find anything about them. Say it’s a psychologist that I was thinking of referring to, because I don’t have space in my diary or in my clinic: if I can’t find enough information online to make a judgement about that person, I’m probably not going to make that referral unless I already have close personal knowledge of them. So that’s not information that’s necessarily derived from surveillance, per se, but it’s a reason for people to be visible and present and sharing data online, because it’s this passport to legitimacy or trust. If you’re not visible, if your data is not there, you’re somehow stamped as a less-trustworthy person. And of course, some of these decisions – about mortgages, about bank accounts, about whatever – are in large part going to be made algorithmically. There are sometimes errors in that, as we’ve seen pretty recently with the A-Levels. That was one of those things in the UK where a lot of students were disadvantaged because of some algorithmic decisions that were made about university places the other year. We should be looking at that and saying, See this? See what happened with all of these kids headed to university, and this decision that got made, and the mess that that was? Welcome to it! Apply this to all sorts of other realms of our life. This is a Cassandra moment right here that we should be paying attention to, and not seeing in isolation as, Oh, that was a one-off thing. It’s illustrative.
Jason Kingsley
But it is seen as a one-off aberration. I’ve seen computer code, and I’ve seen the comments that go along with the computer code. I’ve literally seen sophisticated pieces of software with: DON’T CHANGE THIS BIT. Nobody knows why, but if you change it, the system doesn’t work anymore. And that’s the human-readable component of it. People leave those sort of Here Be Dragons components in the code. They just go, don’t touch it, because every time anybody touches this bit, it’s a disaster, and we don’t know why. And I’m always talking to people about YouTube, and about algorithms. I get the feeling that there is no human being who actually knows how the system works anymore, that the systems are so complex that it’s not possible to know how they work anymore. You can have some ideas, and those ideas can be closer to how it works, but the systems are so complex that, in fact, it’s not plausible to actually know how it works or how it doesn’t work.
Elaine Kasket
That’s the definition of big data. We’re at the point where the smartest person in that room is the room, right? People have a hard time wrapping their heads around big data. By definition, big data is a kind of post-theory landscape in which all that matters is correlation. You have this impossibly huge fund of data that’s being searched all the time for correlations that might make no logical sense. We don’t know why people who drive a Ford Focus in Wyoming are very likely to buy that brand of beef jerky, but you know that they do, so you just make sure that you keep on working that correlation. But the problem is then the human actors, because humans are humans, and we have a hard time not deriving causation from correlation. Based on our own prejudices and biases, we draw causal explanations for things that are correlations, or we just don’t question the correlation. So when we’re talking about baby surveillance and monitoring kids – and of course, surveillance is now pretty much being equated with responsible parenting, and non-surveillance is starting to be equated with neglectful parenting – it is worrying that this starts from pre-birth onwards. Let’s say that, from all of this data they’re deriving from all these infants wearing these pulse oximeter sock things, they see a correlation over time: that kids who had a particular reading when they wore the socks are this much more likely to have a heart attack or some sort of coronary event at the age of 49. And guess who’s going to be interested in that data? Health insurance companies, in places where people depend on health insurance. And what would be a really convenient time to not renew that health insurance policy? Let’s say 48. It doesn’t mean that person is going to have a coronary event, because it’s a correlation, but this is the kind of landscape we’re in. And this is why these kinds of data – the physiological stuff and all this monitoring and tracking – are being sold to us as being for our health, so we can be more aware of our health. And maybe, yeah, we are. We love looking at our watches and saying, Oh yes, okay, I must get more activity today. That’s great and everything, but unless there’s something that explicitly forbids it, or we’ve opted out of it in some sort of way, that data, anonymized or not, is being fed into things that might adversely affect us or disadvantage us way down the line, because a lot of these applications of data have yet to mature.
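As a toy illustration of that post-theory, correlation-first mindset: if you mine enough unrelated signals, “interesting” correlations appear by chance alone, with no causal story behind them. This is a minimal sketch using pure random noise in place of real behavioural data; the feature indices mean nothing, which is the point.

```python
import numpy as np

# 200 unrelated "behavioural signals" for 500 people: pure random noise,
# standing in for the kind of data trove being described.
rng = np.random.default_rng(seed=1)
n_people, n_features = 500, 200
data = rng.normal(size=(n_people, n_features))

# Feature-by-feature correlation matrix; zero the diagonal (self-correlation).
corr = np.corrcoef(data, rowvar=False)
np.fill_diagonal(corr, 0.0)

# A correlation miner just reports the strongest pairs, causality unknown.
i, j = np.unravel_index(np.abs(corr).argmax(), corr.shape)
print(f"Strongest pair: feature {i} vs feature {j}, r = {corr[i, j]:+.2f}")

pairs = np.argwhere(np.triu(np.abs(corr) > 0.15, k=1))
print(f"{len(pairs)} feature pairs exceed |r| > 0.15 despite being pure noise")
```

Run it and the miner confidently surfaces feature pairs that move together, even though by construction nothing causes anything here – which is the trap Kasket describes when such correlations get applied to insurance or credit decisions.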
Jason Kingsley 50:19
So you could, as a baby, have been surveilled without any knowledge of it yourself. You’re effectively on a list that you have no idea about. Then you’ve lived your life, and suddenly, at 48, your insurance company cancels everything, because that was scheduled to be the case, but you didn’t know about it. And you’re thinking, I don’t understand, nothing has changed in my life. And they say, Yes, but your profile as an infant was such that you’re high risk now, so forget about it. You didn’t know; you were chugging along normally, and suddenly the rug is pulled from under your feet because of something that was found out 40 years ago.
Elaine Kasket 51:00
That’s right.
Jason Kingsley 51:01
That’s quite terrifying. So there are these traps in the road for us in the future, potentially, that we won’t even have knowledge of until suddenly we fall into them.
Elaine Kasket 51:10
Yes. And of course, the gatherers of these data might not have knowledge of them yet either, because it might be that the twist or turn comes where they think, Aha! Now that we have triangulated this with this with this, now we see an application for this data. The context is changing all the time. So if these data do stick around, and somebody has this data shadow that follows them and burdens them throughout life, there might be all of this data that seems useless, right? Like, why not get rid of it? Why not just jettison it, get it off the servers that are heating up the planet? Well, I’ll tell you why. Because when big data gets bigger, and bigger data gets bigger still, that’s many more unexpected correlations that could come out of unexpected places. That’s what big data does. It finds connections that no human being, no organisation, nobody could find or understand on their own. And big data doesn’t care about theory or causality. The people benefiting from the correlations that big data finds don’t care about that kind of thing. They just care about how they can protect themselves, or make more money, or be more secure as a result of knowing about those correlations and applying them to decisions like health.
Jason Kingsley 52:23
The research is completely inhuman and amoral.
Elaine Kasket 52:27
Exactly. Yeah. Not immoral, amoral. Atheoretical, amoral, non-causal. It’s post-causal, it’s post-theory, it’s post-moral. It’s just utterly utilitarian and pragmatic.
Jason Kingsley 52:39
It’s like a science fiction nightmare slowly unravelling, and we’ve found ourselves thoroughly enmeshed in it without realising, whether we liked it or not.
Elaine Kasket 52:49
I’m not quite sure that slowly is the word I’d apply to it now. I think it’s accelerating.
Jason Kingsley 52:55
We should probably wind up, but do you think there’s a way to put the brakes on this? Does this come from government and society deciding this is not acceptable, and basically saying, No, you can’t do this, it’s illegal?
Elaine Kasket 53:09
Absolutely. And I think that one of the things that’s happened up to this point is that organisations have been very much pushing it onto individual consumers and users and data sources to make decisions about their data. But this is a kind of trade-off fallacy when there’s such inequality in power and knowledge and comprehensibility. We go around all the time saying, Oh well, no, I do want that good or service, I’m perfectly happy to exchange my data for it because I get it free, so that’s fine, how can it possibly hurt me? But that’s a fallacy, because that person actually doesn’t have any power, and doesn’t have any knowledge about what could happen to them or theirs as a result of relinquishing their data, and it’s really not okay. We’re basically being cannibalised for our data in ways that are going to unfold, potentially, into really divisive and unequal and unfair and profit-driven kinds of things. And so it’s a very bad direction. The only thing that can really put a stop to it is all of these corporations and organisations being brought to heel, but that requires dismantling a business model that’s now really bedded in, and a lot of the governments – especially in places where lobbies and things like that have a lot of power – are kind of in its pockets. You don’t necessarily get a whole lot of super-savvy tech specialists within government itself. You get people advising, of course – people like the Centre for Humane Technology advise governments and regulators – but governments and regulators need to stand up and say, No. We need to be electing people who stand up and say no, and who aren’t themselves profiting from relationships with the corporations that do this.
Jason Kingsley 54:55
Yeah, and if governments and society won’t do that, then my understanding of history is that when the legitimate routes to modifying aspects of society that you don’t like are closed off – the French Revolution, for example – what results is armed insurrection. It’s the final way of disagreeing with something: if your government won’t actually do something about it, and you feel strongly enough about it, you’ve got nothing left to lose. Is that kind of extreme level of correction possible in a society? That way lies madness these days. It would be unfortunate.
Elaine Kasket 54:56
I definitely don’t want to cap off a podcast by encouraging armed insurrection.
Jason Kingsley 55:37
There are plenty of science fiction stories where humanity is controlled and ruled by artificial intelligence. And the only option is to dig tunnels underground, and defend yourself against the robots and the overlords of society. And it’s not that implausible.
Elaine Kasket 55:54
It’s a cliché, the war with the robots and man versus machine and everything like that. It sounds like such a terrified Luddite sort of thing, and it’s a big trope in science fiction that probably goes back to the Industrial Revolution. I’d been using the term Luddite for years without knowing who the Luddites were. And then I found out they were factory workers in 19th-century England who were going in and smashing up the mill equipment, gathering under cover of darkness on the moors to go and smash everything up, because they were worried about what was going to happen.
Jason Kingsley 56:27
It was Ned Ludd, wasn’t it?
Elaine Kasket 56:31
Him and his Luddites? Ned & The Luddites sounds like a great name for a band. So of course, we’ve had this fear for a long time; we have a fear of new things, an anxiety about innovation and everything like that, that kind of instinct to feel anxious about innovation. So I get it, but I feel very strongly that this is different territory that we’re in. We’re probably more at risk of exactly that man-versus-machine robot war than we’ve ever been in the course of history, so smelling the coffee is required.
Jason Kingsley 57:00
Or we might come up with new terms, because apparently the French ‘saboteur’ comes from throwing your shoes into the machinery in a very similar way. So the Industrial Revolution has got a lot of models for us to examine to see what might happen. Now, I think they were a bit more into direct action back then. There were troops called out against social movements in this country and other countries; bad times came out of industrial revolutions. And you could argue that this is another industrial revolution, and that if you look at history, there’s a fairly significant possibility of other bad times coming as a result of it. Hopefully that won’t be the case, but we’re not that far from it in many ways.
Elaine Kasket 57:45
And those were simpler times, when this kind of mass influence below the level of our conscious awareness largely wasn’t going on. So that’s the concerning thing about now, I think, the greater challenge: that almost mass hypnosis that’s happened. That’s a tough one. We’ve gotten to the point where we kind of need to be saved from ourselves, and I’m not quite sure who those saviours are or will be.
Jason Kingsley 58:10
Well, it was absolutely delightful talking to you. Thank you very much. It’s really interesting. You’ve mentioned your book and everything. Is there anything else you wanted to mention, for people to follow up on this?
Elaine Kasket 58:21
My book looks at a lot of these things, but we’ve gone way beyond the scope of what I talk about in that book, which is very much about death in the digital age. I’m working on another book called Exposed: A Life In Data, which goes from pre-birth to post-death and looks at our relationship with technology and privacy all the way across the lifespan. So that’s a work in progress, slightly set back by pandemic life and everything like that, so I don’t think it’s going to come out as soon as I’d hoped it would, but I’m slogging away at it. The more I find out about this landscape, the more I am fascinated and concerned in equal measure. I just really want to encourage people to try to stay awake, and to stay reflective, and to think, and to always pause before giving away too much of your data. It’s a really difficult thing to catch this in flight, because it’s so automatic now, but I really just encourage people to slow down and think.
Jason Kingsley 59:22
Wonderful. Thank you so much. That was really interesting, and hopefully we’ll speak again soon. I’d like that.