Black Lives Matter Special - Canaries In The Coal Mine with Ian Forrester - Show Transcript
This is a special edition of the Tech For Good Live podcast, in support of the Black Lives Matter movement.
In the second of three episodes guest hosted by Ian Forrester (BBC R&D, Cubicgarden), he's joined by:
Ade Adewunmi (Data Strategist, Fast Forward Labs)
David Eastman (Software Developer)
Ethar Alali (MD, Axelisys)
Vimla Appadoo (Co-Founder & Chief Culture Officer, HoneyBadgerHQ)
To find out more about Black Lives Matter, to support the movement or to download helpful resources, visit www.blacklivesmatter.com.
Transcript
Ian: Hello and welcome to a special edition of the Tech For Good Live podcast in support of the Black Lives Matter movement. I’m Ian Forrester and I’ll be your host today, and I wanted to say I‘ve got some great people with me, so let’s start with….Ethar
Ethar: Hi, my name’s Ethar. I’m a founder, technologist and a bit of an activist, working to support people in financial inclusion, which of course has particular, or disproportionate, representation within the people of colour community.
Ian: ok, Vim…
Vim: Hi Ian, thanks for having me on today, I’m Vim, I’m the co-founder of a new organisation called Honey Badger which is looking to build inclusive work environments for people around the world and I also work for an organisation called Culture Shift that’s building a SaaS product to tackle workplace bullying and harassment through reporting and support techniques, and like Ethar I’m an activist...not by trade (laughs)
Ian: great, and David…
David: Hello, I’m David Eastman. I’m an activist, NOT by trade, and an agile developer, and right now I’m finishing a game that should be published next month, and that’s about running a secret police force, so there you go...
Ian: (Laughs) Very apt! And Ade…
Ade: Hi I’m Ade, I work as a Data Strategist at Cloudera’s Fast Forward Labs, helping organisations use data better but with a particular focus on machine learning.
Ian: Okay, excellent, we’ve got an amazing panel here. We want to get straight into a conversation, so we want to start talking about black representation when it comes to artificial intelligence, machine learning and those fields, and we can also start to touch on the tech pipeline questions. I don’t know who wants to start off, Vim?
Vim: Yeah, so I think what I found quite striking, and I think it was towards the end of last year, was the Home Office launching the photo recognition technology on the ‘Get a passport’ service, and the image recognition part of that technology didn’t cater for darker skin. And so it would either fail to recognise the skin tone or would actually revert back to comments saying ‘Your eyes are closed’ if you were of South East Asian origin or.. your.. it was..
Ade: It was ‘your mouth is open’.
Vim: Yeah, for black people. And it just screamed to me, ‘What has happened?’. Like, how are technologists, particularly in government services, still developing technologies that do this? And it trickles down even into what, for me, is the more micro-aggression version of this: if that’s the overt version, then the less overt is having ‘white’ at the top of the ethnicity form, when, you know, it could just be alphabetised. Things like that really just say to me that the technology we’re so dependent on is just not there yet.
Ian: Yeah, it’s just.. reminded me of how bad this is. And it just strikes me that it’s one of two things: either the people of colour are so low down in the chain that if they even mention that there’s a problem they get pushed aside or taken to a room and told to shut up, or there are just no people of colour involved. I don’t know which one is more terrifying and scary?! Ethar, I know you wanted to say something?
Ethar: Yeah, I was actually trying to tie this together with what Ade was probably going to say as well. From my perspective there are two parts: what the government is doing, and there’s also the wider context of AI and the testing of AI, and reinforcing the learning of this in regards to lots of cases. For me, there are obviously a couple of different problems here, and there are also the stances the technology companies have generally taken on this, with some of them, like IBM and Amazon, committing to not tying this into police forces, especially when they are obviously involved in aversive processes. I suppose there’s quite a lot to untangle. I’m interested to ask Ade, who’s an expert in this field: what is it you think that we’ve obviously got wrong, and how do you think we can solve it?
Ade: That’s a really good question. Before I answer I really wanted to just make some follow-on comments to points already made. Firstly, with the Home Office facial-recognition-enabled application - I just wanted to build on Vim’s point and add that the Home Office was aware of this issue. As Vim was speaking I had to go and look at the article that I had bookmarked, and then I’ll get to Ethar’s point. So their response - and this was a BBC article, not one that might take a more strident position on this - the Home Office response was not only that they knew, but that their focus was making the application simple to use, right? So the overall performance was judged sufficient to deploy, and the Home Office told the BBC it wanted the process of uploading the passport application photo to be simple. It added that it would try to improve the system. And also, to make the point about those organisations that have more recently indicated that they will stop supporting the use of facial recognition within law enforcement situations, it’s worth pointing out that many of them have made explicit commitments to avoiding its use by law enforcement only in the United States. And I flag these points because, I guess in response to Ethar’s question: technology has never been neutral or a thing apart from the systems in which it is deployed. So when I look at the Home Office response, it mirrors a pattern of behaviour and a very clear policy intent. So when we talk about what organisations do, I think, for me at least, what I found really hopeful in a lot of the messaging and the conversations that the Black Lives Matter movement is generating is a focus on systems. And I think what’s going to get harder for technologists, and for the organisations that they work in, is to deny the connection between the two. So this argument, which was Amazon’s argument for its Rekognition technology, the facial recognition software it was selling - this was in the face of the work of researchers like Joy Buolamwini and Deborah Raji kind of making the point that you cannot separate out the operation of technology from the training of the people who would be using it, and the context in which they would be using it. So Amazon represented their idealised behaviour; the pushback from the researchers was: what’s the most likely scenario in which it’ll be used? What do you think will happen? And I think, increasingly, technologists kind of going ‘well, you know, I just build the app..’ won’t wash. I’m hoping also that the movement is giving some of these technologists the boldness to go ‘we know it doesn’t work in practice and we don’t want to be involved in the creation of software that we know will in practice reinforce racist policies or reinforce existing biases and prejudices against specific groups’. So that’s my hope. I don’t know if there’s something special that these organisations can do, other than be human in their understanding of their technology and the world in which it is deployed, and not adopt this intentional, but false, narrative of ‘well, you know, we just build the tech. We can’t be held responsible for how it might be used’.
Ethar: I have to kind of echo that, because that’s kind of the intent with which I asked the question. Part of the issue is there’s no such thing, as you say, as a separation. Technology has always been an enabler of the type of system. In fact, we build them these days with the view that actually it has to create some business goal or business value, and if that happens to be systemically racist then the system is naturally going to be systemically racist as well. And from the point of view of the arguments for making it deployable and making it easy to use - I don’t know about you, but if it prejudiced a particular group of people, then it’s not exactly easy to use. All they’ve done is create a system, and there’s no actual correlation between the two. The fact is, it should be just as easy to use for a person of colour as it is for a person who isn’t of colour. So this idea seems quite false, and a bit of an excuse. It’s almost like deflect and diverge.
Ade: Or a more cynical take might be, actually that is a clear statement about who matters.
Ethar: True. Very true.
Ade: and who we believe should be willing to put up with a service that doesn’t work. So in the particular case of the gentleman who attempted to use the tool and couldn’t, his application did not go through. It was rejected on those grounds and he would have to take a less easy-to-use route, and so essentially what the Home Office is saying - and I think it’s an unguarded comment rather than flawed logic - was: ‘Yeah, a percentage of the population will not be able to use this tool, and we’ve decided which bit of the population should be able to use this simply, and we’re happy with the collateral damage that is inflicted upon those who can’t.’
Ian: So we're back to the hostile environment.
Ethar: Basically, yeah, you’re absolutely right. This does create a two-tier system. Do you think that does create.. well.. part of that situation? It’s the fact that we technologists build to the greatest value first. In this case we’ve made an explicit choice that white people have the greatest value in that context, by doing what we’ve done, and said that people of colour don’t matter. Especially when it’s so irrelevant to the point of ease of use. I agree with you, it’s a very explicit statement. We value white people more, for sure.
Ade: And I think - I know our focus here is especially on the impact on people of colour - but you get this all the time for people with disabilities, where an organisation will justify, will acknowledge, that their application doesn’t work for a part of the population and go ‘Yeah, you know, when building our Minimum Viable Product this was considered something extra’, right? So they’ve made the decision. So again, I think technologists have to realise that these are explicit statements and there are built-in assumptions and we can’t run away from them. When you’re building your features for, or when you decide what’s going into, your MVP, and accessibility isn’t on there, or accessibility for certain groups isn’t on there, knowing that the tech doesn’t work for this proportion of people and you still call that an MVP.. that’s not an accident.
Vim: It’s exactly that. And that’s again.. it’s the responsibility of the technologist to call that out in the room because, in my mind, the pressure is coming from the business to ‘get it out there’, ‘test it quickly’, ‘do all this stuff’, and I hold it as my personal responsibility for someone in the room to say ‘well, that’s not going to work for someone with colour blindness or dyslexia’ or ‘it’s never going to work on a screen reader’. It’s such a hard conversation to have, but it shouldn’t be, and I can’t understand why. To go back to, and sorry to cycle back on this, but I was looking for the article.. but the police using facial recognition or AI technology to do law enforcement - there’s a BBC article again where, at the end of last year, the police force themselves started to call out that the AI they were using was telling them to make decisions they wouldn’t have otherwise made, and they were saying ‘actually this technology is not helping us because the data it is using is telling us to make misinformed decisions.’ And it just baffled me. It’s the same with the accessibility questions: it’s putting the onus on the person using the service or technology to call it out when it goes wrong, rather than saying it’s not good enough from the beginning of the process. And, especially in this ‘move fast, break things, keep going’ culture, we still don’t call things out. We still don’t say ‘that’s not good enough’, not when it comes to the things that matter. And I really want that to be one of the outcomes of the movement, particularly in technology: that we get better at calling it out, not just when it’s not going to make us money but when it’s fundamentally not good enough.
David: I just have an example. And it’s sort of to the side of this. So I have what’s called a single person discount in my flat, a lot of you probably do. So I got a letter, and the first was a nice paragraph saying ‘Hi! You have a single person discount’, and the next line was ‘a recent data matching exercise has indicated that you may no longer be the only adult at your address’, and I was like WTF. I’m reading it and basically it went back to ‘oh, we’re not particularly certain, but you know, we just wanted to remind you..’ and then suddenly you realise I’ve been targeted by some obvious algorithm, and they have not pointed out what it is, or pointed out what information they might be holding. This of course doesn’t necessarily relate to Black Lives Matter, but the problem is I have no idea what algorithm they used to do that. And they offer no particular address for it or what information they’re holding. This has been sent, I suspect, to everybody in Ealing who fits this algorithm. And this is such an example of an outwardly broken thing. I then politely, ok, it wasn’t that polite, emailed back saying ‘could you please at least check it against another form of data so you’re a little bit tighter and you’re more likely to actually be accurate?’ It literally could have been anything, like they spotted an address that mentions you or mentioned another person. But this is a reflection of what we’re talking about: people are happy to use AI inaccurately on that agile excuse of ‘oh well, it’s just the first one. We’ll improve it later’.. maybe.
Ethar: Computer says no.
Ian: This is a very explicit example of this very problem.
Ade: But worse, I think, is also giving room to the cognitive bias that these decisions cannot be prejudiced and cannot be discriminatory in nature, because we’ve taken biased humans out of the loop. It’s the computer saying this, so it’s fair and it’s objective. The dangers of that. I mean, in this particular situation you were in a position to not just refute the claims, but you could follow up. But having any path to redress or accountability is not something to be taken for granted in so many of these situations. Decisions are being made that people don’t have the ability to challenge, and they aren’t always made transparently. And I think this is the other thing as well: when machine learning models are being deployed, they are very often deployed in situations where there isn’t sufficient thought being given to the deployment within the application, but also to the humans who will remain in the loop and are having to use that for decision making. And so a situation begins to arise, to Vim’s point earlier, where the police are going ‘huh, this is beginning to skew not just our focus, but also our decision making process’. But imagine a situation where a relatively junior person in an organisation is seeing that their decision making is being skewed, but they are not in a position in the organisation to go ‘I’m going to counter this. I don’t think this is right’. That does two things: firstly, it means that over time the model, or at least its implementation in the application, begins to skew the context, the policy context, and begins to skew the application’s context. But also there is no feedback loop for correcting that, and I’m hoping that one of the things we get to talk about is the importance of the curation of data: where the data is coming from, the systems that have collected that data, the assumptions that were made and how that’s built into this. In this particular case that David cites, you might have been one of the relatively few people who had both the confidence and the time and the inclination to challenge that and send an email or call someone up to complain about it. Now, assuming they have a system, that’s even just assuming that, and they are able to take that information in, what is the feedback loop for future training of that model or correction to that application? In increasingly cash-strapped local authorities?
Ethar: Yeah I guess it was bluffing: it was failure, disconnect. I very much suspect that.
Ian: The feedback is key.. because nobody’s systems are ‘flawed’, or they deploy because it’s ‘simple’.. I’m doing the air quotes.. and there’s got to be feedback, there has to be. Otherwise it would just become this self-fulfilling prophecy. It’s just pointless. I think you’re spot on, we’ve been shaking our heads all the way through this going ‘Yes, yes, yes’ (laughs).
Vim: I wanted to do a mini round of applause every time someone speaks.
(all laugh)
Ian: Yeah. So back to Ethar.
Ethar: Just because I’d actually like to talk about some of that data aspect now. Because I think Ade raised the most salient point about this, and I don’t think it’s just cash-strapped councils, though cash-strapped is obviously part of it. Many of them don’t even know that there has to be a feedback loop in the process. Because the problem you’ve got is that it’s like training a child, or yourself, to walk or ride a bike: it has to continuously experience the feedback for it to actually improve. If you don’t, then you assume that what you’ve got is correct, and that is a kind of dangerous place to be, because if you’re training on something that’s unrepresentative, or has given David a partner he didn’t know he had, then that creates a situation where in principle the silence reinforces that issue. Now, how are organisations who aren’t even aware they don’t know about feedback going to deal with that? The unknown unknowns. It’s quite a difficult thing. Do you think there’s an onus on society to create that expectation, that there must be feedback? Is there an education part here? Is there something civic tech should be doing? I suppose that’s kind of an open question from me. Certainly as an activist I did quite a lot of this, certainly in my CAB years, well before AI was used mainstream. You started to see a lot of this stuff appear, and it was surprising that actually people had to go to bodies like Citizens Advice to find information they didn’t know they needed to know. And in fact sometimes - I don’t know if people remember when the changes to incapacity benefits came in, this was probably sort of the early 2000s. The overnight change to ESA and the assessments that went with it created a step change in the number of people we started to see who were poor, because of the number of people affected by being, wrongly I would say, categorised as fit to work in those cases. We had people who were assessed and had to appeal, and the appeal rates were ridiculously high. At one point, in the beginning, we were getting 90-95% success at appeal, so that’s a broken system. And if you started to create an AI around that understanding, the first thing it’ll do is reject everybody who’s entitled, or who the council has a duty of care to, or the wider society does. This is my fear: that the unknown unknowns then reinforce this position going forward. It’s kind of a worrying state. In terms of data, where else could they collate it where the unknown unknowns are not known, if that makes sense… (laughs)
Ian: Go on Vim, I know you wanted to say something on this.
Vim: I was just going to say there’s a spectrum of this understanding. There was an example in the US of a court that was using AI to determine sentence lengths. The AI was based on historic racist data in the US, which showed young black men getting sentenced for longer, feeding the system so that young black men kept getting sentenced for longer. So you can see what was happening. But the court, or the state, had signed a contract which meant no appeals could be made against the sentences, and they had no ability to see the code because the IP was protected. And that’s what I meant by the spectrum, of it going completely wrong. If we work backwards from there, I think we can build a vision of a data policy and understanding we would feel comfortable with, but the horrifying thing is.. that’s a real life event, that’s happened already. And so it’s not just a question of the role of civic tech, it’s the role of government to prevent that from happening as well.
Ade: It’s like, just to build on that point. I think sometimes when people hear people of colour talking about the impact of disproportionate policing or prosecutions, or even just not being considered in the development of technology - because these are old problems I think there’s a temptation to just go ‘oh well, that’s the world, what can you do?’, and I think this is where solidarity becomes really important. But I think at the base of effective solidarity, first you need empathy: you need to be able to perceive situations outside of your immediate context, but there’s also a creative component and an imagination required. And I think one of the ways, at least when I’m speaking to friends that aren’t black or Asian or from other ethnic minority groups - one of the things I actively invite them to do is to think: eventually these technologies will grow, their use will expand, and very often marginalised groups are the canaries in the coal mine. The things that marginalised groups are complaining about are the things that are eventually going to get you too. And I cite that because - this is an example from the States, and I always feel bad looking over in horror at what could be coming our way - one of the really early examples, even before the introduction of computers, so before a machine-learning-based approach or implementation for sentencing guidelines, when these algorithms were manually applied, or at least when there was no machine learning involved, people were already very aware of the problem, but it was happening to the groups that are historically marginalised and people kind of shrugged their shoulders and went ‘ugh, you know, the world, what can you do?’. But I remember reading in Cathy O’Neil’s book two examples that I found really useful when inviting people to empathise better, and specifically to imagine themselves - so just take the imaginative leap and try to perceive a world in which this might be happening to you. And the first, again, was a black box algorithm that had been procured by a school district in the States, and I’m embarrassed to say I can’t remember the specific state. But this was a teacher, and I don’t think she was a person of colour, which I think is in part why it made the news, who had been shown to be really effective, but for whatever reason this black box model had decided that her pupils were not improving sufficiently and she was fired. When her union got involved, her school district’s response was ‘well, that’s what the algorithm says and it’s a black box algorithm’. And to Vim’s point, for commercial reasons they had entered into a contract that means the organisation is not required to explain this to us. And you can see why an organisation that has not done its due diligence would (laughs) seek to prevent or reduce its liability by refusing to demonstrate the inner workings, and in this particular instance the models were such that interpretability was shot - I don’t think they even could. But what was interesting was, aside from anything else, its usefulness in trying to get people to think more broadly about how these things might affect them too, even if they’re not in a historically marginalised group.
The second thing was something as simple as procurement guidelines. There’s no way that should ever even have been procured. And that goes back to the point about people buying things they don’t understand because they’ve been bamboozled, but also because of poor enforcement, and this is one of the things I’m hoping we get time to talk about: the impact of both light regulation and, secondly, poor enforcement. If you’re working in a context in which you’re constantly pushing against the boundaries, missteps will happen. The way you address that on a societal scale is through regulation and effective enforcement. And I think one of the things that’s been really interesting but also disheartening - and to this point, I think it speaks to what gives technologists, or just people within organisations, the boldness to speak out - is the existence of regulations they can reference when the business is pushing, to be able to go ‘but this is the liability we become exposed to, because we know there’s effective enforcement.’ Every time an organisation behaves in a reckless manner and the regulator fails to enforce existing regulation, it just makes it that much harder to be the voice internally pushing back and saying ‘that’s a risky move’. And I think this is yet another one of the things that doesn’t absolve technologists of their responsibilities, but again removes this from individuals and takes it to a structural perspective. These are the safeguards we put in place as a society to make it easier to do the right thing. That was a bit rambly, but yeah.
All: that was amazing.
Ian: Virtual clap! The book is called ‘Weapons of Math Destruction’, it’s well worth a read. I’m trying to find that example but yeah, it’s fantastic.
Vim: I was actually going to say, alongside the governance and the procurement strategies that need to be put in place, I think it even needs to go through to PhD-level research applications, to prove that the data set you’re collecting is inclusive before you embed it in the learning. Not when you get to build your solution. If we’re not getting it in at the point where you’re understanding the application of this, you’re not going to get it further down the line. Again, it needs to be systemic and across the board.
Ian: It feels like there’s an opportunity for some organisations to really… I mean, I know a lot of organisations, meaning media and product companies, have been pledging support for Black Lives Matter. But hey, if you really want to make a change, this is where you start, you know, rather than just putting a hashtag out on your channel. Really make some sustainable changes by changing the way you do this, rather than just ‘oh this is a thing, here’s some money, forget about it and let’s keep doing what we’re doing’. It just beggars belief that we don’t have this level of transparency and policy, clear, in law. It should be there. We could talk all day about GDPR, and there’s been a lot of talk about reforming it because it hasn’t had enough teeth, but I totally agree with you. Without that there’s no way you can be in a position where you go ‘look, if we do this, this is going to happen’. If it’s just ‘don’t worry about it, we’re innovative and cool and breaking all the rules, move fast and break things’, it’s all good. You know.
Vim: The frustrating thing is that with anything that deviates from the norm, people see it as ‘oh, it only affects them’ or ‘it only affects women’, whatever it is. But anything that deviates from this very slim norm affects so many more people than the general population realises, and that’s back to Ade’s point about empathy: it’s going to happen to you too, and it’s that kind of depth that isn’t known. I think it was Ethar talking about the general knowledge around this, people’s understanding of it, but we can’t put the onus on the individual to try and learn this. We have to take the systemic route, otherwise you end up with socio-economic problems, opportunity problems or access problems.
Ade: I think also, Vim, just to add to that point: when, as a society, we fail to take structural approaches and push the onus onto the individual, one of the things that starts to happen, to your point about ‘where’s the evidence - it only happens to this small group?’, is that we begin to desensitise ourselves to the pain that is caused. So how much pain should people of colour or people with disabilities or other marginalised groups, how much should they take on in order to educate the system? And what is enough pain? I know this to be true for many people of colour: you know, you kind of go ‘Hmm, this thing happens, and it happens fairly frequently to me, I think there might be a problem there’, and the first thing that mainstream culture tells us is ‘where is the evidence, though?’ or ‘I’m sure there’s a good reason that thing keeps happening to you.’ And then, five years later, someone will have painstakingly collected the data, and the research will come out, and they’ll go ‘oh look! Turns out those people weren’t living in some absurd fantasy persecution world, it turns out that was real!’ and you’re like ‘but that’s five years?’ - and that’s assuming the researchers have immediately taken this on board and actions are immediately taken on the back of it, you know, wouldn’t that be lovely? What we’re actually teaching ourselves as a society is ‘yeah, five years of pain for this group is an ok consequence, we’re ok with that’, and it’s the same groups that are consistently marginalised and consistently bearing the brunt and the pain of malfunctioning systems - well, that’s just the world, isn’t it? I think there’s an underestimation of what it does to the societal psyche to continue to dump on certain groups, and of the desensitisation that takes place in our hearts as a society at large. I think there’s a real underestimation of that; it has a real corrosive effect.
David: I suppose the COVID example may be a minor counter-example. I suppose this time around, people working in the health service said: ‘No. No. We don’t want to wait for a government report. We want to know NOW why there’s been a marked increase in the number of BAME deaths within the NHS and the caring services.’ Very unusually they didn’t sit back and go ‘oh yeah, right, there’ll be a report in five years’. The BBC covered it; there was a specific example where a doctor said ‘my father died two days ago, and I want to know why now, because I’m going in to do work right now, what’s going on?’ It’s a nice counter-example. Unfortunately it’s only now, and unfortunately due to this emergency, that the BAME community have in fact stood up and said ‘look, let’s look at this now. Let’s not wait’. I’m very glad that happened.
Ade: But even on that score David, first they had to die.
David: Oh yes, of course!
Ade: Right, so first that had to happen, and then after that people went ‘hmmm, it might be an issue and there’s an opportunity here to apply Occam’s razor and go: what is the most likely reason this might be?’ and still have people trying to explain away the disproportionate number of deaths. But even more heartbreaking for me, it hasn’t had a material effect on the behaviour of organisations.
David: No yeah, not at all.
Ade: So in this case even the deaths are not enough. So at best, people are going ‘yeah, loads of black and brown people seem to be dying at a disproportionate rate, this is worrisome, especially if it means that this is something the wider society could be affected by as well’.
David: Oh Sure, yeah.
Ade: Doesn't seem as localised as you know, we might be comfortable with, but even then that's not enough to then bring about action.
David: Oh yeah, you’re right. It was when it looked like a shifting problem that people started to look. I’m absolutely agreeing; if it had stayed in a small secluded area there might have been many more deaths before anyone would bother publicising it.
Ian: I think this comes back to the question of empathy. When people die you’ve got to have some empathy around that; you can’t just sit back and go ‘oh, we’ll wait five years, all those people will die, meh. Oh, there’s a problem, maybe we should sort that out later on?’ No. Not good enough.
Ethar: Yeah, I was just gonna add and build on what Ade just said. This was actually something that I was also going to bring up, which is the point of what I call spatial structural racism - ‘everywhere you go you experience certain events’, which seems to be quite common - but also temporal structural racism, where things keep happening; that latter one ties into this five-year problem. Because we have two ways of making legislation in the UK: we have parliamentary processes and we also have case law. And they interact, but the problem with case law is that by default having to take it to court means that something has had to have happened; it’s retrospective in nature. So you have a situation where you pass legislation in parliament, say, and it’s disenfranchised particular minority groups, and you’ve then got this issue where the minority group have to go to court, and take however long that takes to get justice, assuming they do, to then start changing these laws. Now, we’ve started to see that obviously with disabled groups, but also with people of colour, people in unions and wider society. But by its very nature it is temporally retrospective, so we’ve actually got this situation where, when you have a systemic racism problem, we have to look back through time to see how and why that was caused, and if you’re just waiting for people to act, or to die, or to attend to a particular issue, or take to the streets, that itself is then too late because the events have already occurred. You can’t bring people back from the dead, and that’s kind of my disappointment in all of this: there are temporal systemic issues that we force people to go down much harder routes to solve. It’s disadvantageous, there’s no two ways about it.
Vim: Yeah, and one or two points on that. So the first point, around the coronavirus and BAME communities dying more: from what I’ve seen, and I don’t mean to speak on anyone’s behalf when I say ‘we’, but we still get the blame. It’s still because of how we live, the jobs we take, our decision to be frontline staff, and it even gets to the point of racial slurs: ‘that’s just how Asian people live in Leicester, and that’s why there’s a spike’. And you know, it’s that frustration that just, honestly... I can’t even describe it. It’s kind of like waiting for things to happen - again, not waiting for things to happen, but having to take to the streets and waiting for it to get so bad before the protests or before action. Again, really frustrating, because it’s that thing of, like, ‘well, slavery, colonialism’... for people that understand the roots of this problem it’s blindingly obvious, and it goes back to that burden: it’s for us to re-educate and re-explain, it’s for us to emphasise. It’s a lot. It’s just a lot.
Ian: I also want to kind of find a way to end on a good note, but I kind of feel like.. this needs to sink in. And I’d rather leave people on this note of thinking: really, really think about your empathy and the things that you face at work. Are you actually going along with it because it’s the new thing and everyone’s going with it, or are you actually consciously going ‘no, this is not right in these cases, there’s something here’? That example of the canary in the coal mine is exactly the example I always use as well. If it’s happening to the canary, it’s going to happen to all of us; it’s going to happen to me. You know, think about that.
Ade: That is an act of imagination; there’s a work of imagination that is necessary here, I think, whether it’s within organisations or broader structures. And I think at the individual level, for fellow technologists, it’s to say ‘just think a little bit harder. Imagine a little bit wider’, right? This stuff isn’t... you know, if you grew up reading books, if you grew up reading fiction, surely this is not something outside of our capacity? You know, the technology that we work with is a function of our imagination, right? If you can imagine a world in which this new technology and these new capabilities are possible, then is it such a leap to imagine that the things that are happening to marginalised groups could then be something that might affect wider society? Like, this doesn’t even seem to be a big leap of imagination, but I would love to see more imagination in these discussions, just a little bit more curiosity.
Ian: I think that is a perfect place to end. So that’s all that we have time for today. Thanks for listening. If you have anything to say about what we talked about, and I’m sure you do, let us know on Twitter @Techforgoodlive or email hello@techforgood.live. We would love for you to join the discussion, we really would. To find out more about the movement, to support it or to download helpful resources, visit www.blacklivesmatter.com. Thanks to the wonderful podcast.co for hosting us on their platform. If you’ve enjoyed listening to the TFGL podcast, please consider giving us a friendly review on iTunes. You can find out more about us at techforgood.live. Thank you very much! Would the panel like to say goodbye?
All: Goodbye!