The Vergecast

The ethics of AI with Google's AI lead Jeff Dean

2019-06-04

What are tech giants like Google doing to tackle the ethical issues that surround artificial intelligence? Verge senior reporter James Vincent speaks with Google AI lead Jeff Dean and Verge editor-in-chief Nilay Patel about AI bias, facial recognition, and government regulation around AI.

Learn more about your ad choices. Visit podcastchoices.com/adchoices

This is an unofficial transcript meant for reference. Accuracy is not guaranteed.
Support for the show comes from Shopify. Ready to run a successful online business? All you need is an idea, and Shopify can take care of the rest. Shopify is a commerce platform to help run your business, and their flexible templates and powerful tools make building and managing your online presence simpler than ever. You can sign up for a one-dollar-per-month trial period at shopify.com/voxbusiness, all lowercase. Visit shopify.com/voxbusiness to see how Shopify can take your business to the next level today. That's shopify.com/voxbusiness.

Hey, I'm Ryan Reynolds. At Mint Mobile, we like to do the opposite of what big wireless does. They charge you a lot; we charge you a little. So naturally, when they announced they'd be raising their prices due to inflation, we decided to deflate our prices, due to not hating you. That's right: we're cutting the price of Mint Unlimited from thirty dollars a month to just fifteen dollars a month. Give it a try at mintmobile.com/switch. New activation and upfront payment for three-month plan required. Taxes and fees extra. Additional restrictions apply. See mintmobile.com for full terms.

Hey everybody, it's Nilay from The Vergecast. This week we have something a little bit different; we're trying an experiment. I think it's going to go really well, but let me know what you think. I've got James Vincent with me. Hey, James.

Hey, Nilay, how's it going?

It's going great. James is our excellent AI reporter, and he interviewed Jeff Dean, who is head of Google AI, the head of Google's AI research. You might have seen Jeff on stage at Google I/O talking about AI; he actually ended the keynote with a big presentation at the end. James interviewed Jeff a while ago, but AI moves so fast that there have been a bunch of things that have happened since that interview,
so I've asked James to join us. We're going to run a bunch of clips from James' interview with Jeff Dean, head of Google AI, but because so many things have happened since they talked, I've asked James to walk us in and out of the conversation and provide the newest information. James, let's start with the basics. Tell me a little more about Jeff Dean. Who is he, and why is he important?

So Jeff Dean is a really big deal for numerous reasons. He's an engineer's engineer. He has been with Google for a very long time, and he helped develop some of the technologies that helped them scale in the very early days, so he was essential to that growth, basically, and to today's success. And recently he was made head of AI, so he oversees all of Google's AI products and services, and AI research as well. And, you know, as Sundar is always saying, Google is an AI-first company now, so he's basically in charge of Google's entire future at this point. He's the guy who is going to point it in the direction that's going to continue securing its dominance in all these various fields for the next twenty, thirty, forty years, however long that may be.

I just think it was interesting that at I/O, literally the entire keynote ended with Jeff Dean just explaining what he's doing and what they're up to. So, there are some big issues with AI. You report on them every day, we talk about them all the time, and they're quite obvious now; I think they've become common knowledge. And the first big one is that when you train these models, they tend to reflect a lot of biases, and you have to counteract them in some way.

Yeah. So AI bias is, you know, one of the big
worries and talking points about AI as it's getting integrated into all these real-life applications. It's a really complicated topic, and it's even difficult to define what AI bias means, exactly, because there are lots of different ways to use the term bias. You can use it in a statistical sense, where it doesn't necessarily mean anything other than an error in results that tends to skew in a certain direction. But of course bias also has human meanings, has other meanings in society. And when we talk about AI bias, we're talking about an intersection of these two things: we're talking about a technical error that reflects a societal prejudice, or that embodies a societal prejudice.

Can you give me an example?

Well, yeah. So think about the ways that algorithms are being used in the real world: they're doing criminal sentencing, they're judging creditworthiness, they're recommending medical treatments. It's at these points, where an algorithm that has been trained on data is making a decision that affects someone's life, that this matters. And a fantastic example of this is with an algorithm used to screen résumés, to judge applicants for jobs. There was this great story, I want to say it was from Reuters, about Amazon, who had built an internal tool that they used to screen résumés. This tool had been trained on data about the sort of people that Amazon likes to employ and who does well at the company. And, you know, the engineers looked at all that data and were like, great, we're going to use this to learn what is a good fit for Amazon. And then, when it came to actually
analyzing these résumés, it turned out that it penalized women as applicants. CVs that said the applicant had attended an all-women's college, or résumés that even contained the word "women's," as in "women's chess club captain," would be downranked because of that. And that is because the algorithm had just looked at the data it was presented with, which was a male-dominated workforce, especially in engineering, and it reflected that data back. I should qualify that Amazon never launched this tool; they saw these problems during the development process and they shut it down. However, that is exactly the sort of bias that does make its way into live systems that are making decisions about people's lives.

So you asked Jeff Dean, who runs all of AI at Google. What did he tell you?

So Jeff had a lot to say about this. It's a subject that Google are, you know, pretty forward-thinking on, and they talk about it a lot. And he told me that, as you would expect, behind the scenes it's a big deal for Google. They do not want bias in their systems, and they are developing all sorts of things to try and counteract it. They have their AI principles, which kind of set out how they want to deploy AI in the real world, and they also do some very good, cutting-edge research in terms of developing practical tools for engineers that, you know, counterbalance all this.

But is that response to all these issues really enough?

Well, one fact that is really telling about how these systems are working at the moment is that, yes, Google has its AI principles, but Google published those AI principles after it came to light that the company was building tools for the Department of Defense, using algorithms to help build systems that could analyze drone footage. There was this huge outcry, the Project Maven stuff, and in response to that, Google was like, okay, you know, we need to
calm the backlash, and that's when they came up with these principles. So I think there's this dynamic where these companies are doing stuff, but are they doing enough, and are they only doing it when they get called out for it?

Issues of bias and fairness in machine learning and AI are front and center, because often what you do to solve a machine learning problem is you take data from the world, you collect some training data set for your problem, you train a model, and then you can maybe do that thing faster, more efficiently, hundreds of times per day, and that's fine. And the issue is that often the data you're training on reflects the world as it is, not the world as we would like it to be. So, for example, let's say you're training a model to predict who should get a home loan. We all know that there are lots of kinds of biases in the home loan process, and even if you don't include things like someone's race or gender in the input data, machine learning models are very good at learning, you know, picking up on patterns. So it can pick up on patterns like, certain zip codes mean you should say no to a loan, or whatever, because that's what the data you're training on actually showed. And so there's a whole sequence of research work, a whole line of research by the entire community, really, on how you actually take machine learning models and remove certain kinds of bias but keep other kinds of bias, because you can't rob the models of all bias; that's sort of where a lot of their power is. In a language model, you want it to learn that the word "surgeon" is associated with "scalpel" and "carpenter" with "hammer," right? But you don't want it to, unfortunately, learn that the word "doctor" is associated with "he" and the word "nurse" associated with "she." And it does that because of the nature of the textual data we're training on: doctors are more often referred to as "he."
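To make those associations concrete, here is a rough way to probe them in off-the-shelf word vectors. This is an illustrative sketch, not anything Google uses; the choice of GloVe vectors and the simple he/she similarity gap are assumptions of mine.

```python
# Probing gender associations in pretrained word vectors (GloVe via gensim).
# Assumption: the "glove-wiki-gigaword-100" model; first run downloads ~130 MB.
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")

def he_she_skew(word: str) -> float:
    """Positive means closer to 'he', negative means closer to 'she'."""
    return float(vectors.similarity(word, "he") - vectors.similarity(word, "she"))

# The useful kind of association: tools of the trade.
print("surgeon~scalpel:", vectors.similarity("surgeon", "scalpel"))
print("carpenter~hammer:", vectors.similarity("carpenter", "hammer"))

# The unwanted kind: occupation words picking up gendered usage in the text.
for word in ["doctor", "nurse", "surgeon", "carpenter"]:
    print(f"{word:10s} skew toward 'he': {he_she_skew(word):+.3f}")
```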
I think Jeff makes a very important point here, which is that there are, in a way, no bad players in this, right? The problems arise because the algorithms are reflecting disparities in the data, and to a certain degree that's what an algorithm is supposed to do: it's supposed to look at the data, look for patterns in it, and then work out which patterns matter. However, the really tricky thing here, and why AI bias is such a huge, messy, convoluted topic, is working out which of those biases are things you want and which are things you don't want.
So what starts out as a technical question, how do we make these systems work, ends up being a huge question about how we want society to work. You know, what judgments do we think these machines should be making?

Every time this subject comes up, it arrives at that point, right? Which is that what you're really talking about is our reflection. The algorithm isn't racist or evil; it reflects society at large. Like so many conversations in tech at this point, it's: we are just building a mirror to society. But there are some solutions to these problems, right? There are ways to move forward without having to wholly re-architect society.

Yeah, we're not without recourse in this situation. It's not like with Facebook, where, you know, there is nothing we can do; there's stuff that is being done. So a big problem, and an obvious problem to take on, is data. If you are training an algorithm for a facial recognition system, for example, and you train it purely on white faces, when it has to identify or analyze a person of color, it's not going to recognize that data. You know, just on a pure training level, it hasn't seen that before, so it won't know what to do. That's a relatively easy problem, because you just have to make sure the data you're training the algorithm on is representative of the task it's taking on. But there are much more difficult problems that are to do with the basic technical structure of algorithms, and one of those is this "black box" problem, as it's often called, which is that we can't get algorithms to explain why they make certain decisions.
If you have an algorithm that is, let's say, detecting lung cancer, how do you know what it is using to identify cancerous nodules in your lungs? Answering that is a really tricky thing, and it's something the engineers at places like Google, and at universities as well, are trying to deal with. So this is what I asked Jeff. I said, Jeff, what are you doing about this?

Recognizing that bias can occur is a really important first step. Then there are algorithmic techniques you can apply to say, I would like everyone in these two different groups to have the same chance of achieving a certain outcome, everything else being equal, and that can help. There are also algorithmic ways you can adjust the output of the model to make that the case; there's some nice work by Moritz Hardt and others in this space. And we've built tools that help you visualize your data, your training data, and understand what kinds of predictions your model is making. But this is not a solved problem. This is an ongoing thing where, when we use AI and machine learning today, we apply the best-known techniques to remove bias, but we're also continually developing better techniques that enable us to do that.
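To give a concrete feel for "adjusting the output of the model": the equality-of-opportunity approach from Hardt and colleagues can be done as post-processing, picking per-group decision thresholds so that qualified people in each group are approved at the same rate. A minimal sketch, assuming entirely synthetic scores and groups; this is the idea in miniature, not Google's tooling.

```python
# Toy post-processing for "equality of opportunity" (after Hardt et al. 2016).
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)            # two demographic groups
qualified = rng.random(n) < 0.5          # ground truth: should get the loan
# A biased model: scores for people in group 1 run systematically lower.
score = qualified * 0.6 + rng.normal(0.2, 0.15, n) - 0.1 * (group == 1)

def tpr(threshold: float, g: int) -> float:
    """True positive rate: share of qualified people in group g approved."""
    mask = (group == g) & qualified
    return float(np.mean(score[mask] >= threshold))

# One global threshold gives qualified people unequal chances by group...
print("global t=0.55:", tpr(0.55, 0), tpr(0.55, 1))

# ...so choose each group's threshold to hit the same target approval rate.
target = 0.80
thresholds = {
    g: np.quantile(score[(group == g) & qualified], 1 - target)
    for g in (0, 1)
}
print("per-group:", {g: round(tpr(thresholds[g], g), 3) for g in (0, 1)})
```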
We definitely heard Sundar talking about this at I/O this year, and this seems to be on everyone's mind.

Yeah, a hundred percent. And he was talking about a specific tool that Google's been developing called TCAV, or Testing with Concept Activation Vectors, which just rolls right off the tongue. It's all about approaching this black box problem I mentioned. So it's saying: when you have a neural network that is analyzing a piece of data, which bits of the neural network are firing when it makes a decision? And then you can kind of dig into that like you would a physical circuit board, and trace where the activation juice is flowing, and sort of pick it apart.
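The core mechanic of TCAV, reduced to a toy: learn a direction in activation space (the "concept activation vector") that separates examples of a concept from random examples, then check how often nudging activations along that direction pushes a class score up. The tiny network and fabricated activations below are stand-ins of mine, not Google's implementation.

```python
# A toy version of Testing with Concept Activation Vectors (Kim et al.).
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

torch.manual_seed(0)
hidden = 16

# Pretend these are hidden-layer activations recorded for images showing a
# concept (say, "stripes") and for random images. Fabricated for the demo.
concept_acts = torch.randn(100, hidden) + torch.tensor([1.0] * 4 + [0.0] * 12)
random_acts = torch.randn(100, hidden)

# Step 1: the CAV is the normal of a linear classifier separating the two sets.
clf = LogisticRegression(max_iter=1000).fit(
    torch.cat([concept_acts, random_acts]).numpy(),
    [1] * 100 + [0] * 100,
)
cav = torch.tensor(clf.coef_[0], dtype=torch.float32)

# Step 2: the rest of the network, from that hidden layer to a class logit.
head = nn.Sequential(nn.Linear(hidden, hidden), nn.ReLU(), nn.Linear(hidden, 1))

# Step 3: the TCAV score is the fraction of inputs whose class logit rises
# when activations move along the concept direction (positive directional
# derivative along the CAV).
acts = torch.randn(200, hidden, requires_grad=True)
head(acts).sum().backward()
tcav_score = ((acts.grad @ cav) > 0).float().mean()
print(f"TCAV score for the concept: {tcav_score:.2f}")
```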
So Google is building tools to help deal with this sort of problem. But the huge debate question is: is that really enough? You know, just because you have a tool that can work out how an algorithm made a certain decision, that's not really going to change how companies behave, how governments behave.

Just explain this to me like I'm really dumb. Why can't the algorithm tell you how it made a decision? Why can't it say, here are the factors I weighed, and I weighted these ones highly, and this is the result of that?

It's a really tricky one to answer, and I would first of all say that there are ways the algorithms can say that, a little bit. There are tools like TCAV, and some other ones being developed in other places, that do help with this problem. So, the basics of machine learning is that you don't tell a computer explicitly what to do; you want it to work out what to do. So you feed it the data, you maybe put some labels on that data, and it will try and make connections between the data and the labels in order to make these decisions. To take an example: you feed it a bunch of pictures of cats and dogs, and it learns what a cat and a dog look like. In an old AI system, or in sort of traditional software, you might write those rules by hand. You might say, if it's got whiskers and a nice cute tail, it's a cat; if it's got paws and it goes "woof," it's probably a dog. And what computer scientists learned with this sort of thing, which is sometimes referred to as an expert system, is that writing down all these rules is exhausting and time-consuming, and often doesn't work anyway. So instead of doing that, you have these systems, these architectures, neural networks, that look at the data and make these connections themselves, as in the little sketch below.
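Here is that contrast in miniature: a hand-written rule next to a model that learns its own rules from labeled examples. The animals and features are invented for the demo, and a decision tree stands in because, unlike a deep network, it can print the rules it found.

```python
# Hand-written rules (the "expert system" way) vs. learned rules.
from sklearn.tree import DecisionTreeClassifier, export_text

def expert_system(has_whiskers: bool, says_woof: bool) -> str:
    # Every rule written by hand; exhausting at scale, and often wrong anyway.
    if says_woof:
        return "dog"
    return "cat" if has_whiskers else "dog"

# The machine learning way: labeled examples in, connections out.
# Each row: [has_whiskers, says_woof, weight_kg]
animals = [[1, 0, 4], [1, 0, 5], [1, 0, 3], [0, 1, 20], [1, 1, 30], [0, 1, 8]]
labels = ["cat", "cat", "cat", "dog", "dog", "dog"]

model = DecisionTreeClassifier().fit(animals, labels)
print(model.predict([[1, 0, 4]]))  # -> ['cat']

# A decision tree can at least print the rules it learned. Deep neural
# networks form far more tangled internal connections with no built-in
# readout, which is the "black box" problem under discussion.
print(export_text(model, feature_names=["whiskers", "woofs", "weight"]))
```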
However, this is where the big problems start coming in. Because they are teaching themselves, they are making these immensely complex, labyrinthine connections, mathematical connections deep within their structures that they form by themselves, and digging into that is really very difficult. There's just no built-in mechanism for them to explain why they did what they did. That was never part of the brief when these systems were created; all people wanted was for them to be good at making these decisions. We just wanted decisions to be made; we didn't need to know why they were being made. And now people are saying, well, it would be really useful if we found out.

It'd be great to know why you're going to jail.

Yeah, one hundred percent. That does seem to be the instinct, right? The computer decides that you're a criminal and sends you to jail, and you maybe want to know why.
So can you re-architect the systems to make them more transparent?

Yeah, this is exactly what tools like TCAV are about, and Google isn't alone in doing that, in finding new ways to work out where the attention of the network is focused. Some of these approaches aren't like TCAV with concept activation vectors; they're asking, okay, so where is the machine looking, in the case of visual data? You know, have you ever used one of those sorts of eye-tracking software things, where you're shown a picture and it shows where your gaze went?

Yeah, yeah.

So there are computer versions of that for machine vision systems, where they look to see where the algorithm is looking; there's a bare-bones sketch of the idea just below.
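One common version of that "where is the model looking" check is gradient saliency. The untrained toy network and random image here are placeholders; a real audit would use a trained model and real photos.

```python
# Gradient saliency: how much could each pixel sway the "cat" score?
import torch
import torch.nn as nn

torch.manual_seed(0)
model = nn.Sequential(
    nn.Conv2d(3, 8, 3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(8 * 32 * 32, 2),  # two classes: cat, dog
)

image = torch.rand(1, 3, 32, 32, requires_grad=True)
cat_logit = model(image)[0, 0]
cat_logit.backward()

# Take the strongest gradient across color channels to get a per-pixel map.
saliency = image.grad.abs().max(dim=1).values  # shape (1, 32, 32) heatmap
print(saliency.shape)
# Bright spots around the collar rather than the whiskers would hint that
# the model latched onto bow ties instead of cats.
```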
Again, if we go back to the cat and dog pictures: if your algorithm is looking at parts of the face, at the whiskers, at the tail, you'd go, okay, it's looking at the stuff a human would look at. But if, say, you fed it pictures where the cats were all in bow ties, and your algorithm is looking around the neck of each animal to see whether it's wearing a bow tie, and that's how it decides what's a cat and a dog, then you know you've made a mistake, and you really need to get a better dataset, one which isn't full of pictures of cats in bow ties.

Which is a shame, because you're going to have to lose those pictures. Okay, let's assume I have the best possible dataset. We'll be right back.
Support for the show comes from American Express Business. Building a business is no easy feat. American Express Business Cards are here to help. Whether you're a small business owner ready to expand into new markets or your business is just taking off, American Express has business cards built for businesses like yours. Let's talk about investing in your business. American Express understands that every penny earned is another penny that can go towards taking your business further. With select benefits and features available, like a flexible spending limit, they're more than just a business card; they're a partner who works with you to help your business thrive. And if there's ever an emergency, their twenty-four-seven support is ready to help, no matter the time of day. American Express Business Cards are built for your business, with features and benefits like the ability to earn Membership Rewards points on select cards, the power to pay for big business purchases, and more. Built for your business: Amex Business. Terms apply. Learn more at americanexpress.com/businesscards.

We're back. James, we talked a lot about recognizing pictures of cats and bow ties, but it seems like the scariest, most controversial use of AI right now is facial recognition, right? San Francisco has just banned it; you can't get through an airport in China without it. The sort of scale at which it's being deployed is radically different depending on where you are, and there are all kinds of different norms around it. Is that coming up at the research level, with the people that you're talking to?
Yeah, I mean, I think facial recognition is a really good example of all this stuff as well, because it's something where there have been multiple studies showing how these systems have exactly the sorts of biases in them that we've discussed, and these companies are still pushing them forward all the same. A lot of that work has been done by MIT's Gender Shades project, which is led by Joy Buolamwini, and she and the people she's been working with have been doing really fantastic work testing these algorithms, for example Amazon's Rekognition algorithm, which is one that is being sold to police forces, and showing how the systems these companies have built just don't work as well if you're not a white male. The gap is really quite stark, particularly for women with darker skin.
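The auditing method behind findings like Gender Shades can be sketched in a few lines: evaluate the same system separately for each demographic subgroup instead of trusting one overall number. The results below are fabricated purely to show the bookkeeping.

```python
# Disaggregated accuracy: one overall number can hide subgroup disparities.
from collections import defaultdict

# (subgroup, was the system's prediction correct?)  -- invented data
results = [
    ("lighter_male", True), ("lighter_male", True), ("lighter_male", True),
    ("lighter_female", True), ("lighter_female", True), ("lighter_female", False),
    ("darker_male", True), ("darker_male", False), ("darker_male", True),
    ("darker_female", False), ("darker_female", False), ("darker_female", True),
]

totals, correct = defaultdict(int), defaultdict(int)
for subgroup, ok in results:
    totals[subgroup] += 1
    correct[subgroup] += ok

print(f"overall accuracy: {sum(correct.values()) / len(results):.0%}")
for subgroup in totals:
    print(f"{subgroup:15s} {correct[subgroup] / totals[subgroup]:.0%}")
# The headline number looks fine; the per-group breakdown shows error rates
# are worst for darker-skinned women, the disparity the audits reported.
```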
You know, these systems do have biases in them, and yet they're still being sold, they're still being used, they're still being incorrectly deployed. Amazon's response to this is always, well, you've tested our systems wrong, they shouldn't work like that, and we give police very specific instructions. But there was this big story out the other day by Clare Garvie from Georgetown, showing that police have no rules about this stuff: they had a suspect, and the CCTV image of him was too blurry to feed into the facial recognition system, but one of the cops was like, you know what, he kind of looks like Woody Harrelson. So they just got a picture of Woody Harrelson from Google Images, dropped that in instead, and searched for Woody Harrelson. It's real wild-west stuff in terms of how this system is being implemented.

Did they capture Woody Harrelson?

Well, not Woody Harrelson, but it did lead to an arrest, an arrest it probably could not have made otherwise, and that's just a huge amount of the problem with taking that approach. But anyway, this is one example where we know the technology has biases in it, and it's still being used, still being deployed, and there are no laws about it. You do have these exceptions: you brought up San Francisco, where the tech has been banned, and that's such a great example, because the people who have been at the helm of this technology as it's been developed, the people who know it best, are saying, we don't want it used on us. It's like the tech executives who say, well, I don't let my kids on Facebook. And the interesting thing is that this back-and-forth is happening within tech companies. So Amazon sells facial recognition, but Google doesn't. I asked Jeff, I said, well, what led to this decision?

So facial recognition, like a lot of technologies, has
some really good uses, and then some uses that are maybe a bit undesirable, depending on how you use them. So, for example, we do have facial recognition algorithms that we use in something like Google Photos, so we can identify that these seven pictures have the same person in them, and then you, as a user of the system, can say, oh yeah, that's my daughter; please, can you find pictures of my daughter? And then when I search for my daughter, I can actually see all the pictures of her. That seems like a really useful, user-benefiting use of face recognition. At the same time, we don't offer a general-purpose facial recognition API, because we think there are some real downsides in terms of surveillance applications that could be built by third parties if we did offer that. That, to me, is the difference: if you offer a general API, you really don't have very much control over how it gets used. And so face recognition in particular, because this is a sensitive area, we've chosen not to offer. And really, if you look at our AI principles, one of them says we will not participate in certain surveillance-related applications.

So Jeff is saying, hey, you don't need to worry about this potentially dangerous technology, because we at Google know what's best, and we won't
sell it to anyone, even though a lot of other companies do sell it. But there's been an explosion in self-governance inside the companies, right? Microsoft has an AI ethics board, Facebook has some grand vision of it, Google has one. Lay them out, and then tell me if they're effective.

Yeah. So, in response to the current AI boom, which took off earlier this decade and in the years that followed (and obviously experts and researchers knew about these sorts of problems much earlier, but they became more mainstream knowledge as people like ourselves started talking about them), companies reacted by going, all right, we're going to do something about it. They set up ethics committees, they set up ethics boards, they published principles. Pretty much any big tech company you can name, whether it's Google, Microsoft, Facebook, Amazon, IBM, whoever, has done something like this: they've got an ethical code, or they've got a set of principles. And on the one hand, it is good that these companies are thinking about these problems, and they're also, you know, hiring people to analyze them. But there has also been a bit of a backlash within the AI ethics community. One academic, Ben Wagner, has a term for it: he called it "ethics washing." The idea is that the companies set up these boards, they set up these committees, they publish these principles, so whenever someone asks, why aren't you doing something about this, they go, oh, well, we are, really, thank you; we've got top men looking into it. It allows them to deflect criticism and appear to be doing something while doing very little. The problem with these responses is they have no power: they can't veto decisions the company is making, and they have no transparency.
You know, there was Microsoft, which set up this board, the Aether Committee, AI and Ethics in Engineering and Research, or something like that, which sounds like it came out of the Marvel Cinematic Universe. And they said in an interview that significant sales, quote, "had been cut off" because of the group's recommendations, but they never said what the sales were or what the applications were. So all we know is Microsoft were doing some things, some things started to happen, and we don't know anything more than that.

And as we've seen with Facebook, do we really trust these companies to govern themselves? Is it rational, without any transparency into what they're doing? What are the other options? I think you see now that Amazon employees and shareholders are begging the company not to sell Rekognition, which is their facial recognition system, to certain buyers, and they don't seem to be getting anywhere with those protests.

Yeah, I mean, that could be the case, and this is where the problem, or the challenge, of the topic of AI bias ties into all these other trends we're seeing in Silicon Valley. It goes right to the heart of whether employees can protest, and whether they can change their companies' or employers' actions. You mentioned Amazon, but there's been similar agitation in different companies, and I would point to the Google Project Maven example as well. So this is something employees are definitely agitating about from within companies, but whether or not companies are actually going to listen to their employees is an open question, and I think, as we've seen so far, it's not really going the employees' way.

So this brings up an obvious question: if we don't trust companies to regulate themselves, do governments need to do the regulation?
You know, it goes back to the facial recognition thing we were talking about earlier. This is a really interesting one. So: Amazon sells it and has been criticized for it, Microsoft has called for regulation, and Google is refusing to sell it altogether because, they say, it has too many adverse uses. So Google is saying, here is a really dangerous technology that we could develop and we could sell, and we're not going to. They're saying, you've got to trust us to regulate this stuff. So I asked Jeff, I said, is it enough for us to trust companies, or do we need governments to do this as well? Because, by their own example, if they think facial recognition is too dangerous for them to sell, why should they be happy that Amazon is selling it?

I mean, we last year came out with a set of AI principles that we think is a pretty good list of things people should be thinking about as they think about applying AI and machine learning to different problems in the world, as well as a list of things we will not pursue, because we don't think they're compatible with the values that we stand for. And I think it was really important for us to come together as a company and have our crystallized list of these things, rather than sort of vague notions, because it's really helped our thinking as we look at new applications, and we can then look at how they sort of are compatible with our principles or not. So I think, you know, the reason we made those principles public is also because we think other companies and organizations who are starting to apply machine learning and AI to their own problems might look at those principles and decide, you know, those are good, or, we like some of those, but our business is not necessarily such that we can adopt all of them, or whatever. But I think it does start a good conversation. And I think, on the question around regulation of these things, there are already regulatory frameworks in place for many things. Medical devices and drugs, for example, have a fairly strong regulatory regimen in most countries already, and for the use of AI in healthcare and medicine, the current regulatory frameworks are sort of a good starting point. They might want to be adjusted a bit for things that are more algorithmic, rather than, you know, a pharmaceutical pill and so on, but I think that probably makes sense in other places too. I think you want to interact with governments and policymakers to help them understand: What is the technology? What are the risks of it? What can it do? What can't it do? What should they be thinking about?

That's super interesting, right? In the same market, this company says, this is too dangerous, we don't want to sell it. Another company says, we think it should be regulated by the government.
And then a third company says, we're just selling it, and anything could happen here. And there is no overarching regulation for this stuff outside of a few places like San Francisco, where certain agencies can't deploy it. But then there are also challenges to even the regulation that we have, which is this self-regulation of an AI ethics board, and that has been mired in controversy, particularly at Google. So this is the big thing that happened since you talked to Jeff Dean: Google attempted to set up an AI advisory board, an ethics board, and it quickly fell apart. Just walk us through that.

Yeah, so this is something I would have liked to ask Jeff about in person and didn't get the chance to. It sparked a lot of the backlash that I kind of discussed earlier, where academics are saying that companies just aren't doing enough to regulate themselves. At the end of March, Google said, ah, you know, we want to keep AI ethical, we want to do this all properly, so we're going to have a new advisory board, a new advisory committee. They announced this group called the Advanced Technology External Advisory Council. So, again, it sounds very grand, sounds like they're really taking this seriously, and it combusted in less than two weeks. They announced it with big fanfare, a nice little blog post, it's all going very well, and then they shut it down within two weeks. The reason for this was that they assembled a group of experts, some of them academics, some of them from private companies, and one of the individuals was the president of the Heritage Foundation, Kay Coles James. The Heritage Foundation is a conservative think tank, and a lot of its policy positions are
exactly the sort of stuff that Google, as a company, says it is not happy with. This includes things like climate change denial, and anti-trans rhetoric from Kay Coles James herself. So when Google announced this board, there was a huge outcry, and very soon there was a petition circulated inside Google for James to be removed. I think it was signed by just over two thousand people in the end, and one of the academics who had been appointed to the board also resigned as part of this, because they said, you know, I don't want to be a part of this initiative. Google kept very quiet as this backlash bubbled over.

I'm sure they were just sort of waiting to see where it would go, whether it would go away. It obviously wasn't going away, so they shut it down. I would have liked to ask Jeff Dean about it but had no chance, but I sent Google a question about it, and they sent me a statement, which concluded: it's become clear that in the current environment, ATEAC, which is the board, can't function as we wanted, so we're ending the council and going back to the drawing board; we'll continue to be responsible in our work on the important issues that AI raises, and we will find different ways of getting outside opinions on these topics. So that's all we know, and, you know, there's nothing more reassuring than a promise from one of the world's largest companies that they will continue to be responsible.

Of course. We know these companies are incredibly responsible. I mean, I just can't fathom why they didn't see that this was going to be objected to. Not to be overly charitable,
but if you're putting together an AI board that is supposed to generate some policy by which your work will be regulated, policy that might turn into some actual government regulation down the line, it is not entirely surprising that you would go seek out a conservative viewpoint for it. Given the specific viewpoint that they arrived at, they should have seen this coming. But the broader idea, that this needs input from across the spectrum of viewpoints before we build a law that governs what seems to be a tidal wave of technological progress, I see it. And I don't want to be too charitable, but you can certainly see why they were like, well, we need someone from the left and someone from the right. You can see how you would build that to insulate yourself from criticism, and yet the actual specifics of choosing the people opened them up to a wellspring of criticism that they clearly did not anticipate.

Yeah, yeah. I just think it's such a good example of when these different interests within companies crash into one another. One of Google's principles is, like, respect good science; I think that's part of their AI principles. And yet they would be happy to have someone advising them who doesn't respect good science, because they downplay the threat from climate change. And I just think, obviously, sometimes you need to take a stand; you need to say, this is really happening. But I see what you mean about trying to curry favor in certain sectors of society.

There's no doubt in my mind, at least, that part of the reason companies set up
these boards is so they can say: oh, we've already done the work, now just pass the law that we've proposed. We've already done the work; it's a bipartisan group that we've assembled; take our recommendations and turn them into the law; that'll be fine; you don't have to think about this too hard. Is that the right move? Certainly there isn't even consensus inside Google. I think it's something that you talked about with Jeff, where some of Google's own researchers were working on outside bodies, developing proposals that he doesn't agree with.

Yeah. So one interesting piece of fallout from this was a report, which was originally in Wired, that was connected to the Google walkout, which, you know, was when Google employees walked out over the company's handling of sexual harassment claims. Two of the organizers Wired spoke with were Meredith Whittaker and Claire Stapleton, and they told Wired about how they had, they say, been retaliated against for their work. It's not clear exactly what has happened since, and Google has denied that there was any internal retaliation. But at the time, Whittaker, for example, was told that she would have to, quote, "abandon" her work on AI ethics, including her work with the AI Now Institute, which is attached to NYU and does absolutely fantastic research on all the problems we've been talking about. And this, for me, is very, very troubling: that Google, if these reports were true (and again, Google has denied that there was any retaliation), would try to retaliate against someone who is doing exactly this sort of foundational work on mitigating these problems.
James, it seems very, very complicated to me, I have to be honest with you. The number of competing interests, problems, solutions, and people offering those solutions does not seem in any way simple at this point.

No, it's not. It's hideously complicated, and I'm going to say something that's going to make me sound like a complete idiot, but: like any other technology, we've been doing this for a while, and it is all politics. It always comes back to politics. If we want this stuff to work correctly, it can never just be a technical solution. It has to be a wider conversation with the public about what they want to happen, and there has to be political involvement.

How do you see that conversation taking place now? Is it happening in a way that is productive, or does that need to change?

It is happening, but it's happening slowly. I think the thing that is often missing is that when these systems, say an algorithmic system, are integrated into a certain decision-making process, you need to have experts within that original domain. So not just experts in AI, but people who are experts in, say, criminal sentencing, or whatever the situation is, and they need to be giving feedback about these algorithms. For that to happen, you need broader education, and you need to have mechanisms
to make sure that these checks are being made, so we're not just handing it over to the people who designed the algorithm, because they can't do it by themselves.

So, if you're a Vergecast listener, and you are interested in this stuff, and you're very interested in how technology and society and culture interplay with each other, and it seems AI is at the center of that, what should you be watching? How do you pay closer attention to this?

I mean, I would look at how the fallout of the San Francisco facial recognition ban plays out over the next couple of years, to be honest, and look at how politicians are trying to formulate tactics to deal with this. And, I dunno, read good news sites. There's so much interesting and fantastic conversation and discussion happening about these topics that, if you're interested and you want to find out more, you can really dive right in.

All right, James, where can they follow you? Because I know you are constantly reporting on these things.

So if you want to bother me, you can ping me at @jjvincent on Twitter, or just, you know, head to www.theverge.com.

That's good. That's a good plug. Thanks, James, I appreciate it.

Thank you so much.

And that's it for today. Thanks to senior reporter James Vincent; you can check out all of his reporting on theverge.com, and also his Twitter, just @jjvincent. Also, our other podcast, Why'd You Push That Button, is up and running. Starting next week is a three-part series about death and the internet, and I'm really excited about it. This week's episode is about whether quitting Instagram can make you happier: Kaitlyn quit Instagram for a few weeks to see for herself. You can subscribe to The Vergecast for free in your favorite podcast app; just tap the link in the show notes to get new episodes.
And please leave a rating and review on Apple Podcasts. Our producers are asking me to say Apple Podcasts; it's where they want you to go, so please go there. You can hit me up on Twitter at @reckless with someone you want me to interview next; I love your suggestions, and we've gotten great ones from many people. We're back on Friday with Dieter and Paul for the regular show.
Support for the show comes from Gold Peak Real Brewed Tea. There's a time of day, about an hour before sunset, where the rays feel warm and the breeze feels cool, but that half hour of golden goodness is always gone too soon. You can rekindle that feeling with a bottle of Gold Peak, made with high-quality tea leaves. Its smooth taste transports you to your golden hour at any hour. Gold Peak tea: it's gotta be gold.
Transcript generated on 2023-08-30.