The Ethics of Deep Learning

Sundog Education by Frank Kane
Founder, Sundog Education. Machine Learning Pro
4.5 instructor rating • 19 courses • 387,886 students

Lecture description

As with any new technology, sometimes we can become overzealous in how we use it. A few cautionary tales to make sure your deep learning work does more good than harm.

Learn more from the full course

Machine Learning, Data Science and Deep Learning with Python

Complete hands-on machine learning tutorial with data science, TensorFlow, artificial intelligence, and neural networks

14:11:01 of on-demand video • Updated July 2020

  • Build artificial neural networks with TensorFlow and Keras
  • Classify images, data, and sentiments using deep learning
  • Make predictions using linear regression, polynomial regression, and multivariate regression
  • Visualize data with Matplotlib and Seaborn
  • Implement machine learning at massive scale with Apache Spark's MLlib
  • Understand reinforcement learning - and how to build a Pac-Man bot
  • Classify data using K-Means clustering, Support Vector Machines (SVM), KNN, Decision Trees, Naive Bayes, and PCA
  • Use train/test and K-Fold cross validation to choose and tune your models
  • Build a movie recommender system using item-based and user-based collaborative filtering
  • Clean your input data to remove outliers
  • Design and evaluate A/B tests using T-Tests and P-Values
A lot of people are talking about the ethics of deep learning. Are we actually creating something that's good for humanity, or ultimately bad for humanity? So let's go there. Now, I'm not going to preach to you about the singularity and robots taking over the world; maybe that will be a problem 50 years from now, maybe even sooner, but for the immediate future it's the more subtle ways in which deep learning can be misused that you should concern yourself with. As someone entering the field, either as a researcher or a practitioner, it is up to you to make sure that this powerful technology is used for good and not for evil, and sometimes this is very subtle: you might deploy a new technology in your enthusiasm, and it might have unintended consequences. That's mainly what I want to talk about in this lecture: understanding the unintended consequences of the systems you're developing with deep learning.

First of all, it's important to understand that accuracy doesn't tell the whole story. We've evaluated our neural networks by their ability to accurately classify something, and if we see a 99.9 percent accuracy value we congratulate ourselves and pat ourselves on the back, but often that's not enough to think about. There are different kinds of errors. There's what we call a "type 1" error, which is a false positive: saying that something is something that it isn't. For example, maybe you misinterpreted a tumor measured from a breast biopsy sample as being malignant, and that false positive of a malignant, cancerous result could lead to real, unnecessary surgery for somebody. Or maybe you're developing a self-driving car, and the camera on the front of your car sees a shadow from an overpass ahead (this has actually happened to me, by the way) and slams on the brakes because it thinks the road is falling away into oblivion, into this dark mass, and there's nothing to drive on in front of you. Neither of those is a good outcome. They could be worse, mind you: arguably it's worse to leave cancer untreated than to have a false positive, and it might be worse to drive off the edge of a cliff than to slam on your brakes, but these can still be very bad. You need to think about the ramifications of what happens when your model gets something wrong. For example, the self-driving car could take the confidence level of what it thinks is in front of you and factor in what's behind you, so that if it does slam on the brakes for no reason, at least there isn't someone riding on your tail who is going to rear-end you. So think through what happens when your model is incorrect, because even 99.9 percent accuracy means that one time out of a thousand you're going to get it wrong, and if people are using your system more than a thousand times, some bad consequence is going to happen as a result. You need to wrap your head around what that consequence is and how you want to deal with it.

The second type of error is a false negative. For example, somebody might have breast cancer, but you fail to detect it; you misclassify it as benign instead of malignant. Somebody dies if you get that wrong, OK?
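To make that concrete, here is a minimal sketch in Python (my own illustration with made-up numbers and scikit-learn, not anything from the course projects) showing how a model that misses every single cancer case can still report 99 percent accuracy:

```python
# A minimal sketch (illustration only, not course code) of why a single accuracy
# number can hide the errors that matter. Labels are made up: 1 = malignant,
# 0 = benign, for a hypothetical tumor classifier.
from sklearn.metrics import accuracy_score, confusion_matrix, precision_score, recall_score

y_true = [0] * 990 + [1] * 10   # 990 benign cases, 10 malignant cases
y_pred = [0] * 1000             # a "model" that simply predicts benign every time

print("accuracy:", accuracy_score(y_true, y_pred))  # 0.99 -- looks impressive
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("false positives (type 1):", fp)   # 0
print("false negatives (type 2):", fn)   # 10 -- every cancer case was missed
print("recall:", recall_score(y_true, y_pred))                         # 0.0
print("precision:", precision_score(y_true, y_pred, zero_division=0))  # 0.0
```

Looking at the confusion matrix, recall, and precision surfaces the type 1 and type 2 errors that a single headline accuracy number glosses over.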
So think very carefully about how your system is going to be used, and about the caveats, failsafes, and backups you put in place, so that if your system is known to produce errors under some conditions, you are dealing with them in a responsible way. Another example of a false negative would be a self-driving car thinking there's nothing in front of you when in fact there is; maybe it doesn't detect the car stopped at a stoplight ahead of you. This has also happened to me. What happens then, if the driver is not alert? You crash into the car in front of you, and that's really bad. Again, people can die, OK? People are very eager to apply deep learning to different situations in the real world, but often the real-world consequence of getting something wrong is quite literally a life-and-death matter. So you need to really, really think about how your system is being used, and make sure that your superiors and the people actually rolling it out to the world understand the consequences of things going wrong and the real odds of things going wrong. You can't oversell your systems as being totally reliable, because I promise you they're not.

There can also be hidden biases in your system. Just because the artificial neural network you've built is not human does not mean that it's inherently fair and unbiased. Remember, your model is only as good as the data that you train it with. Let's take an example: suppose you're building a neural network that tries to predict whether somebody gets hired, just based on attributes of that person. Your model itself may be all pure and unbiased, but if you're feeding it training data from real humans who made hiring decisions, that training data is going to reflect all of their implicit biases. That's just one example, but you might end up with a system that is in fact racist, or ageist, or sexist, simply because the training data you provided was made by people who had implicit biases they may not even have been fully aware of at the time. So you need to watch out for these things. There are simple things you can do. Obviously, making an actual feature in the model for age, or sex, or race, or religion would be a pretty bad idea, but I can see some people doing that, so think twice before you do something like that. Even if you don't explicitly put in features you don't want considered as part of your model, there might be unintended dependencies among your features that you haven't thought about. For example, if you're feeding years of experience into a system that predicts whether somebody should get a job interview, you're going to have an implicit bias in there: years of experience will definitely be correlated with the age of the applicant. So if your past training data had a bias toward, say, white men in their twenties fresh out of college, your system is going to penalize more experienced candidates, who might in fact be better candidates, simply because human reviewers viewed them as being too old. So think deeply about whether the system you're developing has hidden biases, and what you can do to at least be transparent about what those biases are.
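One simple, concrete check for that kind of hidden dependency is to measure how strongly each input feature correlates with the protected attribute you think you've excluded. The sketch below uses made-up data and hypothetical column names, so treat it as an illustration rather than a recipe:

```python
# A minimal sketch (illustrative only; the column names and data are hypothetical,
# not from the course) of one simple check for proxy features: even after dropping
# a protected attribute like age from the model, a remaining feature such as
# years_experience can still carry that information through correlation.
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 1000
age = rng.integers(22, 65, size=n)
df = pd.DataFrame({
    "age": age,                                            # protected attribute, never fed to the model
    "years_experience": age - 22 + rng.normal(0, 2, n),    # candidate feature
    "typing_speed": rng.normal(60, 10, n),                 # unrelated candidate feature
})

# Correlation of each candidate feature with the protected attribute:
for feature in ["years_experience", "typing_speed"]:
    corr = df[feature].corr(df["age"])
    print(f"{feature}: correlation with age = {corr:.2f}")
# years_experience comes out very highly correlated (close to 1 here), flagging it
# as a likely proxy for age even though age itself was never given to the model.
```

A high correlation doesn't prove the model will be biased, but it does tell you that dropping the age column by itself accomplishes very little, and that's exactly the kind of thing to be transparent about.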
Another thing to consider is whether the system you just built is really better than a human. If you're building a deep learning system that the people in your sales department, or your management, or your investors really want to sell as something that can replace jobs and save the company money, you need to think about whether the system you're selling really is as good as a human, and if it's not, what the consequences of that are. For example, you can build deep learning systems that perform medical diagnoses, and you might have a very eager sales rep who wants to sell that as being better than a human doctor. Is it really? What happens when your system gets it wrong? Do people die? That would be bad. You'd do better to insist to your superiors that the system is marketed only as a supplementary tool to aid doctors in making a decision, not as a replacement for human beings making a decision that can affect life or death. The self-driving car is another example: if your self-driving car isn't actually better than a human being and someone puts it on autopilot, it can kill people. I see this happening already, where self-driving cars are being oversold, and there are still a lot of edge cases in the world where a self-driving car just can't cut it where a human could. I think that's very dangerous.

Also think about unintended applications of your research. Sometimes you develop something that you think is a good thing, something that will be put to positive use in the real world, but it ends up getting twisted by other people into something destructive. You need to think about how the technology you're developing might be used in ways you never anticipated, and whether those uses could in fact be malicious. This has actually happened to me a couple of times, so I'm not talking theoretically here, and it isn't limited to deep learning; it's really an issue with machine learning in general, or any new, powerful technology. Sometimes our technology gets ahead of us as a species, socially. Let me tell you one story. This isn't actually related to deep learning, but one of the first things I built in my career was a military flight and training simulator. The idea was to simulate combat in a sort of virtual-reality environment in order to train our soldiers to better preserve their own lives and come off the battlefield safely. I felt that was a positive thing: hey, I'm saving the lives of soldiers. But after a few years, that same technology I made ended up being used in a command-and-control system; it was being used to help commanders visualize how to roll out real troops and kill real people. I wasn't OK with that, and I left the industry, in part because of that. Here's a more relevant example.
Back when I worked at Amazon.com, I was one of the early people implementing recommendation and personalization algorithms on the Internet (I don't want to take too much credit; the people who came up with the ideas were before me), taking your behavior on the Internet and distilling it down into recommendations for content to show you. That ended up being part of the foundation that got built upon over the years and ultimately led to things like Facebook's targeting algorithms. And when I look at how people are using fake news and fake accounts on social media to spread their political beliefs, or to pursue some ulterior motive that may be financially driven and not really for the benefit of humanity, I don't feel very good about that. The technology I created at the time, just to sell more books, which seemed harmless enough, ended up getting twisted into something that really changed the course of history, in ways that might be good or bad depending on your political leanings. So, again, remember that if you have a job in deep learning and machine learning, you can go anywhere you want. If you find yourself being asked to do something that's morally questionable, you don't have to do it; you can find a new job tomorrow, OK? This is a really hot field, and by the time you have real-world experience in it, the world's your oyster. If you're asked to do something morally questionable, you can say no; someone else will hire you tomorrow, I promise you, if you're any good at all. I see this happening a lot lately: there are people publishing research about using neural networks to crack people's passwords, or to illustrate how the technology can be used for evil, for example by trying to predict people's sexual orientation just based on a picture of their face. This can't go anywhere good, guys. What are you trying to show by publishing that sort of research? So think twice before you publish stuff like that, and think twice before you implement stuff like that for an employer, because your employer only cares about making a profit; they are less concerned about the moral implications of the technology you're developing to deliver that profit. People will see what you're building out there, and they may well take that same technology, those same ideas, and twist them into something you never considered. So I just want you to keep these ideas and concerns in the back of your head, because you are dealing with new and powerful technologies here, and it's really up to us as technologists to try to steer that technology in the right direction and use it for the good of humanity, not to its detriment. It sounds very high-level, high-horse preachy, I know, but these are very real concerns, and there are a lot of people out there who share them. So please consider these concerns as you delve into your deep learning career.