Privacy Attorney Tiffany Li and AI Memory, Part II

Tiffany C. Li is an attorney and Resident Fellow at Yale Law School’s Information Society Project. She frequently writes and speaks on the privacy implications of artificial intelligence, virtual reality, and other technologies. Our discussion is based on her recent paper on the difficulties of getting AI to forget. In this second part, we continue our discussion of GDPR and privacy, and then explore some cutting-edge areas of law and technology. Can AI algorithms own their creative efforts? Listen and learn.

Guidance for GDPR Right to be Forgotten

Cindy Ng

We continue our discussion with Tiffany Li, an attorney and Resident Fellow at Yale Law School's Information Society Project. In part two, we discuss non-human creators of intellectual property and how they could potentially impact the right to be forgotten, as well as the benefits of multidisciplinary training, where developers take a law class and lawyers take a tech class.

Andy Green

So do you think the regulators will issue more guidance specifically for the GDPR right to be forgotten?

Tiffany Li

The European regulators have typically been fairly good about providing external guidance outside of regulations and decisions. Non-binding guidance documents have been very helpful in understanding different aspects of the regulation, and I think we will see more research done. What I would really love to see, though, is more interdisciplinary research. One problem I think we have in law generally, and in technology law in particular, is the habit of operating in a law-and-policy-only silo. We have the lawyers, we have the policymakers, we have the lobbyists, everyone there in a room talking about, for example, how we should protect privacy. And that's wonderful, and I've been in that room many times.

But what's often missing is someone who actually knows what that means on the technical end. For example, all the issues I just brought up aren't really represented in that room of lawyers and policymakers unless you bring in someone with a tech background, someone who works on these issues and actually knows what's going on. So this is not just an issue with the right to be forgotten or with EU privacy law, but with any technology law or policy issue. I think we definitely need to bridge that gap between technologists and policymakers.

AI and Intellectual Property

Cindy Ng

Speaking of interdisciplinary work, you recently wrote a really interesting paper on AI and intellectual property, in which you describe dilemmas that might arise in IP law, specifically involving works by non-human creators. I was wondering if you could introduce the significance of your inquiry to our listeners.

Tiffany Li

So this is a draft paper that I've been writing about AI and intellectual property. Specifically, I'm looking at the copyrightability of works created by non-human authors, which could include AI, but could also include animals, for example, or other non-human actors. This gets back to the same distinction I mentioned earlier, between AI that is simply machine learning and super-advanced statistics, and AI that may be something closer to a new type of intelligence. My paper looks at this from two angles. First, we look at what current scholarship says about who should own creative works that are created by AI or non-humans. And here we have an interesting issue. For example, if you devise an AI system to compose music, which we've seen in a few different cases, the question then is who should own the copyright, or the IP rights generally, over the music that's created.

One option is giving the rights to the designer of the AI system, on the theory that they created the system that is the main impetus for the work being generated in the first place. Another theory is that the person actually running the system, the person who literally flipped the switch and hit run, should own the rights, because they provided the creative spark behind the art or the creative work. Other theories exist as well. Some people say there should be no rights to any of the work, because it doesn't make sense to grant rights to those who are not the actual creators. Others say we should try to figure out a system for giving the rights to the AI itself. And this, of course, is problematic, because AI can't own anything. And even if it could, even if we get to a world where AI is a sentient being, we don't really know what it would want. We can't pay it. We don't know how it would prefer to be incentivized for its creations, and so on. So a lot of these different theories don't perfectly match up with reality.

But I think the prevailing ideas right now are either to create a contractual basis for figuring this out, or to use a work-for-hire model. On the contractual approach, when you design your system, you sign a contract with whoever you sell it to that lays out all the rights neatly, so you bypass the legal issue entirely. On the work-for-hire model, you think of the AI system as just an employee who is simply following the instructions of an employer. In that sense, for example, if you are an employee of Google and you develop a really great product, you don't own the product; Google owns it, right? That's the work-for-hire model. So that's one theory.

And what my research is finding is that none of these theories really makes sense, because we're missing one crucial thing. The crucial point really goes back to the very beginning of why we have copyright, or intellectual property, in the first place, which is that we want to incentivize the creation of more useful work. We want more artists, we want more musicians, and so on. So the key question, if you look at works created by non-humans, isn't whether we can contractually get around the issue; the key question is what we want to incentivize. Do we want to incentivize work in general, art in general, or do we think for some reason that there is something unique about human creation, that we want humans to continually be creating things? Those two paradigms, I think, should frame the way we look at this issue in the future. It's a little high-level, but I think it's an interesting distinction that we haven't paid enough attention to yet when we think about who should own intellectual property in works created by AI, and by non-humans generally.

Andy Green

If we give AIs some of these rights, then doesn't that almost conflict with the right to be forgotten, because now you would need the consent of the AI?

Tiffany Li

Sure. That's definitely possible. We don't know. I mean, we don't have AI citizens yet except in Saudi Arabia.

Andy Green

I've heard about that, yeah.

Cindy Ng

So since we're talking about AI citizens, if we do extend intellectual property rights to AI citizens, does that mean they get other kinds of rights, such as freedom of speech and the right to vote? Or is that not the proper way to think about it? Are we treading into the territory of the science fiction movies we've seen, where humans are superior to machines? I know we're just playing around with ideas, but it would be really interesting to hear your insights, especially since it's your specialty.

Tiffany Li

No problem. I mean, I'm in this field because I love playing around with those ideas. Even though I continually mention that division between the AI we have now and that futuristic sentient AI, I do think that eventually we will get there. There will be a point where we have AI that can think, for a certain definition of thinking, at least at the level of human beings. And because those intelligent systems can design themselves, it's fairly easy to assume that they will then design even more intelligent systems, and we'll get to a point where there are superintelligent AIs that are more intelligent than humans. So the question you ask, I think, is really interesting: whether we should give these potential future beings the same rights that we give human beings. And I think that's interesting because it gets down to a really philosophical question, right? It's not a question about privacy or security or even law. It's a question of what we believe is important on a moral level, and of who we believe to be capable of either having morals or being part of a moral calculus.

So in my personal opinion, I believe that if we do get to that point, if there are artificially intelligent beings who are as intelligent as humans, who we believe to be almost exactly the same as humans in every way, in terms of having intelligence, being able to mimic or feel emotion, and so on, we should definitely look into expanding our definitions of citizenship and fundamental rights. There is, of course, the opposite view, which is that there is something inherently unique about humanity, and about life as we see it right now, biological, carbon-based life. But I think that's a limited view, and one that doesn't serve us well if you consider the universe as a whole and the large expanse of time outside of the few millennia that humans have been on this earth.

Multidisciplinary Training

Cindy Ng

To wrap up and bring all our topics together, I want to come back to regulation, technology, and training, and continue our playful thinking with this idea: should we require training for developers who create technology, so that they internalize principles such as the right to be forgotten and privacy by design? You even mentioned the moral obligation for developers to consider all of these elements, because what they create will ultimately impact humans. I wonder if they could get the kind of training we require of doctors and lawyers, so that everyone is working from the same knowledge base. Could you see that happening? I wanted to know what your opinion is on this.

Tiffany Li

I love that mode of thought. In addition to lawyers and policymakers needing to understand more from technologists, I think people working in tech definitely should think more about these ethical issues. And it's starting; we're starting to see a trend of people in the technology community thinking about how their actions can affect the world at large. That may be partially in the mainstream news right now because of the reaction to the last election and to ideas such as fake news and disinformation. But we see the tech industry changing, and we're somewhat accepting the idea that maybe there should be responsibility or ethical considerations built into the role of being a technologist. What I like to think about is the fact that regardless of whether you are a product developer, a privacy officer, or a lawyer at a tech company, whatever role you have, every action you take has an impact on the world at large.

And this is something that, you know, maybe assigns too much moral responsibility to the day-to-day actions of most people. But if you consider that any small action within a company can affect the product, and any product can then affect all the users it reaches, you can see this easy scaling up from your one action to an effect on the people around you, which can then affect even larger areas and possibly the world. Which is not to say, of course, that we should live in fear, having to decide every single aspect of our lives based on its greater impact on the world. But I do think it's important to remember, especially if you are in a role in which you're dealing with things that have a really direct impact on things that matter, like privacy, like free speech, like global human rights values, and so on.

I definitely think it's important to consider ethics in technology. And if we can provide training, if we can make this part of the product design process, if we can make this part of what we expect when hiring people, sure, I think that would be great. Adding it to curricula would help too: adding a tech or information ethics course to the general computer science curriculum, for example, would be great. I also think it would be great to have a tech course in the law school curriculum. Both sides can definitely learn from each other. In general, we just need to bridge that gap.

Cindy Ng

So I just wanted to ask: is there anything else you'd like to share that we didn't cover? We covered so many different topics.

Tiffany Li

So I'd love to take a moment to introduce the work that I'm currently doing. I'm a Resident Fellow at Yale Law School's Information Society Project, which is a research center dedicated to legal issues involving the information society as we know it. I'm currently leading a new initiative called the Wikimedia and Yale Law School Initiative on Intermediaries and Information. This initiative is funded by a generous grant from the Wikimedia Foundation, the nonprofit that runs Wikipedia. And we're doing some really interesting research right now on exactly what we just discussed: the role of tech companies, particularly information intermediaries such as social media platforms.

We're looking at these tech companies and their responsibilities or duties towards users, towards movements, towards governments, and possibly towards the world and larger ideals. So it's a really interesting new initiative, and I would definitely welcome feedback and ideas on these topics. If people want to find out more, you can head to our website at law.yale.edu/isp. And you can also follow me on Twitter at @tiffanycli (T-I-F-F-A-N-Y-C-L-I). I would love to hear from any of your listeners and to chat more about all of these fascinating issues.
