The Right Thing to Do

30:22
 
There is much talk these days about the ethical challenges that generative artificial intelligence technologies pose. While most of us might be hazy about what these are, the technologies themselves seem to be in little doubt. A question to ChatGPT along these lines will produce a list of areas of concern that includes job displacement, intellectual property, transparency, bias, and fairness. Upon reflection, there is nothing terribly unexpected here. Any one of us would recognise these matters as raising issues of what we consider to be morally right or wrong, good or bad, or just or unjust. In fact, we pride ourselves on our uniqueness as a species in this way: we believe that we alone have the capacity to anticipate the future consequences of our actions and to make ethical judgements about these potential outcomes.

But the list of ethical concerns that ChatGPT recognises about itself includes, as it were, a hint of technological self-awareness: the comment that the development and use of generative AI technologies “raises questions about the moral status and consciousness of AI systems themselves”. Suddenly we are faced with the possibility that we have created machines that might exhibit consciousness, possess a conscience, and have the capacity to make their own moral judgements. If this is a possibility, should we simply let them do so, or should we ensure that such development is nipped in the bud?

In this episode, our host Richard Bawden discusses this and other questions about ethics and morality related to AI with his guest, Declan Humphreys. Declan is the newly appointed Lecturer in Cybersecurity at the University of the Sunshine Coast, where he is developing research into the ethical design and use of AI. He received his PhD in philosophy from the University of New England, with a focus on the ethical impacts of new and emerging technologies.
