Content provided by The Nonlinear Fund. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.
EA - Utilitarianism and the replaceability of desires and attachments by MichaelStJules

Welcome to The Nonlinear Library, where we use text-to-speech software to convert the best writing from the rationalist and EA communities into audio. This is: Utilitarianism and the replaceability of desires and attachments, published by MichaelStJules on July 28, 2024, on the Effective Altruism Forum.

Summary

1. Consider a pill that would cause a happy person with a fulfilling life to abandon their most important desires and cherished attachments, including goals, career and loved ones, but increase their lifetime subjective well-being. If what's best for someone is just higher subjective well-being (including even higher lifetime preference/desire satisfaction), then it would be better for them to take the pill. However, it seems to me that if they prefer not to take such a pill, to honour their current specific desires and attachments, it could be worse for them to take it (more).

2. I anticipate some responses and reply to them:
   1. People aren't always right about what's best for themselves. R: That's true, but attitude manipulation is quite different from other cases, where individuals neglect, discount or otherwise misweigh attitudes they do or will have (more).
   2. Deontological constraints against involuntary manipulation. R: Deontological constraints could oddly recommend not to do what's better for someone on their behalf (more).
   3. Indirect reasons count against involuntary attitude manipulation. R: Probably, but I also think it wouldn't be better for them in many cases where it would increase their well-being (more).
   4. We can't compare someone's welfare between such different attitudes. R: We wouldn't then have reason either way about manipulation, or to prevent manipulation (more).
   5. The thought experiment is too removed from reality. R: In fact, reprogramming artificial minds seems reasonably likely to be possible in the future, and regardless, if this manipulation would be worse for someone, views consistent with this could have important implications for cause prioritization (more).

3. This kind of attitude manipulation would be worse for someone on preference-affecting views, which are in favor of making preferences (or attitudes) satisfied, but neutral about making satisfied preferences (for their own sake). Such views are also person-affecting, and so neutral about making happy people or ensuring they come to exist (for their own sake). I expect such views to give relatively less priority to extinction risk reduction within the community (more).

Acknowledgements

Thanks to Lukas Gloor and Chi Nguyen for helpful feedback. Thanks to Teo Ajantaival, Magnus Vinding, Anthony DiGiovanni and Eleos Arete Citrini for helpful feedback on earlier related drafts. All errors are my own.

Manipulating desires and abandoning attachments

Let's start with a thought experiment. Arneson (2006, pdf) wrote the following, although I substitute my own text in italics and square brackets to modify it slightly:

    Suppose I am married to Sam, committed to particular family and friends, dedicated to philosophy and mountain biking, and I am then offered a pill that will immediately and costlessly change my tastes, so that my former desires disappear, and I desire only [to know more about the world, so I will obsessively and happily consume scientific material, abandoning my spouse, my friends and family, my career as a philosopher and mountain biking, and instead live modestly off of savings or work that allows me to spend most of my time reading]. I am assured that taking the pill will increase my lifetime level of [subjective well-being].

Assume further that Arneson loves Sam, his family and friends, philosophy and mountain biking, and would have continued to do so without the pill. He would have had a very satisfying, subjectively meaningful, personally fulfilling, pleasurable and happy life, with high levels of overall desire/preference satisfaction, even if he doesn't take the pill. On all of these measures of subjective well-being...