LW - AI #72: Denying the Future by Zvi

Duration: 1:03:35
 
Content provided by The Nonlinear Fund. All podcast content, including episodes, graphics, and podcast descriptions, is uploaded and provided directly by The Nonlinear Fund or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined at https://player.fm/legal.
Welcome to The Nonlinear Library, where we use Text-to-Speech software to convert the best writing from the Rationalist and EA communities into audio. This is: AI #72: Denying the Future, published by Zvi on July 12, 2024 on LessWrong.

The Future. It is coming. A surprising number of economists deny this when it comes to AI. Not only do they deny the future that lies in the future; they also deny the future that is here, but which is unevenly distributed. Their predictions and projections do not factor in even what AI can already do, let alone what it will learn to do later on.

Another likely future event is the repeal of the Biden Executive Order. That repeal is part of the Republican platform, and Trump is the favorite to win the election. We must act on the assumption that the order likely will be repealed, with no expectation of similar principles being enshrined in federal law.

Then there are the other core problems we will have to solve, and other less core problems, such as what to do about AI companions. They make people feel less lonely over a week, but what do they do over a lifetime?

Also, I don't have that much to say about it now, but it is worth noting that this week it was revealed Apple was going to get an observer board seat at OpenAI… and then both Apple and Microsoft gave up their observer seats. Presumably that is about antitrust and worrying the seats would be a bad look. There could also be more to it.

Table of Contents

1. Introduction.
2. Table of Contents.
3. Language Models Offer Mundane Utility. Long as you avoid GPT-3.5.
4. Language Models Don't Offer Mundane Utility. Many mistakes will not be caught.
5. You're a Nudge. You say it's for my own good.
6. Fun With Image Generation. Universal control net for SDXL.
7. Deepfaketown and Botpocalypse Soon. Owner of a lonely bot.
8. They Took Our Jobs. Restaurants.
9. Get Involved. But not in that way.
10. Introducing. Anthropic ships several new features.
11. In Other AI News. Microsoft and Apple give up OpenAI board observer seats.
12. Quiet Speculations. As other papers learned, to keep pace, you must move fast.
13. The AI Denialist Economists. Why doubt only the future? Doubt the present too.
14. The Quest for Sane Regulation. EU and FTC decide that things are their business.
15. Trump Would Repeal the Biden Executive Order on AI. We can't rely on it.
16. Ordinary Americans Are Worried About AI. Every poll says the same thing.
17. The Week in Audio. Carl Shulman on 80,000 Hours was a two-parter.
18. The Wikipedia War. One obsessed man can do quite a lot of damage.
19. Rhetorical Innovation. Yoshua Bengio gives a strong effort.
20. Evaluations Must Mimic Relevant Conditions. Too often they don't.
21. Aligning a Smarter Than Human Intelligence is Difficult. Stealth fine-tuning.
22. The Problem. If we want to survive, it must be solved.
23. Oh Anthropic. Non-disparagement agreements should not be covered by NDAs.
24. Other People Are Not As Worried About AI Killing Everyone. Don't feel the AGI.

Language Models Offer Mundane Utility

Yes, they are highly useful for coding. It turns out that if you use GPT-3.5 for your 'can ChatGPT code well enough' paper, your results are not going to be relevant. Gallabytes says 'that's morally fraud imho' and that seems at least reasonable. Tests failing in GPT-3.5 is the AI equivalent of "IN MICE", except for IQ tests.

If you are going to analyze the state of AI, you need to keep an eye out for basic errors and always, always check which model is used.
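To make the "check which model" advice concrete, here is a minimal sketch of what that bookkeeping looks like in an eval harness: pin an explicit model string and record exactly which model version served each call. This assumes the OpenAI Python client; the model string, prompt handling, and function names are illustrative, not from the original post.

```python
# Minimal sketch: pin and record the exact model when running a coding eval,
# so results are never reported as just "ChatGPT". Assumes the OpenAI Python
# client (pip install openai) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o"  # pin an explicit model string; never rely on a default

def solve(problem: str) -> dict:
    """Ask the pinned model for a solution and record which model answered."""
    response = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": problem}],
    )
    return {
        # response.model echoes the exact model version that served the call;
        # that string, not the product name, is what belongs in the paper.
        "model": response.model,
        "answer": response.choices[0].message.content,
    }
```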
So if you go quoting statements such as this, about a paper on GPT-3.5:

Its ability to generate functional code for 'hard' problems dropped from 40% to 0.66% after this time as well. 'A reasonable hypothesis for why ChatGPT can do better with algorithm problems before 2021 is that these problems are frequently seen in the training dataset.'

Then even if you hadn't realized or checked before (which you really should have), you need to notice that this says 2021, which is very much not ...
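The training-data-contamination hypothesis quoted above implies a simple check: compare pass rates on problems published before versus after the model's training cutoff, and see whether the gap is large. A minimal sketch under assumed data structures; the cutoff date, the problem records, and the passes() grading function are hypothetical stand-ins:

```python
# Minimal sketch of the contamination check: split eval problems at the
# model's training cutoff and compare pass rates on the two halves.
from datetime import date

CUTOFF = date(2021, 9, 1)  # illustrative cutoff; check the actual model card

def pass_rate(problems, passes):
    """Fraction of problems whose generated solution passes all tests."""
    return sum(passes(p) for p in problems) / max(len(problems), 1)

def contamination_split(problems, passes):
    before = [p for p in problems if p["published"] < CUTOFF]
    after = [p for p in problems if p["published"] >= CUTOFF]
    # A large gap between the two rates supports the memorization hypothesis;
    # a small gap suggests genuine capability on this problem distribution.
    return pass_rate(before, passes), pass_rate(after, passes)
```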