Show notes are at https://stevelitchfield.com/sshow/chat.html
By CCC media team.
In this talk, we will explore adversarial attacks on neural networks. In recent years, AI systems have shown incredible abilities, such as playing chess, driving cars, recognizing speech, diagnosing cancer, and recognizing many kinds of objects. But there are cases in which AIs fail in strange ways. The goal of an adversarial attack is to create images that look unsuspicious to humans but trick an AI into believing it sees something entirely different. We will show how it is possible to make a neural network believe that a turtle looks like a weapon, or like any other kind of object.

This talk will cover:
- What kinds of adversarial attacks are there?
- How do they work?
- What are the consequences for security and safety in AI technology?

about this event: https://pretalx.linuxtage.at//glt22/talk/JTLBXV/
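The abstract does not name a specific attack, but one of the simplest and best-known is the Fast Gradient Sign Method (FGSM): nudge every input value by a small amount epsilon in whichever direction increases the model's loss. The sketch below illustrates the idea on a toy logistic-regression "classifier" with random fixed weights rather than a real image network; all names and values here are assumptions for illustration, not code from the talk.

```python
import numpy as np

# Toy setup (assumed for illustration): a fixed linear classifier over a
# 64-dimensional "image" x, predicting the probability of class 1.
rng = np.random.default_rng(0)
w = rng.normal(size=64)   # fixed classifier weights
b = 0.0
x = rng.normal(size=64)   # the clean input

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def predict(v):
    # Probability that v belongs to class 1.
    return sigmoid(w @ v + b)

# Gradient of the loss -log p(y=1 | x) with respect to the input x.
# For logistic regression this has the closed form (p - y) * w.
y = 1.0
p = predict(x)
grad = (p - y) * w

# FGSM step: move each coordinate by epsilon in the direction that
# increases the loss. The perturbation is tiny per coordinate, yet the
# predicted probability of the true class drops sharply.
epsilon = 0.1
x_adv = x + epsilon * np.sign(grad)

print("clean:", predict(x), "adversarial:", predict(x_adv))
```

On a real image model the same recipe applies, except the gradient comes from backpropagation through the network, and epsilon is chosen small enough that the perturbed image still looks unchanged to a human.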