#111 - Leslie Nooteboom - Co-founder & Chief Product Officer, Humanising Autonomy

43:37
 

Data is the lifeblood of Artificial Intelligence. Quite simply, the better and richer the data, the more capable the algorithm. This applies both to the training data used to build the algorithm and, just as importantly, to the input data available to the algorithm when it does its job.

Take the case of autonomous vehicles and advanced driver assistance systems. These systems rely on their eyes - cameras, lidar, and radar - to see the environment around the vehicle. The input from these eyes is then passed on to the brain - the algorithm - which makes sense of what the eyes see.

Most state-of-the-art ADAS and AV algorithms today are designed to perceive what these sensors see by drawing bounding boxes around road users. That's how they perceive pedestrians, vehicles, and other obstacles.

But human behaviour rarely fits in a box, and human behaviour has a huge impact on how well an AV algorithm performs. A bounding box alone is not sufficient to truly perceive pedestrian behaviour, for instance. Is that pedestrian about to cross the road? How much risk does this road user pose? Is that a vulnerable road user?
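
To make that concrete, here is a minimal, hypothetical sketch in Python: an object detector hands you a bounding box, and a separate behaviour layer combines additional cues into a crossing-risk estimate. Every class, field, and threshold below is illustrative - this is not Humanising Autonomy's API, just a toy illustration of why intent prediction needs more than a box.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    box: tuple   # (x, y, width, height) in pixels, from an object detector
    label: str   # e.g. "pedestrian"

@dataclass
class BehaviourCues:
    facing_road: bool          # head/body orientation toward the carriageway
    speed_mps: float           # walking speed estimated from tracking
    distance_to_kerb_m: float  # how close to the road edge

def crossing_risk(det: Detection, cues: BehaviourCues) -> float:
    """Toy heuristic combining cues into a 0..1 crossing-risk score.
    A real system would learn this mapping from data, not hand-code it."""
    if det.label != "pedestrian":
        return 0.0
    risk = 0.2                          # base rate for any detected pedestrian
    if cues.facing_road:
        risk += 0.4                     # oriented toward traffic
    if cues.distance_to_kerb_m < 1.0:
        risk += 0.2                     # standing close to the kerb
    if cues.speed_mps > 1.2:
        risk += 0.2                     # moving briskly
    return min(risk, 1.0)

# A pedestrian near the kerb, facing the road, walking briskly: high risk.
print(crossing_risk(Detection((340, 180, 60, 140), "pedestrian"),
                    BehaviourCues(True, 1.4, 0.6)))   # -> 1.0
```

The point of the sketch: the bounding box only tells you where someone is, while the behavioural cues layered on top are what let the system reason about what they are likely to do next.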

Enter Humanising Autonomy, a company on a mission to create a global standard for human interaction with automated systems. This is an incredibly interesting company, and I was delighted to have the opportunity to speak to their Co-founder and Chief Product Officer, Leslie Nooteboom.

Think of Humanising Autonomy as a module you could add to the AV brain, making it capable of perceiving - and predicting - human behaviour on the road. I would imagine a solution like this could improve road safety by orders of magnitude.

These guys are up to some really fascinating stuff at the intersection of behavioural psychology, computer vision, and artificial intelligence. How does that impact the world of autonomous driving? Find out in my very interesting chat with Leslie.

http://ai-in-automotive.com/aiia/111/leslienooteboom

AI in Automotive Podcast
