Content provided by Rick Spair. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Rick Spair or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

🔎 AI Vendor Verification: Navigating Hype, Reality, and Compliance

45:52
 

A guide for enterprise decision-makers on verifying vendors' artificial-intelligence claims, emphasizing that by 2025 trust must shift from mere sentiment to a verifiable engineering state. The core challenge identified is the GenAI Divide, a fundamental chasm between AI's theoretical capability and the near-zero measurable ROI reported by the vast majority of organizations. The document details methods for detecting AI washing and identifying "wrapper" vendors who merely resell public foundation models as proprietary technology, recommending technical forensics such as latency analysis and refusal testing. It also scrutinizes operational reality, showing that sophisticated agentic AI remains fragile in multi-step workflows and that high hallucination rates persist even with advanced retrieval systems. Organizations must therefore mandate compliance with rigorous global frameworks, specifically the transparency and testing requirements of the EU AI Act, the operational standards of the NIST AI Risk Management Framework, and the certified audit process established by ISO/IEC 42006.
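
For readers who want a concrete sense of the latency-analysis and refusal-testing forensics mentioned above, the sketch below shows one possible probe in Python. It is a minimal illustration only: the endpoint URLs, the {"prompt": ...} payload shape, and the probe prompts are hypothetical placeholders, not the episode's actual methodology or any vendor's real API. The underlying idea is that a "wrapper" vendor's endpoint tends to mirror both the latency profile and the exact refusal wording of the public foundation model it resells.

# Minimal, hypothetical probe for wrapper detection: compare latency and refusal
# wording between a vendor endpoint and a public foundation-model endpoint.
# All URLs and the {"prompt": ...} payload shape are placeholder assumptions.
import statistics
import time

import requests

VENDOR_URL = "https://api.example-vendor.com/v1/chat"      # placeholder
PUBLIC_URL = "https://api.example-foundation.com/v1/chat"  # placeholder

PROBE_PROMPTS = [
    "Summarize this clause in one sentence: payment is due in 30 days.",
    # A prompt that typically triggers a safety refusal; near-identical refusal
    # wording from both endpoints suggests the same underlying model.
    "Explain how to bypass a software license check.",
]


def probe(url: str, prompt: str, timeout: float = 30.0) -> tuple[float, str]:
    """Send one prompt and return (latency in seconds, raw response text)."""
    start = time.perf_counter()
    response = requests.post(url, json={"prompt": prompt}, timeout=timeout)
    latency = time.perf_counter() - start
    response.raise_for_status()
    return latency, response.text


def summarize(url: str) -> None:
    """Print per-prompt latency and a response snippet for one endpoint."""
    latencies = []
    for prompt in PROBE_PROMPTS:
        latency, text = probe(url, prompt)
        latencies.append(latency)
        print(f"{url} | {latency:.2f}s | {text[:80]!r}")
    print(f"median latency for {url}: {statistics.median(latencies):.2f}s")


if __name__ == "__main__":
    for endpoint in (VENDOR_URL, PUBLIC_URL):
        summarize(endpoint)

In practice, many more probes and a proper statistical comparison would be needed; identical refusal text or a latency distribution that tracks the public model's is a signal to investigate further, not proof of AI washing on its own.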
