The podcast where two dudes with nothing better to do hang out and tell you what they think about things. Things include: games, movies, TV, pop culture, comics, toys, collectibles, and much more.
DanceSport, Coaching and The Aging Athlete. An amateur ballroom dancer with a couple of disorders, trying to decipher the meaning of it all. She discusses her experiences in the competitive dance world and sports.
Welcome to the home of The Bable Effect, a podcast of my unfiltered thoughts on things I feel the need to talk about. Support this podcast: https://podcasters.spotify.com/pod/show/bable-effect/support
Welcome to irResponsible AI - a series where you find out how NOT to end up in the New York Times headlines for all the wrong reasons! 💡 Why are we doing this? As experts, we are tired of the boring “mainstream corporate” RAI communication. Here, we give it to you straight. ⁉️ Why call it Irresponsible AI? Responsible AI exists because of irresponsible AI. Knowing what NOT to do can be, at times, more actionable than knowing what to do. 🎙️Who are your hosts? Why should you even bother to list ...

Algorithm Integrity Matters: for Financial Services leaders, to enhance fairness and accuracy in data processing
Risk Insights: Yusuf Moolla
Insights for financial services leaders who want to enhance fairness and accuracy in their use of data, algorithms, and AI. Each episode explores challenges and solutions related to algorithmic integrity, including discussions on navigating independent audits. The goal of this podcast is to give leaders the knowledge they need to ensure their data practices benefit customers and other stakeholders, reducing the potential for harm and upholding industry standards.
Welcome to the Life Advice With Dimily podcast, where you can get life advice from a mid-twenties couple.

Article 23. Algorithmic System Integrity: Testing
5:47
Spoken (by a human) version of this article. TL;DR (TL;DL?) Testing is a core step for algorithmic integrity. Testing involves various stages, from developer self-checks to UAT. Where these happen will depend on whether the system is built in-house or bought. Testing needs to cover several integrity aspects, including accuracy, fairness, securi…
The invaders are graced by the presence of their Executive Producer for a wonderful podcast romp.

Article 22. Algorithm Integrity: Third party assurance
7:26
Spoken (by a human) version of this article. One question that comes up often is “How do we obtain assurance about third party products or services?” Depending on the nature of the relationship, and what you need assurance for, this can vary widely. This article attempts to lay out the options, considerations, and key steps to take. TL;DR (TL;DL?) Th…

Guest 3. Shea Brown, Founder and CEO of BABL AI
41:23
Navigating AI Audits with Dr. Shea Brown. Dr. Shea Brown is Founder and CEO of BABL AI. BABL specializes in auditing and certifying AI systems, consulting on responsible AI practices, and offering online education. Shea shares his journey from astrophysics to AI auditing, the core services provided by BABL AI including compliance audits, technical te…

Article 21. AI Risk Training: Role-based tailoring
6:26
Spoken (by a human) version of this article. AI literacy is growing in importance (e.g., EU AI Act, IAIS). AI literacy needs vary across roles. Even "AI professionals" need AI Risk training. Links EU AI Act: The European Union Artificial Intelligence Act - specific expectation about “AI literacy”. IAIS: The International Association of Insurance Supe…
They're baaaaack! We didn't forget about you! Chuck gives a quick update on the lives of the invaders and then we launch into the episode!

Guest 2. Patrick Sullivan: VP of Strategy and Innovation at A-LIGN
32:03
Navigating AI Governance and Compliance. Patrick Sullivan is Vice President of Strategy and Innovation at A-LIGN and an expert in cybersecurity and AI compliance with over 25 years of experience. Patrick shares his career journey, discusses his passion for educating executives and directors on effective governance, and explains the critical role of …

Guest 1. Ryan Carrier: Executive Director of ForHumanity
44:49
Mitigating AI Risks. Ryan Carrier is founder and executive director of ForHumanity, a non-profit focused on mitigating the risks associated with AI, autonomous, and algorithmic systems. With 25 years of experience in financial services, Ryan discusses ForHumanity's mission to analyze and mitigate the downside risks of AI to benefit society. The conv…

Article 20. Algorithm Reviews: Public vs Private Reports
8:26
Spoken (by a human) version of this article. Public AI audit reports aren't universally required; they mainly apply to high-risk applications and/or specific jurisdictions. The push for transparency primarily concerns independent audits, not internal reviews. Prepare by implementing ethical AI practices and conducting regular reviews. Note: High-ri…

Article 19. Algorithmic System Reviews: Substantive vs. Controls Testing
6:26
Spoken (by a human) version of this article. Knowing the basics of substantive testing vs. controls testing can help you determine if the review will meet your needs. Substantive testing directly identifies errors or unfairness, while controls testing evaluates governance effectiveness. The results/conclusions are different. Understanding these diffe…

Article 18. Algorithm Integrity: Training and Awareness
4:07
Spoken (by a human) version of this article. Ongoing education helps everyone understand their role in responsibly developing and using algorithmic systems. Regulators and standard-setting bodies emphasise the need for AI literacy across all organisational levels. Links ForHumanity - join the growing community here. ForHumanity - free courses here. I…

Article 17. Algorithm Integrity: Audit vs Review
9:10
Spoken (by a human) version of this article. The terminology - "audit" vs "review" - is important, but clarity about deliverables is more important when commissioning algorithm integrity assessments. Audits are formal, with an opinion or conclusion that can often be shared externally. Reviews come in various forms and typically produce recommendation…

Article 16. Algorithmic System Accuracy Reviews – Choosing the Right Approach
8:07
Spoken (by a human) version of this article. Outcome-focused accuracy reviews directly verify results, offering more robust assurance than process-focused methods. This approach can catch translation errors, unintended consequences, and edge cases that process reviews might miss. While more time-consuming and complex, outcome-focused reviews provid…

Article 15. Algorithm Integrity Documentation - Getting Started
5:19
Spoken (by a human) version of this article. Documentation makes it easier to consistently maintain algorithm integrity. This is well known. But there are lots of types of documents to prepare, and often the first hurdle is just thinking about where to start. So this simple guide is meant to help do exactly that – get going. About this podcast A po…
Spoken (by a human) version of this article. Banks and insurers are increasingly using external data; using it beyond its intended purpose can be risky (e.g. discriminatory). Emerging regulations and regulatory guidance emphasise the need for active oversight by boards and senior management to ensure responsible use of external data. Keeping the c…

Article 13. Bridging the purpose-risk gap: Customer-first algorithmic risk assessments
7:18
Spoken (by a human) version of this article. Banks and insurers sometimes lose sight of their customer-centric purpose when assessing AI/algorithm risks, focusing instead on regular business risks and regulatory concerns. Regulators are noticing this disconnect. This article aims to outline why the disconnect happens and how we can fix it. Report m…

Article 12. Risk-Focused Principles for Change Control in Algorithmic Systems
12:00
Spoken (by a human) version of this article. With algorithmic systems, a change can trigger a cascade of unintended consequences, potentially compromising fairness, accountability, and public trust. So, managing changes is important. But if you use the wrong framework, your change control process may tick the boxes, but be both ineffective and ine…

Article 11. Deprovisioning User Access to Maintain Algorithm Integrity
9:27
Spoken (by a human) version of this article. The integrity of algorithmic systems goes beyond accuracy and fairness. In Episode 4, we outlined 10 key aspects of algorithm integrity. Number 5 in that list (not in order of importance) is Security: the algorithmic system needs to be protected from unauthorised access, manipulation and exploitation. In…

Article 10. Fairness reviews: identifying essential attributes
6:54
Spoken (by a human) version of this article. When we're checking for fairness in our algorithmic systems (incl. processes, models, rules), we often ask: What are the personal characteristics or attributes that, if used, could lead to discrimination? This article provides a basic framework for identifying and categorising these attributes. About thi…
The invasion picks back up as the invaders discuss ancient history, 2D Con, and life in general.

Article 9. Algorithmic Integrity: Don't wait for legislation
10:39
Spoken (by a human) version of this article. Legislation isn't the silver bullet for algorithmic integrity. Is it useful? Sure. It helps provide clarity and can reduce ambiguity. And once a law is passed, we must comply. However: existing legislation may already apply; new algorithm-focused laws can be too narrow or quickly outdated; standards ca…

Article 8. A Balanced Focus on New and Established Algorithms
8:50
Spoken (by a human) version of this article. Even in discussions among AI governance professionals, there seems to be a silent “gen” before AI. With the rapid progress - or rather, prominence - of generative AI capabilities, these have taken centre stage. Amidst this excitement, we mustn't lose sight of the established algorithms and data-enabled workfl…

Article 7. Postcodes: Hidden Proxies for Protected Attributes
11:31
Spoken (by a human) version of this article. In a previous article, we discussed algorithmic fairness, and how seemingly neutral data points can become proxies for protected attributes. In this article, we'll explore a concrete example of a proxy used in insurance and banking algorithms: postcodes. We've used Australian terminology and data. But th…

Article 6. Balancing Security and Access for increased algorithmic integrity
5:41
Spoken (by a human) version of this article. When we talk about security in algorithmic systems, it's easy to focus solely on keeping the bad guys out. But there's another side to this coin that's just as important: making sure the right people can get in. This article aims to explain how security and access work together for better algorithm integ…
The invaders go to an MC Chris show with the one and only Robb!

Article 5. Equal vs Equitable: Algorithmic Fairness
13:53
Spoken (by a human) version of this article. Fairness in algorithmic systems is a multi-faceted and developing topic. In episode 4, we explored ten key aspects to consider when scoping an algorithm integrity audit. One aspect was fairness, with this in the description: "...The design ensures equitable treatment..." This raises an important questi…

Article 4. Structuring the Audit Objective: 10 Key Aspects of Algorithm Integrity
12:56
Spoken (by a human) version of this article. In Episode 1, we explored the challenges of placing undue reliance on audits. One potential solution that we outlined is a clear scope, particularly regarding the audit objective. In this episode, we focus on algorithm integrity as the broad audit objective. While it’s easy to assert that an algorithm ha…

Article 3. Navigate Algorithm Audit Guidance: some aren't relevant to your context
8:55
Spoken (by a human) version of this article. AI and algorithm audits help ensure ethical and accurate data processing, preventing harm and disadvantage. However, the guidelines are not yet mature, and quite disparate. This can make the audit process confusing, and quite daunting - how do you wade through it all to find the information that you need…

Article 2. Choice vs obligation: motivation shapes the effectiveness of your review
6:53
Spoken (by a human) version of this article. The motivation(s) for commissioning a review can determine how effective it will be. Consider a personal health check-up: Sometimes we undergo medical check-ups because we don’t have a choice. We need to - for example for workplace requirements or for insurance. At other times, we choose to undergo such …

Article 1. How reliable is the algorithm review that you have commissioned?
10:14
Spoken (by a human) version of this article. One common issue with audits is undue reliance. Can you rely on the audit report to tell you what you need to know? Could you be relying on it too much? https://riskinsights.com.au/blog/reliable-audits About this podcast A podcast for Financial Services leaders, where we discuss fairness and accuracy in …
A brief intro to the podcast. If you have suggestions for topics you'd like me to cover, feel free to reach out to me via email. [email protected] About this podcast A podcast for Financial Services leaders, where we discuss fairness and accuracy in the use of data, algorithms, and AI. Hosted by Yusuf Moolla. Produced by Risk Insights (risk…

🔥 Generative AI Use Cases: What's Legit and What's Not? | irResponsible AI EP6S01
26:54
Got questions or comments or topics you want us to cover? Text us! In this episode of irResponsible AI, we discuss ✅ GenAI is cool, but do you really need it for your use case? ✅ How can companies end up doing irresponsible AI by using GenAI for the wrong use cases? ✅ How can we get out of this problem? What can you do? 🎯 Two simple things: like an…

🎯 Outsider's Guide to AI Risk Management Frameworks: NIST Generative AI | irResponsible AI EP5S01
34:32
Got questions or comments or topics you want us to cover? Text us! In this episode we discuss AI Risk Management Frameworks (RMFs) focusing on NIST's Generative AI profile: ✅ Demystify misunderstandings about AI RMFs: what they are for, what they are not for ✅ Unpack challenges of evaluating AI frameworks ✅ Inert knowledge in frameworks needs to be …

🧐 Responsible AI is NOT the icing on the cake | irResponsible AI EP4S01
31:03
Got questions or comments or topics you want us to cover? Text us! In this episode filled with hot takes, Upol and Shea discuss three things: ✅ How the Gemini Scandal unfolded ✅ Is Responsible AI too woke? Or is there a hidden agenda? ✅ What companies can do to address such scandals What can you do? 🎯 Two simple things: like and subscribe. You h…

🔥 The Taylor Swift Factor: Deep fakes & Responsible AI | irResponsible AI EP3S01
22:59
Got questions or comments or topics you want us to cover? Text us! As they say, don't mess with Swifties. This episode of irResponsible AI is about the Taylor Swift Factor in Responsible AI: ✅ Taylor Swift's deepfake scandal and what it did for RAI ✅ Do famous people need to be harmed before we do anything about it? ✅ How to address the deepfake prob…

🌶️ Cutting through the Responsible AI hype: how to enter the field | irResponsible AI EP2S01
33:43
Got questions or comments or topics you want us to cover? Text us! It gets spicy in this episode of irResponsible AI: ✅ Cutting through the Responsible AI hype to separate experts from "AI influencers" (grifters) ✅ How you can break into Responsible AI consulting ✅ How the EU AI Act discourages irresponsible AI ✅ How we can nurture a "cohesivel…

🤯 Harms in the Algorithm's Afterlife: how to address them | irResponsible AI EP1S01
33:46
Got questions or comments or topics you want us to cover? Text us! In this episode of irResponsible AI, Upol & Shea bring the heat to three topics: 🚨 Algorithmic Imprints: harms from zombie algorithms, with an example of the LAION dataset 🚨 The FTC vs. Rite Aid Scandal and how it could have been avoided 🚨 NIST's Trustworthy AI Institute and the fut…
Thomas, Chuck and Vincent visit Pizza Karma for some tasty 'za.
Physical Media, brick and mortar, and Summer Convention Plans!
Thomas, Chuck and Vincent visit the Maplewood Mall.
KayBee Toys, Toy Popularity, Physical Media, Blu-Ray, and why streaming services suck...
Chuck and Larry get after it with an episode of their very own!
The one where they go to a dead mall.
The boys discuss their new feelings about all things physical media.
Chuck and Vincent chill out.
The boys are back... discussing interesting and creative garage sale / flea market strategies.
The boys discuss old school wrestling and get distracted by old video games down the road!
This episode could have been called Godfathers and Dead Media.

219 - Hedonism Bot Apologizes for Nothing
48:14
The invaders reminisce, mess with ChatGPT, and apologize for nothing!
The one where the invaders lament their retro woes.