Experiencing Data w/ Brian T. O’Neill (UX for AI Data Products, SAAS Analytics, Data Product Management)
164 - The Hidden UX Taxes that AI and LLM Features Impose on B2B Customers Without Your Knowledge
Are you prepared for the hidden UX taxes that AI and LLM features might be imposing on your B2B customers without your knowledge? Are you certain that your AI product or features are truly delivering value, or are there unseen taxes working against your users and your business? In this episode, I’m delving into some of the UX challenges that I think need to be addressed when implementing LLM and AI features in B2B products.
While AI seems to offer the chance for significantly enhanced productivity, it also introduces a new layer of complexity for UX design. This complexity is not limited to the challenges of designing in a probabilistic medium (i.e. ML/AI); it also includes being able to define what “quality” means. When the product team does not have a shared understanding of what a measurably better UX outcome looks like, improved sales and user adoption are less likely to follow.
I’ll also discuss aspects of designing for AI that may be invisible on the surface. How might AI-powered products change the work of B2B users? What are some of the traps I see startup clients, and the founders I advise in MIT’s Sandbox venture fund, fall into?
If you’re a product leader in B2B / enterprise software and want to make sure your AI capabilities don’t end up creating more damage than value for users, this episode will help!
Highlights / Skip to
- Improving your AI model accuracy improves outputs—but customers only care about outcomes (4:02)
- AI-driven productivity gains also put the customer’s “next problem” in front of them sooner. Are you addressing the most urgent problem they have now, or the one they used to have? (7:35)
- Products that win will combine AI with tastefully designed deterministic software, because doing everything for everyone well is impossible and most models alone aren’t products (12:55)
- Just because your AI app or LLM feature can do “X” doesn't mean people will want it or change their behavior (16:26)
- AI Agents sound great—but there is a human UX too, and it must enable trust and intervention at the right times (22:14)
- Not overheard from customers: “I would buy this/use this if it had AI” (26:52)
- Adaptive UIs sound like they’ll solve everything—but to reduce friction, they need to adapt to the person, not just the format of model outputs (30:20)
- Introducing AI adds more states and scenarios that your product may need to support, and they may not be obvious right away (see the sketch after this list) (37:56)
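To make that last point a bit more concrete, here is a minimal, hypothetical sketch (in Python; every name in it is invented for illustration, and none of it comes from the episode) of the kinds of extra states an LLM-backed feature can force a product team to design for, beyond the usual loading/success/error of deterministic software:

```python
from enum import Enum, auto

# Hypothetical states an LLM-backed feature may need to handle.
class LLMFeatureState(Enum):
    IDLE = auto()              # nothing requested yet
    AWAITING_MODEL = auto()    # request sent, nothing returned yet
    STREAMING = auto()         # partial output arriving token by token
    LOW_CONFIDENCE = auto()    # output produced, but flagged as uncertain
    REFUSED = auto()           # model declined the request (policy, scope)
    EMPTY_RESULT = auto()      # model answered, but with nothing usable
    ERROR = auto()             # transport or model failure
    USER_EDITING = auto()      # user is correcting or overriding the output

def render_hint(state: LLMFeatureState) -> str:
    """Map each state to a user-facing message (placeholder copy)."""
    hints = {
        LLMFeatureState.AWAITING_MODEL: "Working on it…",
        LLMFeatureState.STREAMING: "Drafting a response…",
        LLMFeatureState.LOW_CONFIDENCE: "This may be off; please review.",
        LLMFeatureState.REFUSED: "That request can't be completed here.",
        LLMFeatureState.EMPTY_RESULT: "No suggestions found; try rephrasing.",
        LLMFeatureState.ERROR: "Something went wrong; try again.",
        LLMFeatureState.USER_EDITING: "Your edits will be kept.",
    }
    return hints.get(state, "")
```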
Quotes from Today’s Episode
- Product leaders have to decide how much effort and resources to put into model improvements versus improving the user’s experience. Obviously, model quality is important in certain contexts and regulated industries, but when GenAI errors and confabulations are lower risk to the user (i.e. they create minor friction or inconveniences), the broader user experience you facilitate might be what actually determines the true value of your AI features or product. Model accuracy alone is not necessarily going to lead to happier users or increased adoption. ML models can be quantifiably tested for accuracy with structured tests, but the fact that they’re easier to test for quality than something like UX doesn’t mean users value those improvements more. The product stands a better chance of creating business value when it clearly demonstrates that it is improving your users’ lives. (5:25)
- When designing AI agents, there is still a human UX - a beneficiary - in the loop. They have an experience, whether you designed it with intention or not. How much transparency needs to be given to users when an agent does work for them? Should users be able to intervene while the AI is doing this type of work? Handling errors is something we do in all software, but what about retraining and learning so that future user experiences are better? Is the system learning anything while it’s going through this, and can I tell if it’s learning what I want or need it to learn? What about humans in the loop who might interact with or be affected by the work the agent is doing even if they aren’t the agent’s owner or “user”? Whose outcomes matter here? At what cost? (A rough sketch of one possible intervention point appears after these quotes.) (22:51)
- Customers primarily care about things like raising or changing their status, making more money, making their job easier, saving time, etc. In fact, I believe a product marketed with GenAI may eventually signal a negative, a burden, to customers, thanks to the inflated and unmet expectations around AI that is poorly implemented in the product UX. Don’t assume it’s going to be bought just because it uses AI in a novel way. Customers aren’t sitting around wishing for “disruption” from your product; quite the opposite. AI or not, you need to make the customer the hero. Your AI will shine when it delivers an outsized UX outcome for your users. (27:49)
- What kind of UX are you delivering right out of the box when a customer tries out your AI product or feature? Did you design it for tire kicking, playing around, and user stress testing, or just an idealistic happy path? GenAI features inside B2B products should surface capabilities and constraints, particularly around where users can create value for themselves quickly. Natural hints and well-designed prompt nudges in LLMs, for example, are important to users and to your product team: you’re setting a more realistic expectation with customers of what’s possible and helping them get to an outcome sooner. You’re also teaching them how to use your solution to get the most value, without asking them to go read a manual. (38:21)
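As a companion to the agent quote above (22:51), here is a minimal, hypothetical Python sketch of one way an intervention point could work: the agent runs low-risk actions on its own and pauses for explicit approval above a risk threshold. The names (AgentAction, run_agent_step, ask_user) and the threshold are assumptions for illustration, not anything described in the episode.

```python
from dataclasses import dataclass
from typing import Callable

# Hypothetical approval gate: the agent proposes actions, but anything
# above a risk threshold waits for the human before it runs.

@dataclass
class AgentAction:
    description: str   # what the agent wants to do, in plain language
    risk: float        # 0.0 (harmless) .. 1.0 (irreversible or expensive)

def run_agent_step(
    action: AgentAction,
    execute: Callable[[AgentAction], None],
    ask_user: Callable[[AgentAction], bool],
    approval_threshold: float = 0.5,
) -> bool:
    """Execute low-risk actions automatically; pause and ask for the rest.

    Returns True if the action ran, False if the user declined it.
    """
    if action.risk < approval_threshold:
        execute(action)          # still worth logging for transparency
        return True
    if ask_user(action):         # surface the proposed action to the human
        execute(action)
        return True
    return False                 # declined: record it so the agent can adapt

# Example wiring, with console prompts standing in for a real UI:
if __name__ == "__main__":
    def execute(a: AgentAction) -> None:
        print(f"[agent] doing: {a.description}")

    def ask_user(a: AgentAction) -> bool:
        return input(f"Allow '{a.description}'? [y/N] ").strip().lower() == "y"

    run_agent_step(AgentAction("archive 40 stale records", risk=0.8),
                   execute, ask_user)
```

Whether the gate is a risk score, an action allowlist, or a review queue matters less than the underlying point: the user can see what the agent is about to do and say no before it happens.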