#73 Ship-Posting and Cake Recipes: Measuring the Return of Your Data Initiatives - Interview w/ Katie Bauer

1:02:04
 
Content provided by Data as a Product Podcast Network. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Data as a Product Podcast Network or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Sign up for Data Mesh Understanding's free roundtable and introduction programs here: https://landing.datameshunderstanding.com/

Please Rate and Review us on your podcast app of choice!

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

Episode list and links to all available episode transcripts here.

Provided as a free resource by Data Mesh Understanding / Scott Hirleman. Get in touch with Scott on LinkedIn if you want to chat data mesh.

Transcript for this episode (link) provided by Starburst. See their Data Mesh Summit recordings here and their great data mesh resource center here.

Katie's LinkedIn: https://www.linkedin.com/in/mkatiebauer/

Katie's Twitter: https://twitter.com/imightbemary

In this episode, Scott interviewed Katie Bauer, a Data Science Manager at Twitter in their Core-Tech group. To be clear, she was not representing Twitter on the show; she shared only her own opinions. The main topic of discussion was how to measure the value and success of your data projects and implementations.

Some very useful advice from Katie that can feel a bit obvious when said but is VERY often and easily overlooked: measure what would actually drive you to act. If a result coming in 10x higher than expected, or 90% below, isn't going to change your decision, it may be interesting information, but is it really important? If not, don't waste the time measuring it, especially early on in your data measurement maturity. The point is to get to an objective evaluation, not overly precise measurements. Set yourself up to improve and iterate, and don't make this hard on yourself.

She also gave the pithy statement: what is valuable is not necessarily valued.

Katie has a cake analogy that maps well to data maturity. Think about your need and the other person's capability when it comes to making a cake. Do you need a fancy cake for a wedding, or is this for a 3 year old's birthday party? For the first, you probably want something special; for the second, if it vaguely resembles something from TV and tastes decent, the consumer will probably be happy. Is the other person capable of making a super fancy layered red velvet cheesecake, or is a cake mix from a box more up their alley? Likewise, how mature are the parties at creating measurement data, and how mature or advanced does the output need to be?

Katie started the conversation talking about survivorship bias and other biased ways of measuring. Throughout her career, she has often seen people who are having success seek to prove that success via metrics instead of finding the metrics that matter most. That approach has some pretty obvious flaws, so we need to move towards better measurement practices. For Katie, measuring the value of data science is pretty meta.

Katie recommends starting out with some really easy measurements around engagement and usage. If it's a platform, what are your daily, weekly, and/or monthly active users - and which of those is actually the most useful metric? Should people really be using your project daily? Think about what your addressable market is and what percentage of it you have captured. Also keep in mind that NPS (net promoter score) is a very lagging indicator.
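
As a rough illustration of the kind of engagement and adoption metrics described above (not something from the episode itself), here is a minimal Python sketch; the event log, user names, and addressable-market size are all made up for the example:

```python
from datetime import datetime, timedelta

# Hypothetical usage log for an internal data platform: (user_id, event timestamp) pairs.
events = [
    ("alice", datetime(2022, 5, 2, 9, 15)),
    ("bob",   datetime(2022, 5, 2, 14, 40)),
    ("alice", datetime(2022, 5, 9, 11, 5)),
    ("cara",  datetime(2022, 4, 20, 16, 30)),
]

def active_users(events, as_of, window_days):
    """Distinct users with at least one event in the trailing window ending at as_of."""
    cutoff = as_of - timedelta(days=window_days)
    return {user for user, ts in events if cutoff < ts <= as_of}

as_of = datetime(2022, 5, 10)
dau = len(active_users(events, as_of, 1))   # daily active users
wau = len(active_users(events, as_of, 7))   # weekly active users
mau = len(active_users(events, as_of, 30))  # monthly active users

# Adoption: what share of the people who *could* be using the platform actually are?
addressable_users = 50  # assumed size of the internal addressable market
adoption_pct = 100 * mau / addressable_users

print(f"DAU={dau}, WAU={wau}, MAU={mau}, adoption={adoption_pct:.0f}%")
```

Whether DAU, WAU, or MAU is the right headline number depends on how often people should reasonably be using the thing you built - which is exactly the question Katie pushes listeners to ask.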

When thinking about metrics, two things really stand out to Katie: first, what is your useful granularity? Don't get overly precise if you don't need to. You want an objective evaluation, and anything past that can become overkill, which has an inherent cost. And second, what is your useful time scale? Is this on a micro scale, where the task should take five minutes to complete, so a difference of five minutes is a big deal? Or is it a much longer time scale?

When thinking about what to measure, ask yourself what your company values. Is it shipping, usage, cleaning up tech debt/deprecation, etc.? Katie threw out a bit of a mind bender: what is valuable is not necessarily valued. So think about what people care about regarding information flow. It might not be the most valuable information, but it might be highly valued - or vice versa. At the end of the day, are you there to be right or to serve your constituents?

Katie's advice for getting started on measurement is to begin with something concrete and use that initial measurement as a learning stepping stone. She mentioned that it can be hard to recover from measuring the wrong thing or getting your measurement wrong - people can jump to the conclusion that measuring is bad - so set expectations up front that you will iterate on your metrics.

Data Mesh Radio is hosted by Scott Hirleman. If you want to connect with Scott, reach out to him on LinkedIn: https://www.linkedin.com/in/scotthirleman/

If you want to learn more and/or join the Data Mesh Learning Community, see here: https://datameshlearning.com/community/

If you want to be a guest or give feedback (suggestions for topics, comments, etc.), please see here

All music used in this episode was found on PixaBay and was created by (with slight edits by Scott Hirleman): Lesfm, MondayHopes, SergeQuadrado, ItsWatR, Lexin_Music, and/or nevesf
