A Day In The Life Of An AI Product Manager With Tarig Khairalla From Kensho

30:33

I’m excited to bring you this conversation with Tarig Khairalla. Tarig is an AI product manager at Kensho; Kensho is the AI and innovation hub for S&P Global. They are focused on helping customers unlock insights from messy data.

In this episode, Tarig talked about the day-in-life of an AI product manager, how to overcome adoption challenges, and his approach to collaborating with both engineering partners and customers.

Links

Tarig on LinkedIn

About Kensho

Transcript

[00:00:00] Tarig Khairalla: Being a product manager in this space right now is very exciting. Things are moving fast. There are opportunities everywhere to leapfrog competitors and other companies out there. And I think it's an opportunity for a lot of product managers now to get into this space and really make a difference for a lot of customers.

[00:00:20] Hima: I'm Himakara Pieris. You're listening to Smart Products, a show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems using AI.

[00:00:30] Himakara Pieris: I'm excited to bring you this conversation with Tarig Khairalla. Tarig is an

[00:00:35] Himakara Pieris: AI product manager at Kensho. Kensho is the AI and innovation hub for S&P Global. They're focused on helping customers unlock insights from messy data. In this episode, Tarig talked about the day-in-the-life of an AI product manager, how to overcome challenges, and his approach to collaborating with both engineering partners and customers. Check the show notes for links.

[00:00:54]

[00:00:56] Himakara Pieris: Welcome to the Smart Product Show, Tarig.

[00:00:58] Tarig Khairalla: Thank you for having me.

[00:01:00] Himakara Pieris: Could you tell us a bit about your background and how you got into managing AI products?

[00:01:06] Tarig Khairalla: Interestingly, coming out of school, I did not immediately become a product manager. I graduated with an accounting, finance, and economics degree, and I worked in the accounting space for the first few years at Ernst & Young. Sometime during those first few years of working is when I got involved in the AI and machine learning space, and from there I stuck with it. I worked at Verizon afterwards for a little bit, and here I am now with Kensho Technologies as a PM in the AI and machine learning space.

[00:01:38] Himakara Pieris: Tell us a bit about what Kensho does, and specifically what Kensho does with AI and machine learning to help companies work with messy data.

[00:01:47] Tarig Khairalla: As you mentioned, at Kensho we develop AI solutions that unlock insights hidden in messy data. If you think about the business and finance space, the majority of data created has no standard format. Your typical professional gathers information from images,

[00:02:05] Tarig Khairalla: videos, text, audio, and so much more. Unfortunately, as a result, critical insights are generally trapped and hard to uncover in that data. And the reality is that data today is being created at a rate much faster than before, so the solutions we build are tackling problems that many organizations are facing today.

[00:02:30] Tarig Khairalla: We build a variety of machine learning products that serve to structure, unify, and contextualize data. We have products that are used on a daily basis for things like transcribing voice to text, extracting financial information from PDF documents, identifying named entities like people and places within text, and understanding sentences and paragraphs to tag them with the topics or concepts being discussed.

[00:02:59] Tarig Khairalla: So at the end of the day, what we're looking to do is make information more accessible and easier to use, allow our customers to discover hidden insights much faster than they could before, and ultimately enable them to make decisions with conviction.
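To give a concrete flavor of one capability Tarig mentions here, named entity recognition, below is a minimal sketch using the open-source spaCy library. It is purely illustrative and assumes nothing about Kensho's actual models, which, as discussed later, are trained on financial data.

```python
# Illustrative only: generic named entity recognition over a finance-flavored
# sentence, using spaCy's small general-purpose English model. A production
# system in this domain would use a model trained on financial documents.
import spacy

# Requires: python -m spacy download en_core_web_sm
nlp = spacy.load("en_core_web_sm")

doc = nlp("Acme Bank reported quarterly revenue growth, chief executive Jane Doe said in New York.")

for ent in doc.ents:
    print(ent.text, ent.label_)  # e.g. "Jane Doe" PERSON, "New York" GPE
```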

[00:03:15] Himakara Pieris: Do you target specific industry verticals or specific business processes?

[00:03:21] Tarig Khairalla: Yeah, our products are mainly geared towards finance workflows. A lot of the models and products that we built were trained on financial data. The extraction capabilities that we have were trained on financial documents, and the transcription products that we provide are trained on earnings calls, for example, and many other types of financial-related data.

[00:03:44]

[00:03:44] Himakara Pieris: I presume you expect a higher level of accuracy because your training data set is focused on domain-specific data.

[00:03:51] Tarig Khairalla: That's the angle, yes. We are focused on business and finance to make sure that we're the best at developing machine learning solutions for the finance professional.

[00:04:02] Himakara Pieris: As a PM, how do you think about ways to improve the product or add extra value? Is it primarily around increasing accuracy, or pushing into adjacent use cases? How do you build on top of what you already have as a PM?

[00:04:19] Tarig Khairalla: It's a little bit of both, definitely.

[00:04:21] Tarig Khairalla: Making sure that we are continuing to push the boundaries in terms of what's possible from an accuracy perspective across all our products. But the other thing we do is make sure that we can provide value beyond just what we offer with one product. For example, a lot of our

[00:04:36] Tarig Khairalla: capabilities are sometimes synergistic in nature. So you can add something like the product called Kensho Extract to another product called Kensho Classify, to provide something that goes beyond just one product or one solution that users can drive value from.

[00:04:56] Himakara Pieris: How is it different being a product manager working on an AI product versus being a product manager working on a traditional software product?

[00:05:06] Tarig Khairalla: I think there are a lot of parallels and similarities, but in the AI space you add a layer where you have to work really closely with machine learning engineers and data scientists, and you also add an element of uncertainty,

[00:05:21] Tarig Khairalla: because as we're building AI and machine learning products, a lot of the time we're uncertain whether a given experiment is going to succeed. So there's a little bit more uncertainty around it. There's also a little bit more in terms of the discipline: you have to learn a little bit about how to build machine learning models,

[00:05:37] Tarig Khairalla: the life cycle of a machine learning model. You have to learn that, be able to implement it in your day-to-day, and build processes around it to make sure that you're still delivering what you need to deliver for your customers and clients.

[00:05:51] Himakara Pieris: How do you plan for the uncertainty that is common in AI and machine learning product life cycles?

[00:05:57] Tarig Khairalla: Certainly what's important is to have specific measures of success and targets in mind before starting, let's say, a specific experiment. But I'll also say that it's important to timebox your activities. So when you're scoping out a specific experiment or a specific project, understand what your North Star is going to be.

[00:06:20] Tarig Khairalla: Understand some of the measures of success that you're looking to go after, and make sure that you're timeboxing what you're looking to achieve in a reasonable amount of time. What typically happens with machine learning experimentation is that you can experiment for a very long time, but is that going to be valuable?

[00:06:38] Tarig Khairalla: Is there a return on investment in spending two, three, four months experimenting? Or are you better off pivoting to something else that can potentially drive value elsewhere?
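One lightweight way to make this discipline concrete is to write down the North Star, the success metric, and the timebox before any experimentation starts. The sketch below is one possible shape for such an experiment charter; every name and threshold in it is hypothetical, not something Tarig prescribes.

```python
# Hypothetical sketch of a timeboxed experiment charter: a North Star,
# a success target, and a hard deadline at which you ship or pivot.
from dataclasses import dataclass, field
from datetime import date, timedelta

@dataclass
class ExperimentCharter:
    name: str
    north_star: str           # user-facing outcome the experiment serves
    success_metric: str       # e.g. "field-level F1 on held-out filings"
    success_threshold: float  # minimum value that justifies productionizing
    timebox_weeks: int        # 4-6 weeks typical; 2 for quick validations
    start: date = field(default_factory=date.today)

    @property
    def deadline(self) -> date:
        return self.start + timedelta(weeks=self.timebox_weeks)

    def decide(self, observed: float) -> str:
        # At the deadline the decision is forced: no open-ended research.
        return "productionize" if observed >= self.success_threshold else "pivot"

charter = ExperimentCharter(
    name="extraction-v2",
    north_star="analysts pull figures out of filings faster",
    success_metric="field-level F1",
    success_threshold=0.90,
    timebox_weeks=6,
)
print(charter.deadline, charter.decide(observed=0.87))  # -> pivot
```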

[00:06:50] Himakara Pieris: Do you have a data science and machine learning team that's embedded inside the product team, or is there a separation between the data science and machine learning team and the product team?

[00:07:00] Tarig Khairalla: The way we think about things, there are three main key players in developing a product.

[00:07:05] Tarig Khairalla: We've got the product manager, the tech lead on the engineering (application) side, and then the tech lead on the machine learning side. The three of them, combined with a designer, is usually how we approach building products at Kensho.

[00:07:20] Himakara Pieris: What does that interface look like between, let's say, the machine learning lead and the engineering lead, and also between machine learning and product? Those different interfaces.

[00:07:31] Tarig Khairalla: We work really closely together. I'd say we touch base regularly, on a weekly basis, to talk about what we're looking to do, whether it's a new product that we're building out or the machine learning team is in research mode. We make sure that we're continuing to talk to our tech leads so that we set expectations ahead of time.

[00:07:56] Tarig Khairalla: So if a model is being built out and it's in research mode, maybe there's not a lot of effort needed on the backend, on the application side. But once that model graduates to being one that's going to be productionized, we're already established.

[00:08:14] Tarig Khairalla: We're already on the same page: there's a model coming your way, and we have to think about setting up that service up front. So I would say it's very collaborative in nature as we're scoping out a product early on. Everybody from design to product to applications to machine learning is involved in that process.

[00:08:33] Himakara Pieris: What does the input to the machine learning team look like from you as the PM?

[00:08:38] Tarig Khairalla: As PM, I think the role largely remains very similar to how a software product manager would work: the voice of the customer. Thinking about exactly what problems we're trying to solve and how we want to optimize the algorithms and the models that we're building out.

[00:08:54] Tarig Khairalla: The measures of success. So my input is around the domain expertise: what do the models need to do, in collaboration with the users and customers that we're going after? My input is also around evaluation: how do we collect the right data to be able to evaluate the models and make sure that we're comfortable with what they're doing?

[00:09:18] Tarig Khairalla: And anything around coordinating between customers and clients: being able to pass feedback back and forth between the two disciplines.

[00:09:30] Himakara Pieris: When you initially engage with the machine learning team on, let's say, a new feature idea, is it that you have a data set and you outline a requirement that says, we want to accomplish X, Y, and Z with this data set, and here are the performance criteria that we expect? Or is it a lot more freeform than that?

[00:09:50] Tarig Khairalla: I would say we work together on scoping all of that out. The requirements are things that both myself on the product side and the machine learning side work together to get really comfortable with. It's a little bit more collaborative in nature, so I don't

[00:10:07] Tarig Khairalla: directly provide requirements as-is, because I know there may be some flexibility in how the machine learning team can help develop something. What I contribute is understanding the problem really well, understanding the domain knowledge behind it really well, and understanding where we want to go.

[00:10:24] Tarig Khairalla: So the actual North Star, the strategy behind what we're achieving. And then all the stuff in the middle, the requirements and exactly how we want to execute, is something that we work together to scope out and make sure we're all comfortable with.

[00:10:37] Himakara Pieris: If someone as a PM is, let's say, setting up their first AI-powered feature and putting together a team, how do you think they should structure that conversation?

[00:10:47] Tarig Khairalla: I would say there are a few things to think about when you're first starting off trying to structure a project.

[00:10:55] Tarig Khairalla: Other than just knowing exactly what you're trying to solve for, and having had actual conversations with clients and customers so you can really explain and talk about what you're trying to solve for with your machine learning team. That's number one. Number two, I think it's important to know some of the inputs and outputs that you're looking for.

[00:11:14] Tarig Khairalla: Is there existing training data, or data sets that we can go after somewhere, to be able to build this model? Essentially, the high-level feasibility of it. What are the outcomes we're going after? What are the outputs going to look like? Is this something that's going to be embedded into a platform, or is it a standalone product that we're building out?

[00:11:33] Tarig Khairalla: So just having an idea of what it is, where we want to deploy or build it, and when. Timing is really important to have. Is this something that we want to build tomorrow, a week from now, two months from now, three months from now? Those are things you need to be able to answer to help guide and structure how you want to build it out with the machine learning team.

[00:11:57] Himakara Pieris: Considering there's already sort of a feasibility analysis phase at the very top of it, do you recommend people run multiple pilots in parallel, or is it best tackled sequentially?

[00:12:10] Tarig Khairalla: I think it depends on the use case, but I do think multiple pilots will help. I wouldn't take on too much, because the way I like to execute myself is to start really small.

[00:12:25] Tarig Khairalla: We're trying to validate something really quickly at a very small scale. So starting off with one pilot is perfectly fine if you can validate very quickly, and then scaling up to the second and the third. But depending on the use case, I don't see a problem with running multiple pilots at the same time as well.

[00:12:46] Himakara Pieris: If you had to timebox these pilots, what do you think is a reasonable timebox window?

[00:12:53] Tarig Khairalla: I'll say this is probably something that really depends on the existing team, but generally we've used between four and six weeks as a timebox.

[00:13:08] Tarig Khairalla: In some instances, if it's something that's really quick and we don't know the value of it yet, we do two weeks.

[00:13:14] Himakara Pieris: Could you share a bit about what a day in the life looks like as an AI PM? I think the role that you're in is interesting, because when we think about AI products, there's this sort of division between core AI products, where there is a research component, and applied AI products, where you are embedding these capabilities into

[00:13:38] Himakara Pieris: a user-facing application. And then there's the machine learning ops side of things. It sounds like you have to touch all three of them in a way, and at least the first two for sure. So I'm curious how your day is structured.

[00:13:55] Tarig Khairalla: That's actually a fair assessment.

[00:13:57] Tarig Khairalla: My role does touch on things like research, allowing time for research to be done in terms of experimenting with models, and then ultimately how we take them and actually productionize them. A day in the life of a PM is generally hard to pinpoint, especially in the AI space; the reality is that it varies.

[00:14:18] Tarig Khairalla: Personally, even though I may run into a very varied schedule and week, I like to make sure that I have fixed activities that I do on a daily basis. So, for example, I usually start my day with three things. I

[00:14:36] Tarig Khairalla: read up on articles. That's the first thing, just to make sure that I'm continuing to be up to date on the latest developments in this space, especially nowadays; this space moves really fast. Then I go through and organize my emails. I don't immediately respond to emails, but I go in the morning and make sure that I prioritize and find the ones that I need to respond to during the day.

[00:14:58] Tarig Khairalla: The same goes for things like Slack and Teams and things of that nature. And the third thing I do is go through analytics: look at whether a user has signed up to our product, look at some of the metrics that we're tracking internally, just to make sure that I'm staying up to date on the product and everything that happened overnight, essentially.

[00:15:21] Tarig Khairalla: From there, the bulk of the day can vary, but I will say there's typically a lot of context switching. I may start off with a client call, for example, to discuss a use case or a project that we want to explore with them. Then maybe I'll jump into a leadership call to present on something.

[00:15:42] Tarig Khairalla: Then maybe I'll go into a technical brainstorming call with our machine learning team, followed by another technical brainstorming call with our engineering team of software engineers. These are just examples of what a typical day could look like. But there are other aspects I personally run into, like design.

[00:16:00] Tarig Khairalla: Working with design teams to interview users or come up with concepts. Sales, marketing, finance, legal implications. In the AI space specifically, there are a lot of legal implications that we have to think about. So as a PM in the AI space, you typically have to get involved in a lot of these different aspects on a daily basis.

[00:16:22] Tarig Khairalla: At the end of the day, I like to structure myself as well. I start with some structure, and I want to make sure that I end with some structure. So I go through and figure out what I wasn't able to accomplish during the day, and then I create a to-do list of things I need to accomplish the following day.

[00:16:40] Tarig Khairalla: What I will say, though, is that if you're somebody who likes variety in their day, then I think being an AI PM is something you'd enjoy.

[00:16:53] Himakara Pieris: How do you prepare for a brainstorming session with a machine learning team? Would you recommend a PM read up and get up to speed on the technical aspects of things to be able to have a conversation there? Or are there any other recommendations?

[00:17:11] Tarig Khairalla: I think having a high-level understanding of the different types of machine learning approaches is really helpful in terms of being able to communicate with your counterparts on the machine learning team.

[00:17:28] Tarig Khairalla: But I certainly don't think it's a requirement. Again, I started my career in the finance and accounting space, so I didn't have a technical background, but through learning on the job, reading up, and staying up to date on the industry, I was able to learn that as I went along.

[00:17:44] Tarig Khairalla: So certainly I think it helps to understand, at a high level, how a machine learning model is constructed and some of the different techniques to get to a solution. But keep in mind, in those brainstorming sessions, when we're talking about how things can potentially be developed, what I bring as a PM is, again, the customer.

[00:18:07] Tarig Khairalla: Understanding exactly the pain points and the measures of success that we're trying to optimize for. That's a valuable input in terms of being able to develop and build something that's important and is going to solve a problem for users.

[00:18:19] Himakara Pieris: Moving on to the go-to-market side of things, what are the big challenges that you face? Considering finance is a big target vertical for you, and a very risk-averse industry, I imagine there are a lot of questions that you have to answer.

[00:18:39] Tarig Khairalla: Certainly many challenges come our way when we try to productionize, deploy, and commercialize products. One that comes up first is general legality. The data underlying a lot of these models is sometimes proprietary in nature, and a lot of our customers and users don't want us to use their data, for example. But one thing that's more challenging, and where from my experience I've run into many challenges, is actual adoption of our products.

[00:19:08] Tarig Khairalla: If you think about some of the users in the finance space, they've likely gotten accustomed to workflows they've been following for years. These are individuals who are involved in validating every single detail, every single line item, to get comfortable with the financial decisions being made.

[00:19:27] Tarig Khairalla: And that's for good reason: there's a significant downside risk to negligence in this space that leads to penalties, liabilities, and things of that nature. As a result, our users are typically very conservative in nature and relatively skeptical of automation, which means that as a product team you almost have to ensure that early-stage models are either fairly accurate, or, if they're not, that the software you're building around those models allows for checks and balances to instill trust early in your product.
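One common shape for those checks and balances, sketched here purely as an illustration and not as Kensho's actual design, is to route low-confidence model outputs to a human review queue instead of showing them directly. The threshold and field names below are hypothetical.

```python
# Hedged sketch: split model extractions into auto-accepted results and a
# human-review queue, so skeptical users can validate the uncertain cases.
from typing import NamedTuple

class Extraction(NamedTuple):
    field: str
    value: str
    confidence: float  # model score in [0, 1]

REVIEW_THRESHOLD = 0.85  # hypothetical; tuned against observed precision

def triage(extractions: list[Extraction]) -> tuple[list[Extraction], list[Extraction]]:
    """Return (auto_accepted, needs_human_review)."""
    auto, review = [], []
    for e in extractions:
        (auto if e.confidence >= REVIEW_THRESHOLD else review).append(e)
    return auto, review

auto, review = triage([
    Extraction("total_revenue", "$4.2B", 0.97),  # accepted automatically
    Extraction("net_income", "$0.9B", 0.62),     # sent to a reviewer
])
```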

[00:20:05] Tarig Khairalla: From my experience, there are a few types of mindsets that you'll run into as a product manager introducing new products in the finance space. The first one: you'll run into users who are generally skeptical, but willing to try and adopt new products.

[00:20:25] Tarig Khairalla: As they start to use a new product, they're generally going to run into a learning curve. Keep in mind, they've been following these processes for a very long time, and with a new product, the time being spent and the perceived complexity of the task at hand are changing, maybe going up, and typically that will impact their performance and whether they're willing to continue using that product.

[00:20:55] Himakara Pieris: Can you talk a bit about explainability and how much of a role it plays when you're trying to drive adoption?

[00:21:03] Tarig Khairalla: The second kind of mindset, actually, going back to the point earlier, beyond skepticism and the willingness to try, is going to be around

[00:21:14] Tarig Khairalla: the black-box nature of some of these machine learning techniques. Users will typically find it hard to wrap their head around how a machine is doing what it does. And again, going back to the finance practitioner: they've been used to being able to validate the details, trace things back, and make sure they're really comfortable with how things are being done.

[00:21:36] Tarig Khairalla: Now you're introducing something that is hard to validate and hard to explain. So what ends up happening, due to the lack of explainability in those machine learning models, is that they go back to their old processes and resign themselves to using their existing solutions.

[00:21:53] Tarig Khairalla: These are some of the things that are important to know as a product manager in this space, and you have to be aware of them. Part of that is, again, being deeply empathetic and understanding user pain points really well, to be able to address and mitigate some of those issues.

[00:21:59] Himakara Pieris: How do you balance model complexity and explainability? There is a big focus and interest in building larger and larger models and investing in or adopting the latest deep learning frameworks.

[00:22:27] Himakara Pieris: Whereas if you stay on the simpler side, things are easier to explain and tend to be more consistent, at least in most cases. How do you balance staying on the cutting edge of technology with the complexity of the product?

[00:22:46] Tarig Khairalla: The way I think about it is slow introduction of complexity into the system. This is where understanding users is really important as a PM. If you're looking for a user to ultimately adopt a product, I like to start from a simple point of view: what's the simplest thing we can do, at a very fast rate, to drive value for that user without getting to the point where they're experiencing rapid change?

[00:23:18] Tarig Khairalla: So I think it's totally fine to start with something that's simple, maybe less performant in nature; it's a trade-off between trust and performance, essentially. From there, once we have that initial user starting to use the product a little bit, they're

[00:23:36] Tarig Khairalla: trusting it a little bit more. That's when you can start to add complexity, cutting-edge technology, and things that will ultimately drive value in the long term. But it really is a trade-off: balancing how users perceive your products, and instilling trust in them, against really complicated cutting-edge technology that may be less explainable in nature.

[00:23:58] Tarig Khairalla: At some point, it's a journey that you take with your users. You want to take that journey hand in hand, building your product up at a rate where users are able to keep up, comprehend, and follow along with the journey.

[00:24:15] Himakara Pieris: How do you think about success? There are two aspects I'd like to get into. The first one is the system performance metrics, like the F1 scores, et cetera, that you touched on earlier. The second one is the real-world success measures. How do you think about these various success measures at various stages of the product's lifecycle?

[00:24:38] Himakara Pieris: From the initial validation, to the first version out of the door, to various version iterations in the future.

[00:24:47] Tarig Khairalla: Success measures, to me, are a little bit more involved in the AI space compared to traditional software development. I always like to anchor our products to user-facing metrics

[00:25:02] Tarig Khairalla: rather than strictly model performance, to start. And I think it's important to do that even as you're starting to build a model, during the early phases of the product's life cycle. To illustrate that: the product I work on, for example, processes and extracts information from hundreds of thousands of PDFs in a given day.

[00:25:27] Tarig Khairalla: We essentially support very large document viewing and research platforms. Well, you may have a really accurate model in terms of your extraction performance, but what users really care about when using those platforms is how quickly they can pull information out from the source they're looking at.

[00:25:51] Tarig Khairalla: So if you optimize for accuracy, you maybe end up with a very deep neural network of some sort. Then you sacrifice processing time, because you're a little bit slower in terms of being able to provide outputs to users. And so you'll run into adoption challenges: you'll realize that the business value you thought was going to be generated is not being generated, because, again, the success measure

[00:26:17] Tarig Khairalla: in this example, to me, would have been the time. There are many success measures you can track, but one of them would be around the time it takes for somebody to get information out of the system. That's why I like to look at both sides of the equation: from the user point of view, some of the things that are important to them

[00:26:37] Tarig Khairalla: to track, and then the internal side of things. Yes, model performance is important for looking at the health of the model. But ultimately, as a product manager, we're looking to solve problems for customers, and making sure that the performance of a model from an accuracy perspective is balanced with looking at the user, what they're really coming to us for, and what problems we're looking to solve for them.
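To make this two-sided view concrete, here is a small hedged sketch, not Kensho tooling, that reports a model-health metric (F1) next to a user-facing metric (seconds per document). All names and data in it are invented for illustration.

```python
# Illustrative sketch: put the internal metric (F1) and the user-facing
# metric (latency per document) on the same dashboard, as the episode suggests.
import time
from sklearn.metrics import f1_score

def evaluate(model, docs, y_true):
    start = time.perf_counter()
    y_pred = [model(d) for d in docs]        # model: any callable classifier
    elapsed = time.perf_counter() - start
    return {
        "f1": f1_score(y_true, y_pred),          # model health
        "seconds_per_doc": elapsed / len(docs),  # what users actually feel
    }

# Toy stand-in model and data, purely to make the sketch runnable.
docs = ["revenue up", "costs down", "revenue flat"]
y_true = [1, 0, 1]
fast_baseline = lambda text: int("revenue" in text)

print(evaluate(fast_baseline, docs, y_true))
# A heavier model that gains a point of F1 but doubles seconds_per_doc can
# still lose on adoption -- track both before declaring success.
```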

[00:27:00] Himakara Pieris: You talked about adoption challenges with AI products. As a PM, how do you mitigate these adoption challenges?

[00:27:09] Tarig Khairalla: Yeah, it's a great question. From my experience, I talked a little bit about the black-box nature of some of the techniques that we use.

[00:27:20] Tarig Khairalla: But as we're building our AI products, there are a few strategies you can use to make sure that your users are adopting your products. I think the first one that's really important is involving them as early and as often as you can. Continually, as you're developing your products, get feedback from your users to instill this idea of

[00:27:46] Tarig Khairalla: co-creation with your users. They understand how you're building your product and why you're building it the way you are, and it makes it easier for them to ultimately use the product in the long term. I think the second piece is around establishing an onboarding or piloting process, where you understand that your users are coming from

[00:28:09] Tarig Khairalla: a workflow that they've been used to for quite a long time. So if they're interested in your product, make sure there's some sort of onboarding program, pilot, or process they can go through to help them adopt the product easily. The last thing I'll say here is: build feedback into the process, and when feedback is given, make sure that you are able to address it really fast.

[00:28:37] Tarig Khairalla: Specifically in the finance space, users aren't going to wait; they'll revert back to using their old processes, because they have jobs to get done. So if we're not able to address critical feedback quickly enough, you're more likely to see customer churn relatively quickly.

[00:28:56] Himakara Pieris: There has been an avalanche of new developments coming [00:29:00] our way over the last few weeks and months at this point. What are you most excited about out of all this new stuff that's happening out there?

[00:29:08] Tarig Khairalla: Very excited about all the development in language models, large language models specifically.

[00:29:15] Tarig Khairalla: The space is moving really, really fast, and at Kensho we're involved in that headfirst. There's a lot of work that we're doing to explore language models. There have been a lot of developments in a very short amount of time.

[00:29:31] Tarig Khairalla: And I think being a product manager in this space right now is very exciting. Things are moving fast. There are opportunities everywhere to leapfrog competitors and other companies out there. And I think it's an opportunity for a lot of product managers to get into this space and really make a difference for a lot of customers.

[00:29:53] Himakara Pieris: Great. Is there anything else that you'd like to share with our audience?

[00:29:57] Tarig Khairalla: Look, if things like speech-to-text, extraction from documents, classification, and entity recognition and linking seem like the types of solutions you're looking for, come find us at kensho.com. We have a friendly team that will talk to you in depth about our products.

[00:30:14]

[00:30:18] Hima: Smart Products is brought to you by hydra.ai. Hydra helps product teams explore how they can introduce AI-powered features to their products and deliver unique customer value. Learn more at https://www.hydra.ai

  continue reading

15 episodes

Artwork
iconShare
 
Manage episode 361961527 series 3458395
Content provided by Himakara Pieris. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Himakara Pieris or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

I’m excited to bring you this conversation with Tarig Khairalla. Tarig is an AI product manager at Kensho; Kensho is the AI and innovation hub for S&P Global. They are focused on helping customers unlock insights from messy data.

In this episode, Tarig talked about the day-in-life of an AI product manager, how to overcome adoption challenges, and his approach to collaborating with both engineering partners and customers.

Links

Tarig on LinkedIn

About Kensho

Transcript

[00:00:00] Tarig Khairalla: being a product manager space right now is very exciting. Things are moving fast. There are opportunities everywhere. To leapfrog competitors and other companies out there. And I think it's an opportunity for a lot of product managers now to get into this space and, really, make a difference for a lot of customers.

[00:00:20] Hima: I'm, Himakara Pieris. You're listening to smart products. A show where we recognize, celebrate, and learn from industry leaders who are solving real-world problems. Using AI.

[00:00:30] Himakara Pieris: I'm excited to this conversation with Tarig Khairalla. Tarig is an

[00:00:35] Himakara Pieris: AI product manager. Kensho. Kensho is the AI and innovation hub for s and p Global. They're focused on helping customers unlock insights from messy data. In this episode, Tarig talked about the day-in-life of an AI product manager, how to overcome challenges, and his approach to collaborating with both engineering partners and customers. Check the show notes for links.

[00:00:54]

[00:00:56] Himakara Pieris: Welcome to the Smart Product Show, Tarig.

[00:00:58] Tarig Khairalla: Thank you for having me[00:01:00]

[00:01:00] Himakara Pieris: Could you tell us a bit about your background and how you started and how you got into managing AI products?

[00:01:06] Tarig Khairalla: Interestingly, coming out of school, I did not immediately become a product manager. I graduated with an accounting, finance, and economics degree. And I worked in the accounting space actually for the first few years at Ernest and Young. And so sometime during those first few years of working, there is when I got involved in the AI and machine learning space. And you know, from there, got stuck to it. I worked at Verizon afterwards for a little bit, and then here I am now with Kensho Technologies as a PM in the AI and machine learning space.

[00:01:38] Himakara Pieris: Tell us a bit about what Kensho does and specifically what Kensho does with AI and machine learning to help companies work with s e data.

[00:01:47] Tarig Khairalla: As you mentioned Kensho, we develop AI solutions that unlock insights that are hidden in messy data. If you think about the business and finance space, the majority of data created has no standard format, [00:02:00] right? You know, your typical professional gathers information from images.

[00:02:05] Tarig Khairalla: Videos, text, audio, and, and so much more. And unfortunately, as a result of that critical insights are generally trapped and, and are hard to uncover in that data. And so the reality is that, you know, the data today is being created at a rate much faster than before. And so the solutions we build are tackling problems that many organizations today are, are, are facing.

[00:02:30] Tarig Khairalla: You know, we build a variety of machine learning. Products that serve to structure, unify and contextualize data. We have products that are used in a daily basis for things like transcribing, voice to text, extracting financial information from pdf, f documents, identifying named entities like. People in places within text understanding sentences and paragraphs to tag them with a topic or concept that are being discussed.

[00:02:59] Tarig Khairalla: Right? So, [00:03:00] you know, at the end of the day, what we're looking to do is, is, is make information more accessible, easier to use, and allow our customers to discover hidden insights much faster than that they could before and ultimately, you know, enabled them to make decisions with a conviction.

[00:03:15] Himakara Pieris: Do you target specific industry verticals or specific business processes?

[00:03:21] Tarig Khairalla: Yeah, our products are mainly geared towards you know, your finance workflows. . A lot of the models and products that we built, were trained on financial data, like the extraction capabilities that we have or trained on financial documents or the transcription products that we we provide are trained on earnings calls, for example, and many other types of financial related data.

[00:03:44]

[00:03:44] Himakara Pieris: I presume you expect higher level of accuracy because your training data set is focused on domain specific data.

[00:03:51] Tarig Khairalla: That's the angle, yes. That we are focused on business and finance to make sure that we're the best at developing machine [00:04:00] learning solutions for the finance professional.

[00:04:02] Himakara Pieris: As a PM how do you think about ways to improve the product or add extra value? Is it primarily around, increasing accuracy or pushing into adjacent use cases? How do you build on top of what you already have as a pm?

[00:04:19] Tarig Khairalla: It's a little bit of both, definitely.

[00:04:21] Tarig Khairalla: Making sure that we are continuing to push the boundaries in terms of what's possible from an accuracy perspective across all our products. But you know, the other thing we do too is make sure that we can provide value beyond just what we offer with one product. For example, you know, a lot of our.

[00:04:36] Tarig Khairalla: Kind of capabilities sometimes are synergistic in nature. So you can add something like, and you know, the product called Kero extract to some of, some, some other Kero product that's called Kero Classify. To be able to now provide something that is beyond just one product or one solution that that users can drive value from.

[00:04:56] Himakara Pieris: How is it different being a product manager working [00:05:00] in in an AI product versus being a product manager who is working in a traditional software product?

[00:05:06] Tarig Khairalla: I think that there's a lot of parallels and similarities, but with being in the AI space, you add a layer of now you have to work really closely with machine learning and data scientists, and you also add an element of uncertainty,

[00:05:21] Tarig Khairalla: because as we're building AI and machine learning products, a lot of times we're uncertain whether a given experiment is gonna succeed. And so there's a little bit more uncertainty around it. There's a little bit more in terms of the discipline, right? You have to learn a little bit how to build machine learning models,

[00:05:37] Tarig Khairalla: the life cycle of a machine learning model. You have to learn that and really kind of be able to implement it in your day-to-day and, and build processes around that to make sure that you're still delivering what you need to del deliver for your customers and, and clients.

[00:05:51] Himakara Pieris: How do you plan the uncertainty is common in air and machine learning product life cycles?

[00:05:57] Tarig Khairalla: certainly what's important to do [00:06:00] is have specific measures of success and targets in mind before starting, let's say, for example, a specific experiment. But I'll also say that it's, it's important to also tie inbox your activities. So you know, when you're scoping out a specific experiment or a specific project, understanding what your no star is going to be.

[00:06:20] Tarig Khairalla: Understanding some of the measure of success that you're looking to go after and making sure that you're timeboxing what you're looking to achieve in a reasonable amount of time. Right? Typically what happens with machine learning experimentation is that, you know, it can, you, you can, you can experiment for, for a very long time, but is that going to be valuable?

[00:06:38] Tarig Khairalla: Is there a return of investment in, you know, spending two to three to four months of time experimenting or something? Or are you better off pivoting to something else that can potentially drive value elsewhere?

[00:06:50] Himakara Pieris: Do you have a data science and machine learning team that's embedded inside a product team, or is there a separation between the data science machine learning team and the product team?[00:07:00]

[00:07:00] Tarig Khairalla: The way we think about things is, there's three kind of main key players in developing a product.

[00:07:05] Tarig Khairalla: We've got the product manager we've got the tech lead on the engineering side application side, and then there's the tech lead on the machine learning side. So the three of them combined with a, a designer is usually how we approach building products at Kin Show.

[00:07:20] Himakara Pieris: How does that interface look like between let's say the machine learning lead and the engineering lead, and also between machine learning and product so those, different interfaces..

[00:07:31] Tarig Khairalla: . , we work really closely together. You know, say that we touch base regularly on a weekly basis to kind of talk about what we're looking to do. Right. Whether it's like a new product that we're building out if, if machine learning team is in kind of the research mode we make sure that we're continuing to talk to our tech leads to make sure that We build expectations ahead of time.

[00:07:56] Tarig Khairalla: So if a model is being built out, it's in research [00:08:00] mode. Maybe there's not a lot of effort that's needing to be done on the kind of backend side of things on the application side. But once that model graduates to being more of a potential one that's gonna be productionized, we're already kind of established.

[00:08:14] Tarig Khairalla: We're already on the same page as far as like, well, there's a model coming your way. We have to think about setting up that service up front. So I would say it's very collaborative in nature as we're scoping out. Kind of product early on. Everybody from a design to product, to applications to machine learning is involved in that process.

[00:08:33] Himakara Pieris: What does the input to the machine learning team look like from you as the pm.

[00:08:38] Tarig Khairalla: As pm I think the largely still remains very similar to how a software product manager would work, right? The voice of the customer, right. Thinking about exactly what problems we're trying to solve for and, and, and how we wanna optimize the algorithms and the models that we're building out, right?

[00:08:54] Tarig Khairalla: The measure of success. So my input is around kind of the [00:09:00] We think about them, kind of the, the domain expertise, right? What do the models need to do in collaboration with the users and customers that we're going after? My input is also around evaluation, right? How do we collect the right data to be able to evaluate the models and make sure that we're comfortable with what they're doing?

[00:09:18] Tarig Khairalla: And yeah, anything around coordinating between customers and, and, and clients, right? Being able to kind of pass feedback back and forth between the two groups of, of, of, of disciplines.

[00:09:30] Himakara Pieris: When you initially engage with the mission learning team on, let's say a new feature idea Is it that you have a data set and you outline a requirement that says, we want to accomplish X, Y, and Z with this data set, and here's the performance criteria that we expect. Or is it a lot more free form than that?

[00:09:50] Tarig Khairalla: I would say we work together on scoping all of that out. So the requirements are things that both myself on the product side and the machine learning side [00:10:00] work together to get really comfortable with. It's a little bit more collaborative in nature, so I, you know, I don't.

[00:10:07] Tarig Khairalla: Directly provide requirements as is because I know there may be some flexibility and flexes in how machine learning team can help develop something. And so you know, what I contribute to that is, is understanding the problem really well, understanding the domain knowledge behind it, really well, understanding where we want to go.

[00:10:24] Tarig Khairalla: So the actual North star, the strategy behind we're achieving, and then all this stuff that's in the middle. The requirements and, and exactly how we wanna execute on it is something that we work together to scope out and make sure we're all comfortable with.

[00:10:37] Himakara Pieris: If someone as a PM is working, let's say setting up their, first AI powered feature that they're putting together a team how do you think they should structure that conversation?

[00:10:47] Tarig Khairalla: You know, I would say that there's a, there's a few things to think about when you're first starting off trying to structure a project.

[00:10:55] Tarig Khairalla: You know, other than just knowing exactly what you're trying to solve for and some of the actual conversations that [00:11:00] you've had with clients and customers, to be able to really explain and talk about what we're trying to solve for with our, with our machine learning team. Right? That's number one. And then number two, I think it's important to know some of the inputs and outputs that we're looking for.

[00:11:14] Tarig Khairalla: Is there existing training data or data sets that we can go after somewhere to be able to build this model? Right? Feasibility, essentially just high level feasibility of that. What are the outcomes we're going after, right? What are the outputs gonna look like? Is this something that's gonna be embedded into a platform or is this something that's standalone, freeform product that we're building out?

[00:11:33] Tarig Khairalla: So just having an idea of like what it is. Where we want to deploy or build it and when, so timing is really important to have, right? Is this something that we want to build tomorrow in, in a week from now, two months from now, three months from now? Those are things that you need to be able to answer to then help guide and help structure how you wanna build it out with the machine learning team.

[00:11:57] Himakara Pieris: Considering there already sort of the feasibility [00:12:00] analysis phase at the very top of it do you recommend people to run multiple pilots in peril or is it best tackled sequentially?

[00:12:10] Tarig Khairalla: I think it depends on the use case, but I do think that Multiple pilots will help. I wouldn't say we take on too much because I think the idea here is the way I like to execute in myself is to start really small.

[00:12:25] Tarig Khairalla: So we're trying to validate something really quickly at a very small scale. And so starting off potentially with, with one pilot is perfectly fine if you can validate very quickly and then scaling up to the next and the second and the third. But if, if. If, depending on the use case, I don't see a problem of why we couldn't do multiple pods at the same time as well.

[00:12:46] Himakara Pieris: If you had to timebox these pilots, what do you think is a reasonable time box window to kind of think about?

[00:12:53] Tarig Khairalla: I'll say that it, this probably something that really depends on the, the existing team, but [00:13:00] generally I would say, We've used things like for from between four to six weeks as, as a tie in box.

[00:13:08] Tarig Khairalla: In some instances we do. If it's something that's really, really quick and we're in, we don't know the value of it yet. We do two weeks.

[00:13:14] Himakara Pieris: Could you share a bit about what your day in life looks like as an I P M? It's interesting. I think the role that you're in, because when we think about AI products, there's this sort of division between co ai products where there is a research you know, research in European component, and then there's the applied AI products where you are embedding these, you know, capabilities into.

[00:13:38] Himakara Pieris: A user application and user facing application. And then there's the machine learning m and ops side of things. And it sounds like you have to touch all three of them in a way and at least the first two for sure. So curious how, how your day is structured.

[00:13:55] Tarig Khairalla: That's actually, a fair assessment.

[00:13:57] Tarig Khairalla: Right? My, my role does touch on [00:14:00] things like research, like actually time, you know, allowing time for research to be done in terms of experimenting with models, and then ultimately how do we take them and actually productionize them. A day in the life of a PM is generally hard to, to pinpoint, and I think especially in the AI space, but the reality that it, it varies, right?

[00:14:18] Tarig Khairalla: I personally. I like to make sure that even though it's a very varied you know, schedule and, and, and, and week that I may run into, I like to make sure that I have a fixed you know, fixed activities that I do on a daily basis. So, for example, I start my day usually with three things, right? I.

[00:14:36] Tarig Khairalla: Read up on articles. The first thing I go on just to make sure that I'm continuing to be up to date on the latest developments in this space, especially nowadays. The this space moves really fast. Then I go through and organize my emails. I I don't immediately respond to emails, right? But I go in the morning and, and try to make sure that I prioritize and find the ones that I need to make sure to respond to during the day.

[00:14:58] Tarig Khairalla: And then the same goes for [00:15:00] things like Slack and teams and things of that nature. And the third thing I do is I finally go through analytics. Look at whether there's a user that's signed up to our product. Look at some of the metrics that we're tracking internally just to make sure that I am staying up to date on, on the product and, and everything that happened overnight, essentially.

[00:15:21] Tarig Khairalla: From there, I think the, the bulk of the day can vary, but what I will typically say that there's a lot of context switching. And so I may start off with a client call, for example, to discuss a use case or a project that we wanna explore with them. And then maybe I'll jump into a leadership call to present on something.

[00:15:42] Tarig Khairalla: Right. And then maybe go into like a technical brainstorming call with our machine learning team, followed by another technical running, brainstorming call with our engineering team, right, of software engineers. So these are just examples of what a typical day could look like. But there's other aspects for me personally that I run into, like [00:16:00] design.

[00:16:00] Tarig Khairalla: So working with design teams to introvert users or come up with concepts, sales. Marketing, finance, legal implications, right? In the AI space specifically, there's a lot of legal implications that we have to think about. And so as a PM in the AI space, you typically have to get involved in a lot of these different aspects on a daily basis.

[00:16:22] Tarig Khairalla: And so, you know, at the end of the day, I, I like to then structure myself as well. So I start with some structure and then I wanna make sure that I end with some structure. So I go through and figure out, What I wasn't able to accomplish during the day. And then I create a a to-do list of things I need to accomplish the following day.

[00:16:40] Tarig Khairalla: What I will say though, that if, if, if you're somebody who likes variety in their day, then I, I think being an AI pm is something that, you know, you'd, you'd like, you'd enjoy.

[00:16:53] Himakara Pieris: How do you prepare for a brainstorming session with a mission learning team? Would you [00:17:00] recommend a pm sort of a read up and get up to speed on technical aspects of things to be able to have a conversation there? Or are there any other recommendations?

[00:17:11] Tarig Khairalla: I think that N n having a high level understanding of the different types of machine learning approaches is really helpful in terms of being able to communicate with your counterparts and the machine learning space or, and machine learning team.

[00:17:28] Tarig Khairalla: But I certainly don't think it's a requirement, right? I, again, I started my career in the finance and accounting space, so I didn't have a technical background, but through kind of learning on the job and through reading up and, and staying up to date on the. The industry, I was able to kind of learn that as, as I went along.

[00:17:44] Tarig Khairalla: And so certainly I think it helps to, to understand some of the high levels of how a machine learning model is constructed. Some of the different techniques to get to, to, to a solution. But keep in mind as in those brainstorming sessions when we're [00:18:00] talking about how potentially things can be developed what you bring or what I bring as, as a pm is again, the customer.

[00:18:07] Tarig Khairalla: Right: understanding exactly the pain points and the measure of success that we're trying to optimize for. And that's a valuable input in terms of being able to develop and build something that's important and is going to solve a problem for users.

[00:18:19] Himakara Pieris: Moving on to the go-to-market side of things, what are the big challenges that you face? Considering that finance is a big target vertical for you, and a very risk-averse industry, I imagine there are a lot of questions that you have to answer.

[00:18:39] Tarig Khairalla: There are certainly many challenges that come our way when we try to productionize, deploy, and commercialize products. One that comes up first is general legality. The data underlying a lot of these models is sometimes proprietary in nature, and a lot of our customers and users don't want us to use their data, for example. But I think [00:19:00] one thing that's more challenging, and that from my experience I've run into many challenges around, is actual adoption of our products.

[00:19:08] Tarig Khairalla: If you think about some of the users in the finance space, they've likely gotten accustomed to workflows they've been following for years. Right? These are individuals who are involved in validating every single detail, every single line item, to get comfortable with the financial decisions that are being made.

[00:19:27] Tarig Khairalla: And that's for good reason, right? There's a significant downside risk to negligence in this space that leads to penalties, liabilities, and things of that nature. And so, as a result of that, our users are typically very conservative in nature and relatively skeptical of automation, which means that, as a product team, you almost have to ensure that early-stage models are either fairly accurate in nature, or, if they're not, that the software you're building around these models allows [00:20:00] for checks and balances to instill trust early in your product.
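
One common pattern for building those checks and balances, to be clear a generic sketch rather than Kensho's actual implementation, is confidence-based routing: model outputs below a confidence threshold are queued for human review instead of being accepted automatically. Everything in the example below, the Extraction type, the field names, and the threshold, is hypothetical.

```python
# Illustrative sketch of confidence-based routing for human review.
# Generic pattern only; the model outputs, threshold, and field names
# are hypothetical, not Kensho's actual implementation.
from dataclasses import dataclass

@dataclass
class Extraction:
    field: str
    value: str
    confidence: float  # model's estimated probability for this prediction

REVIEW_THRESHOLD = 0.90  # assumed cutoff; tune against observed error rates

def route(extractions):
    """Split model outputs into auto-accepted and human-review buckets."""
    auto_accepted, needs_review = [], []
    for e in extractions:
        if e.confidence >= REVIEW_THRESHOLD:
            auto_accepted.append(e)
        else:
            needs_review.append(e)
    return auto_accepted, needs_review

# A low-confidence line item gets flagged for an analyst to verify.
accepted, flagged = route([
    Extraction("total_revenue", "$4.2B", 0.97),
    Extraction("net_income", "$312M", 0.71),
])
assert [e.field for e in flagged] == ["net_income"]
```

The design choice here is that the threshold, not the model, encodes how much risk the team is willing to accept, so it can start conservative and be loosened as trust builds.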

[00:20:05] Tarig Khairalla: You know, from my experience, there are a few types of mindsets that you'll run into as a product manager in this space when you're introducing new products in the finance space. The first one is that you'll run into users who are generally skeptical but willing to try and adopt new products.

[00:20:25] Tarig Khairalla: And so as they start to use a new product, they're generally going to run into a learning curve, right? Keep in mind, they've been following these processes for a very long time, and maybe with a new product the time being spent and the perceived complexity of the task at hand change, and maybe they go up; typically that will impact their performance and whether they're willing to continue using that product.

[00:20:55] Himakara Pieris: Can you talk a bit about explainability and [00:21:00] how much of a role it plays when you're trying to drive adoption?

[00:21:03] Tarig Khairalla: The second kind of mindset, actually, going back to the point earlier, after skepticism and the willingness to try, is going to be around

[00:21:14] Tarig Khairalla: the black-box nature of some of these machine learning techniques, right? Users will typically find it hard to wrap their heads around how a machine does what it does. And again, going back to the finance practitioner: they've been used to being able to validate the details, trace things back, and make sure they're really comfortable with how things are being done.

[00:21:36] Tarig Khairalla: And now you're introducing something that is hard to validate and hard to explain, right? And so what ends up happening, due to the lack of explainability in those machine learning models, is that they go back to their old processes and revert to using their existing solutions.

[00:21:53] Tarig Khairalla: And so some of these things are important to know as a product manager in this space, and you have to be aware of them. [00:22:00] And part of that is, again, being deeply empathetic and understanding user pain points really well, to be able to address and mitigate some of those issues.

[00:22:10] Himakara Pieris: How do you balance between model complexity and explainability? Because there is a big focus and interest in building larger and larger models and investing in or adopting the latest deep learning frameworks.

[00:22:27] Himakara Pieris: Whereas if you stay on the simpler side, things are easy to explain and tend to be more consistent, at least in most cases. How do you balance staying on the cutting edge of technology against the complexity of the product?

[00:22:46] Tarig Khairalla: The way I think about it is slow integration, or slow introduction, of complexity into the system. This is where understanding users is really important as a PM. If [00:23:00] you're looking for a user to ultimately adopt a product, I like to start from a simple point of view, right? What's the simplest thing that we can do, at a very fast rate, to drive value for that user without getting to the point where they're experiencing rapid change?

[00:23:18] Tarig Khairalla: Right. And so I think it's totally fine to start with something simple, maybe something less performant in nature; it's a trade-off between trust and performance, essentially. And so from there, once we have that initial user starting to use the product a little bit, and they're

[00:23:36] Tarig Khairalla: trusting it a little bit more, that's when you can start to add complexity and add cutting-edge technology and things that will ultimately drive value in the long term. But it really is a trade-off: balancing how users perceive your products, and instilling trust in them, against ultimately really complicated cutting-edge technology that may be less explainable in nature.

[00:23:58] Tarig Khairalla: And at some point, it's a [00:24:00] journey that you take with your users, right? You want to take that journey hand in hand, building your product up at a rate where users are able to keep up, really comprehend, and follow along with the journey.
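
As a minimal sketch of that "start simple, add complexity later" progression, here is what the trade-off can look like in code, assuming a generic scikit-learn setup on synthetic data; nothing here reflects Kensho's actual models. The interpretable baseline exposes per-feature weights that a skeptical analyst can inspect, while the later, more complex model may score higher but is harder to explain.

```python
# Hypothetical sketch: ship an interpretable baseline first, then
# introduce a more complex, less explainable model once trust exists.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a real dataset.
X, y = make_classification(n_samples=1000, n_features=5, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Iteration 1: coefficients map directly to feature-level explanations
# that a conservative user can verify line by line.
simple = LogisticRegression().fit(X_tr, y_tr)
print("baseline accuracy:", simple.score(X_te, y_te))
print("feature weights:", simple.coef_[0].round(2))

# Iteration 2: often more accurate, but harder to explain to users;
# introduce it only after the baseline has earned their trust.
complex_model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
print("complex accuracy:", complex_model.score(X_te, y_te))
```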

[00:24:15] Himakara Pieris: How do you think about success? There are two aspects I'd like to get into. The first one is the system performance metrics, like the F1 scores et cetera that you touched on earlier. And the second one is the real-world success measures. How do you think about these various success measures at various stages of the product's lifecycle,

[00:24:38] Himakara Pieris: from the initial validation, to the first version out of the door, to the various version iterations in the future?

[00:24:47] Tarig Khairalla: Success measures are, to me, a little bit more involved in the AI space compared to traditional software building. I always like to anchor our products [00:25:00] to user-facing metrics

[00:25:02] Tarig Khairalla: rather than strictly model performance, to start. And I think it's important to do that even as you're starting to build a model, during the early phases of the lifecycle of the product. To illustrate that: the product I work on, for example, processes and extracts information from hundreds of thousands of PDFs in a given day, right?

[00:25:27] Tarig Khairalla: And so we essentially support very large document viewing and research platforms. Well, you may have a really accurate model in terms of extraction performance, right? But what users really care about when using those platforms is how quickly they can pull information out of the source they're looking at, right?

[00:25:51] Tarig Khairalla: So if you optimize for accuracy, you maybe end up with a very deep neural network of some sort. Then [00:26:00] you sacrifice processing time, because you're a little bit slower in terms of being able to provide outputs to users. And so you'll run into adoption challenges, right? You'll, hopefully, face that and realize that the business value you thought was going to be generated is not being generated, because, again, the success measure

[00:26:17] Tarig Khairalla: in this example, to me, would have been the time. There are many success measures you can track, but one of them would be around the time it takes for somebody to get information out of the system. And so that's why I like to look at both sides of the equation: from the user point of view, some of the things that are important to them

[00:26:37] Tarig Khairalla: to track; and then, on the internal side of things, yes, the model performance is important for looking at the health of the model. Ultimately, as product managers, we're looking to solve problems for customers, and we're making sure that the performance of a model from an accuracy perspective is balanced against looking at the user, what they're really coming to us for, and what problems we're looking to solve for them.[00:27:00]
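
A minimal sketch of tracking both sides of that equation, using a stand-in model and made-up data (the interface and names are assumptions, not Kensho's API): report model health, here an F1 score, right next to the user-facing measure, the time it takes to get information out of the system.

```python
# Hypothetical sketch: pair a model-health metric (F1) with a
# user-facing metric (seconds to get an answer per document).
import time
from sklearn.metrics import f1_score

class DummyExtractor:
    """Stand-in for a document extraction model (hypothetical)."""
    def predict(self, doc: str) -> str:
        return "revenue" if "revenue" in doc else "other"

def evaluate(model, documents, labels):
    start = time.perf_counter()
    predictions = [model.predict(doc) for doc in documents]
    elapsed = time.perf_counter() - start
    return {
        "f1": f1_score(labels, predictions, average="micro"),
        "seconds_per_doc": elapsed / len(documents),  # what users feel
    }

docs = ["quarterly revenue rose 8%", "the board appointed a director"]
labels = ["revenue", "other"]
print(evaluate(DummyExtractor(), docs, labels))
```

Optimizing only the first number can quietly ruin the second, which is exactly the adoption failure described above.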

[00:27:00] Himakara Pieris: You talked about adoption challenges with AI products. As a PM, how do you mitigate these adoption challenges?

[00:27:09] Tarig Khairalla: Yeah, it's a great question. I talked a little bit about the black-box nature of some of the techniques that we use.

[00:27:20] Tarig Khairalla: But as we're building our AI products, there are a few strategies you can use to make sure that your users are adopting your products. I think the first one that's really important is involving them as early and as often as you can, right? Continually, as you're developing your products, get feedback from your users to instill this idea of

[00:27:46] Tarig Khairalla: co-creation with your users, right? They come to understand how you're building your product and why you're building it the way you're building it, and it makes it easier for them to ultimately use the product in the long term. I [00:28:00] think the second piece is around establishing an onboarding or piloting process, where you understand that your users are coming from

[00:28:09] Tarig Khairalla: a workflow that they've been used to for quite a long time. And so if they're interested in your product, make sure that there's some sort of onboarding program, or some sort of pilot process, that they can go through to help them adopt the product easily. I think the last thing I'll say here is: build feedback into the process, and when feedback is given, make sure that you are able to address that feedback really fast.

[00:28:37] Tarig Khairalla: Specifically in the finance space, users aren't going to wait, and so they'll revert back to using their old processes, because they have jobs to get done. And so if we're not able to address critical feedback quickly enough, then you're more likely to see churn in customers relatively quickly.

[00:28:56] Himakara Pieris: There has been an avalanche of new developments coming [00:29:00] our way over the last few weeks and months at this point. What are you most excited about out of all this new stuff that's happening out there?

[00:29:08] Tarig Khairalla: I'm very excited about all the development in language models, large language models specifically.

[00:29:15] Tarig Khairalla: The space is moving really, really fast. And as part of Kensho, we're involved in that headfirst. There's a lot of work that we're doing to explore language models. There have been a lot of developments in a very short amount of time.

[00:29:31] Tarig Khairalla: And I think being a product manager in this space right now is very exciting. You know, things are moving fast. There are opportunities everywhere to leapfrog competitors and other companies out there. And I think it's an opportunity for a lot of product managers now to get into this space and really make a difference for a lot of customers.

[00:29:53] Himakara Pieris: Great. Is there anything else that you'd like to share with our audience?

[00:29:57] Tarig Khairalla: Look, if things like speech-to-text, [00:30:00] extraction from documents, classification, and entity recognition and linking seem like the types of solutions you're looking for, come find us at kensho.com. We have a friendly team that will talk to you in depth about our products.
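
For readers unfamiliar with the entity recognition task mentioned here, the snippet below is a generic illustration using the open-source spaCy library; it shows the task only and is unrelated to Kensho's own technology.

```python
# Generic named entity recognition example with spaCy (open source).
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
doc = nlp("S&P Global reported quarterly revenue of $3.1 billion.")
for ent in doc.ents:
    # Typical output: "S&P Global" -> ORG, "$3.1 billion" -> MONEY
    print(ent.text, "->", ent.label_)
```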

[00:30:14]

[00:30:18] Hima: Smart Products is brought to you by hydra.ai. Hydra helps product teams explore how they can introduce AI-powered features to their products and deliver unique customer value. Learn more at https://www.hydra.ai
