19. Leisa Reichelt of Atlassian (Part 1)


This episode of Dollars to Donuts features part 1 of my two-part conversation with Leisa Reichelt of Atlassian. We talk about educating the organization to do user research better, the limitations of horizontal products, and the tension between “good” and “bad” research.

If you're working on a product that has got some more foundational issues that need to be addressed, but the vast majority of the work is happening at that very detailed feature level, how are you ever going to stop kind of circling the drain? You get stuck in this kind of local maxima. How are you ever going to take that big substantial step to really move your product forward if it's nobody's job, nobody's priority, to do that? – Leisa Reichelt

Show Links

Follow Dollars to Donuts on Twitter and help other people discover the podcast by leaving a review on iTunes.

Transcript

Steve Portigal: Howdy, and here we are with another episode of Dollars to Donuts, the podcast where I talk to the people who are leading user research in their organization.

I just taught a public workshop in New York City about user research, organized by Rosenfeld Media. But don't despair – this workshop, Fundamentals of Interviewing Users, is also happening September 13th in San Francisco. I'll put the link in the show notes. Send your team! Recommend this workshop to your friends!

If you aren’t in San Francisco, or you can’t make it September 13th, you can hire me to come into your organization and lead a training workshop. Recently I’ve taught classes for companies in New York City, and coming up will be San Diego, as well as the Midwest, and Texas. I’d love to talk with you about coming into your company and teaching people about research.

As always, a reminder that supporting me in my business is a way to support this podcast and ensure that I can keep making new episodes. If you have feedback about the podcast, I'd love to hear from you at DONUTS AT PORTIGAL DOT COM or on Twitter at Dollars To Donuts, that's d o l l a r s T O D o n u t s.
I was pretty excited this week to receive a new book in the mail. It's called The Art of Noticing, by Rob Walker, whose name you may recognize from his books, or New York Times columns, or his appearance in Gary Hustwit's documentary "Objectified." I've only just started the book but I am eager to read it, which is not something I say that often about a book of non-fiction. The book is structured around 131 different exercises to practice noticing. Each page has really great pull-quotes and the exercises seem to come from a bunch of interesting sources like people who focus on creativity or art or storytelling. Rob also publishes a great newsletter with lots of additional tips and examples around noticing, and I've even sent him a few references that he's included. I'll put all this info in the notes for this episode. This topic takes me back to a workshop I ran a few years ago at one of the first user research conferences I ever attended, called About, With and For. The workshop was about noticing and I wonder if it's time to revisit that workshop, and I can look to Rob's book as a new resource.

Well, let's get to the episode. I had a fascinating and in-depth conversation with Leisa Reichelt. She is the Head of Research and Insights at Atlassian in Sydney, Australia. Our interview went on for a long time and I'm going to break it up into two parts. So, let's get to part one here. Thank you very much for being here.

Leisa Reichelt: Thank you for inviting me.

Steve: Let’s start with maybe some background introduction. Who are you? What do you do? Maybe a little bit of how we got here – by we, I mean you.

Leisa: I am the Head of Research and Insights at Atlassian, which is probably best known for creating software such as Jira and Confluence. Basically, tools that people use to make software. And then we also have Trello in our stable as well. So, there are a bunch of tools that are used by people who don't make software as well. A whole bunch of stuff.

Steve: It seems like Jira and Confluence, if you’re any kind of software developer, those are just words you’re using, and terms from within those tools. It’s just part of the vocabulary.

Leisa: Yeah.

Steve: But if you’re outside, you maybe have never heard those words before?

Leisa: Exactly. Atlassian is quite a famous company in Australia because it’s kind of big and successful. But I think if you did a poll across Australia to find out who knew what Atlassian actually does, the brand awareness is high. The knowledge of what the company does is pretty low, unless you’re a software developer or a sort of project manager of software teams in which case you probably have heard of or used or have an opinion about Jira and probably Confluence as well.

Steve: And then Trello is used by people that aren’t necessarily software makers.

Leisa: Correct. A bunch of people do use it for making software as well, but it's also used for people running – like in businesses, running non-technical projects and then a huge number of people just use it kind of personally – planning holidays, or weddings. I plan my kids' weekends on Trello sometimes and I know I'm not alone. So, yeah it's a real – it's a very what we call a horizontal product.

Steve: A horizontal product can go into a lot of different industries.

Leisa: Exactly, exactly. I’m very skeptical about that term, by the way.

Steve: Horizontal? Yeah.

Leisa: Or the fact that it won’t necessarily be a good thing, but that’s another topic probably.

Steve: So, I can’t follow-up on that?

Leisa: Well, yeah, you can. Well, the problem with horizontal products, I think, is that they only do a certain amount for everybody and then people reach a point where they really want to be able to do more. And if your product is too horizontal then they will graduate to other products. And that gives you interesting business model challenges, I think. So, you have to be continually kind of seeking new people who only want to use your product up to a certain point in order to maintain your marketplace really.

Steve: When I think about my own small business and just any research I’ve done in small and just under medium sized businesses where everything is in Excel, sort of historically, where Excel is the – there may be a product, Cloud-based or otherwise, to do a thing. That someone has built kind of a custom Excel tool to do it. So, is Excel a horizontal product that way?

Leisa: I think so, yeah. In fact, I was talking to someone about this yesterday. I think that for a lot of people the first port of call for everything is a spreadsheet. They try to do everything that they can possibly do in a spreadsheet. And then there are some people who are like the, "ooh, new shiny tool. Let's always try to find an application for a new shiny tool." I think actually the vast majority of people take the first tool that they knew that had some kind of flexibility in it. So, if you can't do it in Word or Excel – people will compromise a lot to be able to do things in tools that they have great familiarity with.

Steve: Yeah. But from the maker point of view you’re saying that a risk in the horizontalness, the lack of specificity, creates kind of a marketplace for the maker of the tool?

Leisa: Can do. I think for it to be successful you just have to be at such a huge scale to be able to always meet the needs of enough people. But I think things like Excel and Word and Trello, for example, they’ll always do some things for some people. Like just ‘cuz you moved to a more kind of sophisticated tool doesn’t mean that you completely abandon the old tool. You probably still use it for a bunch of things.

Steve: So, your title is Head of Research and Insights?

Leisa: Correct.

Steve: So, what’s the difference between research and insights?

Leisa: Yeah, good question. I didn't make up my own title. I kind of inherited it. If I remember correctly, the way that it came about was that when I came into my role it was a new combination of people in the team in that we were bringing together the people who had been doing design research in the organization and the voice of the customer team who were effectively running the NPS. And I think because we were putting the two of them together it sounded weird to say research and voice of the customer. So, they went with research and insights instead. And, honestly, I haven't spent any time really thinking about whether that was a good title or not. I've got other problems on my mind that are probably more pressing, so I've just kind of left it. If you want to get into it, I think research is the act of going out and gathering the data and pulling it all together to make sense of it, and insights are hopefully the sense that you get from it. We do try to do both of those things in my team, so I think it's a reasonably accurate description.

Steve: How long have you been in this role?

Leisa: It’s about 18 months now.

Steve: If you look back on 18 months what are the notable things? For the experience of people inside the organization what has changed?

Leisa: Quite a few things. The shape and make up of the team has changed quite a lot. We're bigger and differently composed than we were previously. When I first came in it was literally made up of researchers who were working in products. Prior to me coming in they were reporting to design managers and we, for various reasons, pulled them out of products pretty quickly once I started. And then we had the other part of the business who were running NPS that was being gathered in product and we don't do that anymore either. So, that team is doing quite different things now. So, we've changed a lot of things. We've introduced a couple of big programs of work as well. One of them being a big piece of work around top tasks. So, taking Gerry McGovern's approach to top tasks. Trying to build that kind of foundational knowledge in the organization. So, that's kind of new. There have always been those people doing Jobs To Be Done type stuff, but very, very close to the product level. So, we've tried to pull back a little bit to really build a bigger understanding of what are the larger problems that we're trying to solve for people and how might we see the opportunities to address those more effectively?

Steve: So, trying to create shared knowledge around what those top tasks are – I’m guessing for different products, different user scenarios.

Leisa: One of the things that we really tried to do is to get away from product top tasks and get more into really understanding what problems the product, or combinations of products, is trying to address. So, we don't do top tasks for Jira. We do top tasks for agile software teams. And then through that we can then sort of ladder down to what that means for Jira or Confluence or Bitbucket or Trello, or whichever individual or combination of products we have. But it means that we sort of pull away a little bit from that feature focus. I think it can be very seductive and also very limiting.

Steve: What’s the size of the team now?

Leisa: I have a general rule in life of never count how many researchers you have in the organization because it’s always too many, according to whoever is asking you – not you, but like senior people. I think we’re probably around the mid-20s now.

Steve: And around the world, or around different locations?

Leisa: Mostly we’re in Sydney and California. So, we’re across three different offices there and we have a couple of people working remotely as well. So, I have somebody remote in Europe and another person in California who’s working out of a remote office too.

Steve: Can we sort of talk about the make up of the team evolving – what kinds of – without sort of enumerating, what are sort of the background, skillsets? Any way that you want to segment researchers, what kinds of – what’s the mix that you’re putting together?

Leisa: So, I think at the highest level we’ve got people who do design research, predominantly qualitative and they do a mixture of discovery work and evaluative work. We’ve got a team of what we call quantitative researchers and those are generally people who have got a marketing research type background and so they bring in a lot of those data and statistical skills that the other crew don’t necessarily have quite so much of. And then we have a research ops team as well who are doing some different things. And then we have a handful of research educators.

Steve: And what are the research educators doing?

Leisa: Educating people about how to do research.

Steve: These are hard questions and hard answers!

Leisa: Well, if you dig into it too much it gets complicated pretty quickly. So, I think the reality of research at Atlassian is the people in the research team probably do – best case – 20% of the research that happens in the organization and that’s probably an optimistic estimate as well. A huge amount of research is done by product managers and by designers and by other people in the organization, most of whom haven’t had any kind of training and so take a very intuitive approach to how they might do research. So, the research education team are really trying to help shape the way that this work is done so that it can be more effective.

Steve: I’m thinking about a talk I saw you give at the Mind the Product Conference where you began the talk – I think you kind of bookended the talk with a question that you didn’t answer – I don’t think you answered it definitively which is, is bad research better than no research, or words close to that. It was a great provocation to raise the question and then sort of talk about what the different aspects of that – how we might answer that? What the tradeoffs are? When you talk about this design education effort I can’t help but think that that’s connected to that question. If 80% of the research is done by people acting intuitively then yeah, how do you level up the quality of that?

Leisa: Absolutely.

Steve: Which implies that – well, I don’t know if it answers – does that answer the question? I’m not trying to gotcha here, but if you are trying to level up the quality that suggests that at some point better research is – I don’t know, there’s some equation here about bad vs. better vs. no. I’m not sure what the math of it is.

Leisa: So, this has been the real thing that’s occupied my mind a lot in the last couple of years really. And I’ve seen different organizations have really different kind of appetites for participating in research in different ways. I think at the time that I did that talk, which was probably, what?

Steve: Almost a year ago.

Leisa: Yeah. I think I was still trying to come to terms with exactly where I've felt – what I've felt about all of this, because as somebody who – well, as somebody in my role, everybody in an organization is – a lot of people in the organization are going to be watching you to see if you're trying to be overly precious and to become a gatekeeper or a bottleneck or all of these kinds of things. So, I've always felt like I had to be very careful about what you enabled and what you stopped because everyone has work that they need to get done, right. And the fact that people want to involve their customers and their users in the design process is something that I want to be seen to be supporting. Like I don't want to be – I don't want to stop that and I certainly don't want to message that we shouldn't have a closeness to our users and customers when we're doing the research.

But, I've also seen a lot of practices that are done with the best of intentions that just get us to some crazy outcomes. And that really worries me. It worries me on two levels. It worries me in terms of the fact that we do that and we don't move our products forward in the way that we could and it worries me because I think it reflects really poorly on research as a profession. I think most of us have seen situations where people have said well I did the research and nothing got better, so I'm not going to do it anymore. Clearly it's a waste of time, right. And almost always what's happened there is that the way that people have executed it has not been in a way that has helped them to see the things that they need to see or make the decisions that they need to make. So, it's this really hard line to walk to try to understand how to enable, but how to enable in a way that is actually enabling in a positive way and is not actually facilitating really problematic outcomes. So, that's, yeah – that's my conundrum is balancing that. On the one hand I feel really uncomfortable saying no research is better than bad research. But on the other hand, I've seen plenty of evidence that makes me feel like actually maybe it's true.

Steve: So, that's kind of a time horizon there that the bad research may lead to the wrong decisions that impact the product and then sort of harm the prospects of it. I'm just reflecting back. That harms the prospects of research to kind of go forward. Right, every researcher has heard that, "well we already knew that" response which to me is part of what you're talking about. It's that when research doesn't – isn't conducted in a way and sort of isn't facilitated so that people have those learning moments where they – I think you said something about sort of helping them see the thing that's going to help them make the right decision. And that's – right, that's not a – that's different than what methodology do you use, or are you asking leading questions? Maybe leading questions are part of it because you confirm what you knew and so you can't see something new because you're not kind of surfacing it.

Leisa: Loads of it comes around framing, right. Loads of it comes around where are you in the organization? What are you focused on right now? What's your remit? What's your scope? What are you allowed to or interested in asking questions about? In a lot of cases this high volume of research comes from very downstream, very feature focused areas, right. So, if you're working on a product that has got some more foundational issues that need to be addressed, but the vast majority of the work is happening at that very detailed feature level, how are you ever going to stop kind of circling the drain? You get stuck in this kind of local maxima. How are you ever going to take that big substantial step to really move your product forward if it's nobody's job, nobody's priority, to do that? So, a lot of this is kind of structural. That so many of our people who are conducting this research are so close to the machine of delivery, and shipping and shipping and shipping as quickly as possible, that they don't have the opportunity to think – to look sideways and see what's happening on either side of their feature. Even when there are teams that are working on really similar things. They're so heads down and feature driven. So, doing the least possible that you can to validate and then ship to learn, which is a whole other area of bad practice, I think, in many cases. It's really limiting and you can see organizations that spend a huge amount of time and effort doing this research, but it's at such a micro level that they don't see the problems that their customers are dying to tell them about and they never ask about the problems that the customers are dying to tell them about because the customers just answer the questions that they asked and that's kind of what bothers me. And so, it's not about – in a lot of cases, some of it is about practice. Like I think it's amazing how few people can just resist the temptation to say hey I've got this idea for a feature, how much would you like it? They can get 7 out of who said they loved my feature. Like that feels so definitive and so reliable and that's very desirable and hard to resist. So, yes that happens.

But the bigger thing for me I think is where research is situated and the fact that you don’t have both that kind of big picture view as well as that micro view. We just have all of these fragments of microness and a huge amount of effort expended to do it. And I don’t feel like it’s helping us take those big steps forward.

Steve: But Leisa, all you need is a data repository so that you can surface those micro findings for the rest of the organization, right?

Leisa: You’re a troll Steve.

Steve: I am trolling you.

Leisa: You’re a troll, but again, I mean in some – kind of yes, but again, like all of those micro things don’t necessarily collectively give you the full view. And a lot of those micro things, because of the way they’re being asked, actually give you information that’s useless. If all of your questioning is around validating a particular feature that you’ve already decided that you think you want to do and you go in asking about that, you never really ask for the why? Like why would you use – like who is actually using this? Why would they use this? What actual real problem are they solving with this, right? So, the problem becomes the feature and then all of that research then becomes very disposable because the feature shifts and then nobody uses it as much as everybody thought they would, or they’re not as satisfied with it as what they thought they would be. So, we just keep iterating and iterating and iterating on these tiny things. We move buttons around. We change buttons. We like add more stuff in – surely that will fix it. But it’s because we haven’t taken that step back to actually understand the why? If you’re talking about those big user need level questions, the big whys, then you know what, you can put those things in a repository and you can build more and more detail into those because those tend to be long lasting.

And then at the other end, just the basic interaction design type stuff that we learn through research. A lot of those things don’t change much either. But it’s that bit in the middle, that feature level stuff, is the most disposable and often the least useful and I feel like that’s where we spend a huge amount of our time.

Steve: Do you have any theories as to why in large software enterprise technology companies that there is a focus on leaning heavily on that kind of feature validation research? What causes that?

Leisa: I think that there’s probably at least two things that I can think about that contribute to that. One is around what’s driving people? What gets people their promotions? What makes people look good in organizations? Shipping stuff – shipping stuff gets you good feedback when you’re going for your promotion. They want a list of the stuff that you’ve done, the stuff that you’ve shipped. In a lot of organizations just the fact that you’ve shipped it is enough. Nobody is actually following up to see whether or not it actually made a substantive difference in customers’ lives or not. So, I think that drive to ship is incredibly important. And our org structures as well, like the way that we divide teams up now, especially in organizations where you’ve got this kind of microservice platform kind of environment. You can have teams who’ve got huge dependencies on each other and they really have to try to componentize their work quite a lot. So, you have all of these kind of little micro teams who their customer’s experience is all of their work combined, but they all have different bosses with different KPIs or different OKRs or whatever the case may be. And I think that’s a problem. And then the other thing is this like build, measure, build – what is it? I’ve forgotten how you say it.

Steve: I’m the wrong person.

Leisa: Build/measure/learn.

Steve: Yeah, okay.

Leisa: The lean thing, right, which is this kind of idea that you can just build it and ship it and then learn. And that’s – that’s – that means that like if it had been learn/build/measure/learn we would be in a different situation, right, because we would be doing some discovery up front and then we would have a lot of stuff that we already knew before we had to ship it out to the customers. But it’s not. It’s build – you have an idea, build/measure/learn. And then people are often not particularly fussy about thinking about the learn bit. So, we ship it, put some analytics on it and we’ll put a feedback collector on it and then we’ll learn.

What are you going to learn? Whatever. What does success look like? I don’t know. It’s very – it’s kind of lazy and it makes – we treat our customers like lab rats when we do that. And I feel like they can tell. There’s lots of stuff that gets shipped that shouldn’t get shipped, that we know shouldn’t get shipped. I’m not talking about Atlassian in specific. I’m talking about in general. We all see it in our day to day digital lives that people ship stuff that we really should know better, but this build/measure/learn thing means that unless you can see it in the product analytics, unless you can see thousands of customers howling with anger and distress, it doesn’t count.

Steve: It reminds me of a client I worked with a few years ago where we had some pretty deep understandings of certain points of the interactions that were happening and what the value proposition was and where there was meaning. Just a lot of sort of – some pretty rich stuff. And the client was great because they kept introducing us to different product teams to help apply what we had learned to really, really specific decisions that were going to be made. But what we started to see – this pattern was they were interested in setting up experiments, not improving the product. And in fact, this is a product that has an annual use cycle. So, the time horizon for actually making changes was really far and some of the stuff was not rocket science. Like it was clear for this decision – A versus B versus C – the research said very clearly like this is what people care about, or what's going to have impact, or what's going to make them feel secure in these decisions. They were like great, we can conduct an experiment. We have three weeks to set up the experiment and then we'll get some data. And I was – I just hadn't really encountered that mindset before. I don't know if they were really build/measure/learn literally, if that was their philosophy, but I didn't know how to sort of help – I wasn't able to move them to the way that I wanted it done. I'd really just encountered it for the first time and it seemed like there was a missed opportunity there. Like we knew how to improve the product and I wasn't against – like they are conducting experiments and measuring and learning – awesome. That's research. But acting on what you've learned seems like why you're doing the research in the first place.

Leisa: It feels as though we have this great toolset that we could be using, right? We've got going out and doing your kind of ethnography, contextual type stuff. And then right at the other end we've got the product analytics and running experiments and a whole bunch of stuff in between. And it really feels to me as though organizations tend to really just get excited about one thing and go hard on that one thing. And growth – doing the experiments is a big thing right now. I see loads of situations where we look at data and we go, oh look the graph is going in this direction. Well, the graph is not going in that direction. Why? And I'll kind of – like there's so much guessing behind all of that, right? And if it doesn't go quite right, well then let's have another guess, let's have another guess, let's have another guess. And this – like you say, there's so much stuff that we probably already know if we could connect two parts of the organization together to work together more effectively. Or, there are methods that we could use to find out the why pretty quickly without having to just put another experiment, another experiment onto our customers, our users. But the knowledge of this toolset and the ability to choose the right tool and apply it, and to apply these approaches in combination seems to be where the challenge is.

Steve: What’s the role of design in this? In terms of here’s the thing that we know and here’s the thing we want to make. We haven’t talked about designers. Ideally for me, I sort of hope designers are the instruments of making that translation. Without design then you can sort of test implementations, but you can’t synthesize and make anything new necessarily.

Leisa: Well, I mean yeah. Theoretically design has a really important role to play here because I think design hopefully understands the users in a design process better than anybody else. Understands the opportunities for iteration and levels of fidelity for exploring problems in a way that nobody else on the team does. And lots of designers do know that. But the time pressure is enormous, especially in these kind of larger tech companies where a lot of times designers – designers are also concerned about being bottlenecks. They have to feed the engineering machine. And it can be really difficult for them to have the conversations to talk about all of the work that we should be doing before we ship something. So, I feel as though they have a lot of pressure on them, a lot of time pressure on them. They’re being pressured to really contract the amount of effort that they put in before we ship something. And there is – yeah, there’s this huge desire amongst product teams and their bosses to just get something shipped, get something live and then we’ll learn that I think design really struggles with. And I don’t know – it can be really difficult in those kinds of environments to be the person who stands up and says we need to slow down and we need to do less and do it better. So, I have empathy for designers in their inability to shift the system – these problems – necessarily because of this big pressure that’s being put on them just to keep the engineers coding.

Steve: Right. If you say you want more time to work on something that’s – what are the risks to you for doing that?

Leisa: Exactly, exactly. On a personal level everyone wants to look good in front of their boss. Everyone would like a pay raise and a promotion and the easiest way to get those things is to do what you’re told and ship the stuff. Keep everyone busy, produce the shiny stuff, let it get coded up, let it go live, learn, carry on. That’s thinking short term. That’s how to have a happy life. Is that how you’re going to fundamentally improve the product or service that you’re working on? Probably not. But it takes a lot of bravery to try to change that, to try to stop this crazy train that’s out of control. Throw yourself in front of the bus. All that kind of stuff. Like you said, it’s hard, especially when there are loads of people all around you who are very happy to keep doing it the way that it’s being done right now. So, yeah. And I think that’s research’s role, that’s design’s role. I would like that to be PM’s role as well. And most engineers are – a lot of engineers are very, very, very interested in making sure that the work that they’re doing is actually solving real problems and delivering real value as well.

Steve: So, as you point to the individuals there's a lot of shared objectives. But if you point to the system, which is – it's the system of – it's the rewards system and the incentive system, but there's also some – I think there's just sort of what day to day looks like. Sort of the operations of the system of producing technology products.

Leisa: I think there's also – there's something about like what do people like to see? What gets people excited, right? And graphs pointing upwards in the right direction is really exciting. Like certain outcomes are really exciting. Ambiguous outcomes, really not exciting at all. Having things that go fast, very exciting. Things that go slow, not exciting. So, I think there are all of these things that this collection of humans, that form an organization have a strong bias towards, that we get really excited about, that don't necessarily help us in the long run. You know you see lots of people who get really excited about the graph. Very few people dig in behind the graph to the data. Where did this data come from? How much can I actually believe it? Like people are so willing to just accept experiment findings without actually digging in behind it. And a lot of the time, when you do dig in behind it, you go huh, that doesn't look very reliable at all. So, there's something about we love to tell ourselves that we're data informed or data driven, but a huge number of the people who are excited about that don't have very much data literacy to be able to check in and make sure that the stuff they're getting excited about is actually reliable.

Steve: So, how could someone improve their data literacy?

Leisa: I think that there's a lot of work that we need to do to make sure that we actually understand how experiments should be structured. And understand more about – being more curious about where is this data coming from and what are the different ways that we could get tricked by this, right? And there are tons of like books and papers and all kinds of things on the subject. But actually, when you go looking for it and you're coming at it with a critical mind, rather than with a mind that gets excited by graphs, you know a lot of it is pretty logical. When you think about like – like surveys, right? Who did this survey actually – who actually answered these questions? Instead of just going hey, there's an answer that supports my argument, I'm just going to grab that. To dig behind it and go like are these – are the people who are being surveyed actually the same people that we're talking about when we're making this decision. Is there something about the nature of that group of people that is going to bias their response? It's remarkable to me how few people actually kind of check the source to make sure that it's worthy of relying on. Everyone – a lot of people are really keen to just grab whatever data they can get that supports their argument. I think this is another one of those kind of human things, those human inclinations that we have that lead us towards kind of bad behaviors.

Steve: I can imagine that with this research education effort that you're doing that people that participate in that are going to learn how to make better choices in the research that they then run, some practical skills, some planning, some framing, as you talked about. But it seems like that literacy is a likely side effect of that, or maybe it's not the side effect. Maybe it's the effect. How to be better consumers of research. Once you understand how the sausage is made a little bit then you understand, oh yeah, that's a biased question, or that's bad sampling, and there's a choice to make in all of these and we have to question those choices to understand the research that's presented to us. I hadn't really thought of training people to do research as a way to help them in their consumption or critical thinking around research.

Leisa: Something else that we’ve really come to realize recently as well is that because of all of the other pressures that the people who are making these decisions are having to deal with as well, we think we’ll do much better if we train the entire team together instead of just training the people who are tasked with doing the research. Because something that we’ve observed is that you can train people and they can go this is great – I really see what I was doing before was creating these kind of not great outcomes and what I need to do to do better. And then you’ll see them, not that long later, doing exactly the opposite to what we thought we had agreed we were going to do going forward. And we’re like well what’s happening? These are like smart, good people who have been given the information and are still doing crazy stuff. Like what’s going on with that? And you realize they go back into their context where everyone is just trying to drive to get stuff done faster, faster, faster, faster, and you have to plan to do good research. The easiest research to do really, really quickly is the crappy research. If you want to do good research you do have to do some planning. So, you need to get into a proactive mindset for it rather than a reactive mindset for it. And you need the entire team to be able to do that. So, one of the things that we’re looking to do moving forward is not just to train the designers and the PMs, but actually to train a ton of people all around them in the teams to help them understand that the way that you ask the questions and who you ask them of and where – all the different things that could impact the reliability of your research – requires planning and thinking about in advance. So, we hope that means that the whole team will understand the importance of taking the time to do it and it won’t just be like one or two people in a large team fighting to do the right thing and being pressured by everybody else. So, I think it is – the education aspect, I suspect, is super important and it goes way beyond just helping people who are doing research to do better research. It goes to helping the whole organization to understand the impact and risk that goes with doing a lot of research activity in the wrong way or the wrong place.

Steve: Just fascinating to me and I think a big challenge for all of us that lots of researchers are involved in education of one form or another – workshops – a big program like you've been building. But most of us are not trained educators. We don't have a pedagogical, theoretical background about – it's about communicating information. It's about influence and changing minds. And it just seems like a lot of researchers, from an individual researcher on a product team to someone like you that's looking at the whole organization, we're sort of experimenting and trying to build tools and processes that help people learn. And learn is not just imparting the information, but reframe and empower, like these big words where – I wish I had – I wish you had a PhD in education. I wish I had that. Or that I had been a college professor for a number of years, or whatever it would take – however I would get that level of insight. Personally, I have none of that. So, to hear us – you know we all talk about these kinds of things. I think research gives you some skill in prototyping and iterating and measuring in order to make the kinds of changes in the implementation that you're making. I don't know about you. I feel like I am amateurish, I guess, in terms of my educational theory.

Leisa: Absolutely. And I think talking to the team who are doing this work, like it’s really, really, really hard – really hard work to come up with the best way to share this knowledge with people in your organizational context, in a way that is always time constrained. At Atlassian we have a long history of doing kind of internal training. We have this thing called bootcamps. We’ve got almost like a voluntary system where people come in and they run these bootcamps and you can go and learn about the finer details of advanced Jira administration or you can come in and learn about how to do a customer interview. But the timeframe around that was like – like a two-hour bootcamp was a long bootcamp. Most bootcamps are like an hour. And so, when we started thinking about this we were like we’re going to need at least a day, maybe two days. And everyone was like nobody will turn up. But yeah – fortunately we have – we do day long sessions and people have been turning up. So, that’s great. But yeah, it’s a huge effort to try to come up with something that works well. And every time we do these courses the trainers and educators go away and think about what worked and how can we do it better. So, we iterate every single time. So, yeah, it’s a huge amount of effort. I think in larger organizations too, there are other people in the organizations who are also tasked with learning type stuff. So, we have a couple of different teams at Atlassian who are kind of involved in helping educate within the organization. So, we’re building a lot of relationships with different parts of the org than we have before in order to try to get some support and even just infrastructure. Like there’s just a whole lot of like logistics behind setting this up if you want to do it in any kind of scale. It’s great to have that support internally and it’s really good to start to build these relationships across the organization in different ways. But yeah, I think that we certainly underestimated the challenge of just designing what education looks like and how to make it run well. It’s a massive, massive effort.

Steve: It's not the material – you guys probably have at hand a pretty good set of here's how to go do this. If you brought an intern onto your team you could probably get them up to speed with material that you have, but you're trying to change a culture. And I think the advantage that I have, the context that I have as an external educator is that people are opting in and that I have to go by the assumption that they want to be there and I don't have access to what the outcomes are. I might through someone that's my gatekeeper, but it's kind of on them. I have the responsibility for the training, but not the responsibility for the outcomes, which is what you all are kind of working with. So, I envy you and I don't envy you.

Leisa: Well, I think I – I like it because, you know, going back to the build/measure/learn thing, right. Again, we did a learn before we did our build and measure because we’re researchers and that’s what we do, but it is – it’s super interesting to see the behaviors of people who have come through the training and see whether they shift or not. That gives us – we get feedback from people after the course who tell us whether they thought it was useful or not useful and what they liked and didn’t like, but then we actually get to observe because through our ops team, they come through us to do their recruiting and get their incentives. So, we can keep a little bit more of an eye on what activity is coming out and see if there is any sort of shifting in that as well. And it’s not just courses as well. It’s thinking about like what’s the overall ecosystem? What are other things that we can be doing where we are sort of reaching out to help support people on this journey as well. Before we did educating, we had advisors who kind of made themselves available to go and sort of support teams who recognized that they might have a need for some help with getting their research right. So, that was kind of our first attempt. But we had to pivot into educating because of the time factor. We would go in and give advice and everybody would go it’s great advice. We’d totally love to do that, but I have to get this done by next Thursday. So, I’m going to ignore your advice for now and carry on with what I was going to do anyway. And that was pretty frustrating. So, we felt like we have to invest in trying to get ahead of the curve a little bit – try to get ahead. Try to not necessarily influence the stuff that has to happen by next Thursday, but try to encourage teams to start being proactive and planning to do research instead of doing this really reactive work instead. Or as well as the reactive work perhaps. I don’t know.

Steve: Have the right mix.

Leisa: Yeah.

Steve: I wonder if – and this is probably not going to turn out to be true, but I wonder about being proactive and planning versus time. The time pressure to me is about oh we only have so many hours to spend on this, or that calendar wise we need to be there next Thursday. But being proactive says, well if we started thinking about it three weeks ago we’d be ready for it Thursday to do it “the right way.” I’m wondering, can we tease apart the pressures. One is like no proactivity, the sort of very short-term kind of thinking, is different than hours required. Is that true?

Leisa: I think so. Because I think that even when we do the short-term stuff we still spend like quite a lot of time and effort on it. And the planning in advance, like the more proactive work doesn't necessarily, I don't think, entail more actual work. It just might be that you put in your recruitment request a couple of weeks beforehand so that we can try to make sure that the people that you meet are the right kinds of people, instead of if you have to have it done in 3 or 4 days' time then your ability to be careful and selective in terms of who you recruit to participate in the research is very much limited. So, we see – when we have those kind of time constraints you see everybody just going to unmoderated usability testing and their panel and that introduces a whole lot of problems in terms of what you're able to learn and how reliable that might be. Yeah. I was thinking for a second about you know when – theoretically when you do your unmoderated usability testing you should still be watching the videos, right. So, that should take as much time as watching a handful of carefully recruited people being – doing these sessions in a facilitated way. But the reality is, I think, that most people don't watch the videos, which speaks to quality.

Steve: Here we are back again. It seems like, for the profession overall, maybe one way to start sort of framing the way around this time pressure thing is to decouple proactiveness versus sort of hours burned. That it's going to be the similar number of hours burned, but if you start earlier the quality goes up. I had never really thought about those two things as being separate.

Leisa: Yeah. And I don't think people do. I think when people think about – and this is – I mean I sort of said the word quality. I'm trying not to say quality anymore. I'm trying to talk about meaningfulness more now, I think, because whenever you talk about quality the pushback that you get is well it doesn't need to be perfect, so it doesn't need to be academic. I just need enough data to be able to make a decision and understand that. But then I see the data upon which they're making the decision and that makes me worry, right? And I think that's – we want to get out of this discussion of like quality levels, more into sort of reliability levels, like how reliable does it need to be? Because surely there has to be a bar of reliability that you have to meet before you feel like you've got that information that you need to make a decision. But I see loads of people making decisions off the back of really dreadful and misleading data and that's what worries me. And they feel confident with that data – going back to the data literacy problem. Like they really haven't dug into why they might be given really misleading answers as a result of the who and the how and the why, all those decisions that are made around how to do the research, most of which have been driven by time constraints.

Steve: Okay, that's the end of part one of my interview with Leisa. What a cliffhanger! There's more to come from Leisa in the next episode of Dollars to Donuts. Meanwhile, subscribe to the podcast at portigal dot com slash podcast, or go to your favorite podcasting tool like Apple Podcasts, Stitcher or Spotify, among others. The website has transcripts, show notes, and the complete set of episodes. Follow the podcast on Twitter, and buy my books Interviewing Users and Doorbells, Danger, and Dead Batteries from Rosenfeld Media or Amazon. Thanks to Bruce Todd for the Dollars to Donuts theme music.

  continue reading

57 episodes

Artwork
iconShare
 
Manage episode 234192491 series 62327
Content provided by Steve Portigal. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Steve Portigal or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

This episode of Dollars to Donuts features part 1 of my two-part conversation with Leisa Reichelt of Atlassian. We talk about educating the organization to do user research better, the limitations of horizontal products, and the tension between “good” and “bad” research.

If you’re working on a product that has got some more foundational issues that need to be addressed, but the vast majority of the work is happening at that very detailed feature level, how are you going to ever going to stop kind of circling the drain? You get stuck in this kind of local maxima. How are you ever going to take that big substantial step to really move your product forward if it’s nobody’s job, nobody’s priority, to do that? – Leisa Reichelt

Show Links

Follow Dollars to Donuts on Twitter and help other people discover the podcast by leaving a review on iTunes.

Transcript

Steve Portigal: Howdy, and here we are with another episode of Dollars to Donuts, the podcast where I talk to the people who are leading user research in their organization.

I just taught a public workshop in New York City about user research, organized by Rosenfeld Media. But don’t despair – this workshop, Fundamentals of Interviewing Users is also happening September 13th in San Francisco. I’ll put the link in the show notes. Send your team! Recommend this workshop to your friends!

If you aren’t in San Francisco, or you can’t make it September 13th, you can hire me to come into your organization and lead a training workshop. Recently I’ve taught classes for companies in New York City, and coming up will be San Diego, as well as the Midwest, and Texas. I’d love to talk with you about coming into your company and teaching people about research.

As always, a reminder that supporting me in my business is a way to support this podcast and ensure that I can keep making new episodes. If you have feedback about the podcast, I’d love to hear from you at DONUTS AT PORTIGAL DOT COM or on Twitter at Dollars To Donuts, that’s d o l l R s T O D o n u t s.
I was pretty excited this week to receive a new book in the mail. It’s called The Art of Noticing, by Rob Walker, whose name you may recognize from his books, or New York Times columns, or his appearance in Gary Hustwit’s documentary “Objectified.” I’ve only just started the book but I am eager to read it, which is not something I say that often about a book of non-fiction. The book is structured around 131 different exercises to practice noticing. Each page has really great pull-quotes and the exercises seem to come from a bunch of interesting sources like people who focus on creativity or art or storytelling. Rob also publishes a great newsletter with lots of additional tips and examples around noticing, and I’ve even sent him a few references that he’s included. I’ll put all this info in the notes for this episode. This topic takes me back to a workshop I ran a few years ago at one of the first user research conferences I ever attended, called About, With and For. The workshop is about noticing and II wonder if it’s time to revisit that workshop, and I can look to Rob’s book as a new resource.

Well, let’s get to the episode. I had a fascinating and in-depth conversation with Leisa Reichelt. She is the Head of Research and Insights at Atlassian in Sydney Australia. Our interview went on for a long time and I’m going to break it up into two parts. So, let’s get to part one here. Thank you very much for being here.

Leisa Reichelt: Thank you for inviting me.

Steve: Let’s start with maybe some background introduction. Who are you? What do you do? Maybe a little bit of how we got here – by we, I mean you.

Leisa: I am the Head of Research and Insights at Atlassian. Probably best known for creating software such as Jira and Confluence. Basically, tools that people use to make software. And then we also have Trello in our stable as well. So, there are a bunch of tools that are used by people who don’t make software as well. A whole bunch of stuff.

Steve: It seems like Jira and Confluence, if you’re any kind of software developer, those are just words you’re using, and terms from within those tools. It’s just part of the vocabulary.

Leisa: Yeah.

Steve: But if you’re outside, you maybe have never heard those words before?

Leisa: Exactly. Atlassian is quite a famous company in Australia because it’s kind of big and successful. But I think if you did a poll across Australia to find out who knew what Atlassian actually does, the brand awareness is high. The knowledge of what the company does is pretty low, unless you’re a software developer or a sort of project manager of software teams in which case you probably have heard of or used or have an opinion about Jira and probably Confluence as well.

Steve: And then Trello is used by people that aren’t necessarily software makers.

Leisa: Correct. A bunch of people do use it for making software as well, but it’s also used for people running – like in businesses, running non-technical projects and then a huge number of people just use it kind of personally – planning holidays, or weddings. I plan my kids weekends of Trello sometimes and I know I’m not alone. So, yeah it’s a real – it’s a very what we call a horizontal product.

Steve: A horizontal product can go into a lot of different industries.

Leisa: Exactly, exactly. I’m very skeptical about that term, by the way.

Steve: Horizontal? Yeah.

Leisa: Or the fact that it won’t necessarily be a good thing, but that’s another topic probably.

Steve: So, I can’t follow-up on that?

Leisa: Well, yeah, you can. Well, the problem with horizontal products, I think, is that they only do a certain amount for everybody and then people reach a point where they really want to be able to do more. And if your product is too horizontal then they will graduate to other products. And that gives you interesting business model challenges, I think. So, you have to be continually kind of seeking new people who only want to use your product up to a certain point in order to maintain your marketplace really.

Steve: When I think about my own small business and just any research I’ve done in small and just under medium sized businesses where everything is in Excel, sort of historically, where Excel is the – there may be a product, Cloud-based or otherwise, to do a thing. That someone has built kind of a custom Excel tool to do it. So, is Excel a horizontal product that way?

Leisa: I think so, yeah. In fact, I was talking to someone about this yesterday. I think that for a lot of people the first protocol for everything is a spreadsheet. They try to do everything that they can possibly do in a spreadsheet. And then there are some people who are like the, “ooh, new shiny tool. Let’s always try to find an application for a new shiny tool.” I think actually the vast majority of people take the first tool that they knew that had some kind of flexibility in it. So, if you can’t do it in Word or Excel – people will compromise a lot to be able to do things in tools that they have great familiarity with.

Steve: Yeah. But from the maker point of view you’re saying that a risk in the horizontalness, the lack of specificity, creates kind of a marketplace for the maker of the tool?

Leisa: Can do. I think for it to be successful you just have to be at such a huge scale to be able to always meet the needs of enough people. But I think things like Excel and Word and Trello, for example, they’ll always do some things for some people. Like just ‘cuz you moved to a more kind of sophisticated tool doesn’t mean that you completely abandon the old tool. You probably still use it for a bunch of things.

Steve: So, your title is Head of Research and Insights?

Leisa: Correct.

Steve: So, what’s the difference between research and insights?

Leisa: Yeah, good question. I didn’t make up my own title. I kind of inherited it. If I remember correctly, the way that it came about was that when I came into my role it was a new combination of people in the team, in that we were bringing together the people who had been doing design research in the organization and the voice of the customer team, who were effectively running the NPS. And I think because we were putting the two of them together it sounded weird to say research and voice of the customer, so they went with research and insights instead. And, honestly, I haven’t spent any time really thinking about whether that was a good title or not. I’ve got other problems on my mind that are probably more pressing, so I’ve just kind of left it. If you want to get into it, I think research is the act of going out and gathering the data and pulling it all together to make sense of it, and insights are, hopefully, the sense that you get from it. We do try to do both of those things in my team, so I think it’s a reasonably accurate description.

Steve: How long have you been in this role?

Leisa: It’s about 18 months now.

Steve: If you look back on those 18 months, what are the notable things? For the experience of people inside the organization, what has changed?

Leisa: Quite a few things. The shape and makeup of the team has changed quite a lot. We’re bigger and composed differently to what we were previously. When I first came in it was literally made up of researchers who were working in products. Prior to me coming in they were reporting to design managers and we, for various reasons, pulled them out of products pretty quickly once I started. And then we had the other part of the business who were running NPS that was being gathered in product, and we don’t do that anymore either. So, that team is doing quite different things now. So, we’ve changed a lot of things. We’ve introduced a couple of big programs of work as well. One of them is a big piece of work around top tasks, taking Gerry McGovern’s approach to top tasks and trying to build that kind of foundational knowledge in the organization. So, that’s kind of new. There have always been people doing Jobs To Be Done type stuff, but very, very close to the product level. So, we’ve tried to pull back a little bit to really build a bigger understanding of what are the larger problems that we’re trying to solve for people and how might we see the opportunities to address those more effectively.

Steve: So, trying to create shared knowledge around what those top tasks are – I’m guessing for different products, different user scenarios.

Leisa: One of the things that we really tried to do is to get away from product top tasks and get more into really understanding what problems the product, or combinations of products, are trying to address. So, we don’t do top tasks for Jira. We do top tasks for agile software teams. And then through that we can sort of ladder down to what that means for Jira or Confluence or Bitbucket or Trello, or whichever individual product or combination of products we have. But it means that we pull away a little bit from that feature focus, which I think can be very seductive and also very limiting.

Steve: What’s the size of the team now?

Leisa: I have a general rule in life of never counting how many researchers you have in the organization, because it’s always too many according to whoever is asking – not you, but senior people. I think we’re probably around the mid-20s now.

Steve: And around the world, or around different locations?

Leisa: Mostly we’re in Sydney and California. So, we’re across three different offices there and we have a couple of people working remotely as well. So, I have somebody remote in Europe and another person in California who’s working out of a remote office too.

Steve: Can we talk about the makeup of the team as it’s evolved? Without enumerating, what are the backgrounds and skillsets? However you want to segment researchers, what’s the mix that you’re putting together?

Leisa: So, I think at the highest level we’ve got people who do design research, predominantly qualitative and they do a mixture of discovery work and evaluative work. We’ve got a team of what we call quantitative researchers and those are generally people who have got a marketing research type background and so they bring in a lot of those data and statistical skills that the other crew don’t necessarily have quite so much of. And then we have a research ops team as well who are doing some different things. And then we have a handful of research educators.

Steve: And what are the research educators doing?

Leisa: Educating people about how to do research.

Steve: These are hard questions and hard answers!

Leisa: Well, if you dig into it too much it gets complicated pretty quickly. So, I think the reality of research at Atlassian is the people in the research team probably do – best case – 20% of the research that happens in the organization and that’s probably an optimistic estimate as well. A huge amount of research is done by product managers and by designers and by other people in the organization, most of whom haven’t had any kind of training and so take a very intuitive approach to how they might do research. So, the research education team are really trying to help shape the way that this work is done so that it can be more effective.

Steve: I’m thinking about a talk I saw you give at the Mind the Product conference, where you kind of bookended the talk with a question that you didn’t answer definitively, which is: is bad research better than no research, or words close to that. It was a great provocation to raise the question and then talk about the different aspects of it: how we might answer it, what the tradeoffs are. When you talk about this research education effort I can’t help but think that it’s connected to that question. If 80% of the research is done by people acting intuitively then, yeah, how do you level up the quality of that?

Leisa: Absolutely.

Steve: Which implies that – well, I don’t know if it answers – does that answer the question? I’m not trying to gotcha here, but if you are trying to level up the quality that suggests that at some point better research is – I don’t know, there’s some equation here about bad vs. better vs. no. I’m not sure what the math of it is.

Leisa: So, this has been the real thing that’s occupied my mind a lot in the last couple of years really. And I’ve seen different organizations have really different kind of appetites for participating in research in different ways. I think at the time that I did that talk, which was probably, what?

Steve: Almost a year ago.

Leisa: Yeah. I think I was still trying to come to terms with exactly what I felt about all of this, because as somebody in my role, a lot of people in the organization are going to be watching to see whether you’re being overly precious and becoming a gatekeeper or a bottleneck or all of these kinds of things. So, I’ve always felt like I had to be very careful about what I enabled and what I stopped, because everyone has work that they need to get done, right. And the fact that people want to involve their customers and their users in the design process is something that I want to be seen to be supporting. I don’t want to stop that, and I certainly don’t want to send a message that we shouldn’t have a closeness to our users and customers when we’re doing the research.

But I’ve also seen a lot of practices that are done with the best of intentions that just get us to some crazy outcomes. And that really worries me, on two levels. It worries me because we do that and we don’t move our products forward in the way that we could, and it worries me because I think it reflects really poorly on research as a profession. I think most of us have seen situations where people have said, well, I did the research and nothing got better, so I’m not going to do it anymore, clearly it’s a waste of time, right. And almost always what’s happened there is that the way people have executed it has not helped them to see the things that they need to see or make the decisions that they need to make. So, it’s this really hard line to walk, to try to understand how to enable, but how to enable in a way that is actually enabling in a positive way and is not facilitating really problematic outcomes. That’s my conundrum, balancing that. On the one hand I feel really uncomfortable saying no research is better than bad research. But on the other hand, I’ve seen plenty of evidence that makes me feel like actually maybe it’s true.

Steve: So, there’s kind of a time horizon there: the bad research may lead to wrong decisions that impact the product, and then it harms the prospects of research going forward. I’m just reflecting back. Every researcher has heard that “well, we already knew that” response, which to me is part of what you’re talking about. It’s when research isn’t conducted and facilitated in a way that gives people those learning moments, where, as you said, you’re helping them see the thing that’s going to help them make the right decision. And that’s different than what methodology you use, or whether you’re asking leading questions. Maybe leading questions are part of it, because you confirm what you knew and so you can’t see something new, because you’re not surfacing it.

Leisa: Loads of it comes down to framing, right. Loads of it comes down to where you are in the organization. What are you focused on right now? What’s your remit? What’s your scope? What are you allowed to, or interested in, asking questions about? In a lot of cases this high volume of research comes from very downstream, very feature focused areas, right. So, if you’re working on a product that has got some more foundational issues that need to be addressed, but the vast majority of the work is happening at that very detailed feature level, how are you ever going to stop kind of circling the drain? You get stuck in this kind of local maximum. How are you ever going to take that big substantial step to really move your product forward if it’s nobody’s job, nobody’s priority, to do that? So, a lot of this is kind of structural. So many of the people who are conducting this research are so close to the machine of delivery, and shipping and shipping and shipping as quickly as possible, that they don’t have the opportunity to think, to look sideways and see what’s happening on either side of their feature, even when there are teams that are working on really similar things. They’re so heads down and feature driven. So, doing the least possible that you can to validate and then “ship to learn,” which is a whole other area of bad practice, I think, in many cases. It’s really limiting, and you can see organizations that spend a huge amount of time and effort doing this research, but it’s at such a micro level that they don’t see the problems that their customers are dying to tell them about, and they never ask about those problems, because the customers just answer the questions that they were asked. That’s kind of what bothers me. And so, it’s not all about practice, although in a lot of cases some of it is about practice. Like I think it’s amazing how few people can resist the temptation to say, hey, I’ve got this idea for a feature, how much would you like it? They can get seven out of the people they asked who said they loved my feature. That feels so definitive and so reliable, and that’s very desirable and hard to resist. So, yes, that happens.

But the bigger thing for me I think is where research is situated and the fact that you don’t have both that kind of big picture view as well as that micro view. We just have all of these fragments of microness and a huge amount of effort expended to do it. And I don’t feel like it’s helping us take those big steps forward.

Steve: But Leisa, all you need is a data repository so that you can surface those micro findings for the rest of the organization, right?

Leisa: You’re a troll Steve.

Steve: I am trolling you.

Leisa: You’re a troll, but again – kind of yes, but all of those micro things don’t necessarily collectively give you the full view. And a lot of those micro things, because of the way they’re being asked, actually give you information that’s useless. If all of your questioning is around validating a particular feature that you’ve already decided you think you want to do, and you go in asking about that, you never really ask the why. Like, who is actually using this? Why would they use this? What actual real problem are they solving with this, right? So, the problem becomes the feature, and then all of that research becomes very disposable because the feature shifts, and then nobody uses it as much as everybody thought they would, or they’re not as satisfied with it as they thought they would be. So, we just keep iterating and iterating and iterating on these tiny things. We move buttons around. We change buttons. We add more stuff in – surely that will fix it. But it’s because we haven’t taken that step back to actually understand the why. If you’re talking about those big user need level questions, the big whys, then you know what, you can put those things in a repository and you can build more and more detail into those, because those tend to be long lasting.

And then at the other end, just the basic interaction design type stuff that we learn through research. A lot of those things don’t change much either. But it’s that bit in the middle, that feature level stuff, is the most disposable and often the least useful and I feel like that’s where we spend a huge amount of our time.

Steve: Do you have any theories as to why, in large enterprise software technology companies, there is a focus on leaning heavily on that kind of feature validation research? What causes that?

Leisa: I think there are probably at least two things that I can think of that contribute to that. One is around what’s driving people. What gets people their promotions? What makes people look good in organizations? Shipping stuff – shipping stuff gets you good feedback when you’re going for your promotion. They want a list of the stuff that you’ve done, the stuff that you’ve shipped. In a lot of organizations just the fact that you’ve shipped it is enough. Nobody is actually following up to see whether or not it made a substantive difference in customers’ lives. So, I think that drive to ship is incredibly important. And our org structures as well, like the way that we divide teams up now, especially in organizations where you’ve got this kind of microservice platform environment. You can have teams who’ve got huge dependencies on each other and they really have to try to componentize their work quite a lot. So, you have all of these little micro teams where their customer’s experience is all of their work combined, but they all have different bosses with different KPIs or different OKRs or whatever the case may be. And I think that’s a problem. And then the other thing is this, like, build, measure, build – what is it? I’ve forgotten how you say it.

Steve: I’m the wrong person.

Leisa: Build/measure/learn.

Steve: Yeah, okay.

Leisa: The lean thing, right, which is this kind of idea that you can just build it and ship it and then learn. And that means that if it had been learn/build/measure/learn we would be in a different situation, right, because we would be doing some discovery up front and then we would have a lot of stuff that we already knew before we had to ship it out to the customers. But it’s not. It’s build – you have an idea, build/measure/learn. And then people are often not particularly fussy about thinking about the learn bit. So, we ship it, put some analytics on it, and we’ll put a feedback collector on it, and then we’ll learn.

What are you going to learn? Whatever. What does success look like? I don’t know. It’s kind of lazy, and we treat our customers like lab rats when we do that. And I feel like they can tell. There’s lots of stuff that gets shipped that shouldn’t get shipped, that we know shouldn’t get shipped. I’m not talking about Atlassian specifically. I’m talking in general. We all see it in our day to day digital lives that people ship stuff when we really should know better, but this build/measure/learn thing means that unless you can see it in the product analytics, unless you can see thousands of customers howling with anger and distress, it doesn’t count.

Steve: It reminds me of a client I worked with a few years ago where we had some pretty deep understanding of certain points of the interactions that were happening, and what the value proposition was, and where there was meaning. Some pretty rich stuff. And the client was great because they kept introducing us to different product teams to help apply what we had learned to really, really specific decisions that were going to be made. But the pattern we started to see was that they were interested in setting up experiments, not improving the product. And in fact, this is a product that has an annual use cycle, so the time horizon for actually making changes was really far out, and some of the stuff was not rocket science. Like it was clear for this decision – A versus B versus C – the research said very clearly, this is what people care about, or what’s going to have impact, or what’s going to make them feel secure in these decisions. They were like, great, we can conduct an experiment. We have three weeks to set up the experiment and then we’ll get some data. I just hadn’t really encountered that mindset before. I don’t know if they were literally build/measure/learn, if that was their philosophy, but I wasn’t able to move them to the way that I wanted it done. I’d really just encountered it for the first time and it seemed like there was a missed opportunity there. We knew how to improve the product, and I wasn’t against it – they are conducting experiments and measuring and learning, awesome. That’s research. But acting on what you’ve learned seems like why you’re doing the research in the first place.

Leisa: It feels as though we have this great toolset that we could be using, right? We’ve got going out and doing your kind of ethnography, contextual type stuff. Then right at the other end we’ve got the product analytics and running experiments, and a whole bunch of stuff in between. And it really feels to me as though organizations tend to just get excited about one thing and go hard on that one thing. And growth – doing the experiments is a big thing right now. I see loads of situations where we look at data and we go, oh look, the graph is going in this direction. Or the graph is not going in that direction. Why? There’s so much guessing behind all of that, right? And if it doesn’t go quite right, well then let’s have another guess, let’s have another guess, let’s have another guess. And, like you say, there’s so much stuff that we probably already know if we could connect two parts of the organization together to talk to each other more effectively. Or there are methods that we could use to find out the why pretty quickly, without having to just put another experiment, and another experiment, onto our customers, our users. But the knowledge of this toolset, and the ability to choose the right tool and apply it, and to apply these approaches in combination, seems to be where the challenge is.

Steve: What’s the role of design in this? In terms of here’s the thing that we know and here’s the thing we want to make. We haven’t talked about designers. Ideally for me, I sort of hope designers are the instruments of making that translation. Without design then you can sort of test implementations, but you can’t synthesize and make anything new necessarily.

Leisa: Well, I mean, yeah. Theoretically design has a really important role to play here, because design hopefully understands the users in a design process better than anybody else, and understands the opportunities for iteration and the levels of fidelity for exploring problems in a way that nobody else on the team does. And lots of designers do know that. But the time pressure is enormous, especially in these kinds of larger tech companies, where a lot of the time designers are also concerned about being bottlenecks. They have to feed the engineering machine. And it can be really difficult for them to have the conversations about all of the work that we should be doing before we ship something. So, I feel as though they have a lot of pressure on them, a lot of time pressure. They’re being pressured to really contract the amount of effort that they put in before we ship something. And there’s this huge desire amongst product teams and their bosses to just get something shipped, get something live and then we’ll learn, and I think design really struggles with that. It can be really difficult in those kinds of environments to be the person who stands up and says we need to slow down, we need to do less and do it better. So, I have empathy for designers and their inability to necessarily shift the system, because of this big pressure that’s being put on them just to keep the engineers coding.

Steve: Right. If you say you want more time to work on something that’s – what are the risks to you for doing that?

Leisa: Exactly, exactly. On a personal level everyone wants to look good in front of their boss. Everyone would like a pay raise and a promotion and the easiest way to get those things is to do what you’re told and ship the stuff. Keep everyone busy, produce the shiny stuff, let it get coded up, let it go live, learn, carry on. That’s thinking short term. That’s how to have a happy life. Is that how you’re going to fundamentally improve the product or service that you’re working on? Probably not. But it takes a lot of bravery to try to change that, to try to stop this crazy train that’s out of control. Throw yourself in front of the bus. All that kind of stuff. Like you said, it’s hard, especially when there are loads of people all around you who are very happy to keep doing it the way that it’s being done right now. So, yeah. And I think that’s research’s role, that’s design’s role. I would like that to be PM’s role as well. And most engineers are – a lot of engineers are very, very, very interested in making sure that the work that they’re doing is actually solving real problems and delivering real value as well.

Steve: So, as you point to the individuals, there are a lot of shared objectives. But if you point to the system, it’s the rewards system and the incentive system, but there’s also just what day to day looks like, the operations of the system of producing technology products.

Leisa: I think there’s also something about what people like to see, what gets people excited, right? Graphs pointing upwards in the right direction are really exciting. Certain outcomes are really exciting. Ambiguous outcomes, really not exciting at all. Things that go fast, very exciting. Things that go slow, not exciting. So, I think there are all of these things that this collection of humans that forms an organization has a strong bias towards, that we get really excited about, that don’t necessarily help us in the long run. You see lots of people who get really excited about the graph. Very few people dig in behind the graph to the data. Where did this data come from? How much can I actually believe it? People are so willing to just accept experiment findings without actually digging in behind them. And a lot of the time, when you do dig in behind it, you go, huh, that doesn’t look very reliable at all. So, we love to tell ourselves that we’re data informed or data driven, but a huge number of the people who are excited about that don’t have very much data literacy to be able to check and make sure that the stuff they’re getting excited about is actually reliable.

Steve: So, how could someone improve their data literacy?

Leisa: I think there’s a lot of work that we need to do to make sure that we actually understand how experiments should be structured, and to be more curious about where this data is coming from and what the different ways are that we could get tricked by it, right? There are tons of books and papers and all kinds of things on the subject. But actually, when you go looking for it and you’re coming at it with a critical mind, rather than with a mind that gets excited by graphs, a lot of it is pretty logical. When you think about surveys, right: who actually answered these questions? Instead of just going, hey, there’s an answer that supports my argument, I’m just going to grab that, dig behind it and ask, are the people who are being surveyed actually the same people that we’re talking about when we’re making this decision? Is there something about the nature of that group of people that is going to bias their response? It’s remarkable to me how few people actually check the source to make sure that it’s worthy of relying on. A lot of people are really keen to just grab whatever data they can get that supports their argument. I think this is another one of those human inclinations that we have that lead us towards kind of bad behaviors.

Steve: I can imagine that with this research education effort that you’re doing, the people who participate are going to learn how to make better choices in the research that they then run: some practical skills, some planning, some framing, as you talked about. But it seems like that literacy is a likely side effect of that, or maybe it’s not the side effect, maybe it’s the effect: how to be better consumers of research. Once you understand how the sausage is made a little bit, then you understand, oh yeah, that’s a biased question, or that’s bad sampling, and there’s a choice to make in all of these, and we have to question those choices to understand the research that’s presented to us. I hadn’t really thought of training people to do research as a way to help them in their consumption of or critical thinking around research.

Leisa: Something else that we’ve really come to realize recently is that, because of all of the other pressures that the people who are making these decisions have to deal with, we think we’ll do much better if we train the entire team together instead of just training the people who are tasked with doing the research. Because something that we’ve observed is that you can train people and they go, this is great – I really see how what I was doing before was creating these kind of not great outcomes and what I need to do to do better. And then you’ll see them, not that long later, doing exactly the opposite of what we thought we had agreed we were going to do going forward. And we’re like, well, what’s happening? These are smart, good people who have been given the information and are still doing crazy stuff. What’s going on with that? And you realize they go back into their context where everyone is just trying to drive to get stuff done faster, faster, faster, faster, and you have to plan to do good research. The easiest research to do really, really quickly is the crappy research. If you want to do good research you do have to do some planning. So, you need to get into a proactive mindset for it rather than a reactive mindset, and you need the entire team to be able to do that. So, one of the things that we’re looking to do moving forward is not just to train the designers and the PMs, but actually to train a ton of people all around them in the teams, to help them understand that the way that you ask the questions, and who you ask them of, and where – all the different things that could impact the reliability of your research – require planning and thinking about in advance. We hope that means that the whole team will understand the importance of taking the time to do it, and it won’t just be one or two people in a large team fighting to do the right thing and being pressured by everybody else. So, the education aspect, I suspect, is super important and it goes way beyond just helping people who are doing research to do better research. It goes to helping the whole organization to understand the impact and risk that goes with doing a lot of research activity in the wrong way or the wrong place.

Steve: It’s just fascinating to me, and I think a big challenge for all of us, that lots of researchers are involved in education of one form or another – workshops, or a big program like you’ve been building. But most of us are not trained educators. We don’t have a pedagogical, theoretical background, and this is about communicating information. It’s about influence and changing minds. And it just seems like a lot of researchers, from an individual researcher on a product team to someone like you who’s looking at the whole organization, we’re sort of experimenting and trying to build tools and processes that help people learn. And learning is not just imparting the information, but reframing and empowering, these big words where – I wish you had a PhD in education. I wish I had that. Or that I had been a college professor for a number of years, or whatever it would take, however I would get that level of insight. Personally, I have none of that. So, we all talk about these kinds of things. I think research gives you some skill in prototyping and iterating and measuring in order to make the kinds of changes in the implementation that you’re making. I don’t know about you. I feel like I am amateurish, I guess, in terms of my educational theory.

Leisa: Absolutely. And talking to the team who are doing this work, it’s really, really hard work to come up with the best way to share this knowledge with people in your organizational context, in a way that is always time constrained. At Atlassian we have a long history of doing internal training. We have this thing called bootcamps. We’ve got almost like a voluntary system where people come in and run these bootcamps, and you can go and learn about the finer details of advanced Jira administration, or you can come in and learn about how to do a customer interview. But the timeframe around that was like – a two-hour bootcamp was a long bootcamp. Most bootcamps are like an hour. And so, when we started thinking about this we were like, we’re going to need at least a day, maybe two days. And everyone was like, nobody will turn up. But fortunately we do day long sessions and people have been turning up. So, that’s great. But yeah, it’s a huge effort to try to come up with something that works well. And every time we do these courses the trainers and educators go away and think about what worked and how we can do it better. So, we iterate every single time. So, yeah, it’s a huge amount of effort. I think in larger organizations too, there are other people who are also tasked with learning type stuff. So, we have a couple of different teams at Atlassian who are involved in helping educate within the organization. So, we’re building a lot more relationships with different parts of the org than we have before, in order to try to get some support and even just infrastructure. There’s a whole lot of logistics behind setting this up if you want to do it at any kind of scale. It’s great to have that support internally and it’s really good to start to build these relationships across the organization in different ways. But yeah, I think that we certainly underestimated the challenge of just designing what education looks like and how to make it run well. It’s a massive, massive effort.

Steve: It’s not the material – you guys probably have at hand a pretty good set of here’s how to go do this. If you brought an intern onto your team you could probably get them up to speed with material that you have, but you’re trying to change a culture. And I think the advantage that I have, the context that I have as an external educator, is that people are opting in, and that I get to go by the assumption that they want to be there, and I don’t have access to what the outcomes are. I might through someone that’s my gatekeeper, but it’s kind of on them. I have the responsibility for the training, but not the responsibility for the outcomes, which is what you all are kind of working with. So, I envy you and I don’t envy you.

Leisa: Well, I think I – I like it because, you know, going back to the build/measure/learn thing, right. Again, we did a learn before we did our build and measure because we’re researchers and that’s what we do, but it is – it’s super interesting to see the behaviors of people who have come through the training and see whether they shift or not. That gives us – we get feedback from people after the course who tell us whether they thought it was useful or not useful and what they liked and didn’t like, but then we actually get to observe because through our ops team, they come through us to do their recruiting and get their incentives. So, we can keep a little bit more of an eye on what activity is coming out and see if there is any sort of shifting in that as well. And it’s not just courses as well. It’s thinking about like what’s the overall ecosystem? What are other things that we can be doing where we are sort of reaching out to help support people on this journey as well. Before we did educating, we had advisors who kind of made themselves available to go and sort of support teams who recognized that they might have a need for some help with getting their research right. So, that was kind of our first attempt. But we had to pivot into educating because of the time factor. We would go in and give advice and everybody would go it’s great advice. We’d totally love to do that, but I have to get this done by next Thursday. So, I’m going to ignore your advice for now and carry on with what I was going to do anyway. And that was pretty frustrating. So, we felt like we have to invest in trying to get ahead of the curve a little bit – try to get ahead. Try to not necessarily influence the stuff that has to happen by next Thursday, but try to encourage teams to start being proactive and planning to do research instead of doing this really reactive work instead. Or as well as the reactive work perhaps. I don’t know.

Steve: Have the right mix.

Leisa: Yeah.

Steve: I wonder – and this is probably not going to turn out to be true, but I wonder about being proactive and planning versus time. The time pressure to me is about, oh, we only have so many hours to spend on this, or that calendar-wise we need to be there next Thursday. But being proactive says, well, if we started thinking about it three weeks ago we’d be ready on Thursday to do it “the right way.” I’m wondering, can we tease apart the pressures? One, the lack of proactivity, the very short-term kind of thinking, is different than the hours required. Is that true?

Leisa: I think so. Because I think that even when we do the short-term stuff we still spend quite a lot of time and effort on it. And the planning in advance, the more proactive work, doesn’t necessarily, I don’t think, entail more actual work. It just might be that you put in your recruitment request a couple of weeks beforehand so that we can try to make sure that the people that you meet are the right kinds of people. Whereas if you have to have it done in three or four days’ time, then your ability to be careful and selective in terms of who you recruit to participate in the research is very much limited. When we have those kinds of time constraints you see everybody just going to unmoderated usability testing and their panel, and that introduces a whole lot of problems in terms of what you’re able to learn and how reliable that might be. I was thinking for a second that, theoretically, when you do your unmoderated usability testing you should still be watching the videos, right. So, that should take as much time as watching a handful of carefully recruited people doing these sessions in a facilitated way. But the reality is, I think, that most people don’t watch the videos, which speaks to quality.

Steve: Here we are back again. It seems like, for the profession overall, maybe one way to start framing a way around this time pressure thing is to decouple proactiveness from hours burned. It’s going to be a similar number of hours burned, but if you start earlier the quality goes up. I had never really thought about those two things as being separate.

Leisa: Yeah. And I don’t think people do. I mean, I sort of said the word quality, but I’m trying not to say quality anymore. I’m trying to talk about meaningfulness more now, I think, because whenever you talk about quality the pushback that you get is, well, it doesn’t need to be perfect, it doesn’t need to be academic, I just need enough data to be able to make a decision. And I understand that. But then I see the data upon which they’re making the decision and that makes me worry, right? We want to get out of this discussion of quality levels and more into reliability levels, like how reliable does it need to be? Because surely there has to be a bar of reliability that you have to meet before you feel like you’ve got the information that you need to make a decision. But I see loads of people making decisions off the back of really dreadful and misleading data, and that’s what worries me. And they feel confident with that data – going back to the data literacy problem. They really haven’t dug into why they might be given really misleading answers as a result of the who and the how and the why, all those decisions that are made around how to do the research, most of which have been driven by time constraints.

Steve: Okay, that’s the end of part one of my interview with Leisa. What a cliffhanger! There’s more to come from Leisa in the next episode of Dollars to Donuts. Meanwhile, subscribe to the podcast at portigal dot com slash podcast, or go to your favorite podcasting tool like Apple Podcasts, Stitcher or Spotify, among others. The website has transcripts, show notes, and the complete set of episodes. Follow the podcast on Twitter, and buy my books Interviewing Users and Doorbells, Danger, and Dead Batteries from Rosenfeld Media or Amazon. Thanks to Bruce Todd for the Dollars to Donuts theme music.

The post 19. Leisa Reichelt of Atlassian (Part 1) first appeared on Portigal Consulting.