Content provided by Datadog. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Datadog or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Complex System Security: a CISO Perspective with Emilio Escobar


In this episode of AppSec Builders, I'm joined by Datadog CISO Emilio Escobar. Emilio's extensive experience at Hulu and Sony Interactive, and his contributions to Ettercap, all provide a unique perspective on team maturity, managing complex systems across the enterprise, leadership insights, security ownership, and becoming the CISO of a public company.

Follow Emilio on Twitter and LinkedIn at the links below:

https://twitter.com/eaescob?lang=en

https://www.linkedin.com/in/emilioesc/

Resources

Ettercap:

Book Recs:

Episode 3 Transcript

Jb: [00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by JB Aviat.

Jb: [00:00:14] Welcome to the third episode of AppSec Builders. Today I'm proud to welcome Emilio Escobar, who's CISO at Datadog. Welcome and good morning, Emilio.

Emilio: [00:00:24] Good morning. Excited to be here. Thanks for having me.

Jb: [00:00:24] Thanks a lot for joining us. So you recently joined Datadog as CISO, but you have broad experience as a security leader: at Datadog today, but before that at Hulu and Sony. And I think you are also a maintainer of a famous tool for security geeks, which is Ettercap, right?

Emilio: [00:00:48] Yeah, that is correct. I'm one of the three main maintainers of it, and we've been doing it for about nine years already.

Jb: [00:00:56] Do you want to share a bit about what Ettercap is? I used it regularly in pentests. That's an amazing tool.

Emilio: [00:01:02] Sure. Ettercap has been around for a long, long time, I think since 2006, and it had slowly died down around maybe 2008, 2009. But it is a man-in-the-middle attack tool. It's leveraged by a lot of pentesters for doing man-in-the-middle attacks on their customers and trying to obtain credentials for services like SSH, Telnet and what have you. How I got started with it was that when I worked at Accuvant Labs as a pentester, one of my colleagues was using it, or trying to use it, for an engagement he was working on. And he was running into some bugs. He reached out to me and asked me if I knew how to code in C. I said yes, and he's like, I'll give you five hundred dollars for each of these two bugs that I'm running into. So looking at the code, I was able to fix the issues he was running into. I never got that thousand dollars. But what that started was a conversation between him and me (this is Eric Milam, who I believe is at BlackBerry now) about, hey, should we actually resume support for Ettercap? We wanted it to work well on macOS. We wanted IPv6 support. We wanted all these new features it wasn't supporting. And we reached out to ALoR and NaGA, the original authors, and they were gracious enough to allow us to run with it as long as we kept it open source. Right. And that was the commitment we gave them. So fast forward nine years: we've added a few versions. Now I'm less involved in the coding because I just don't have the time for it, but I'm surrounded by two people who are active. So feel free to check it out on GitHub and submit pull requests or issues, or use it and give us feedback.
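
Ettercap itself automates the ARP-spoofing and protocol-dissection layers; conceptually, what any active man-in-the-middle does once it sits on the path is relay traffic between two endpoints while observing (or altering) it. A minimal sketch of that relay pattern in Python — purely illustrative, not Ettercap code:

```python
import socket
import threading

def pipe(src, dst, seen):
    """Forward bytes one way, recording every chunk. The `seen` list is
    where a man-in-the-middle could observe or tamper with the traffic."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        seen.append(data)
        dst.sendall(data)
    try:
        dst.shutdown(socket.SHUT_WR)  # propagate end-of-stream downstream
    except OSError:
        pass

def run_relay(listen_port, target, seen):
    """Accept one client on listen_port and transparently relay it to the
    real server at target=(host, port), logging both directions."""
    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", listen_port))
    srv.listen(1)
    client, _ = srv.accept()
    upstream = socket.create_connection(target)
    a = threading.Thread(target=pipe, args=(client, upstream, seen))
    b = threading.Thread(target=pipe, args=(upstream, client, seen))
    a.start(); b.start()
    a.join(); b.join()
    client.close(); upstream.close(); srv.close()
```

In a real attack the "redirection to the relay" step is what ARP spoofing provides; here the client simply connects to the relay's port.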

Jb: [00:02:51] Amazing. Yes, great tool, I used it a lot. So after being a pentester, you went to Sony and Hulu. So, two companies in the entertainment world.

Emilio: [00:03:34] Yeah. So I actually met PlayStation during my consulting days, right, for some engagements that we did with them. And a few years later they reached out to me and said, hey, we're looking to grow the team, we're looking to grow the application and product security side of the house. So I joined as employee number two for that discipline. And we were able to grow it to a pretty significant team. We were able to build capabilities also out of the Tokyo office and out of Europe. So it was a pretty good program; the team is still growing, still active. And it was a lot of fun. But it was the first time that I was on the receiving end of attacks from groups like Lizard Squad and Anonymous. Right. PlayStation is a big target, and things like fraud and fame were a lot of the factors that we had to solve for. So a really, really interesting set of challenges that gaming faces, right. Uptime is everything. And we have a very opinionated customer base. Right. Like, gamers care and they will let you know pretty quickly, I guess.

Jb: [00:04:38] And yes, Sony has been through a couple of important leaks. Were you in the company when that happened? It must be insane to live through that from the inside.

Emilio: [00:04:46] I wasn't part of PlayStation during their big outage; I supported them as a consultant and joined as an employee after. And Sony Pictures, they're a separate entity, right. So we collaborate, but for something like what happened to them, it was a thanks-but-no-thanks kind of approach from them. Right. And rightfully so. And I think they had the right support from the FBI and everyone else involved in their investigation. So we only supported them on building a discipline and a practice, but otherwise it was: step out of the way and let us do what we do, because they have a pretty good team there as well.

Jb: [00:05:16] Yes, OK, interesting. And so then it was Hulu. When we first met, Emilio, you were working at Hulu, and I guess that there you had very distributed architectures, right? Would you mind sharing a bit about the context at Hulu?

Emilio: [00:05:32] Yeah, certainly, so, yes. I joined Hulu to grow and build a security practice there, with a very heavy emphasis on product development, so SDLC security. How do we enable velocity? Time to market is everything, you know, obviously, for a streaming platform. When I joined Hulu, we were working on the live TV product, so uptime became even more of a concern. Right. With video-on-demand, if you can't watch a video now, you might try in an hour. But live TV, if it's the Super Bowl or the World Cup or what have you, you want to watch it when it happens and not sometime later, unless you purposely record it because you can't watch it when it's live. So uptime was a big concern. So joining Hulu, I discovered the complexity of the architecture, right. It was a complete microservice environment. At PlayStation they were working towards microservices and segmenting things into smaller types of workloads; Hulu had that built. So dealing with that complexity was something I wasn't faced with at PlayStation, and it just required a different approach to security, right. Everything was automated. Hulu had a platform-as-a-service framework built by Hulu, which was really interesting, where developers, through git push, can push to production and the containers will get built out and everything. So I thought all the right things were in place. We just had to get security in them to make sure that things were done appropriately. But we had to rethink the whole legacy approach to security, of being a gate, doing code reviews, and, you know, how do you do static analysis? How do you do dependency scans and all those things? Because, you know, a developer can git push any time, and they were doing over three hundred deploys a day to production. Right. So it was a lot to catch up to.
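
To make the "security without being a gate" idea concrete: one common pattern is a small check that runs automatically in CI on every push and fails the build when a pinned dependency matches a known advisory. A hedged sketch — the package name, version, and advisory ID below are invented for illustration, and a real pipeline would query an actual vulnerability database or scanner rather than a hardcoded table:

```python
# Hypothetical deny-list of (package, pinned version) -> advisory.
# In practice this would come from a vulnerability feed, not source code.
VULNERABLE = {
    ("examplelib", "1.0.2"): "CVE-XXXX-YYYY: unsafe deserialization",
}

def check_requirements(lines):
    """Scan requirements-style lines (name==version) and return findings
    for any pin that matches the deny-list. Non-pinned or comment lines
    are ignored, mirroring how a CI gate would skip unparseable entries."""
    findings = []
    for line in lines:
        line = line.strip()
        if not line or line.startswith("#") or "==" not in line:
            continue
        name, version = line.split("==", 1)
        issue = VULNERABLE.get((name.lower(), version))
        if issue:
            findings.append(f"{name}=={version}: {issue}")
    return findings
```

A CI job would call this on the repo's requirements file and exit non-zero when `findings` is non-empty, so the developer gets feedback at push time rather than at a review gate.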

Jb: [00:07:14] And could you give us some numbers so we can see the scale of that? Like how many developers, applications, repositories, if you have that in mind.

Emilio: [00:07:23] Yeah, yeah. If I remember correctly, and I'm sure it has changed since, I think that towards the end of my Hulu tenure we had over 600 developers, and I believe the number was around twenty-three hundred microservices. Now, whether that's the right number or not, that's a separate conversation. Right. But that was what we were dealing with, and languages and frameworks were all over the place, right. We wanted developers to be creative and effective in whatever language they felt the most comfortable with. So we had to support JavaScript, Python, Golang; I believe we had some Scala and Node.js and what have you. So it wasn't a centrally standardized environment where everyone was coding Java and using the Spring framework and all those things where you can get a little bit more commodity out of it; we had to scramble a little bit.

Jb: [00:08:12] So, I understand, and as a CTO, it's a tough balance to give a lot of autonomy to people, but also you need to keep a certain degree of consistency in your deployments.

Jb: [00:08:23] So I'm curious to understand: OK, a lot of different languages, but I guess this also means a lot of different frameworks, a lot of different coding styles and practices, right? That's a nightmare for a security owner.

Emilio: [00:08:37] Yes, it is. Yeah, so I think, you know, we had to rely on the developers being strong at what they're good at, right, at coding, so we had to leverage that partnership. You know, all these frameworks have obviously different attack surfaces, right. So we had to find ways to put security in place in a manner that wasn't disruptive, that didn't impact production, that was easily adoptable, right. So, starting with the "why": making security the default. Right. I always tell teams that if you have a developer choosing between defaults and security, the default is always going to win. So why not make security the default? So we had to chip away at that mindset and approach, right. We had to leverage as much of CI/CD as we could, do things as infrastructure as code, leverage security controls that you can load through a library or through infrastructure as code or some sort of automation. So, a lot of self-serve: we wanted developers and teams to serve themselves security, and we had to build paved roads to have that enabled for them. But then on the back end, to your point of how do you maintain some level of consistency and priority towards quality and security: we made big strides and efforts into treating security as a quality entity.

Emilio: [00:09:53] Right. A lot of times you see security and quality being two separate worlds, and they approach, using different processes and different language, what I consider to be the same problem, right. If I'm a consumer of a service, whether it's a functional bug or a security bug, it still impacts my experience, right. So I united them, to the point that we were reporting security issues to the executives and stakeholders as part of the quality conversation, right. And we used the same language, as in escaped defects, recurrent defects, and tracked those, because we wanted to leverage the already-established interruption process QA had with developers for security concerns as well. And that got us a lot of wins, where we're not just saying, hey, we want to do this because of security; it's, here's a quality element to it that everyone cares about. As a developer, you don't want to be the reason why there's a bug in production that people complain about on Reddit or whatever. You have pride in the work that you do. So I think leveraging that helped us a lot with security.

Jb: [00:10:55] Super interesting. But I guess when you have a bug, it could be impacting the customer experience, like, I don't know, they can't start a movie, or it could be a security issue. In the end you want both to be fixed, but the available developer time is still limited. How did you prioritize security versus quality? I guess you still have to make that call somehow?

Emilio: [00:11:17] Right, yeah, and that's exactly why I thought combining those two problems into the same conversation helped, because then we can actually have the trade-off conversations in one forum, versus having silos for security or quality issues and sort of not being able to combine the two. So, yes, we had to be very pragmatic: if it's a security issue, how easy is it to exploit? How likely is it to be exploited? What's the impact of exploitation? Right. And Hulu being very strict about the quality of the product, even if it was a security issue that would lead to a bad experience for a consumer, whether they couldn't start a movie or a show, they couldn't save something to DVR, or whatever core functionality the product has, we would still treat it as equally important as a functional issue, right. So how the bug manifests itself became less important than the impact of the bug on consumers, right. So that put, again, that put security and quality in the same conversation, and then we would have the trade-off talks. If it was a functional bug that was being seen by 68 percent of the consumer base and a security bug that was only being presented to 3 percent of the consumer base, then that was a no-brainer, right. We would choose the functional bug over the security bug. So that's where pragmatism comes into play.
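
The trade-off Emilio describes, ranking functional and security bugs in one queue, can be modeled as a single score that blends reach, severity, and (for security bugs) likelihood of exploitation. A toy illustration — the weights, field names, and numbers are invented, not Hulu's actual model:

```python
from dataclasses import dataclass

@dataclass
class Defect:
    """One entry in a unified quality/security backlog."""
    name: str
    pct_users_affected: float   # fraction of the consumer base hit, 0.0-1.0
    severity: float             # 1 (cosmetic) .. 5 (critical impact)
    exploitability: float = 1.0 # security bugs: likelihood of exploitation;
                                # functional bugs always "trigger", so 1.0

    def score(self) -> float:
        # Reach x severity x likelihood: how the bug manifests matters
        # less than its expected impact on consumers.
        return self.pct_users_affected * self.severity * self.exploitability

# The 68% vs 3% example from the conversation, with invented severities:
functional = Defect("playback fails to start", 0.68, 4)
security = Defect("info leak on edge endpoint", 0.03, 4, exploitability=0.5)
ranked = sorted([functional, security], key=lambda d: d.score(), reverse=True)
```

With these numbers the functional bug scores 2.72 against 0.06 for the security bug, reproducing the "no-brainer" ordering in one shared forum.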

Jb: [00:12:36] Right, makes sense. And so with such a large distributed architecture, you have a lot of simple small pieces, but the overall complexity is insane, I guess. How did you manage to cope with that? Did anyone have a holistic vision of the system? How did you, like, enumerate two thousand services?

Emilio: [00:12:57] Yeah, yeah. It was definitely a lot of tribal knowledge, for sure. And that was a problem, right. Because, well, I think one thing is also to admit to the fact that security will never have the same level of understanding and visibility as the developers have of their own software and services. So this goes back to the mindset of why security is there, right. Security is there to help developers write secure code and secure and stable services. But if you spend energy on security being able to see and understand one hundred percent of what's there, then I think you're burning a lot of candles on something that maybe is not going to drive a lot of results. It's good to have an understanding, but is it good to have one hundred percent understanding? I don't think so, because you can rely on the developer community of your company to give you that understanding and empower them to make those decisions; just measure what security looks like for them. Right. So one example is around abuse of services. One of the things that we did was empower development teams to be able to block what they thought was malicious traffic. And the reason for that was, the security team was getting paged, let's say, at 4:00 in the morning because some IPs were hitting a few services pretty hard. Right. And the question that we were always getting from developers is: is this a security concern or not? Is this attack traffic or not? And it always put us in a weird position, because we don't necessarily know how the service gets called. Like, yes, we have an idea, but we don't know it better than the developers who built that service, right?

Emilio: [00:14:32] So we would always turn the question around to them and say, hey, based on the use cases that you've built into the service, and what you see for, what, P99 or normal patterns look like for you, what do you think? Right. And the answer would always come back: yeah, this looks like they're trying something weird that is not part of the normal flow. So the question then was: you block them, versus we block them for you. So we actually built those capabilities for them. And one of the team members on the Hulu security team built a service, because now we had to deal with the erroneous blocking of somebody who is a human doing something that was just a mistake. So my team built a service called "IsitblockedbytheWAF.hulu.com" that customer service could access internally and say, hey, this person is complaining; here's a description of what they were trying to do. Are they actually being blocked? And they can actually unblock from there. So we enabled the unblocking part as well. But ultimately, what that led to was teams making more informed decisions about the things that they fully own, and therefore reducing the need for security to know one hundred percent of everything that's happening, because that's just unrealistic for a dynamic environment like the microservice cloud environment that Hulu is, and so is Datadog. So we're not here to cover all the ground. We're here to make sure that people can cover their own ground.
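
The "is it blocked by the WAF" service can be imagined as a thin lookup-and-unblock layer over the block list, with enough audit context for customer service to self-serve. The real Hulu service's design is not public; this in-memory sketch, with invented names and fields, only illustrates the shape of the API:

```python
import time

class BlockList:
    """In-memory sketch of a self-serve WAF block list: teams block with a
    reason, support staff can look up why an IP is blocked and undo it."""

    def __init__(self):
        self._blocked = {}  # ip -> (reason, team, unix timestamp)

    def block(self, ip, reason, team):
        """Record a block with who did it and why (the audit trail)."""
        self._blocked[ip] = (reason, team, time.time())

    def is_blocked(self, ip):
        return ip in self._blocked

    def why(self, ip):
        """Human-readable explanation for support, or None if not blocked."""
        entry = self._blocked.get(ip)
        if entry is None:
            return None
        reason, team, _ = entry
        return f"{reason} (blocked by {team})"

    def unblock(self, ip):
        """The self-serve undo path for erroneous blocks; idempotent."""
        self._blocked.pop(ip, None)
```

A production version would sit behind an internal web UI and push changes to the actual WAF, but the ownership model is the same: the team that blocks is also the team (or support) that can see and reverse the decision.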

Jb: [00:15:53] Super interesting! And I guess as security teams, we are always looking to get a stronger connection to the developers and to the other teams. So giving them the power and ownership, choosing who to block, is amazing in that sense. But as I see it, I guess the teams were already owning the operations of the service, the availability, the performance, etc., right?

Emilio: [00:16:15] Yes.

Jb: [00:16:16] So you already need a pretty distributed model to make that work?

Emilio: [00:16:19] Yes, absolutely. That only works if your company has the "if you build it, you own it" type of mindset. Right. If the developers are just there to write code and they push it, and some other team is then responsible for the operational aspects of the service and uptime, then again, you're just creating silos of knowledge. I don't see how a developer can be a successful software engineer if the performance aspects of whatever that developer is working on are sort of...

  continue reading

7 episodes

Artwork
iconShare
 
Manage episode 280187781 series 2805034
Content provided by Datadog. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Datadog or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

In this episode of AppSec Builders, I'm joined by DataDog CISO, Emilio Escobar. Emilio's extensive experience at Hulu and Sony Interactive and his contributions to Ettercap all provide a unique perspective on team maturity, managing complex systems across enterprise, leadership insights, security ownership, and becoming the CISO of a public company.

Follow Emilio on Twitter and Linkedin at the below links:

https://twitter.com/eaescob?lang=en

https://www.linkedin.com/in/emilioesc/

Resources

Ettercap:

Book Recs:

Episode 3 Transcript

Jb: [00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.

Jb: [00:00:14] Welcome to the third episode of AppSec Builders today I'm proud to receive Emilio Escobar, who's CISO at DataDog. Welcome and good morning, Emilio.

Emilio: [00:00:24] Good morning. Excited to be here. Thanks for having me.

Jb: [00:00:24] Thanks lot for joining us. So you recently joined DataDog as a CISO, but you have a broad experience as a security leader, at DataDog today. But before that, Hulu, Sony, and I think you are also the maintainer of a famous tool for security geeks like this, which is Ettercap, right?

Emilio: [00:00:48] Yeah, that is correct. I'm one of the three main maintainers of it, and we've been doing it for about nine years already.

Jb: [00:00:56] Do you want to share a bit what Ettercap is about? I used it regularly into pentests'. That's an amazing tool.

Emilio: [00:01:02] Sure. Ettercap has been around for a long, long time, I think, since 2006, and it had slowly died down in around like maybe two thousand eight, two thousand nine. But it is a man in the middle attack tool. It's leveraged by a lot of pentesters for doing man in the middle attack to their customers and trying to obtain credentials for for services like SSH Telnet and what have you. How I got started with it was that when I worked at Accuvant Labs, I was a pentester, one of my colleagues was using it or trying to use it for an engagement that he was working on. And he was running into some, some bugs. And he reached out to me and asked me if I knew how to code in C. I said yes. And he's like, I'll give you five hundred dollars for if you solve these two for each of these two bugs that, that I'm running into. So looking at the code, I was able to fix the issues that he was running into. I never got that thousand dollars back. But what that started was the conversation between him and I. This is Eric Meilin, who I believe is that BlackBerry now about like, hey, should we actually resume the support for Ettercap? We wanted it to work well in MacOS. We wanted IPv6 support. We wanted all these new features that it wasn't supporting. And we reach out to ALoR and NaGA the original authors and they were gracious enough to allow us to to run with it as long as we kept it open source. Right. And that was the commitment that we gave them. So fast forward nine years. We've we've added a few versions. Now, I'm less involved in the coding because I really don't just don't have the time for it, but surrounded by two people who are active. So feel free to check it out on GitHub and submit pull requests, issues or use it and give us feedback.

Jb: [00:02:51] Amazing. Yes great tool, I used it a lot. So after being a pentester, you went to Sony, Hulu. So two companies in the entertainment world.

Emilio: [00:03:34] Yeah. Yeah. So I actually met PlayStation during my consulting days. Right. For some engagements that we did with them and,and a few years later they reach out to me and said, hey, we're looking for to grow the team, we're looking to grow the application product security side of the house. So I joined as employee number two for, for that discipline. And we were able to grow it to a pretty significant team. We were able to build capabilities also out of the Tokyo office out of Europe. So it was it was pretty good program. The team is still growing, is still active. And it was a lot of fun. It was. But it was the first time that I was on the receiving end of attacks from groups like Lizard Squad Anonymous. Right. So PlayStation is a big target and things like fraud and fame and fraud and all those things were a lot of the factors that we had to go sell for. So really, really interesting set of challenges like gaming faces right up. Time is everything. And we have a very opinionated customer base. Right. Like gamers care and they will let you know pretty quickly, I guess.

Jb: [00:04:38] And yes, Sony has been in a couple of important leaks were you in the company when that arrived, it must be insane to live that from the inside.

Emilio: [00:04:46] I wasn't part of PlayStation during their big outage. I supported them as a consultant. I joined after as an employee and for Sony Pictures, theyre a separate entity, right. So we collaborate, but for something like what happened to them, it's thanks but no thanks kind of approach from them. Right. And rightfully so. And I think they had the right support from the FBI and everyone else involved in their investigation. So we only supported from building a discipline and a practice, but not. Step out of the way and let us do what we do, because they have a pretty good team there as well.

Jb: [00:05:16] Yes, OK, interesting. And so then it was Hulu when we first Emilio you were looking at Hulu and I guess that there you had like very distributed architectures. Right. Would you mind sharing a bit about the context at Hulu?

Emilio: [00:05:32] Yeah, certainly so, yes. I joined Hulu to grow and build a security practice there and with a very heavy emphasis on product development. So SDLC security. How do we enable velocity? Time to market is everything, you know, obviously for a streaming platform. When I joined Hulu, we were working on the live TV product, so uptime became even more of a concern. Right. Video-On-Demand, if you can watch a video now, you might try in an hour. But live TV, if it's a Super Bowl or the World Cup or what have you, you want to watch it when it happens and not sometime later, unless you purposely record it because you can't watch it when it's live. So uptime was a big concern. So joining Hulu, I discovered the complexity of the architecture right. It was a complete microservice environment. At PlayStation, they were working towards microservice and segmenting things in smaller type of workloads. Hulu had that built. So dealing with that complexity was something that I wasn't faced with at PlayStation. So it just required a different approach of security, right. Everything was automated. Hulu had a platform as a service framework built by Hulu, which was really interesting where developers to get push can push a production and the containers will get built out and everything. So I thought all the right things were in place. We just had to get security in them to make sure that things were done appropriately. But we had to we had to rethink the whole legacy approach to security, of being a gate, doing code reviews and, you know, how do you do static analysis? How do you do dependency scans and all those things? Because you know a developer can get push any time and they were doing over three hundred deploys a day to production. Right. So it was a lot to catch up to.

Jb: [00:07:14] And could you could you give us some numbers so we can see the scale of that, like how many developers, applications, repositories, if you have that in mind.

Emilio: [00:07:23] Yeah, yeah. If I remember correctly and I'm sure it has changed since, but I think that towards the end of my Hulu tenure we had over 600 developers and I believe the number was around twenty-three hundred microservices. Now, whether that's the right number or not, that's a separate conversation. Right. But that was what we were dealing with and language frameworks were all over the place, right. So we wanted developers to be creative and effective in whatever language they felt the most comfortable with. So we had to support JavaScript, Python, Golang, I believe we have some Scala and node.js and what have you. So it wasn't a centrally standardized environment where everyone was coding Java

Emilio: [00:08:05] and uspring framework and all those things that you can get a little bit more commodity out of those, we had to scramble a little bit.

Jb: [00:08:12] So, I understand, and as a CTO, it's a tough balance to give a lot of autonomy to people, but also you need to keep a certain degree of currency in your deployments.

Jb: [00:08:23] So I'm curious to understand, so ok a lot of different languages, but I guess this also means a lot of different frameworks, a lot of different coding styles and practices, right. That's a nightmare for a security owner.

Emilio: [00:08:37] Yes, it is. Yeah so I think, you know, we had to rely on the developers being strong at what they're good at, right at coding, right, so we had to leverage that partnership. You know, all these frameworks, obviously different attack surfaces, right. So we had to find ways of how to put security in place in a matter that wasn't disruptive, that didn't impact production, that was easily adoptable, right. So starting with the "why" making security the default, right, I always tell teams that if you have a developer choosing between defaults and security, default is always going to win. So why not make security the default? So we have to take chip away at that mindset and approach, right. So we had to put, leverage as much of CICD as we could, do things as infrastructure as code, leverage security controls that you can load the library or through infrastructure as code or some sort of automation. So a lot of self-serve is why we wanted developers and teams to serve themselves security and we had to build a paved roads for them to have that enabled for them. But that on the back end to your point of how do you maintain some level of consistency and priority towards quality and security? We made big strides and efforts into tying security as a quality entity.

Emilio: [00:09:53] Right. A lot of times to see security and quality being two separate worlds. And they want to approach using different processes and different language to approach what I consider to be the same problem, right. If I'm a consumer of a service and whether it's a functional bug or a security bug is still impacts my experience, right. So, I united them to the point that we were reporting to the executives and stakeholders, security issues as part of the quality conversation, right. And we use the same language as in like escape defects, recurrent defects, and track those because we wanted to leverage that already made it already established interruption process QA had for developers for security concerns as well. And that that got us a lot of wins there where we we're not just saying, hey, we want to do this because the security is like here's a quality element to it that everyone cares about. As a developer, you don't want to be the reason for why a service or there's a bug in production that people complain about in Reddit or whatever. You have pride in the work that you do. So I think leveraging that helped us a lot with security.

Jb: [00:10:55] Super interesting. But I guess when you have a bug so it could be impacting the customer experience, like, I don't know, they can't start a movie, it could have a security issue. In the end you want both to be fixed, but the available developer time is still limited. How did you prioritize security versus quality? I guess you still have to make that code somehow?

Emilio: [00:11:17] Right, yeah, and that's exactly why I thought combining those two problems into the same conversation helped, because then we can actually do the trade-off conversations in one forum versus having silos for security or quality issues and sort of not being able to combine the two of them. So, yes, we have to be very pragmatic about if it's a security issue, how easy is it to exploit? How likely is it to be exploited? What's the impact of exploitation? Right. And Hulu being very strict about the quality of the product, even if it was a security issue that will lead to a bad experience from a consumer, whether they couldn't start a movie, a show, they couldn't save something to DVR or whatever core functionality the product has, we will still treat it as equally important as a functional issue, right. So that how the bug manifests itself became less important than the impact of the bug to consumers, right. So that put, again, that put the two security and quality in the same conversation, and then we will have the trade off talks. If it was a functional bug that was being seen by 68 percent of the consumer base and a security bug that was only being presented to 3 percent of the consumer base then that was a no brainer, right. We will choose the functional bug issue over the security bug so that's where pragmatism comes to comes to play.

Jb: [00:12:36] Right, makes sense, makes sense. And so with such a large distributed architecture, so you have a lot of simple small pieces, but the overall complexity is insane, I guess. How did you manage to cope with that? Did anyone have like a holistic vision of the system? How did you, like, enumerate two thousand services?

Emilio: [00:12:57] Yeah, yeah. It was definitely a lot of tribal knowledge for sure. And that was a problem, right. Because, well, I think one thing is also to admit that security will never have the same level of understanding and visibility that developers have of their own software and services. So this goes back to the mindset of why security is there, right. Security is there to help developers write secure code and build secure and stable services. But if you spend energy on security being able to see and understand one hundred percent of what's there, then I think you're burning a lot of candles on something that may not drive a lot of results. It's good to have an understanding, but is it good to have one hundred percent understanding? I don't think so, because you can rely on the developer community of your company to give you that understanding and empower them to make those decisions. Just measure what security looks like for them. Right. So one example is around abuse of services. One of the things we did was empower development teams to block what they thought was malicious traffic. And the reason for that was that the security team was getting paged, let's say, at 4:00 in the morning because some IPs were hitting a few services pretty hard. Right. And the question we were always getting from developers was: is this a security concern or not? Is this attack traffic or not? And it always put us in a weird position, because we don't necessarily know how the service gets called. Like, yes, we have an idea, but we don't know it better than the developers who built that service, right.

Emilio: [00:14:32] So we would always turn the question around to them and say, hey, based on the use cases that you've built into the service and what you see for, what, P99 or normal patterns, what do you think? Right. And the answer would always come back: yeah, it looks like they're trying something weird that's not part of the normal flow. So the question then became: do you block them, or do we block them for you? So we actually built those capabilities for them. And one of the team members on the Hulu security team built a service, because now we had to deal with the erroneous blocking of a human who was just making a mistake. So my team built a service called "IsitblockedbytheWAF.hulu.com" that customer service could access internally and say, hey, this person is complaining, here's a description of what they were trying to do. Are they actually being blocked? And they could actually unblock them from there. So we enabled the unblocking part as well. But ultimately, what that led to was teams making more informed decisions for the things that they fully own, and therefore reducing the need for security to know one hundred percent of everything that's happening, because that's just unrealistic for a dynamic environment like the microservice cloud environment that Hulu is, and so is DataDog. So we're not here to cover all the ground. We're here to make sure that people can cover their own ground.

Jb: [00:15:53] Super interesting! And I guess as security teams, we are always looking for a stronger connection to the developers and to the other teams. So giving them the power and ownership of choosing whom to block is amazing in that sense. But as I see it, I guess the teams were already owning the operations of the service, the availability, the performance, etc., right?

Emilio: [00:16:15] Yes.

Jb: [00:16:16] So you already need a pretty distributed model to make that work?

Emilio: [00:16:19] Yes, absolutely. Yes. That only works if your company has an "if you build it, you own it" type of mindset. Right. If the developers are just there to write code and push it, and some other team is then responsible for the operational aspects of the service and uptime, then again, you're just creating silos of knowledge. I don't see how a developer can be a successful software engineer if the performance aspects of whatever that developer is working on are sort of like...
