
Content provided by Datadog. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Datadog or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

Serverless Security with Erica Windisch


In this episode of AppSec Builders, I'm joined by New Relic Principal Engineer and AWS Serverless Hero, Erica Windisch. Erica has decades of experience building developer and operational tooling for serverless applications. We discuss all things serverless, including why you should care about serverless security, how to design application security when migrating to a serverless environment, how to scale your application security with serverless, and much more.

About Erica:

Erica is a Principal Engineer at New Relic and previously a founder at IOpipe. Erica has extensive experience in building developer and operational tooling for serverless applications. Erica also has more than 17 years of experience designing and building cloud infrastructure management solutions. She was an early and longtime contributor to OpenStack and a maintainer of the Docker project.

Follow Erica on Twitter and LinkedIn at the links below:

Twitter

Linkedin

Resources:

Transcript for Serverless Security with Erica Windisch

[00:00:02] Welcome to AppSec Builders, the podcast for practitioners building modern AppSec, hosted by JB Aviat.

Jb Aviat: [00:00:14] Welcome to this episode of AppSec Builders. Today I'm proud to receive Erica Windisch; we will discuss serverless and serverless security. Welcome, Erica.

Erica Windisch: [00:00:24] Hi.

Jb Aviat: [00:00:26] So Erica, you are an architect and principal engineer at New Relic, you are also an AWS Serverless Hero, previously you were a founder at IOpipe, and before that you were a security engineer at Docker. Right?

Erica Windisch: [00:00:41] Ah correct yeah.

Jb Aviat: [00:00:42] So thank you so much for joining us today, Erica. I'm really excited to have you as a guest today.

Erica Windisch: [00:00:50] Thank you for having me.

Jb Aviat: [00:00:51] So, Erica, as an AWS Serverless Hero, I guess you know almost everything and you are very, very aware of what's happening in the serverless world. Before we dive into some AWS specificities, maybe you could remind us what serverless is and how it differs from the traditional world, especially from a security standpoint?

Erica Windisch: [00:01:14] Absolutely. So, I mean, my background, it's not just Docker, it's building OpenStack. It's building web hosting services. And, you know, this is an evolving ecosystem that, in the 2000s, was as simple or as hard as taking your content and uploading it to a remote server and running your application, to as complex as running your own servers. Right. And these, of course, are options that are available to you now. But increasingly, developers are moving towards DevOps. They're using containers. They are finding that CI/CD and deployments and all of these things are useful tools for organizations to move quickly, rather than operating physical machines as pets, as we would call it, versus cattle, which as a vegan is probably not the best metaphor. But, you know, over this time, we've been increasingly going higher level and operating and deploying and building at higher-level layers. And serverless is that highest layer, in a sense, where rather than building a microservice, shipping a service that runs on a VM, in a container, on a host that you have to manage and operate, even if that's part of a larger Kubernetes cluster...

Erica Windisch: [00:02:33] Instead, you just take your application and you give it to your cloud provider, and your cloud provider runs it for you. There's a lot of advantages to this, largely that the platform is fully managed for you to a large degree. You know, you don't have to maintain operating system patches. You don't have to maintain kernels. You don't have to do anything other than operate your application. And really, the biggest disadvantage to this is that you do lose control of managing some of these pieces. But for most users, there's a benefit and a gain to not having to operate components that are not mission-critical. Or, I mean, arguably they're mission-critical, because your applications are not going to run without a kernel of some sort. However, that kernel can be tuned, it can be optimized, it can be hardened, and it can be done by Amazon rather than having to make that your problem, because you and your organization often may not have the expertise or the time to invest in having the same level of security that Amazon can provide out of the box.

Jb Aviat: [00:03:36] Yes. So that's the ability for users to focus more on what they know, like their business strategy, rather than their infrastructure or their server configuration. So from this point of view, you are much more focused on what you know, and the rest is handled by the cloud provider, who knows it best. Right. So that's a lot of advantages from a security standpoint, because, as you said, everything that is maintenance, like security updates, et cetera, is delegated to the cloud provider, and it's not your responsibility anymore. So is that like the best thing, from a security standpoint, migrating to serverless?

Erica Windisch: [00:04:14] So I will add an additional caveat here, which is that serverless is a concept. There are multiple products that provide serverless capabilities, Amazon Lambda being one of the most popular, and S3 arguably being one of the first serverless products, and many users are already using S3. So from a certain perspective, you are already using serverless services, and S3 has minimal attack vectors, but there are also large attack vectors. Potentially you could leave your buckets open.

Erica Windisch: [00:04:46] I think that actually just today there's big news about this app called Parler, this alternative to Facebook run by right-wing conservatives. And what happened there is that they left S3 buckets open, apparently, and they were in the middle of a shutdown as well, and their services were compromised. And one of the things they've done there is having misconfigurations in their applications. They rely a lot on other serverless services such as Okta, which they were apparently running a free trial of, and they were removed from that service, and then they were in a situation where people were compromising their services because they didn't have many services available. Now, this is a particular case where they were denied for acceptable use policies, for what I consider pretty reasonable reasons of being denied service. But the point kind of stands, in a way, that here is a company that was relying a lot on some of these serverless services, and they found themselves still at the mercy of security vulnerabilities despite doing that. And in some ways, it opened them up more to being disconnected, having Twilio disconnect them, having all these other point solutions that were arguably serverless services shutting them down, because they relied heavily on platforms that they were no longer allowed to use.

Jb Aviat: [00:06:06] So your point is that using serverless puts you at the mercy of the solution provider?

Erica Windisch: [00:06:11] No, not necessarily. No, actually, that's not the point I'm trying to make, so much as: they were hacked before they were shut down, before they removed some of these services. They were using serverless services and they still got hacked. Right. So the point is more that serverless itself doesn't ultimately protect you from application-level compromises. Right? It does protect you from some of the infrastructure-level compromises. It doesn't stop you from other attack vectors. Yes, it is true, it doesn't protect you from being bad people and getting yourself kicked off of services. But it also shows that you can use some of these services that are supposed to provide you third-party security controls, and they can still fail you.

Erica Windisch: [00:06:53] Yes, I guess it's multiple points. Obviously, they made a lot of really critical mistakes, both technologically as well as politically.

Jb Aviat: [00:07:03] So basically using serverless is not perfect. You can still make configuration mistakes, security mistakes, at various places in the stack. You mentioned also application security, which is not prevented by the fact that you are using serverless, because the code you are running is very similar to what you were writing in a regular application.

Erica Windisch: [00:07:26] Exactly. You're still building applications, so application security is still essential, right? If you're relying on something like Okta or Auth0, it's very easy to misconfigure those and to use them incorrectly. You know, it's possible to have Twilio set up and not have two-factor working correctly, or not have it verify phone numbers. Apparently, you can have S3 and you can leave your buckets open. Right. And that is a large part of my point.
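[Editor's note: the open-bucket failure mode Erica keeps returning to is mechanical enough to check in code. A minimal sketch, with a hypothetical helper name and minimal policy shapes that are ours, not from the episode: scan an S3 bucket policy document (parsed from its JSON) for Allow statements that apply to any principal.]

```python
# Hypothetical helper, not from the episode: flag S3 bucket policies
# whose Allow statements grant access to every principal.

def has_public_statement(policy: dict) -> bool:
    """Return True if any Allow statement applies to anyone ("*")."""
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        principal = stmt.get("Principal")
        # "Principal": "*" and {"AWS": "*"} both mean "anyone on the internet".
        if principal == "*":
            return True
        if isinstance(principal, dict) and "*" in principal.values():
            return True
    return False

open_policy = {
    "Statement": [
        {"Effect": "Allow", "Principal": "*", "Action": "s3:GetObject"}
    ]
}
print(has_public_statement(open_policy))  # True
```

A check like this catches only the wildcard-principal case; AWS's own S3 Block Public Access settings are the more robust guardrail.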

Jb Aviat: [00:07:53] Yes, absolutely. One of the opportunities I would see with serverless is that usually you are starting, sometimes from scratch, or at least you need a new CI, you need a lot of new things when you are moving to serverless. So that's also a chance for you to use infrastructure as code, to use higher-level deployment frameworks, for instance. And so that could be a place where you can bake in some security controls, to maybe review your Terraform files or your CloudFormation files to ensure that you don't have such issues. Are you familiar with such practices, Erica?

Erica Windisch: [00:08:29] Yeah, there are definitely companies, a lot of the larger companies actually, that use their own custom serverless application frameworks where they bake in a lot of these constraints and security controls for everybody that is using that framework. I do see that to be a pretty common use case, especially, again, at larger companies. But even with the smaller companies, I think that CI/CD is a place where you can then slip in some configuration, whether that's, you know, serverless configuration or even potentially Kubernetes. I don't think it's strictly related to serverless. I think that with serverless you have a lot more control over your application via configuration, right? Just because, I mean, there's less infrastructure. So I guess it goes both ways, right? You have less control and more control. Like all the knobs that you can turn in configuration: arguably there's fewer of them, but they're more applicable to your application specifically, rather than knobs that are specific to infrastructure. Like, you're not turning knobs that control your IO in general. Although on Lambda, you can control how much memory you get, which does control how much IO you get and how much CPU you get. But that becomes more of a billing function. It says, how much am I willing to pay for the service and how much performance am I going to get out of what I'm paying for. But I think that's a little bit different than the level of control over whether you are running a certain VM, or a different operating system, a different kernel, things like that, which are out of your control with serverless applications.
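[Editor's note: the "slip a check into CI/CD" idea can be made concrete. A rough sketch, assuming the CloudFormation template has already been parsed into a dict; the function name and the example resources are illustrative, not from any real framework.]

```python
# Sketch of a CI gate: flag S3 buckets in a CloudFormation-style
# template that do not declare a PublicAccessBlockConfiguration.

def find_unguarded_buckets(template: dict) -> list:
    """Names of AWS::S3::Bucket resources missing a public-access block."""
    flagged = []
    for name, resource in template.get("Resources", {}).items():
        if resource.get("Type") != "AWS::S3::Bucket":
            continue
        properties = resource.get("Properties", {})
        if "PublicAccessBlockConfiguration" not in properties:
            flagged.append(name)
    return flagged

template = {
    "Resources": {
        "Uploads": {"Type": "AWS::S3::Bucket", "Properties": {}},
        "Logs": {
            "Type": "AWS::S3::Bucket",
            "Properties": {
                "PublicAccessBlockConfiguration": {"BlockPublicAcls": True}
            },
        },
        "Api": {"Type": "AWS::Serverless::Function", "Properties": {}},
    }
}
print(find_unguarded_buckets(template))  # ['Uploads']
```

A CI job can fail the build when the returned list is non-empty, which is exactly the kind of constraint the custom frameworks Erica describes bake in for every team.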

Jb Aviat: [00:09:58] Yeah. And so to me, I'm actually not sure that serverless means less ops. As you said, it's a different kind of control, because if you are a developer, before, you were doing zero ops; all the orchestration you were doing was, I don't know, at the API or microservice level, maybe the application level. If you move towards serverless, you might suddenly start to use things such as Step Functions that will orchestrate how your functions are communicating together. And so this is ops that a developer starts doing that they weren't doing previously. So that's also something that is kind of new.

Erica Windisch: [00:10:33] I think that not operating the hardware gives you more time to focus on operating your application: making sure your application is working, getting your application tests to work, building out more functionality in your application. All of this means that you're using your tools more for application support rather than for infrastructure support.

Jb Aviat: [00:10:58] Yes, I agree. And if you look at, you know, the typical Venn diagram where you see security, operations, and developers, to me, if we consider serverless, the things are getting more intricate, because you have actually a very different kind of ops when you are moving to serverless. And so some of the things that were previously the responsibility of operations could now be falling into the hands of the developers. So, for instance, who is responsible for defining the privileges that a given function should have, in terms of IAM and cloud permissions? Is it the developers, who know exactly what the function does and are writing, I don't know, one function or several functions per day, or the ops, who are actually not aware of the business logic? I don't know if you see something similar.

Erica Windisch: [00:11:48] Yeah, I see a lot of organizations creating roles and policies organizationally and providing those to developers, and developers that need to use these policies configure things this way. And for a lot of organizations that works. It does create some challenges around the CI/CD platform. And it can create barriers sometimes, because if you want to deploy serverless applications and nobody has yet built your serverless role or authorized it for use, for Lambda in particular, if they don't create the necessary roles for Lambda and they don't allow you to create those functions with the right roles and permissions, it becomes a barrier towards adoption within your organization. That said, there's advantages to locking down things like that organizationally. And I think that a balance has to be struck between, you know, enabling innovation in your company and this top-down, operations-level security that happens, again, in a lot of companies. And it's a balance. It's not necessarily an easy balance to strike. I think that a lot of organizations are very set in their ways because they're not expecting serverless. It is more and more common. Like, I know at New Relic it's something that more and more teams are looking at using, but it's still something that is challenging to adopt as well, just because you need to have your CI/CD system set up correctly, you need to have team members who are familiar with learning and building things serverlessly. It is a different paradigm, and that presents challenges, especially, again, for larger organizations, or depending on how you structure your operations.
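[Editor's note: one way the org-provided-roles pattern shows up in practice is a shared helper that stamps out a scoped policy per function. A sketch under stated assumptions: the DynamoDB actions and the table ARN parameter are placeholders, and a real policy should be reviewed against what the function actually calls.]

```python
# Sketch: generate a per-function least-privilege IAM policy document.
# The action list here assumes a function that only reads and writes
# one DynamoDB table; adjust to the function's real call surface.

def least_privilege_policy(table_arn: str) -> dict:
    """Policy document allowing only GetItem/PutItem on one table."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
                "Resource": table_arn,
            }
        ],
    }

policy = least_privilege_policy(
    "arn:aws:dynamodb:us-east-1:123456789012:table/orders"
)
```

Centralizing generation like this is one way to get the balance Erica describes: developers get roles without waiting on ops, while the allowed actions stay constrained by the shared helper.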

Jb Aviat: [00:13:28] Yes, there is a balance between security and usability, so it's not a new thing. Obviously, from a security standpoint, you would think that the principle of least privilege is super important and that's something that you should keep in mind for your Lambdas, but probably not to the point of having one IAM role per serverless function, because I guess that makes the whole thing super hard to scale, and I don't think IAM is a good way to manage hundreds of roles for your serverless deployment.

Erica Windisch: [00:13:57] Yeah, I think it becomes challenging, though, because a lot of serverless applications do not have really great input validation. That, of course, varies according to each language and according to each developer. But most of the code written for serverless, or Lambda in particular, is Node and Python, and these are dynamic languages. They are not statically typed. Minimal input validation is often given for these functions. So, you know, having open IAM permissions does also potentially mean having invalid input passed to these functions, which does mean that you probably want better input validation, depending on how open your IAM permissions are. I mean, there is a good argument, which is that you should have good input validation and strict IAM, but we also live in the real world, and we recognize that doesn't always happen.
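[Editor's note: to make the input-validation point concrete, here is a minimal sketch of validation at the edge of a Python Lambda handler. The event shape, a JSON body with "email" and "amount" fields, is hypothetical; the point is rejecting bad input before any downstream call sees it.]

```python
import json

def handler(event, context):
    """Lambda-style handler that validates its input before doing work."""
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "invalid JSON"}

    email = body.get("email")
    amount = body.get("amount")
    # Reject anything that is not a plausible email string.
    if not isinstance(email, str) or "@" not in email:
        return {"statusCode": 400, "body": "invalid email"}
    # Reject non-numeric or non-positive amounts.
    if not isinstance(amount, (int, float)) or amount <= 0:
        return {"statusCode": 400, "body": "invalid amount"}

    return {"statusCode": 200, "body": json.dumps({"ok": True})}
```

In a dynamic language, this handful of explicit type and range checks is doing the work a static type system would otherwise do at the boundary, which is exactly where open IAM permissions make it matter most.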

Jb Aviat: [00:14:47] Yeah, too much complexity is also an enemy of decent security. But that's a good point you are touching on, because of the scale that you have when you deploy serverless: instead of managing one code base, you are managing maybe ten or fifty code bases. And so there is a difference in terms of scale that you didn't have previously.

Erica Windisch: [00:15:09] So, you know, I would say that serverless enables you to build scalable applications, and what is good about this is that rather than your application falling over, it will scale, and it will also charge you. So it does open up some potential for denial-of-service attacks. Serverless tends to be very inexpensive, so it's not usually a large bill, but it is possible to force a serverless application to scale, almost like a denial-of-service attack. But instead of denying the service, it's a denial of wallet, because you're putting so many requests through that you're just racking up their billing. The service is going to scale, it's going to support your requests, it's just going to keep charging more and more. S3 has the same problem. Right.
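[Editor's note: the denial-of-wallet idea is easy to quantify with Lambda's pricing model of a per-request fee plus a GB-second fee. The rates below are illustrative defaults, not current prices; check the provider's pricing page.]

```python
# Back-of-the-envelope cost of a forced-scaling flood, using the
# Lambda pricing shape: GB-seconds of compute plus a per-request fee.
# Rates are illustrative, not guaranteed current.

def lambda_cost(invocations: int, memory_mb: int, duration_ms: int,
                gb_second_price: float = 0.0000166667,
                request_price: float = 0.0000002) -> float:
    """Estimated bill in dollars for a batch of invocations."""
    gb_seconds = invocations * (memory_mb / 1024) * (duration_ms / 1000)
    return gb_seconds * gb_second_price + invocations * request_price

# 10 million forced invocations at 512 MB for 200 ms each:
print(round(lambda_cost(10_000_000, 512, 200), 2))  # 18.67
```

The absolute number is small, which matches Erica's "not usually a large bill" caveat, but it scales linearly with attacker traffic; capping a function's concurrency (Lambda's reserved concurrency setting) is one way to bound the blast radius.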

Jb Aviat: [00:15:57] Denial of wallet issue. I like it.

Erica Windisch: [00:16:00] Yeah, but I did forget the original question.

Jb Aviat: [00:16:04] So it was about the scale. And I think challenges such as, I don't know, vulnerable dependencies, for instance, are tractable when you have a few code bases. But if you multiply those code bases by 20 or 50, that's much harder to track at that scale.

Erica Windisch: [00:16:20] So I think the challenge for me is not necessarily the code bases, but the deployments, because each serverless function is a deployment of code and each of those deployments is an immutable artifact of that code and a snapshot in time. If you are building your application and you don't have good CI/CD, that code could be out of track with what is in Git. You might have code or applications that are working well for you. And here's the I think a big difference between traditional application of Serverless is that if you have a micro service that was serving, say, 15 rest points and you replace it with 15 serverless functions serving one rest End Point each, you now have 15 deployed services. And if one of those rest end points doesn't need updates in a year, it might fall behind the other code bases just because it's not getting those updates. So what some organizations do is they force deployments. You know, they might do minor repairs and

  continue reading

7 episodes

Artwork
iconShare
 
Manage episode 282740014 series 2805034
Content provided by Datadog. All podcast content including episodes, graphics, and podcast descriptions are uploaded and provided directly by Datadog or their podcast platform partner. If you believe someone is using your copyrighted work without your permission, you can follow the process outlined here https://player.fm/legal.

In this episode of AppSec Builders, I'm joined by New Relic Principal Engineer and AWS Serverless Hero, Erica Windisch. Erica has decades of experience building developer and operational tooling to serverless applications. We discuss all things serverless including why you should care about serverless security, designing app security when migrating to a serverless environment, how to scale your app security with serverless and much more.

About Erica:

Erica is a Principal Engineer at New Relic and previously a founder at IO pipe. Erica has extensive experience in building developer and operational tooling to serverless applications. Erica also has more than 17 years of experience designing and building cloud infrastructure management solutions. She was an early and longtime contributor to OpenStack and a maintainer of the Docker project.

Follow Erica on Twitter and Linkedin at the below links:

Twitter

Linkedin

Resources:

Transcript for Serverless Security with Erica Windisch

[00:00:02] Welcome to AppSec Builders, the podcast for Practitioners Building Modern AppSec hosted by JB Aviat.

Jb Aviat: [00:00:14] Welcome to this episode of AppSec Builders today I'm proud to receive Erica Windisch, we will discuss about serverless and serverless security. Welcome, Erica.

Erica Windisch: [00:00:24] Hi.

Jb Aviat: [00:00:26] So Erica you you are an architect and principal engineer at New Relic, you are also an AWS serverless hero previously you were founder at IO Pipe, an before that were security engineer at Docker. Right?

Erica Windisch: [00:00:41] Ah correct yeah.

Jb Aviat: [00:00:42] So thank you so much for joining us today, Erica. I'm really excited to have you as a guest today.

Erica Windisch: [00:00:50] Thank you for having me.

Jb Aviat: [00:00:51] So, Erica, Serverless as an AWS serverless hero, I guess you know almost everything and you are very, very aware of what's happening in the serverless world. Before we dive in, like some AWS specificities, maybe you could remind us what is serverless and how does it differ from the traditional world, especially from a security standpoint?

Erica Windisch: [00:01:14] Absolutely. So, I mean, my background, it's not just Docker, it's building open stack. It's building web hosting services. And, you know, this is an evolving ecosystem that, I mean, in the 2000s was, you know, as simple or as hard as taking your content and uploading it to a remote server and running your application to as complex as running your own servers. Right. And these, of course, are options that are available to you now. But increasingly, developers are moving towards dev ops. They're using containers. They are finding that CI/CD and deployments and all of these things are useful tools for the organizations to move quickly and operating physical machines as pets, as we would call it, versus cattle, which as a vegan is probably not the best metaphor. But, you know, over this time, we've been increasingly going higher level and operating and deploying and building at higher level layers. And serverless is that highest layer in a sense where rather than building a micro service is shipping a service that runs on a VM in a container and a host that you have to manage and operate, even if that's part of a larger Kubernetes cluster.

Erica Windisch: [00:02:33] Instead, you just take your application and you give it to your cloud provider and your cloud provider runs it for you. There's a lot of advantages to this, largely that the platform is fully managed for you to a large degree. You know, you don't have to maintain operating system patches. You don't have to maintain Kernels. You don't have to do anything other than operate your application. And really, the biggest disadvantages to this are that you do lose control of managing some of these pieces. But for most users, there's there's a benefit and a game to not having to operate components that are not mission critical or I mean, arguably they're mission critical because your applications are not going to run without a kernel of some sort of however, that kernel can be tuned, it can be optimized, it can be hardened and it can be done by Amazon rather than having to make that your problem, because you and your organization often may not have the expertise or the time to invest in having the same level of security that Amazon can provide out of the box.

Jb Aviat: [00:03:36] Yes. So that's the ability for users to focus more on what they know, more like their business strategy rather than their infrastructure, rather than there are server configuration you need. So from this point of view, that's much more focused towards what you knew and what it would do as the cloud provider knows best. Right. So that's a lot of advantages from a security standpoint, because, as you said, it's everything that is a maintenance like security updates et cetera, is dedicated to the cloud providers and its not your responsibility anymore. So is that like the best thing from a from a security standpoint, migrating to to serverless?

Erica Windisch: [00:04:14] So I will add an additional caveat here, which is that mean Serverless is a concept. There are multiple products that provide serverless capabilities. Amazon LAMDA being one of the most popular S3, arguably being one of the first Serverless products, and many users are already using S3. So from a certain perspective, you are already using SERVERLESS services and S3 has minimal attack vectors, but there are also large attack vectors. Potentially you could leave your buckets open.

Erica Windisch: [00:04:46] I think that actually just today there's big news that this app called Parler, this alternative to Facebook run by right wing conservatives. And what happened there is that they left S3 buckets open, apparently, and they were in the middle of a shutdown as well, and their services were compromised. And one of the things they've done there is having misconfiguration of their applications. They rely a lot on other serverless Services such as Okta, which they're apparently running a free trial of, and they were removed from that service and then they were then in a situation where people were compromising their services because they didn't have many services available. Now, this is a particular case where they were denied for acceptable use policies for what I consider pretty reasonable reasons of being denied service. But the point kind of stands in a way that here is a company that was relying a lot on some of these serverless services and they found themselves still at the mercy of security vulnerabilities despite doing that. And in some ways, it opened up them more to being disconnected, having Twilio disconnect them, having all these other point solutions that were arguably serverless services, shutting them down because they relied heavily on the platforms on which they were no longer allowed to use.

Jb Aviat: [00:06:06] So your point is that using serverless puts you at risk of the solution provider?

Erica Windisch: [00:06:11] No, not necessarily. No, actually, that's not the point I'm trying to make so much as in they were hacked before they were shut, before they removed some of these services, they were using serverless services and they still got hacked. Right. So the point is more that Serverless itself doesn't ultimately protect you from application level compromises. Right? Right. It does protect you from some of the infrastructure level compromises. It doesn't stop you from other attack factories. Yes, it is true. It doesn't protect you from being bad people and getting yourself kicked off of services. But it also shows that you can use some of these services that are supposed to provide you third party security controls and they can still fail you.

Erica Windisch: [00:06:53] Yes, I guess it's multiple points. Obviously, they made a lot of really critical mistakes, both technologically as well as politically.

Jb Aviat: [00:07:03] So basically using serverless is not perfect. You can still make like configuration mistakes, security mistakes at various places of the thing. You mentioned also application security. That yes is not prevented by the fact that you are using serverless because the code you are running is very similar to what you were writing in a regular application.

Erica Windisch: [00:07:26] Exactly. You're still building applications. So application security is still essential right. If you're relying on something like Okta or Auto0, it's very easy to misconfigure those and to use them incorrectly. You know, it's possible to have Twilio out and not have two factor working correctly or not having it verify phone numbers. Apparently, you can have S3 and you can leave your buckets open. Right. And that is a large part of my point.

Jb Aviat: [00:07:53] Yes, absolutely. One of the opportunities I would see with Serverless is that usually you are starting sometimes from scratch, or at least you need a new CI you need a lot of new things when you are moving to Serverless. So that's also a chance for you to use the infrastructure as code to use a more higher level of deployment frameworks, for instance. And so that could be a place where you can bake some security controls to maybe review you on telephone files or your cloud information files to ensure that you don't have such issues. Are you familiar with such practices, Erica?

Erica Windisch: [00:08:29] Yeah, there are definitely companies. A lot of the larger companies actually use their own custom serverless application frameworks where they bake in a lot of these constraints and security controls for everybody, for everybody that is using that framework. I do see that to be a pretty common use case, especially again larger companies. But even with the smaller companies, I think that CI/CD Is a place where you can then slip in some configuration, whether that's, you know, serverless configuration or even if it's potentially Kubernetes. I don't think it's strictly related to Serverless. I think that was serverless. You have a lot more control over your application via configuration, right? Just because I mean, there's less infrastructure. So I guess it goes both ways, right? You have less control and more control. Right. Like all the knobs that you can turn in configuration. Argueably there's fewer of them, but they're more applicable to your applications specifically rather than knobs that are specific to infrastructure. Like you're not turning knobs that control your IO in general. Other than your on Lambda, you can control how much memory you get, which does control how much IO you get and how much CPU you get. But that becomes more of a billing function. It says, how much am I willing to pay for the service and how much performance am I going to get out of what I'm paying for. But I think that's a little bit different than the level of control that gives you whether or not you are running a certain VM or a different operating system, a different kernel, things like that which are out of your control with serverless applications.

Jb Aviat: [00:09:58] Yeah. And so to me, I'm actually not sure that serverless means less ops. And you said it's a different kind of controls because if you are a developer. Before you were doing zero ops, all the orchestration you were doing was I dont know API or microservice level, maybe application level, if you move towards serverless, you might suddenly start to use things such as step functions that will orchestrate how your functions are communicating together. And so this is Ops a developer starts doing that they were doing previously. So that's also something that is kind of new.

Erica Windisch: [00:10:33] I think that moving away from infrastructure operations to application operations is I think that not operating the hardware gives you more time to focus on operating your application, making sure your applications working, getting your application tests to work, building out more functionality in your application of all of this means that you're using your tools more for application support rather than for infrastructure support.

Jb Aviat: [00:10:58] Yes, I agree. And if you look at, you know, the typical Venn diagram where you see security, operations, and developers, to me, if we consider serverless, things are getting more intricate, because you actually have a very different kind of ops when you move to serverless. And so things that could previously have been the responsibility of operations could now be falling into the hands of the developers. So, for instance, who is responsible for defining the privileges that a given function should have in terms of IAM and cloud permissions? Is it the developers, who know exactly what the function does and are writing, I don't know, one function or several functions per day? Or the ops, who are actually not aware of the business logic? I don't know if you see something similar.

Erica Windisch: [00:11:48] Yeah, I see a lot of organizations creating roles and policies organizationally and providing those to developers, and developers need to use these policies, configured this way. And for a lot of organizations that works. It does create some challenges around the CI/CD platform, and it can create barriers sometimes, because if you want to deploy serverless applications and nobody has yet built your serverless role or authorized it for use, or, for Lambda in particular, if they don't create the necessary roles for Lambda and they don't allow you to create those functions with the right roles and permissions, it becomes a barrier to adoption within your organization. That said, there are advantages to locking down things like that organizationally. And I think that a balance has to be struck between, you know, enabling innovation in your company and this top-down, operations-level security that happens, again, in a lot of companies. And it's a balance. It's not necessarily an easy balance to strike. I think that a lot of organizations are very set in their ways because they're not expecting serverless. It is more and more common. Like, I know at New Relic it's something that more and more teams are looking at using, but it's still something that is challenging to adopt as well, just because you need to have your CI/CD system set up correctly, and you need to have team members who are familiar with building things serverlessly. It is a different paradigm, and it poses challenges, especially, again, in larger organizations, depending on how you structure your operations.

Jb Aviat: [00:13:28] Yes, there is a balance between security and usability, so it's not a new thing. Obviously, from a security standpoint, you would think that the principle of least privilege is super important, and that's something that you should keep in mind for your Lambdas, but probably not to the point of having one IAM role per serverless function, because I guess that makes the whole thing super hard to scale. And I don't think IAM is a good way to manage hundreds of roles for your serverless deployment.
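The per-function least-privilege policies discussed here can be sketched in code. This is a minimal, hypothetical example: the helper name, the DynamoDB table ARN, and the action list are all illustrative, not taken from any real deployment; each function's policy should list only the resources it actually touches.

```python
import json


def least_privilege_policy(resource_arn, actions=("dynamodb:GetItem", "dynamodb:PutItem")):
    """Build a minimal IAM policy document scoped to one function's needs.

    The resource ARN and actions are hypothetical examples; the point is
    scoping "Resource" to one ARN rather than "*".
    """
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": list(actions),
                "Resource": resource_arn,  # one resource, not a wildcard
            }
        ],
    }


# Example: a policy for a function that only reads and writes one table.
policy = least_privilege_policy("arn:aws:dynamodb:us-east-1:123456789012:table/orders")
print(json.dumps(policy, indent=2))
```

Whether each function gets its own policy like this, or several functions share one organizationally provided role, is exactly the scaling trade-off being discussed.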

Erica Windisch: [00:13:57] Yeah, I think it becomes challenging, though, because a lot of serverless applications do not have really great input validation. That, of course, varies according to each language and according to each developer. But most of the code written for serverless, or Lambda in particular, is Node and Python, and these are dynamic languages. They are not statically typed. Minimal input validation is often given for these functions. So, you know, having open IAM permissions does also potentially mean having invalid input passed to these functions, which does mean that you probably want better input validation, depending on how open your IAM permissions are. I mean, there is a good argument, which is that you should have good input validation and strict IAM, but we also live in the real world, and we recognize that doesn't always happen.
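The kind of input validation described here might look like the following in a Python Lambda-style handler. This is a sketch under assumptions: the event shape (a JSON body carrying a `user_id` string and a numeric `amount`) is a hypothetical example, not a fixed AWS schema.

```python
import json


def handler(event, context=None):
    """A Lambda-style handler that validates its input before acting on it.

    Dynamic languages won't reject a malformed event for you, so the
    handler checks both the JSON syntax and the field types itself.
    """
    try:
        body = json.loads(event.get("body") or "{}")
    except json.JSONDecodeError:
        return {"statusCode": 400, "body": "invalid JSON"}

    user_id = body.get("user_id")
    amount = body.get("amount")
    if not isinstance(user_id, str) or not user_id:
        return {"statusCode": 400, "body": "user_id must be a non-empty string"}
    if not isinstance(amount, (int, float)) or amount <= 0:
        return {"statusCode": 400, "body": "amount must be a positive number"}

    # Input is now known-good; safe to act on it.
    return {"statusCode": 200, "body": json.dumps({"user_id": user_id, "amount": amount})}
```

With open IAM permissions, every one of these checks is doing work that the platform will not do for you.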

Jb Aviat: [00:14:47] Yeah, too much complexity is also an enemy of decent security. But that's a good point that you're touching on, because of the scale that you have when you deploy serverless: instead of managing one code base, you are managing maybe ten or fifty code bases. And so there is a difference in terms of scale that you didn't have previously.

Erica Windisch: [00:15:09] So, you know, I would say that serverless enables you to build scalable applications, and what is good about this is that rather than your application falling over, it will scale, and it will also charge you. So it does open up some potential for denial-of-service attacks. Serverless tends to be very inexpensive, so it's not usually a large bill, but it is possible to force a serverless application to scale, right, almost like a denial-of-service attack. But instead of denying the service, you are performing a denial of wallet, because you're just charging them: you're sending so many requests that you're just racking up their billing, because the service is going to scale, it's going to support your requests, it's just going to keep charging more and more. S3 has the same problem, right?
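The denial-of-wallet effect is easy to see with back-of-the-envelope arithmetic. A sketch, with the caveat that the default prices below are illustrative (roughly AWS's published x86 Lambda rates at the time of writing, not authoritative; check current pricing):

```python
def lambda_cost(invocations, duration_ms, memory_mb,
                price_per_gb_s=0.0000166667, price_per_request=0.0000002):
    """Rough Lambda bill for a burst of traffic.

    Cost scales linearly with invocations: compute is billed in
    GB-seconds (memory x duration), plus a flat per-request fee.
    """
    gb_seconds = invocations * (duration_ms / 1000.0) * (memory_mb / 1024.0)
    return gb_seconds * price_per_gb_s + invocations * price_per_request


# A normal day: 100k invocations at 200 ms and 256 MB, roughly ten cents.
normal = lambda_cost(100_000, 200, 256)
# A flood: an attacker drives 100 million invocations, and the service
# happily scales, turning a denial of service into a denial of wallet.
flood = lambda_cost(100_000_000, 200, 256)
print(f"normal: ${normal:.2f}  flood: ${flood:.2f}")
```

The application never falls over; the bill just grows a thousandfold with the traffic, which is why concurrency limits and billing alarms are the usual mitigations.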

Jb Aviat: [00:15:57] Denial of wallet issue. I like it.

Erica Windisch: [00:16:00] Yeah, but I did forget the original question.

Jb Aviat: [00:16:04] So it was about the scale. And I think challenges such as, I don't know, like vulnerable dependencies, for instance, is tractable when you have a few code bases. But if you multiply those code bases by 20 or 50, that's much harder to track at that scale.

Erica Windisch: [00:16:20] So I think the challenge for me is not necessarily the code bases but the deployments, because each serverless function is a deployment of code, and each of those deployments is an immutable artifact of that code, a snapshot in time. If you are building your application and you don't have good CI/CD, that code could be out of sync with what is in Git. You might have code or applications that are working well for you. And here's, I think, a big difference between traditional applications and serverless: if you had a microservice that was serving, say, 15 REST endpoints and you replace it with 15 serverless functions serving one REST endpoint each, you now have 15 deployed services. And if one of those REST endpoints doesn't need updates in a year, it might fall behind the other code bases just because it's not getting those updates. So what some organizations do is they force deployments. You know, they might do minor repairs and
