Duration 35:48
RailsConf 2019 - No Such Thing as a Secure Application by Lyle Mullican

Lyle Mullican
Founder at Blue Peak Software LLC
RailsConf 2019
May 1, 2019, Minneapolis, USA

About speaker

Lyle Mullican
Founder at Blue Peak Software LLC

Specialties: User experience design, software architecture, information security, Ruby/Rails, Red Hat Linux, usability


About the talk

RailsConf 2019 - No Such Thing as a Secure Application by Lyle Mullican

A developer's primary responsibility is to ship working code, and by the way, it's also expected to be secure code. The definition of "working" may be quite clear, but the definition of "secure" is often surprisingly hard to pin down. This session will explore a few ways to help you define what application security means in your own context, how to build security testing and resilience into your development processes, and how to have more productive conversations about security topics with product and business owners.


Good morning, everyone. While a few more people are straggling in, I'll go ahead and introduce myself. My name is Lyle Mullican, and I'm a consultant based in the mountains of Asheville, North Carolina. I've been using Rails for a long time, since version 1, and a lot of my work is in the healthcare sector, where we try to take security very seriously. So we're going to talk a little bit about application security this morning.

It's been an observation of mine over the course of my career that development teams have a lot of tools for specifying the features we expect our software to have: we have user stories, we have wireframes. But we don't have nearly as many tools for expressing the security characteristics that we expect our software to exhibit. There are companies that do a great job of this, that have a very strong security culture, and if you work for one of those, great. But I think a lot of developers, especially on smaller teams and in startups that are trying to get off the ground, are very focused on making the software do what it's supposed to. We have very well-defined expectations for user-facing behavior, and just a sort of vague expectation that along the way it's going to be written in a secure manner, without much definition of what that actually means.

Occasionally you might end up having conversations like this, particularly after there's been some well-publicized security story in the news: a boss or a client or a product owner asks about the security posture of an application by saying something along the lines of, "Is this application secure?" Or, if they're slightly more sophisticated, they might ask, "How secure is the application?" But there's an assumption built into this kind of question that security is something we can actually achieve, that it's either a box we can check, or maybe that there's a spectrum with insecure on one end and secure on the other, and we're somewhere in between, trying to move ourselves toward the secure end. But if you stop and think about the security features we actually build, the kinds of security controls we try to implement, you quickly realize that's not how security works at all.

So what are we talking about when we use this word, particularly when we use it to describe software? Really, we're talking about risk and how we manage risk: the controls we put in place to try to limit the risks that our software faces. Security ends up becoming a kind of verbal shorthand that says we believe we've accurately assessed the risks our software faces, and that we've applied controls, usually technical controls, to bring those risks to an acceptable level for our business. That phrase, "acceptable level," is very important, and we'll come back to it a little later. Any security control is generally designed to address a very specific scenario, and usually at a point in time.

People often like to use metaphors in security conversations, so imagine living in a castle with towers and big stone walls. If you ask yourself, "Am I secure?", it all depends on what your threat is. If your threat is a medieval land-based army, you're reasonably secure against that threat. If your threat is a modern army with modern technology, you're probably not secure at all. For a more technical example, think about HTTPS. The S is right there in the acronym; it stands for "secure." To the general public, they access a website, they see that S in the URL bar and a little padlock, and all of these things tell them it's a secure website. But it's not the website that's secure; that tells me nothing about what the website will do with my information. It's the connection to the site, and even then, it's only secure against the very specific scenarios that HTTPS was designed to address.

So any definition of security can really only be based on a definition of a threat, which means that instead of asking whether something is secure, we should be asking: in what ways is it secure or not secure? I like to break that down further into a few more basic questions, and these are questions we should be asking ourselves all the time as we build software. We can ask them at different levels, in different contexts: at the level of code, when I introduce a new method, when I change the behavior of a method, when I'm working on a feature; or about the system as a whole, the application in general.

The first two questions really go together, and they end up becoming a definition of what the industry would call a threat model. What are the things we're worried about? What could go wrong in a security-relevant way, and cause harm to the software, to the business, to its users? And how likely are those things to actually happen? Once we understand that, we can ask what we do about it: how do we stop those things from going wrong, and what happens if things go wrong anyway, despite our best efforts?

We have to recognize that we're always going to be making trade-offs. I think the hardest question for developers is often not what the threats are, but how we prioritize them: what deserves the most of our limited attention, and what ends up being the acceptable level of risk for the business. The most secure design we can come up with is probably completely impractical. If I'm worried about SQL injection, and I am, I could design a system where my application isn't allowed to talk to my database at all: my application prints out queries on a piece of paper, and a DBA picks them up, interprets them, walks over to an air-gapped machine, and types in their own query. That's very solid protection against SQL injection, but it's also completely impractical. So as a business, we have to recognize that we're going to land somewhere between that and a database that's publicly accessible to the internet. There are trade-offs we have to make.

There are a lot of security features I could have, so how do I think about which ones I really need, and which ones I need to be working on right now? There are lots of ways of developing a threat model, some of them very formal, and we don't have time to explore all of them, but here's one way of having the prioritization conversation, something you can do on a whiteboard with a team. I like to think of threats along two axes. One axis says whether something is more or less likely to happen, to be a problem for us. The other axis says how much damage is likely to be caused, to the software, to the business, to the users, if that threat occurs, if the attack is successful.

This is all very subjective, but it gives us a starting point. There are no units of measure here; nobody can really say there's a 90% probability that this type of attack will occur and a 10% probability of another. But I can say, based on what I know about the state of the industry, the kind of traffic I see in my application logs, and the type of business I'm in, that this is a more likely scenario than that other scenario, and I can start to compare things against each other.

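The two-axis whiteboard exercise can be sketched as a toy script. The threat names and scores below are hypothetical, invented purely to illustrate the mechanism; the talk itself only describes the exercise, not any code:

```ruby
# Toy threat-model prioritization: score each threat on the two
# subjective axes (likelihood and impact, 1-5) and sort so the
# "upper right quadrant" -- likely AND damaging -- comes first.
Threat = Struct.new(:name, :likelihood, :impact) do
  def priority
    likelihood * impact
  end
end

threats = [
  Threat.new("SQL injection via search form",  4, 5),
  Threat.new("Credential stuffing on login",   5, 3),
  Threat.new("Targeted insider data theft",    1, 5),
  Threat.new("Defacement of marketing pages",  2, 2),
]

threats.sort_by { |t| -t.priority }.each do |t|
  puts format("%-33s likelihood=%d impact=%d priority=%d",
              t.name, t.likelihood, t.impact, t.priority)
end
```

The numbers carry no units; the only output that matters is the relative ordering, which is exactly what a team argues about at the whiteboard.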
The point is to set priorities: to figure out, out of the vast landscape of everything I could be worried about, what's most important. The specific examples aren't important, but it is important to recognize that some threats are very common because they're easily automated, while others are less likely to happen because they require targeting. And unlikely scenarios are sometimes still things we really need to worry about, because they can potentially be very damaging. Generally, once we start comparing things in this way, we can set priorities moving from the upper-right quadrant down to the lower-left: the things that are both very likely to happen and very likely to cause great harm are the things that need most of our attention.

Once we have that conversation, once we have a starting point, it's still something we should revisit continuously. No threat model, whatever form it takes, is ever complete. Just as you spike a feature, make a prototype, get feedback, and start iterating, you want to do the same thing when you're thinking about security threats. Once we have something written down, though, in whatever form that takes, we can start to ask ourselves the other questions. What do we do about this threat? What controls do we already have in place? Where are the gaps where we feel we're not adequately addressing the problem? Then we can prioritize the work to close those gaps. The good news is that at this level, designing security controls is usually not that hard, because most of the time everybody else has the same problems I do, and people smarter than me have thought about effective ways to address them; there are probably well-established approaches. We know the kinds of things we need to do to prevent SQL injection. We know the kinds of things we need to do to prevent cross-site request forgery. What's hard is actually doing it.

So one of the big questions we need to ask, both about the controls we already have and about the things we're thinking about implementing in the future, is: how do we know they're actually effective? How do we know the security controls we have are actually working? The answer is the same way we know the behavior of our software is correct in any other respect: we test it. In my own progression as a developer, I was fairly slow to adopt test-driven development, and automated testing in general. Before I started working with Rails I was a PHP developer, and I wrote PHP in the traditional way: typing stuff into a text editor, opening up a web browser, and pushing the refresh button to see if it did what I wanted. Eventually I realized that doesn't scale very well, and I came to understand the power of automated testing. Once I started writing really good tests, I found something that probably most of you could have told me if I'd bothered to ask: learning to test made me write better code in a lot of ways, because thinking about how I was going to test the code made me think much more critically about how I was writing it. Security testing does the same thing. When you start thinking about how you're going to test the security controls you implement, you design better security controls.

So security testing should be an important part of your test suite, part of your CI pipeline, whatever you have. If you're not testing your security controls, somebody else probably is, and you really don't want to be outsourcing security testing to the internet. There are several different kinds of security tests, so we're going to walk through a few categories, a few ways of approaching security testing.

The first is explicit tests for controls built into the application, and these are just like any other tests of an application's behavior. Here's an example in Cucumber. Insecure direct object references are a big problem for a lot of web applications, so in this example we have an access control ensuring that a user can only look at their own orders, and the test validates not only that that's true, that I'm denied access when I try to get at somebody else's stuff, but also what the response to that attempt is, what happens when I try. We'll come back to that a little later. Explicit tests, of course, only cover exactly what I tell them to cover. They tell me whether the security controls I've designed are behaving as expected, and they alert me if a change is introduced that breaks that behavior. What they won't do is help me find things I'm not already thinking about, and for that I need other types of tests.

The next category is static analysis, and there are different tools that do this. This output is from Brakeman, a tool many of you may already be using; it's Rails-specific and very helpful. Here it's highlighting a possible SQL injection vulnerability where I'm passing user-controlled input into an Active Record class method that does not sanitize that value. This is exactly the kind of thing static analysis is good at: static analysis tools can trace the handling of input through many levels of indirection in a way that's hard for a human being to do. They don't execute the code, only analyze it, and it's possible to get false positives from this kind of analysis. Sometimes developers get frustrated with the output of tools like this, but to me a false positive from a static analysis tool like Brakeman is a code smell: if I've made my code hard for Brakeman to understand and reason about, I'm probably making it too hard for people to understand as well. That may not always be the case, but often I've found that it is.

Another type of tooling I'd lump into the static analysis category is the auditing of dependencies. Most applications aren't just running their own code; they're pulling in gems and other dependencies that may bring their own problems along with them. So another important piece of a comprehensive approach to automated security testing is auditing the state of the gems you depend on. There are different tools for that; the one I use is bundler-audit, which pulls in an open-source Ruby advisory database and simply checks the contents of my Gemfile every time I run my suite, telling me if anything is missing an important patch or version upgrade.

The next category is dynamic analysis. Where a static analysis tool never runs the code and only understands its structure, dynamic analysis tools do the opposite: they run the code, knowing nothing about its internals, and analyze the output. For some people, especially in the business world, dynamic analysis is what they think of as security testing: you run a vulnerability scan, and that's the thing. I'm not sure that's the best label for it, because such a tool can only observe the results of execution. What dynamic analysis is good at is generating a whole lot of unexpected input and seeing what happens, how the application reacts. It's good at finding problems in string handling that may not even be security vulnerabilities: if you start to see your application throwing a lot of errors because of the kinds of input generated by one of these tools, that can alert you to problems in the way strings are being handled in the code, even when there's no obvious vulnerability attached to them. With this kind of tooling it's often important to give it access as an authenticated user, because I've found that a lot of applications may be fairly well locked down to the public, but once somebody logs in, there's an implicit level of trust associated with them that is not necessarily deserved. So giving a tool like this access to a user account and letting it crawl the application to see what it can find is often very helpful. The other good thing about this approach is that it exercises the full stack: it runs requests through the server, through Rack, through all of the gems involved in servicing a request. That can surface problems that aren't obvious when you're just looking at a pull request in isolation; this kind of comprehensive test is helpful there.

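The core of what a dynamic tool does can be shown with a toy sketch: feed hostile input to code and record any reaction the author never intended. The parser and the corpus below are hypothetical stand-ins, not the talk's examples and not a real scanner:

```ruby
# A deliberately buggy input handler. It assumes the regex always
# matches, so `match(...)` can return nil, and calling `[0]` on nil
# raises NoMethodError -- an unintended blow-up, distinct from the
# validation error the author consciously raises.
def display_name(input)
  raise ArgumentError, "too long" if input.length > 64
  input.match(/[[:alnum:]]+/)[0]
end

# Fixed corpus of "unexpected input" in the spirit of a scanner's
# generated payloads (empty string, SQL-ish junk, markup, long input).
CORPUS = ["alice", "", "';--", "<script>", "%00%00", " " * 80].freeze

# Minimal "dynamic analysis": run the code on each input and collect
# every exception that is not a deliberate rejection.
def fuzz(corpus)
  corpus.filter_map do |input|
    begin
      display_name(input)
      nil
    rescue ArgumentError
      nil                      # intentional rejection, not a finding
    rescue => e
      [input, e.class]         # unexpected error: worth investigating
    end
  end
end
```

Here `fuzz(CORPUS)` flags the empty string and `"';--"`, neither of which is an obvious vulnerability, but both reveal string handling the author didn't think through, which is exactly the signal the talk describes.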
The fourth category is manual testing. Where all of these other approaches can be done quickly, easily, cheaply, even for free, manual testing is expensive; it has costs in time, money, and expertise. But a human being can do things that none of these automated tools can do. A human being can get a feel for an application. Someone with substantial development experience can sometimes start to get a sense, just by watching an app's behavior, of how it works and the kinds of things that might be going on inside it, and they can start to make guesses and inferences, pulling on suspicious threads and seeing where those lead. Sometimes a person doing manual testing will start with automated tools to find suspicious things that look like they might be vulnerabilities, and then probe further to find out whether they can really be exploited. Particularly if you're dealing in any kind of sensitive data, I think manual testing is still very important, and there are a lot of ways to go about it. You can have dedicated staff, if your organization is big enough. You can run a bug bounty program. You can hire consultants to do penetration testing. However you do it, it's an important component of a comprehensive approach, because somebody doing manual testing is doing essentially what a black hat would do in a targeted attack.

Defense in depth is a phrase that gets used a lot in security conversations, and it applies to testing too. None of these approaches is mutually exclusive; you can pick the ones that work best for you, you can layer them together, and you probably should. The idea of defense in depth is that when we layer security controls on top of each other, it gives us more opportunities to be wrong, and I like having as many opportunities to be wrong as possible: if one control fails, another control is going to stop the attack. Take the example of SQL injection again. Maybe I have a web application firewall sitting way out in front of my application, analyzing traffic for malicious patterns, but something gets through; that control fails. Then, in my application code, I'm doing a good job of sanitizing my inputs, so even if that query makes it through the firewall, my code neutralizes it. But maybe I missed something, and there is a SQL injection vulnerability in my app. The application, however, connects with a database user that has the least privilege required, so even though the injection succeeds, it can't actually do anything useful, and that final control is the thing that saves me. That's what defense in depth means: layering controls together. The same goes for testing: none of these approaches to security testing is perfect, and they will all miss some things, but layering them together increases our confidence that we're finding as much as we can.

Now I want to talk about what happens when there is an attack going on, particularly when the application may still have problems we haven't found yet. Incident response is a very broad topic, not something we can explore fully this morning, but I do want to talk about one very important aspect of it, which is this: usually an exploit takes time. It takes time for an attacker, whether it's an automated script or somebody manually probing, to find the vulnerability and figure out how to exploit it successfully. Most security conversations, particularly in business, focus on preventing attack: how do we lock this application down, how do we make sure nobody can do something malicious with it? That's important, but we've probably all got problems we haven't found yet. If that's the case, then detecting an attack, noticing when somebody is probing, when somebody is trying to find that vulnerability, and doing something about it quickly can be just as critical.

Let me show a couple of very simple ways to do that. There are much more sophisticated ways; you can buy big, expensive security appliances and deploy them in your infrastructure. But if you're working on a small app, there are very simple things you can do in application code alone that can make a big difference. Here I'm going to implement a non-existent route in my application, one with a very predictable name. This is the kind of thing somebody probing my application for vulnerabilities is likely to poke at, to see if there's something there: does this application have an admin tool I can try to get into? Before I made any changes here, if you hit that admin path you'd get a 404, because it doesn't exist, and you'd move on and try something else. In this case, I've connected the route to a controller that I've called Tripwire. From the outside you still just get a 404; it doesn't look like anything is there. But I'm doing one thing: if it was an authenticated user trying to hit that URL, I send some kind of notification, whatever that looks like for my team, because that indicates to me that somebody is up to something they probably shouldn't be.

Another example, and this one is more hands-off. One of the first things an attacker probably wants to do when probing my application is figure out what stack I'm running, and it may not be immediately obvious, so they'll try things to see if they can tell. A lot of people run WordPress, of course, so I'm going to connect a WordPress path to my Tripwire controller, and now, instead of sending an alert, I'm going to ban the IP address of anybody who hits that route. It's a tripwire that shuts the door. There are lots of ways to do this too; blocking IP addresses in application code is not necessarily the most efficient approach, and at scale you need much more sophisticated infrastructure. But even in small apps, simple measures can sometimes make a big difference in your security posture. Managing an IP blacklist by hand is not going to work very well, but if you can automate it and make it hands-off, you're reducing your attack surface. So here I store the IP address that hit this route in a cache entry, and I give it an automatic expiration so the list doesn't grow forever. Then I use the Rack::Attack gem to read that cache entry and deny access to that IP address for as long as the entry lives.

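The tripwire-plus-expiring-cache idea can be sketched in plain Ruby. This is a stand-in, not the talk's actual code: a real Rails app would put the entry in Rails.cache with an expiry and consult it from a Rack::Attack blocklist rather than an in-memory hash.

```ruby
# In-memory stand-in for the expiring cache entries described above.
# The tripwire controller calls #ban when a decoy route (/admin,
# /wp-login.php, ...) is hit; middleware calls #banned? per request.
class TripwireBanList
  BAN_SECONDS = 3600  # entries expire, so the list can't grow forever

  def initialize(clock: -> { Time.now })
    @bans  = {}     # ip => expiry timestamp
    @clock = clock  # injectable clock, so expiry is testable
  end

  def ban(ip)
    @bans[ip] = @clock.call + BAN_SECONDS
  end

  def banned?(ip)
    expiry = @bans[ip]
    return false unless expiry
    return true if @clock.call < expiry
    @bans.delete(ip)  # prune expired entries on read
    false
  end
end
```

In a real deployment the `banned?` check would live in a `Rack::Attack.blocklist` block, so the banned request is rejected in middleware before it ever reaches a controller.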
It's very simple, very easy, free to implement, and it reduces the attack surface of my application. However you approach these kinds of things, for any type of incident response I think the biggest key is to limit the number of decisions you need to make under pressure. How do we block a bad actor? Is that something that lives in the application? Is it something that lives farther upstream? These are not questions we want to be asking ourselves while we're trying to respond to an incident. These are things we want to think about beforehand: what criteria would we use to make those decisions? When you start planning, thinking about scenarios, thinking about what could go wrong, again asking that first question, what are the things we're worried about, you may never encounter the actual scenarios you planned for; you may end up in a very different place than you expected. But the act of planning gives you tools that prepare you to adapt to what actually does happen in the real world.

Speaking of being prepared and responding: I said at the beginning that you can never control all risk. There is always some level of risk that anybody running software connected to the internet is going to face. Of course we want to prevent as many security incidents as we can, but just as important as preventing them is how well we react to the things that do happen, and resilience is one way of thinking about that. It's a term that can apply to software, to individual people, and to organizations, to groups of people. How is my software designed to handle failure? In security terms that can mean limiting what data I store in the first place; it can mean how I segment networks and applications from each other, and how I layer controls together so that when something fails, the damage is limited.

On the human side of things, resilience asks: how do I, as an individual, handle failure? When I write code that turns out to be vulnerable, how do I deal with that? How do I respond to the failures of other people when I find vulnerabilities in code they've introduced? Do we try to hide those problems and hope they don't happen again? Do we look for someone to blame, especially if there was an actual incident that occurred as the result of a vulnerability? Or do we try to learn from what happened, and if so, how do we learn? How do we capture that knowledge and share it with others?

I'm sure you're familiar with impostor syndrome. It's already been talked about several times here at the conference, and there's a lot of important conversation happening about it in the community as a whole: this sense that a lot of developers live with, that everybody else knows more than I do, that I'm afraid of being found out, afraid people will realize I don't know the things I should know. I think this feeling is especially sharp in the context of security, because security failures often come with a layer of moral judgment attached to them that other kinds of bugs don't necessarily get. That's reasonable up to a point, because we do need to recognize that real harm can be done to real people when security isn't taken seriously. But we also need to remember, and to encourage other people in our organizations to remember, that security is hard, and it's hard for one very simple reason: the defenders have to be right all the time. So when we have conversations about security topics, we need to be careful to reframe them, and not let the assumption creep into the words we use, or that we hear other people using, that any system can be perfect. There really is no such thing as a secure application.

But we do need to be having conversations about security with people who may have naive ideas about how security works in business. All of the tools we've talked about can help us reframe those conversations when somebody starts going down that path of "it should be this way; you should have known", as long as we have these things written down and, hopefully, agreed to, preferably with non-technical people involved in the conversations. When we develop a threat model, we as technologists have a long list of things we're worried about; the business may have a slightly different list, and it's important to involve them. If we have clear, written-down expectations about how we approach security in our coding practices, that becomes a conversational tool. If security testing is part of our automated test suite and its output is available for everyone to see, that's a conversational tool. When we do planning exercises and think about how we'd respond to an event, and when an event does happen, we talk about what occurred and how we dealt with it, those are conversational tools. All of this gives us structure as we try to develop a culture of security in an organization, and when there's structure, it changes the nature of security conversations.

Without that structure, if there's a failure of security, whether it's a real failure, an event that occurs, or a theoretical one because a vulnerability is uncovered, what we'll tend to do is look for someone to blame. Who introduced that code? Why didn't they know better? We'll look for somebody to blame, and we'll probably shame them. But when there's structure, when we have a documented, written-down approach to security, it changes the conversation into blaming the system instead of a person. And when we blame the system, we can look for ways to improve it: we can say our threat model was incomplete, our tests were incomplete, and we can do something about that.

A few tools and resources that I mentioned along the way: Brakeman, again, is a very helpful static analysis tool, and the gems I referred to in the examples were bundler-audit and Rack::Attack. For Rails applications, there is a Security Guide among all of the other excellent Rails Guides. OWASP, the Open Web Application Security Project, publishes a guide to secure coding practices, which is definitely worth a read, and they publish a Top 10 list of the things we're all still doing wrong as an industry, which is also worth reading. The US Computer Emergency Readiness Team (US-CERT) tracks vulnerabilities and publishes feeds of them, which is worthwhile for getting a sense of the kinds of things going on in the industry.

Thank you very much, and find me at the conference if you have any questions or want to talk about security.
