RSAC 365 Virtual Summit
January 27, 2021, Online
Scaling Your Defenses: Next Level Security Automation for Enterprise

About the talk

The amount of work security teams have to handle is increasing rapidly, but without the tooling or staffing to keep up, how do you address the challenge? Many teams try to leverage automation, but find that it only helps them with the basics. This session provides real-world insight into how to transition from just doing the basics to implementing full end-to-end security automation.

About the speaker

Tomasz Bania
Cyber Defense Manager at Dolby

Tomasz Bania is the Cyber Defense Manager and Security Response Architect at Dolby. He has worked in a number of verticals, most recently as lead at the Cyber Defense Center at HP/HPE. Tomasz has a passion for security automation and has worked on various cyber defense, security research, and incident response initiatives.


Hello, everyone. My name is Tomasz Bania, and you're at the RSAC 365 Virtual Summit. Today we're going to be talking about how to scale your defenses: next level security automation for the enterprise. A quick introduction about myself: I am the Cyber Defense Manager in the information security group at Dolby, and I've been here for just over four years now; previously I was with the Cyber Defense Center at HP. I've been in the IT space for about 10 years, eight of those in the cybersecurity space, and I've worked in a number of verticals, from government and education to, of course, the enterprise space. So

let's start off with a few quick questions. The first is going to be: what are we looking to do today? As part of this presentation, what we're looking at is how we transition security operations teams from manual coordination to focusing on more complex work. So why is this important? Many of you have seen the state of the security operations center today: the people really just look at the alerts, and sometimes they get to a point where they're tired of looking at the same thing over and over and over again. And this is really an opportunity where we can automate

the monotonous and bring things to them that are much more interesting, so that they're more engaged and feel more valued within the organization. Among the other reasons are the alert volumes that we're getting in: because we're bringing in new technologies and looking at new techniques, volumes are increasing without really a matching growth in the skilled technical resources that are available to us. Again, the more we can keep people engaged and increase their job satisfaction with the things that are being looked at, the better we can retain the skilled

technical resources. So what do these, quote-unquote, automations look like today? If we look at the broader industry as a whole, it has focused around single-request orchestrations that are initiated by an analyst, and this would be things like taking in an alert, conducting a check against an intelligence feed, or initiating a remediation flow to remediate the issue. Some organizations have certainly moved on past this, but this is really where a lot of the existing automation

platforms are tailored: to these single-request orchestrations. So how can I measure my organization's automation capabilities? Level one is really going to be manual processing: this is an analyst taking in an alert and conducting a series of actions through to remediation. Level two is focusing on what we discussed just a moment ago, things like a threat intelligence check: limited orchestration without really any automation being involved. Level three is then orchestration with some automation, and this is really where you're striking the balance between

how much you want to automate as an organization versus how much you still want to have that manual action taking place. Level four is going to be full orchestration with significant automation, for example vulnerability management, analysis, and patching. What we're doing in that scenario is this: a vulnerability gets announced to the industry, and we can leverage automation to take a look at where that vulnerability may exist within our environment, leveraging tools such as RPA to deploy a patch

onto a system, initiate a series of test scripts to run applications, and then check whether any errors have been identified in running those tests. If nothing came up of concern, we can then have the next step of the automation actually deploy the patch within our environment. This can very much shorten the timeline from the identification of a new vulnerability to having your environment fully patched. And then the fifth level, which is really what we're going to be getting into today, is going to be the full end-to-end SOAR

implementation. So now that we're looking at level five, what can these automations look like, and where do we really start? For the purposes of this presentation, we're really going to be covering the full end-to-end incident response workflow. This will allow us to leverage automation throughout the entire process, from the point of initial identification all the way down to the automated handling and reporting. The first thing we're going to jump into here is component number one, alert ingestion. What kinds of alerts can I bring in, and where do I want to start? I'd start off

with EDR alerts, as well as those proxy, web gateway, and firewall alerts. These feeds tend to provide the highest value, as they tend to have the highest fidelity in the information that they provide for the initial part of our journey. What you'll want to leave out is really going to be the raw logs, and the reason is because, one, we may not have the scale or capacity to address those, but more importantly, unless you have a reason to collect that data, such as processing for tertiary data analysis or machine learning and deep learning models, which

we'll touch on a little bit later in the presentation, you don't really want to boil the ocean, as they say in the industry. The next component we're going to be covering is data collection. So, from a broader perspective, what data is going to matter to me? The first one we're going to cover here is anomalous system activity, for example out of the MITRE ATT&CK framework. I'm sure many of you are familiar with specific types of activities that could be tied to a malicious actor or malicious activity. Being able to see

one indicator from MITRE ATT&CK may not necessarily tell us that something really bad is going on, but if we can tie down a pattern of behavior from multiple hits, this will allow us, after we conduct the data collection, to really dive in deeper and then confirm whether this is malicious or benign activity. The second thing we'd be looking at is anomalous user activity, and this is going to be things like a suspicious user login, where of course the user is not in front of the system at a given point in time, but they're

conducting some sort of activity on it. The third item is going to be our indicator-of-compromise information, and this can come from a number of sources. This could be our threat intel feed data, whether it be commercial or open source, as well as things like our sandbox analysis. So let's say within your environment you have things like phishing analysis, or you have malware that's identified on your endpoints and processed through an internal or external sandbox; correlating that as part of your data collection process could also be of great

value to you. For the moment, we're going to jump over component three, analysis, and jump into alert remediation itself. So, what steps do we take to eradicate the threat from the environment? Depending on whether this is network-based, we could be leveraging firewall blocks, or we can do something like automated reimaging. Now, let's say we have a critical asset within our environment that has been compromised for one reason or another. Maybe we need to get approvals from business unit leaders or otherwise, and we can integrate, as part of the automation process, contacting those end users, getting

the approval, and being able to initiate the next steps in the remediation process. Alternatively, let's say within your organization you don't have an automated reimaging process for endpoints. The next step in that scenario would be contacting your service desk or your field team, whoever may be the end resource within your organization that may be able to assist your end users in conducting the remediation process. So now we're going to jump into the last component of this process, and that's really going to be the reporting. Some organizations do a really great job at reporting

their outcomes, from executive management down to their technical teams. And one of the things that I find is that when we deliver these automation frameworks and really dive into full end-to-end automation, one of the things we can gain significant value out of is truly, properly reporting to all of the stakeholders. But one of the questions we need to ask ourselves is: who needs to know what? From a management perspective, when we report higher up, one of the things that would be of great value to them is to report on the state of risk. What are the

things that are being targeted within the organization? What are the things that we need to be concerned about that we're seeing from these malicious actors? Are we seeing things like social engineering attempts, where we should be making our end users more aware of potential communications that may not be legitimate? Or are we seeing an increase in phishing campaigns with malware attached to them, such that we want to implement some further upstream controls or tighten existing controls to ensure that things don't get through? And secondly, findings of interest. If

there are very specific things that may have occurred over the course of your reporting period, you may want to communicate those to your management. Moving down further, we're going to talk about technical teams, and depending on how your team operates, this may work in a number of ways. On one hand, there are some organizations that report things like how many threats or how many systems have been remediated. And then on the other hand, you'll have situations where the organization likes to focus on incidents of importance. This will be things like a threat actor and any

given set of behaviors, so that your analysts, as they're going through these alerts on a day-to-day basis, can take a look and maybe correlate to the previous activities, and have a running start at what they're looking at versus having to come up with a new hypothesis every single time. The last thing I have listed here, and again this depends on the level of your security strategy and maturity, is going to be the threat hunters within your organization. The things that they're going to be interested in are things like suspect findings:

the things that your analysts have taken a look at, did really dive deep into, and are not quite sure what the issues may be. This is really where your threat hunters can help out and actually dive a little bit deeper. Another thing that you want to potentially tie into that reporting is the results of previous hunts that may have been conducted by your threat hunting team. So if they've looked at a specific type of activity, and now we're seeing this type of activity actually taking place within our environment, maybe they

want to take another look and go back and see: maybe there's something they missed, or maybe there's something that they should look into further. As I mentioned, we skipped over the third component, and that is automated alert analysis. So how do I leverage this automation to analyze this data? We're going to cover the first three approaches here today. The first one is singular indicator scoring, the second one is heuristic analysis, and the third one is a machine learning model. Two additional levels that we won't be covering here today are

either leveraging a single-purpose deep learning model, or leveraging a combination of deep learning and machine learning models for a neural-net analysis. Let's start with singular indicator scoring, and that's going to be reviewing the manual analysis process for indicators. So, as I mentioned earlier, if you have an analyst looking at an alert, they see an indicator, and they go reach out and try to take a look at what may be behind that indicator. So when we want to look at that from an automation perspective, let's say we have a threat intel hit on

an IP address. For every one of those searches that we're conducting as part of that intel analysis process, we can assign a static or weighted score. If something is malicious, we can assign it a score of one; if something is benign, we can assign it a score of zero. Let's make that a little more complex now with a sample scoring use case. I'm going to be using VirusTotal a few times in this presentation, because I think it's the most ubiquitously used across the security industry for these kinds of intel checks. Let's look specifically at VirusTotal file

and URL reputation. So let's say you're doing a check on a file or URL. When you do that in VirusTotal, you get a number that indicates how many VirusTotal detections have taken place, and you can apply a point value based on those detections. If you have over 20 VirusTotal detections, let's say you give it a value of 4 points; 10 to 19, you give it a value of 3 points; 1 to 10, you give it a value of 2 points. So how does that look in practice? Here I have an example domain, and it's got a VirusTotal score of 9 out of 88.
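A quick sketch of that detection-count tiering in code; the thresholds are the ones from the talk, but the function name and shape are my own:

```python
def vt_detection_points(detections: int) -> int:
    """Map a VirusTotal detection count to an analysis score.

    Tiers follow the talk's example: 20+ detections -> 4 points,
    10-19 -> 3 points, 1-9 -> 2 points, none -> 0 points.
    """
    if detections >= 20:
        return 4
    if detections >= 10:
        return 3
    if detections >= 1:
        return 2
    return 0

# The example domain hit on 9 of 88 engines, so it lands in the 1-9 tier.
print(vt_detection_points(9))  # -> 2
```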

In that scoring model, we give it an analysis value of 2 points, because the number of VirusTotal detections is less than 10. So now that we know how we're going to score the single indicator, what's the next step in this automated analysis process? That's going to be something along the lines of a heuristic risk score. What this allows us to do is gather a collective score from each of those individual indicator checks to develop a finalized scoring output. As an example, let's say we're using that original methodology: malicious gets a score of 1 and benign gets a score of 0. So if the total number of malicious hits is higher than five (let's say we're looking across all the indicators for a given alert and we get more than five hits within that analysis), we mark the alert as malicious. On the other hand, if the total number of hits is less than five for the primary indicator and more than zero for a secondary indicator, maybe we want to mark that alert for manual review.

What do we mean here by primary versus secondary indicator? The primary indicator is going to be that first thing that's really jumping out at you and saying, this is a potential issue, whereas your secondary indicators are going to be the things that have occurred afterwards that we are tying back to the first occurrence that set off our initial alert. There's one very important thing you're going to want to do in this process, and that's why we're going to cover not just using zeros and ones per se: you want to be able to balance for trust. If you have a source of information that is known to provide reliable data, you want to be able to increase or decrease its scoring accordingly. Let's say you have a commercial intel feed that you feel provides you a lot of value: make sure that you increase the scoring on that, so that if it sees a threat, it is automatically escalated to you much more quickly compared to, let's say, a lower-fidelity source of information.
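The aggregation, trust-weighting, and routing ideas can be sketched roughly like this; the weights, function names, and verdict labels are illustrative assumptions, not the speaker's actual implementation:

```python
def weighted_total(hits, trust):
    """Sum per-source hit scores (1 = malicious, 0 = benign), scaling
    each by how much we trust that source (default weight 1.0)."""
    return sum(score * trust.get(source, 1.0) for source, score in hits)

def route_alert(total, primary_hits, secondary_hits, threshold=5):
    """Routing rule from the talk: more than five malicious hits across
    the alert -> malicious; fewer primary hits but at least one
    secondary hit -> queue for manual review; otherwise benign."""
    if total > threshold:
        return "malicious"
    if primary_hits < threshold and secondary_hits > 0:
        return "manual_review"
    return "benign"

# A trusted commercial feed counts double; two lower-fidelity OSINT
# sources still only count once each.
trust = {"commercial_feed": 2.0}
hits = [("commercial_feed", 1), ("osint_a", 1), ("osint_b", 1)]
total = weighted_total(hits, trust)  # 2.0 + 1.0 + 1.0 = 4.0
print(route_alert(total, primary_hits=1, secondary_hits=2))  # -> manual_review
```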

Diving into this a little further, let's look at a detailed scoring use case. For this example, we're going to use VirusTotal's domain reputation. Here I have outlined a number of the various checks that VirusTotal conducts as part of this detailed domain reputation check, delivered by a number of vendors, and this is an example of that scoring. We're just going to pull out a few here. For web reputation, there's an info verdict value, and if that value is something such as "malicious", maybe we assign it a score of five points. If we look at the Trend Micro categories and there's a category that we believe is a significant concern, we can also assign it a score of 5, 2, or 3 points, depending on what it is. Or, if we look at things like the web reputation domain info safety score at the end here: if it's greater than 50, let's give it 10 points; if it's less than 50, let's give it five points.

So this is a lot of information, but let's boil it down into an example use case. In this example, we're going to use a threshold: a score greater than 20 and we mark the alert as malicious; less than 20 and we mark the alert as benign. Let's say we have a domain, malicious-sample.io, and for this domain we have three hits. The first one is a hit on the threat categories, so we assign a value of 10 points. The web reputation domain info safety score is 73, so we assign that a value of 10 points. And then let's say we received a secondary source of information and the domain appears on the PhishTank list, so we assign that a value of 5 points. In total, we have an alert total of 25 points, which is greater than the 20-point threshold, and therefore, in this scenario, we can mark this particular domain as malicious.
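Putting that walk-through into code might look like the following; the check names are my own shorthand for the slide's examples, not a real VirusTotal response schema:

```python
THRESHOLD = 20  # > 20 points -> malicious, otherwise benign

def score_domain(checks: dict):
    """Score a domain from a few reputation checks, mirroring the
    malicious-sample.io walk-through in the talk."""
    points = 0
    if checks.get("category_hit"):                 # concerning category
        points += 10
    if checks.get("webrep_safety_score", 0) > 50:  # safety-score check
        points += 10
    if checks.get("on_phishtank"):                 # secondary source
        points += 5
    verdict = "malicious" if points > THRESHOLD else "benign"
    return points, verdict

# Category hit (10) + safety score of 73 (10) + PhishTank listing (5)
# = 25 points, over the 20-point threshold.
print(score_domain({"category_hit": True,
                    "webrep_safety_score": 73,
                    "on_phishtank": True}))  # -> (25, 'malicious')
```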

So now that we've gone from a single indicator check, to combining a series of those single indicator checks to conduct a heuristic analysis, let's take that data into actually leveraging a machine learning model. Let's start with a very, very basic example: say test.com has a score of zero in our initial analysis. Okay, well, what other metadata can we potentially leverage to gather further context on whether or not something may be malicious or benign? We can look at things like the geolocation. We can look at the IP range. We can look at the frequency with which the site is visited in your network. We can look at its site ranking. And if you really want to get creative, one of the examples I

listed here is: how often does a given domain, IP, or URL appear within your corporate files or documents? So once again, what we're doing is taking that domain and actually training and developing a model that ties in the values of the things that you can see. You've done your single indicator checks, you've conducted a heuristic analysis, and you've developed the final scoring for each of those alerts that you've processed historically. So we take that dataset, we take that metadata that we

mentioned, and we start inputting that data into a training set for your machine learning model. The example I have outlined here is a sample: this indicator was added on a certain day; it's been searched by our analysts or automations 31 times; the domain resolves to an IP address in ASN 8402; there are three corporate findings within our organization; and it has a geographic mapping of Russia. Based on this information and the historical analysis of this indicator, we've given it a maliciousness score of 0.575.

So that's a single example, but how do I use that to then, integrating it into a machine learning model, analyze the next thing? Once we build a model based on these parameters, we're able to provide this new model with a completely random domain, with the corresponding indicator values and the corresponding metadata. Once that data is inputted, the model will generate a malicious analysis score without the need for any external analysis.
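As a stand-in for a real trained model, here is a minimal pure-Python sketch of the idea: score a new indicator from the historical scores of similar indicators. The feature encoding (searches, corporate findings, same-ASN and same-geo flags) and the training rows are invented for illustration:

```python
import math

# feature vector: [times_searched, corporate_findings, same_asn, same_geo]
HISTORY = [
    ([31, 3, 1, 1], 0.575),  # the scored example from the talk
    ([5,  0, 0, 0], 0.050),
    ([0,  0, 1, 1], 0.300),
    ([2,  1, 0, 0], 0.100),
]

def predict_score(new, history=HISTORY):
    """Distance-weighted average of historical maliciousness scores:
    closer historical indicators contribute more to the prediction."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    weights = [(1.0 / (1.0 + dist(feats, new)), score) for feats, score in history]
    total = sum(w for w, _ in weights)
    return sum(w * s for w, s in weights) / total

# A never-searched domain in the same ASN and region as known-bad ones
# still gets an elevated score, with no external lookups needed.
suspicious = predict_score([0, 0, 1, 1])
benign = predict_score([5, 0, 0, 0])
print(round(suspicious, 3), round(benign, 3))
```

A production system would replace this nearest-neighbor stand-in with a properly trained and back-tested model, but the flow (historical scores plus metadata in, predicted score out) is the same.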

Here we have another example, baddomain.space. It was added a little bit later, and we've never searched for that indicator within our organization, but it sits within the same ASN space, and it sits within the same geographic region. So we have an overall malicious score that's lower, at 0.376, but it gives us a nice base to then take the external analyses that we can perform and actually raise or lower the overall threat value to us as an organization. So let's say that baddomain.space doesn't hit on any external indicators,

but we know through our historic alert history that the ASN hosting baddomain.space is something we've actually seen targeting our organization. This may be a good scenario where you send it over to an analyst for further manual review; or, alternatively, this specific indicator check may be all you need to then pursue remediation. Again, it depends on the level of comfort within your organization. But I can tell you from personal experience,

one of the things is that over time you're just going to notice the patterns, and you're going to be able to leverage them to provide a lot of value before you even get to the next steps. Now let's jump into tuning the automations themselves. One of the most important things you're going to want to focus on is tuning based on the organization's risk and disruption tolerance. If disruption is acceptable, you can lean towards automated remediation; if, on the other hand, disruption is not preferred within your organization, then maybe you need to shift to more manual review if

something is discovered as part of the automation process. So don't worry if you're not starting off fully automated; make sure that you blend in manual validation checks wherever you deem it to be prudent. Lastly, and most importantly, make sure to collect that statistical data, because as you start developing things like the heuristic analysis and the machine learning models, this data is going to be paramount in delivering you a nice base to start off

from. So now that we've covered where we can go with automating all of these steps of the process, how do we calculate the return on investment that we'd be getting from creating these automations? First, document each step; this will allow you to understand what's actually involved in the automations that you're creating. Then determine the amount of time needed to manually complete each step. That, of course, as we covered at the very beginning of the presentation: level one is completely manual processing, so someone is

taking in that alert by hand, and they're conducting every single thing that you would have within your automation, step by step, manually. So determine how much time that takes. The next thing you can do is use a normalized salary to calculate how much each of these manual steps costs to complete. Of course, when you have an automation, you don't really need to factor in things like downtime, unless your organization chooses to do so. The next thing you can do is document the time needed to complete that action

using automation. And lastly, you're going to want to take all that information regarding the automation and how long it takes to do manually, and compare the time and cost between manual and automated processing for your organization. So, at the end of 2017, we really focused on the very basics, and that was about fifty to a hundred of these automated events a day with five active playbooks. We had a pretty decent return on investment of about $75,000.
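The ROI arithmetic from those steps can be sketched like this; every number below is a made-up placeholder, not a figure from the talk:

```python
# Time each step manually, time the automated path, then compare.
manual_minutes_per_alert = 15.0     # measured by hand-running the playbook steps
automated_minutes_per_alert = 0.5   # analyst time still spent per automated alert
alerts_per_day = 100                # roughly the talk's early (2017) volume
hourly_rate = 50.0                  # normalized analyst salary, $/hour
workdays_per_year = 260

minutes_saved_per_day = (manual_minutes_per_alert
                         - automated_minutes_per_alert) * alerts_per_day
annual_savings = minutes_saved_per_day / 60 * hourly_rate * workdays_per_year
print(f"${annual_savings:,.0f} saved per year")  # -> $314,167 saved per year
```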

When we got to 2019, we really wanted to close the loop on that initial development of the base of our automation strategy. At that point, we had about 25 active playbooks, and that really allowed us to get to a point where we were processing about 25,000 automated events a day. Now, just so we're clear, that doesn't mean that someone within the security team was looking at 25,000 alerts a day. What it means is that twenty-five thousand things were processed, and the person actually receiving the information was only responsible for reviewing 5, 10, 15, 20 a day, which obviously lowers the burden on your analysts, and at the end of the day the analyst is presented

with much more interesting information. Now, where we're moving towards here at the end of 2021 is really scaling those existing automation capabilities. We're on deck for about 41 active playbooks, and this should allow us to process about 100,000 automated events per day. As I noted at the bottom, I have ROI values here; really, after phase one, ROI from a monetary perspective is not a completely applicable metric, because at this point you're effectively developing applications. But because I wanted to provide a nice sample set

across the timeline, I've included it for your consideration. So now that we've covered all this information, how do I implement this in my environment? Over the next 30 days, you could try to validate your existing manual processes and start documenting what each process is. Once you've completed the validation process, develop your first single-indicator or heuristic scoring algorithm. I've outlined a few samples in this presentation, but really, you're going to want to tailor and drive it to what matters to your organization. If

you're focusing more on fraud, you may well be more interested in looking at fraud-based detections. If you're looking at things like malicious domains, commercial and open-source threat intel feeds may likely be able to assist you in that process. After you've developed your single-indicator or heuristic scoring algorithm, you'll want to validate your scoring efficacy using manual analysis. Basically, what we're going to be doing here is: you're going to run through a series of alerts completely manually, and then you're going to run through those same alerts leveraging your automated analysis.
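That side-by-side validation could be as simple as comparing verdicts and measuring agreement before trusting the automation; the labels and sample data here are illustrative:

```python
def agreement_rate(manual_verdicts, automated_verdicts):
    """Fraction of alerts where the automated scorer reached the same
    verdict as the analyst's manual review."""
    matches = sum(m == a for m, a in zip(manual_verdicts, automated_verdicts))
    return matches / len(manual_verdicts)

manual    = ["malicious", "benign", "benign", "malicious", "benign"]
automated = ["malicious", "benign", "malicious", "malicious", "benign"]
print(agreement_rate(manual, automated))  # -> 0.8
```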

Do a comparison between the two, and if they're generally on the same level, you can move forward to developing your first machine learning model. Once you develop your machine learning model, one of the very important things you're going to want to do, in between your training, your testing, and your production, is conduct a back test of that model against your pre-automation datasets, if you have them available. As I mentioned earlier in the presentation, the earlier you can start documenting the alerts, the events, and the metadata, and

collecting that for future analysis, the better chance you have of developing this machine learning model quickly and effectively. At this point, I'd like to thank everyone for your time, and I'll be sticking around for any questions you may have in the queue. Thank you.
