
The Darkness and the Light

Jan 04, 2021 | Hi-network.com

Introduction

The psychoanalyst Carl Jung once said, "One does not become enlightened by imagining figures of light, but by making the darkness conscious. The latter procedure, however, is disagreeable and therefore not popular."

With a quote as profound as this, one feels obligated to start by saying that workload security isn't nearly as important as the personal enlightenment Jung points to. The two are admittedly worlds apart. Yet, if you'll allow it, I believe there is wisdom here that can be applied to the situation we find ourselves faced with: namely, reducing our business risk by securing our workloads.

The challenge

Many organizations seek an acceptable balance between the lowest possible spend and the highest possible value. Any business not following this general guideline may soon find itself out of cash. A common business practice is to perform a cost-benefit analysis (CBA). Many even take risk and uncertainty into account by adding sensitivity analysis on the variables in their risk assessment as a component of the CBA. However, well-meaning as many folks are, they often focus on the wrong benefit. With security, the highest benefit is usually the lowest risk, but again one must ask, "What are our potential risks?"
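
To make that concrete, here is a minimal sketch of a CBA for a single security control, with a simple sensitivity sweep over the risk variables. All of the figures, probabilities, and the assumed 60% risk reduction are hypothetical, chosen only to illustrate the arithmetic.

    # A minimal sketch of a cost-benefit analysis (CBA) for one security
    # control, with a sensitivity sweep over the uncertain risk variables.
    # All figures and the assumed 60% risk reduction are hypothetical.

    def annual_loss_expectancy(breach_probability, breach_impact):
        """Expected yearly loss: probability of a breach times its cost."""
        return breach_probability * breach_impact

    def net_benefit(control_cost, prob_before, prob_after, breach_impact):
        """Risk reduction delivered by the control, minus what it costs."""
        risk_reduced = (annual_loss_expectancy(prob_before, breach_impact)
                        - annual_loss_expectancy(prob_after, breach_impact))
        return risk_reduced - control_cost

    # Sensitivity analysis: vary the estimates we are least certain about and
    # see whether the conclusion (deploy the control or not) survives.
    for prob_before in (0.10, 0.25, 0.50):
        for impact in (500_000, 2_000_000):
            value = net_benefit(control_cost=150_000,
                                prob_before=prob_before,
                                prob_after=prob_before * 0.4,   # control cuts risk by 60%
                                breach_impact=impact)
            print(f"p={prob_before:.2f}, impact=${impact:,}: net benefit ${value:,.0f}")

The point of the sweep is that the decision should hold up across the plausible range of your risk estimates, not just at a single cherry-picked number.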

Often, when risks are evaluated, folks tend to ask questions from the perspective of the outside looking in: who do we want to let through our perimeter defenses, and where do we want them to be able to go? These questions often get answered with something as nebulous as "our employees should be able to access our applications." Even when they get answered in more detail, perhaps specifying which groups of users need access to which applications, the askers often don't realize that, while not entirely meaningless, these are fundamentally the wrong questions.

The perspective those questions come from is where the failure begins. Deep Throat's dying words to Scully are perhaps the most appropriate here, and in fact the very premise from which we shall begin: "Trust no one."

Trust no one

Most readers will be familiar with the industry push toward Zero Trust, and while that isn't the focus of this article, certain aspects of the concept are quite pertinent to our topic of exposing potential darkness in our systems and policies: aspects such as not trusting yourself, or the well-configured security constructs you've put in place.

The questions to start by asking yourself are:

  • If your organization were compromised, how long would it take you to know?
  • Would you ever know at all?
  • Do you trust your existing security systems and the team that put them in place?
  • Do you trust them so much that you don't watch your systems closely or set triggers to alert you to undesired behavior?

Most folks believe they are quite secure, but like most beliefs, this comes from the amygdala, not the prefrontal cortex. In other words, it is based on feeling, not on rational, empirical data backed by penetration-tested proof.

I spent a decade helping folks understand the fundamentals necessary to take and pass the CCIE Voice (later Collaboration), CCIE Security, and CCIE Data Center exams. Often this would look like me and 15-20 students holed up in some hotel meeting room in some corner of the globe for 14 days straight. Often, during an otherwise quiet lab time, someone would ask me to help them troubleshoot an issue they were stuck on. Regardless of the platform, I'd ask them if they could go back and show me the basics of their configuration. Nearly every time the student would assure me that they had checked those bits, and everything was correct. They were certain the issue was some bug in the software. Early on in my teaching career I'd let them convince me and we'd both spend an hour or more troubleshooting the complex parts of the config together, only to at some point go back and see that, sure enough, there'd be some misconfiguration in the basics.

As time went on and I gained more experience, I found it was crucial to short-circuit this behavior and check their fundamentals first. When they would inevitably push back, saying their config was good, I'd reply with, "It's not you that I don't trust, it's me. I don't trust myself, so if you would just be so kind as to humor me and show me, I'd be truly grateful." This sort of 'assuming the blame' would disarm even the most ardent detractor. After they'd humored me and gone back to review from the beginning, we'd both spot the simple mistake that anyone could just as easily have made, and they'd sheepishly exclaim something such as, "How did that get there!? I swear I checked that, and it was correct!" Then it would hit them that perhaps they actually did make a mistake, and they would go on to fix it. What was far more important to me than helping them fix this one issue was helping them learn not to trust themselves, and in so doing, begin a habit that would go on to benefit them in the exam and, I'd like to believe, in life. What they likely didn't know was how much this benefitted me. It reinforced my belief in not trusting myself, and instead setting up alerts, triggers, and even mnemonics that always forced me to go back and check the fundamentals.

Lighting up the darkness

So, how does all of this apply to workload protection?

Organizations have many applications, built by many different teams on many different platforms, running on many different operating systems and patch levels, with different runtimes and calling different libraries or classes. Surprisingly, many of these are often not well understood by those very teams.

Crucial to business security is understanding the typical behavior of an organization's workloads. Once that behavior is understood, we can begin to create policy around each one. However, policy alone is not enough to be trusted. Beyond implementing L4 firewall rules on each workload, it's important to closely monitor all activity happening on it. Watching the OS, the processes, the file system, users' shell commands, privilege escalation from a user login or a process, and other similar workload behaviors is key to knowing what's actually happening, rather than trusting what should be happening.
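
As a minimal sketch of one such behavioral check, consider flagging a process that runs as root even though the login session that spawned it was unprivileged. The event format below is hypothetical; a real agent would source these events from something like auditd or a forensics sensor.

    # Hypothetical process events: flag privilege escalation where a process
    # runs as root (euid 0) but the originating login session was unprivileged.

    def detect_privilege_escalation(process_events):
        alerts = []
        for event in process_events:
            if event["euid"] == 0 and event["login_uid"] != 0:
                alerts.append(
                    f"ALERT: {event['exe']} (pid {event['pid']}) runs as root "
                    f"but was launched from uid {event['login_uid']}"
                )
        return alerts

    sample_events = [
        {"exe": "/usr/bin/vim", "pid": 4211, "euid": 1000, "login_uid": 1000},
        {"exe": "/tmp/.helper", "pid": 4388, "euid": 0,    "login_uid": 1000},
    ]

    for alert in detect_privilege_escalation(sample_events):
        print(alert)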

An example might be someone cloning a git repo containing a post-exploitation framework, something such as Empire or PoshC2, to use once they gain initial access after exploiting some vulnerability. From there they might test different techniques to elevate their privileges, perhaps with a valid-account attack, or by hijacking a software process in an exploitation-for-privilege-escalation attack.
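
A rough illustration of watching shell-command telemetry for exactly that scenario, an actor pulling down a known post-exploitation framework, might look like the following. The command history and the watch list are placeholders, not a complete indicator feed.

    # Scan shell-command telemetry for clones of known post-exploitation
    # frameworks. SUSPECT_NAMES is an illustrative watch list only.

    SUSPECT_NAMES = ("empire", "poshc2", "powersploit", "mimikatz")

    def flag_suspicious_clones(shell_commands):
        for user, command in shell_commands:
            lowered = command.lower()
            if "git clone" in lowered and any(name in lowered for name in SUSPECT_NAMES):
                yield f"ALERT: user '{user}' ran: {command}"

    history = [
        ("alice",   "git clone https://internal/repos/billing-api.git"),
        ("svc-web", "git clone https://github.com/BC-SECURITY/Empire.git"),
    ]

    for alert in flag_suspicious_clones(history):
        print(alert)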

This isn't by any means a new sort of attack. Nor is the knowledge that workload behaviors must be actively monitored.

So why then does this remain such a problem?

The challenges have been in collecting logs at scale, parsing them in the context of every other workload's actions, and garnering useful insights. While central syslog collection is necessary, there remain some substantial drawbacks, primarily with that last bit about context. Averting so-called zero-day attacks requires live, contextual monitoring, such as is achieved through this type of active forensic investigation.
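
Here is a small sketch of why cross-workload context matters: a single failed login or port probe on one host is noise, but the same source touching many workloads inside a short window is a pattern worth alerting on. The tuple format and threshold are illustrative only.

    # Correlate centrally collected events across workloads: alert when one
    # source IP has touched at least `threshold` distinct workloads.

    from collections import defaultdict

    def correlate_across_workloads(events, threshold=3):
        """events: (timestamp, workload, source_ip) tuples gathered centrally."""
        touched = defaultdict(set)
        for _timestamp, workload, source_ip in events:
            touched[source_ip].add(workload)
        return {ip: sorted(hosts) for ip, hosts in touched.items() if len(hosts) >= threshold}

    events = [
        (1609718400, "web-01", "203.0.113.7"),
        (1609718405, "web-02", "203.0.113.7"),
        (1609718409, "db-01",  "203.0.113.7"),
        (1609718600, "web-01", "10.0.4.22"),
    ]
    print(correlate_across_workloads(events))
    # {'203.0.113.7': ['db-01', 'web-01', 'web-02']}

No single workload's syslog would have shown this pattern on its own; only the shared context does.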

A better source of light

How do we cast the proper light on only activity we're interested in?

How do we help our workloads have a sort of collective conscious (again, if you'll allow the rough metaphor)?

Cisco Secure Workload is based primarily on distributed agents installed on every workload, constantly sending telemetry back to a central cluster. Think of them as Varys's informants: "My little birds are everywhere." -The Master of Whisperers, GOT

These informants play a dual role: first, reporting back to the cluster what I like to call the 3 P's: Packages (installed), Processes (running), and Packets (Tx/Rx'd); and second, obtaining from the cluster the set of firewall rules specific to each workload. They also gather the very type of forensic activity we've been discussing, and they do so with the collective knowledge and context of every other workload's behavior.
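
Conceptually, the telemetry half of that role could be pictured as below. This is not the actual Secure Workload agent protocol; the fields are invented purely to show the shape of the "3 P's."

    # A conceptual snapshot of the "3 P's" an agent could report for its
    # workload: Packages (installed), Processes (running), Packets (Tx/Rx'd).

    import json
    import socket
    import time

    def build_telemetry_snapshot():
        return {
            "host": socket.gethostname(),
            "timestamp": time.time(),
            "packages":  [{"name": "openssl", "version": "1.1.1k"}],
            "processes": [{"pid": 1234, "exe": "/usr/sbin/nginx", "user": "www-data"}],
            "packets":   [{"src": "10.0.1.5", "dst": "10.0.2.9", "dport": 443, "bytes": 18432}],
        }

    # In a real deployment the agent would ship this to the central cluster;
    # here we simply print it.
    print(json.dumps(build_telemetry_snapshot(), indent=2))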

Cisco Secure Workload gives us great power in defining the behaviors we wish to monitor for, and we can draw from a comprehensive pre-defined list, as well as write our own.
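
Purely to illustrate the idea of pre-defined plus user-written behavior rules (this is not Secure Workload's actual rule syntax), each rule can be thought of as a named predicate over a forensic event:

    # Illustrative only: behavior rules expressed as named predicates over a
    # hypothetical forensic event. Not Secure Workload's real rule language.

    RULES = {
        "shell spawned by web server": lambda e: (
            e.get("parent_exe", "").endswith("nginx") and e.get("exe", "").endswith("sh")
        ),
        "package manager run outside change window": lambda e: (
            e.get("exe", "").endswith(("apt", "yum")) and not e.get("in_change_window", False)
        ),
    }

    def matching_rules(event):
        return [name for name, predicate in RULES.items() if predicate(event)]

    print(matching_rules({"parent_exe": "/usr/sbin/nginx", "exe": "/bin/sh"}))
    # ['shell spawned by web server']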

Aggressive Disclosure

Some newer regulations require that breaches of an organization be reported quickly, such as GDPR, where reporting is mandated within 72 hours of each occurrence. Most regulations don't require reporting that aggressive, yet organizations are still being taken to task over inadequate measures, as in the case of HBR's report on a hotel chain breach.

Hackers had been camping out for four years in the workloads of a smaller hotelier that the chain acquired. FOUR YEARS! That is an awfully long time not to know that you've been pwned. What I wonder is, how many more organizations have breached workloads today with no knowledge or insight into them? Complete darkness, one might say.

As Jung might have appreciated, it's time to make that darkness conscious.

Key takeaways

  1. Don't rush security policies. Get key stakeholders in the same virtual room and discuss business, application, and workload behavior. Ask questions. Don't ask only from a grounding in known technological capabilities; ask novel questions. Ask behavioral questions such as "How should good actors behave? Who are those good actors? What bad behavior should we be monitoring and alerting on?" Ensure wide participation, with folks from infosec, governance, devops, app owners, cloud, security, and network teams, to name a few.
  2. Evaluate carefully the metrics you are using for CBAs and, if you're not sure you are using the best ones, ask a trusted advisor (someone who has been down this path many times) what you should be measuring.
  3. Trust no one. Not yourself, not the security policies put in place. Test and monitor everything.
  4. Cast a bright, powerful light into your workload behavior. Deploy little birds to every workload and have them report behavioral telemetry back to a central, AI-driven policy engine such as Tetration. Turn all of your workloads, regardless of whether they live in a single data center or are spread out across 15 clouds and DCs, into a single consciousness.
  5. Be sure you can meet current and future disclosure laws by reporting in less time than the regulations call for. Even if you aren't subject to such regulations, you want this knowledge for yourself in as short a time as possible so that you can take meaningful action to remediate.

Be vigilant in monitoring and revisiting the basics often. By staying humble, questioning everything, and going back to the basics, you likely will find ways of tightening security while simplifying access.

Learn more about Cisco Secure Workload

 


Tags: Zero Trust, Tetration, Cisco Secure Workload
