Prismatic Security

Moses Harris argued that Isaac Newton's studies of primary colors were incorrect: there are three primary colors, Red, Yellow, and Blue, not the seven colors Newton had originally proposed in his theory. Before you think this is an article about Red, Blue, and Purple teams, I will stop now and say no, it is not. I will attempt to explore and convey my thoughts about what I have seen as a series of miscommunications in the security industry. These miscommunications can lead to poor education for their intended audiences...

I recently had a chance to attend a first-time security conference called "Structure Security". Attending was a last-minute decision, but the content was interesting enough to draw my attention. Given my experience at previous conferences such as Derbycon, REcon, BSides, and Defcon, Structure Security leaned more toward the RSA Conference side of things, or what some would describe as "SuitCon". The target audience was geared toward higher-level individuals in organizations, such as the CIO, CSO, and CISO; you know, the whole C-suite. This was apparent from the talk descriptions and, mainly, from the price point to enter the conference. That being said, it was a good conference, worth attending to meet, share knowledge, and network with C-level folks from larger and smaller companies. As with any conference, some talks were a hit; others needed a bit of work.

The themes I found amongst these talks were: "How do you validate the need for more resources to your CFO, CEO, or board of directors?", "How do you break the tension between product/engineering teams and security?", and "Why didn't you announce you were compromised years ago?" These questions must be addressed with clear transparency so others may benefit from your experiences.

In security, validating why you need resources (a bigger budget, more personnel, etc.) can sometimes be challenging. Articulating risk and threats to non-security personnel, or to personnel who have graduated into management (CISO, CIO, CSO), is a very important task, and it can also be challenging. There is a crazy amount of security jargon that many will not understand or retain. Think about the Open Web Application Security Project (OWASP) Top Ten, for example. Telling individuals that you are protecting against XSS, CSRF, and SQLi, and that you need to cover the cost of a WAF or get more man-hours to implement a better solution, would not mean much without some "formal" context for the intended individuals.
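
To see why the jargon alone falls flat, it helps to have one concrete example in your back pocket. The sketch below is a minimal, hypothetical illustration of SQLi, one of the acronyms above; the table and inputs are made up, and the point is simply to show what a string-built query leaks versus a parameterized one.

    import sqlite3

    # Hypothetical users table, for illustration only.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")

    user_input = "' OR '1'='1"  # a classic SQLi payload

    # Vulnerable: user input is concatenated straight into the SQL string,
    # so the payload rewrites the WHERE clause to match every row.
    vulnerable = f"SELECT name FROM users WHERE name = '{user_input}'"
    print(conn.execute(vulnerable).fetchall())  # [('alice',)] -- leaked

    # Safer: a parameterized query treats the input as data, not SQL.
    safe = "SELECT name FROM users WHERE name = ?"
    print(conn.execute(safe, (user_input,)).fetchall())  # [] -- no match

A single before-and-after like this often lands better with a non-technical audience than the acronym ever will.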

Another example: asking for a budget increase so your staff can train up on host-based forensics and hardware capabilities, without elaborating on how this affects the team's incident response procedures. It's our duty as security engineers to document and elaborate on why we would like increased resources, so that we can tie the request back to the overall business impact on the organization.

To you it may seem clear, but I highly recommend using older incidents or use cases to describe your needs for resource allocation. This can come in the form of charts, data tables, previous incident response summaries, tickets, etc., which can describe your current solution and highlight your problems.
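
As a minimal sketch of what that rollup can look like, the snippet below uses entirely made-up ticket data to produce the kind of per-category count and hours table that can anchor a budget conversation; in practice, the records would come from an export of your own ticketing system.

    from collections import defaultdict

    # Hypothetical incident tickets; swap in your ticketing system's export.
    tickets = [
        {"category": "phishing", "hours": 6},
        {"category": "phishing", "hours": 4},
        {"category": "sqli", "hours": 12},
        {"category": "malware", "hours": 9},
    ]

    # Roll tickets up into counts and total response hours per category.
    summary = defaultdict(lambda: {"count": 0, "hours": 0})
    for t in tickets:
        summary[t["category"]]["count"] += 1
        summary[t["category"]]["hours"] += t["hours"]

    print(f"{'category':<10}{'count':>6}{'hours':>7}")
    for category, s in sorted(summary.items()):
        print(f"{category:<10}{s['count']:>6}{s['hours']:>7}")

Even a tiny table like this turns "we are stretched thin" into a concrete number (here, phishing alone accounts for ten made-up response hours).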

Bringing those events to the forefront, you can attempt to drill down into the threat vectors related to problems that would cause the organization damage. This will be different for each organization, so keep it simple and think in terms of (C)onfidentiality, (I)ntegrity, and (A)vailability compromises, also known as the CIA model. With this data highlighted, you can begin to prepare a full formal presentation for the right target audience.
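
One simple way to do that drill-down is to tag each incident category with the CIA property it most directly compromised and see where the damage clusters. The mapping below is illustrative, not a standard taxonomy; every organization will draw these lines differently.

    from collections import Counter

    # Illustrative mapping from incident category to the CIA property
    # it most directly compromised; adjust for your own organization.
    cia_impact = {
        "phishing": "Confidentiality",  # stolen credentials expose data
        "sqli": "Integrity",            # injected queries can alter records
        "ddos": "Availability",         # the service is knocked offline
    }

    incidents = ["phishing", "sqli", "phishing", "ddos", "sqli"]
    print(Counter(cia_impact[i] for i in incidents))
    # Counter({'Confidentiality': 2, 'Integrity': 2, 'Availability': 1})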

These are just a few primary suggestions that can be used to start your prismatic color wheel. In some cases this might not be enough to justify the resource allocation, so you need to make sure the decision has been documented and that the right teams or people are accountable for these "acceptable" risks. This can also, in some cases, cause cross-team tension...

Breaking the tension between product development or engineering and security can be a challenge in itself. There are a handful of reasons teams may have tension, and they seem to stem from agile versus security. On one hand, engineers want to be able to ship products without waiting for a long, drawn-out security review; on the other hand, security wants to make sure it has mapped the threat landscape and applied the correct security measures.

Over time, this creates a very strong and jaded habit amongst engineers and security teams, one that eventually becomes subconscious. The subconscious mind can then take over, causing high-stress conversations in which ego consumes its role. I was also fortunate enough to attend a great workshop this past week on "unconscious bias", given by Natalie Johnson from Paradigm. I would like to think of myself as a very open and unbiased person (which is a biased statement in itself), but we are all biased, and there is no way around forming biases. After seeing countless security failures over my career, I have developed a default "No" policy for situations where engineers present risks to the organization that I am not comfortable accepting.

If I were to tell every individual no without an elaborate explanation, I would not be a popular character at work. Instead, articulating the risks around the proposed product helps individuals understand the security measures and promotes collaboration toward more secure solutions. To help curb the feeling of a blanket "No" policy, one must also remain open to exposure, which can promote change and slowly correct one's unconscious bias through repetition and experience.

Reiterating risk and accountability will aid your mission, but it doesn't mean you will be safe. Organizations and individuals are compromised daily without any sort of notification. Even in some of the most skilled security organizations, threats go unnoticed. Once individuals get wind of a compromise, they often ask, "Why am I just finding out now and not when it happened?"

Problems in security can come from an "inheritance problem": implementations put in place before the current security personnel started the job. Other problems come from product owners selfishly de-prioritizing security requirements because of "business impact needs". So when a compromise happens because of the former issue, it may in some cases trace back to the latter. Over time, this compounds a tremendous amount of technical debt that security teams must go back and identify, which typically leads to finding compromised items.

When responding to questions like "Why didn't you report this incident back in 2014?", being transparent and stating that there were no indications at the time might be the right answer. Follow up by educating the audience about the attacker lifecycle, the risk management policies, and the accountability for the decisions that led to the compromise affecting the CIA model.

When security teams and individuals within an organization start to collaborate more, and security can illustrate its threat landscape the way the Secret Service does when identifying threats to the president, that is when we can start to alert the appropriate stakeholders in a timely manner. Better yet, you will have documented events that can be used to explain the timeline leading up to the discovery of the compromise.

To recap, security is composed of "primary colors" that come in many shades, tints, and tones. If we can identify and resolve the primary issues, we can start to build a better prismatic circle, also known as Harris's colour wheel. This structure will help all parties involved comprehend the subject matter at hand, so that we can publish our own "Natural System of Colours".