Protecting Against Insider Threats with Enterprise Architecture Patterns


Andrew P. Moore,
Insider Threat Researcher
CERT 

The 2011 CyberSecurity Watch survey revealed that 27 percent of cybersecurity attacks against organizations were caused by disgruntled, greedy, or subversive insiders: employees or contractors with access to that organization's networks, systems, or data. Of the 607 survey respondents, 43 percent viewed insider attacks as more costly than external attacks, citing not only financial loss but also damage to reputation, critical system disruption, and loss of confidential or proprietary information. For the Department of Defense (DoD) and industry, combating insider attacks is hard because insiders have authorized physical and logical access to organizational systems and intimate knowledge of the organizations themselves. Unfortunately, current countermeasures to insider threat are largely reactive, leaving information systems that store sensitive information inadequately protected against the range of procedural and technical vulnerabilities commonly exploited by insiders. This posting describes the work of researchers at the CERT® Insider Threat Center to help protect next-generation DoD enterprise systems against insider threats by capturing, validating, and applying enterprise architectural patterns.

Enterprise architectural patterns are organizational patterns that span the full scope of enterprise architecture concerns, including people, processes, technology, and facilities. This broad scope is necessary because insiders have authorized access to systems, both online and physical. Our understanding of insider threat stems from a decade of experience cataloging more than 700 cases of malicious insider crime against information systems and assets, including over 120 cases of espionage involving classified national security information.

Our experience reveals that malicious insiders exploit vulnerabilities in the business processes of victim organizations as often as they exploit technical vulnerabilities. Likewise, our data analysis has identified well over 100 categories of weaknesses in enterprise architectures that allowed insider attacks to occur. We have used this analysis to develop an insider threat vulnerability assessment method based on qualitative models of insider IT sabotage and insider theft of intellectual property (IP) that characterize patterns of problematic behavior seen in insider threat cases. We have also applied these models to identify insider threat best practices and technical insider threat controls.

For example, an organization must deal with the risk that departing insiders might take valuable IP with them.  One set of practices and controls that helps reduce the risk of insider theft of IP is based on case data showing that most insiders who stole IP did so within 30 days prior to their forced or voluntary termination.  The pattern describing this set of practices and controls helps balance the costs of monitoring employee behavior for suspicious actions with the risk of losing the organization’s intellectual property. 

Organizations aware of this pattern can ensure that the necessary agreements are in place (IP ownership and consent to monitoring), critical IP is identified, key departing insiders are monitored, and the necessary communication among departments takes place. When an insider resigns or is fired, the organization increases technical monitoring and scrutiny of that employee's activities within the 30-day window around the termination date. Actions taken before and upon employee termination are vital to ensuring that IP is not compromised and that the organization preserves its legal options.
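As a rough sketch of how this window-based pattern might be operationalized (the event data, field names, and threshold below are illustrative only, not part of any CERT tool), a 30-day termination-window check could look like:

```python
from datetime import date, timedelta

# Illustrative sketch: flag access events that fall within the 30-day
# window before an insider's termination date, reflecting case data
# showing most IP theft occurs in that window.
WINDOW = timedelta(days=30)

def flag_window_events(events, termination_date):
    """Return events occurring within 30 days before termination.

    events: iterable of (event_date, description) tuples.
    """
    start = termination_date - WINDOW
    return [(d, desc) for d, desc in events
            if start <= d <= termination_date]

events = [
    (date(2011, 6, 1), "routine file access"),
    (date(2011, 7, 20), "bulk download of design documents"),
]
flagged = flag_window_events(events, termination_date=date(2011, 8, 1))
# Only the July 20 event falls inside the 30-day window.
```

In practice the flagged events would feed into the increased scrutiny described above, alongside the legal agreements and cross-department communication the pattern requires.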

Capturing our understanding of insider threat mitigations as architectural patterns allows us to translate effective solutions into forms useful to engineers who design DoD systems. As part of our research, we are analyzing the subset of insider IT sabotage cases from the CERT insider threat database. We are updating and refining our existing qualitative insider IT sabotage model to include a quantitative simulation capability intended to exhibit the predominant patterns of insider IT sabotage behavior.

We are using a system dynamics approach to model and analyze the holistic behavior of complex problems as they evolve over time. System dynamics modeling and simulation makes it easier for us to understand and communicate the nature of problematic insider threat behavior as an enterprise architectural concern. After validating that the simulated problem model accurately reproduces the historical behavior of the problem, and does so for the right reasons, the next step is to examine the enterprise-level architectural insider threat controls proposed to help mitigate it. Our research will focus on two aspects:

  1. Are those controls effective against insider threats? For example, do the controls mitigate the problematic behavior exhibited in the simulation model?
  2. Do those controls introduce negative unintended consequences? For example, even if the controls are effective against the threat, do they unintentionally undermine organizational trust and reduce team performance?
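For readers unfamiliar with system dynamics, a minimal stock-and-flow sketch may help. The model below is purely illustrative (it is not our sabotage model, and the variable names and rates are invented): a single stock accumulates from an inflow and drains through an outflow, integrated step by step over time.

```python
# Minimal system dynamics sketch (illustrative only, not the CERT model):
# a single stock, "disgruntlement", grows from unmet expectations and
# decays as organizational interventions take effect. Simple Euler
# integration advances the stock over time.
def simulate(steps=100, dt=0.1, unmet_expectation=1.0, intervention_rate=0.5):
    disgruntlement = 0.0
    history = []
    for _ in range(steps):
        inflow = unmet_expectation                     # pressure building the stock
        outflow = intervention_rate * disgruntlement   # relief from interventions
        disgruntlement += dt * (inflow - outflow)      # Euler step
        history.append(disgruntlement)
    return history

history = simulate()
# The stock rises toward its equilibrium, unmet_expectation / intervention_rate = 2.0.
```

Real system dynamics models of insider threat involve many interacting stocks, flows, and feedback loops, but the mechanics of simulation are the same: accumulate flows into stocks over time and observe the resulting behavior.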

A key challenge in our research is the difficulty of testing these controls in an operational environment. One manifestation of this problem is the unknown false positive rates associated with insider threat controls. From the perspective of technical observations and resource usage, most malicious insiders behave as their non-malicious counterparts do. We therefore expect that poorly designed controls will overwhelm operators with false positives. Controls are also hard to test operationally because insider attacks occur relatively infrequently but nevertheless result in huge damages for victim organizations.
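The base-rate problem behind these false positives can be made concrete with a bit of Bayes' rule arithmetic (all numbers below are hypothetical, chosen only to illustrate the effect):

```python
# Hypothetical base-rate arithmetic: even a seemingly good detector
# overwhelms analysts when the malicious-insider base rate is tiny.
def alert_precision(base_rate, true_positive_rate, false_positive_rate):
    """P(malicious | alert) via Bayes' rule."""
    hits = base_rate * true_positive_rate
    false_alarms = (1 - base_rate) * false_positive_rate
    return hits / (hits + false_alarms)

# Suppose 1 in 1,000 insiders is malicious, and a control catches 90%
# of them with a 5% false positive rate.
precision = alert_precision(0.001, 0.90, 0.05)
# precision is roughly 0.018: about 98% of alerts are false positives.
```

This is why knowing the false positive rate of a control, and the base rate of the behavior it targets, matters so much before deploying it operationally.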

To meet these challenges, we are using system dynamics modeling and simulation to identify and test enterprise architectural patterns that protect current DoD systems against insider threat. We are interviewing members of the DoD who have expressed interest in information security controls that mitigate the insider threat. These steps are enabling us to characterize a baseline enterprise architecture, which represents their operational architecture and serves as the starting point for our analysis.

Identified architectural patterns will be applied to modify the baseline architecture to better protect against insider threat.  The basis for establishing the efficacy of the architectural patterns is system dynamics simulation-based testing. The experiments conducted in the simulation environment provide a body of evidence that supports strong hypotheses going into pilot testing within organizations.

Enterprise architectural patterns developed through our research will enable coherent reasoning about how to design—and to a lesser extent implement—DoD enterprise systems to protect against insider threat. Instead of being faced with vague security requirements and inadequate security technologies, DoD system designers will have a coherent set of architectural patterns they can apply to develop effective strategies against insider threat in a more timely and confident manner. Confidence in these patterns will be enhanced through our use of established theories in related areas and the scientific approach of using system dynamics simulation models to test key hypotheses prior to pilot testing. We expect our research results will improve DoD enterprise, system, and software architectures to reduce the number and impact of insider attacks on DoD information assets. 

We will be periodically blogging about the progress of this work.  Please feel free to leave your comments below and we will reply.

Additional Resources:

For more information about the work of the CERT Insider Threat Center, please visit
www.cert.org/insider_threat/

To read a report about preliminary technical controls derived from insider threat data, Deriving Candidate Technical Controls and Indicators of Insider Attack from Socio-Technical Models and Data, please visit
www.cert.org/archive/pdf/11tn003.pdf

To read a report about our insider threat modeling, A Preliminary Model of Insider Theft of Intellectual Property, please visit
www.cert.org/archive/pdf/11tn013.pdf

To read the CERT Insider Threat blog, please visit
www.cert.org/blogs/insider_threat/


6 responses to “Protecting Against Insider Threats with Enterprise Architecture Patterns”

  1. Don O'Neill Says:
    Insider threats comprise over 80% of data breaches experienced. When it comes to insider threats, some throw up their hands and claim that nothing can be done. In fact, more can be done to anticipate, protect against, and detect the employee who becomes an insider threat than can be done against other types of Cyber attackers.

    How is the collision between privacy and security reconciled? The countermeasure to mitigating insider threats applies the principle of trust but verify when it comes to people by tilting towards trust in systems. For those serious about mitigating insider threats, consider the following actions aimed at providing full digital situation awareness:
    1.   Routinely employ selective passive forensics discovery techniques on all employee computer resources all the time.
    2.   Be aware of the privacy implications of this security measure.

    In reviewing the “Common Sense Guide to Prevention and Detection of Insider Threats” report, insider identification was exclusively achieved by means of digital situation awareness of the type included in the selective passive forensics discovery techniques for use on all employee computer resources all the time. The report states that in most cases, system logs were used to identify the insider, including remote access logs, file access logs, database logs, application logs, system file change logs, and email logs.

    The bad actor engaged in insider threat activity, whether as disgruntled employee, hacker, corporate spy, or criminal, has some characteristics that may be useful in detection:
    1.   The individual is known.
    2.   There is an extended dwell time in which to do damage and attract attention.
    3.   There is an extended forensic history providing tips to intent and evidence of wrongdoing.

    The management team adopting this approach will have clearly demonstrated the will to act when it makes a commitment to employ passive forensics on all employee computer resources all the time. Further, making this commitment public may serve as a deterrent at some level. With this public commitment in place, the next step is to ask employees to sign away their right to privacy as a condition of employment... locking in the commitment from top to bottom.

    The approach illustrates the necessity of management will as the key ingredient in drawing a bright line between privacy and security in favor of security. Consequently, the approach is destined to invite pushback from several directions, yet the WikiLeaks release of military operations incident data, State Department diplomatic cables and emails, and the promised business and banking files may tend to strengthen the management will needed to adopt the approach.
  2. Andy Moore Says:
    Don,
    Thank you for your comments. You make some excellent points.

    I agree with you that there is much that organizations can do to reduce their exposure to insider threat. Most organizations need to broaden their perspective regarding the threat. The support of top-level management is necessary for most cybersecurity programs, but especially for insider threat programs, as they involve the coordination of many departments within an organization, including HR, physical security, and legal, as well as IT management. Our Common Sense Guide, which you reference, provides some useful guidance on conducting enterprise-wide risk assessments (Practice 1) and developing an insider incident response plan (Practice 16):

    http://www.cert.org/archive/pdf/CSG-V3.pdf

    As you rightly note, organizations have difficult decisions to make regarding the tradeoffs between privacy and security, especially where employee monitoring and technical detection measures are used. You raise the question of how the collision between privacy and security is reconciled. As you know, there is no easy answer. The standards and regulations differ for the public and private sectors.

    When deciding these issues, an organization’s legal counsel needs to be involved to make sure applicable laws are followed. This is not an easy task since the fast pace of technology innovation means these laws are a moving target. Generally, organizations need to treat employees equitably and monitoring rules need to be applied consistently across the workforce.

    Insider threat detection is an important area for our enterprise architectural patterns work. Employee privacy will be an explicit force to be resolved with other forces in developing these patterns. We have been concentrating more on employee privacy in our recent research and have developed an insider threat vulnerability assessment workbook that we use to interview organizational personnel regarding legal issues relevant to insider threat. Watch for more to come!
  3. Don O'Neill Says:
    The collision between security and privacy may be informed by an exercise in trading off consequences. Trading off consequences is situational. For example, consider the priority ranking of consequences by reputation, economics, mission, and competitiveness.

    The organizational consequences associated with Cyber Security incidents include cleanup costs, lost opportunity costs, recovery costs, loss of availability, loss of trust, and loss of privacy. There are consequences in prioritizing consequences as follows:
    1.   In the reputation scenario, the highest consequences to avoid are loss of trust and loss of privacy followed by lost opportunity and loss of availability and then cleanup and recovery. The financial services sector where trust is all-important fits the reputation scenario.
    2.   In the economics scenario, an organization may place a high value on profitability where the highest consequences to avoid are lost opportunity and loss of availability followed by cleanup and recovery and then loss of trust and loss of privacy. The energy sector fits the economics scenario.
    3.   In the mission scenario, an organization may place a high value on ensuring continuous operation where the highest consequences to avoid are lost opportunity, loss of availability, and loss of trust followed by loss of privacy and then cleanup and recovery. The telecommunications sector fits the mission scenario.
    4.   In the competitiveness scenario, an organization may place a high value on its proprietary information where the highest consequences to avoid are lost opportunity, loss of trust, and loss of privacy followed by cleanup, recovery, and perhaps loss of availability. The e-commerce sector fits the competitiveness scenario.
  4. Jerry Says:
    I don't dispute the stats offered here, since I wouldn't be able to offer competing numbers. But my instinct suggests that this is probably based on how you count "attacks".

    In light of all the recent announcements of foreign states' espionage programs and those of other third parties, I am having a hard time accepting the data. Just look at the past weekend: 350,000 Korean devices hacked. The recent story of China's military demonstrating their cyber warfare software for a CCTV documentary is incredible. More incredible is that, days after the news broke, the story involving <a href="http://blog.thehigheredcio.com/2011/08/23/chinas-cyber-warfare-involving-uab-the-response/">UAB's computers</a> is still unfolding. Not to mention WikiLeaks and the work of Anon.

    So call me skeptical on how we are doing the counting.
  5. Andrew Moore Says:
    Prioritizing consequences is an essential aspect of risk management. The framework you suggest looks like a useful one. I do note, however, that most of the consequences you list are impacts on the organization, whereas loss of privacy is an impact on the employee.

    The collision between security and privacy comes about because the organization's interests can differ from the employee's interests. Applicable laws limit what an organization can do to protect its interests.

    An organization needs to consider these competing interests as it decides how to effectively manage its risks.
  6. Andrew Moore Says:
    Jerry,
    Thanks for your message. You bring up a good point on the scope of our work.

    We are currently focused strictly on the malicious insider, which we define as a current or former employee, contractor, or other business partner who
    • has or had authorized access to an organization’s networks, systems, or data and
    • intentionally exceeded or misused that access in a manner that negatively affected the confidentiality, integrity, or availability of the organization’s information or information systems.

    Furthermore, the statistics that we generally cite include only insider attacks in which the insider was prosecuted in a U.S. court and was found guilty of, or pleaded guilty to, the crime. This provides a standard indication of the veracity of the details of the crime and of the insider's culpability.

    Of course, this is just the tip of the iceberg. The cases that are not prosecuted, even those that may never be discovered, are of interest in understanding the full scope of the problem. However, these cases are hard to get at, and it is sometimes hard to discern fact from fiction in the case details.

    The 2011 CyberSecurity Watch survey indicates that over three-quarters of insider intrusions are handled internally without legal action or law enforcement.

    http://www.cert.org/archive/pdf/CyberSecuritySurvey2011Data.pdf

    Filtering out the low-impact crimes, two of the three top reasons cited for choosing to handle insider incidents internally were that
    • they lacked the evidence to be able to prosecute (40%) and
    • they could not identify the individual responsible for the crime (39%).
    Well below those reasons for not reporting were concerns related to negative publicity (12%) and liability (8%).

    The reasons for not reporting are potential leverage points for identifying enterprise architectural patterns to mitigate these kinds of crimes in the future. The evidence indicates that patterns related to forensic evidence collection and handling, and to auditing and logging of user actions, could improve the reporting and subsequent handling of insider incidents.
