SMART GRID CYBERSECURITY:
JOB PERFORMANCE MODEL REPORT
NBISE TECHNICAL REPORT
SGC WORKING GROUP 12-01
DRAFT
DAVID H. TOBEY, PH.D.
DIRECTOR OF RESEARCH
MAY, 2012
Copyright © 2011, 2012 National Board of Information Security Examiners.
The work reported herein was supported under contract to Pacific Northwest National
Laboratory in support of the Office of Electricity Delivery and Energy Reliability division of the
U.S. Department of Energy.
The findings and opinions expressed here do not necessarily reflect the positions or policies of
Pacific Northwest National Laboratory or the U.S. Department of Energy.
To cite this report, please use the following as your APA reference: Tobey, D. (2012). Smart
Grid Cybersecurity: Job Performance Model Report (NBISE SGC Working Group Report
12-01). Idaho Falls, ID: National Board of Information Security Examiners.
TABLE OF CONTENTS
Introduction
Impetus for the Study
Job Analysis: Early Attempts at Performance Modeling
From Job Analysis to Competency Modeling
Adapting JTCA Models to Understand Cybersecurity Jobs
Developing a Job Performance Model
Panel Composition
Elicitation Methodology
Job Description Report
Job Classification
Job Roles and Responsibilities
Goals and Objectives
Task Analysis
Definitions
Role and Vignette Selection
Goal Selection
Elicitation of Responsibilities for the Selected Roles and Goals
Task Creation
Job Analysis Questionnaire
The Competency Grid
Deriving a measurable definition of intelligence and competence
Criticality-Differentiation Matrix
Literature Review
Preliminary list of job roles
Integrating the job roles and classification with the NICE IA Framework
Vignettes
Process Steps Defining Functional Responsibilities
Job Role Involvement in Master Vignettes
Goals and Objectives for Assessing Performance
Task Analysis
Participants and Respondents
Demographic Survey Responses
Ratings of Frequency and Importance by Level of Expertise
Differentiating Performance on Critical Tasks
CONCLUSION
Implications for Workforce Development
Implications for Future Research
Limitations of the Study
REFERENCES
APPENDICES
LIST OF FIGURES
Figure 1: Best Practices in Competency Modeling
Figure 2: Example Job Performance Model
Figure 3: The Competency Box
Figure 4: Potential Performance Analysis (PPA) SOURCE: Trafimow & Rice (2009)
Figure 5: O*NET Methodology Timeline
Figure 6: JPM Process Methodology Timeline
Figure 7: VivoInsight help video showing how to select an action verb
Figure 8: VivoInsight help system showing how to add a new task
Figure 9: Completed task entry awaiting role assignment
Figure 10: Criticality-Differentiation Matrix
Figure 11: Histogram of Frequency Ratings. The y-axis is the number of items rated in the range listed on the x-axis
Figure 12: Future Research Program
LIST OF TABLES
Table 1: Changes in panel composition
Table 2: Initial Bibliography
Table 3: Preliminary list of job roles
Table 4: Mapping SGC Job Roles to the NICE IA Framework
Table 5: Operational Excellence Vignettes
Table 6: Threat or Vulnerability Response Vignettes
Table 7: Master Vignettes (continued)
Table 8: Master Vignette Process Stages
Table 9: Nominal number of job roles per master vignette
Table 10: Percentage of role involvement in each master vignette
Table 11: Primary Goals
Table 12: Important Goals
Table 13: PRISM Definition for Important Goals
Table 14: Distribution of Respondents
Table 15: Size of Respondent Organization
Table 16: Job Titles of Respondents
Table 17: Levels of Experience
Table 18: Overall Trends
Table 19: Summary of Preliminary JAQ Results by Level of Expertise
Table 20: Criticality and Differentiation Scores by Decile
Table 21: Preliminary Fundamental Tasks (Ordered by Task ID)
Table 22: Preliminary Differentiating Tasks (ordered by Task ID)
INTRODUCTION
Impetus for the Study
Faced with an aging, out-of-date power infrastructure, the U.S. has embarked on an epic
journey of grid modernization and expansion that will result in a fully digital, highly adaptable,
demand-driven smart grid. But grid modernization and smart grid initiatives could be greatly
hampered by the current lack of a viable workforce development framework for cyber security
and infrastructure risk management personnel. Grid modernization efforts require advanced and
continually maturing cyber security capabilities; without them, the power system will be neither
resilient nor reliable.
With thousands of generation plants and many thousands of miles of delivery lines, the
North American power grid presents a vast and ever-growing cyber-attack surface that may never
be fully mapped and documented from a vulnerability, asset, and risk management standpoint.
The mapping of assets, vulnerabilities, and dependencies will continue indefinitely, and in the
meantime the grid grows increasingly vulnerable. Until comprehensive security automation and a
complete threat-response reference library exist, human cyber-security experts and operational
staff must take up the slack. Protecting the smart grid network and core SCADA control systems
requires a challenging blend of control engineering and security that can only be executed by
senior security engineers who possess a special mix of general abilities, acquired skills, and
learned knowledge.
Government and industry executives now largely agree that the deficit of cybersecurity
human resources is approaching a crisis point as grid complexity increases and a generation of
grid security expertise retires. It traditionally takes many years to mature a cybersecurity
worker's knowledge, skills, and performance. Senior cybersecurity professionals possess a special
mix of information security (InfoSec), technology infrastructure, risk, operations, social,
analytical, and organizational skills. To reach peak performance, senior security engineers must
first become highly proficient IT professionals. Years of accumulated IT knowledge are then
enhanced with years of additional security experience, which eventually allows mastery of
forensics, risk management, and business impact principles. This path ultimately allows a
seasoned InfoSec expert to perform highly skilled actions that protect grid control systems and
infrastructure in a way that is aligned with organizational and regulatory policies and goals.
Recently, NERC’s Long-Term Reliability Assessment Report noted that the potential loss
of this expertise as the industry’s workforce ages poses a long-term threat to bulk system
reliability. There is a growing need not only to replace large numbers of departing cybersecurity
workers, but also to greatly augment the workforce with new skills and advanced capabilities.
Thus, the energy industry needs a large-scale transfer of expertise, plus increased agility and
productivity in the cybersecurity workforce, to bridge the generation gap.
The U.S. Department of Energy recognized that the electric industry needs a workforce
development program that can make up for the accelerating loss of security professionals,
while at the same time building substantial new cyber security expertise to protect the
expanded attack surface of the modernized smart grid. Accordingly, in the spring of 2011 a
contract was awarded to Pacific Northwest National Laboratory to develop a set of guidelines
for the development of a certification program for smart grid cybersecurity specialists. The
initial scope was defined as the operational security functions of day-to-day operations in
smart grid environments, excluding engineering and architecture. The project would examine
the technical, problem-solving, social, and analytical skills used by senior cyber security
staff in the daily execution of their responsibilities. The primary purpose is to develop a
measurement model for assessing technical and operational cybersecurity knowledge, skills, and
abilities.
A workforce development program must be holistic in the way it measures, develops and
supports cybersecurity expertise (Assante & Tobey, 2011). Holistic in this context means:
● Addressing all human factors of accelerated expertise development (book-knowledge,
hands-on skills, innate abilities, cognitive/behavioral influences)
● Including all phases of the workforce development cycle (assessment, training,
certification, re-testing, professional development, communities of practice, etc.)
Existing cyber-security training and certification programs focus on testing the
book-learning of security engineers, who often study preparation guides before taking the
certification tests. Certification bodies (ISACA, CompTIA, (ISC)², and many others) provide a
gauge of intellectual knowledge in specific cyber security areas. However, existing certification
solutions do not measure or certify the real-world, multi-discipline problem solving and the
social and intuitive analytical skills that senior security engineers use in the daily battle to
protect the grid. Even the most advanced "performance-based" certifications (e.g., the GIAC GSE)
have not kept up with the latest research advances in the cognitive science of human expertise
development. Traditional security certification organizations alone cannot create a
cyber-security workforce adequate to protect the modernized smart grid.
In addition to the lack of comprehensive assessment and testing, current approaches do
not provide a blueprint or roadmap for a lifecycle program of workforce expertise management.
Current assessment and evaluation services have these deficiencies:
● Competency measurement gap (what competencies do we need to test for?)
● Assessment gap (how should we conduct tests so they are holistic and accurate,
differentiating between simple understanding of concepts and skilled performance of
actions that resolve problems quickly despite the distractions and stress surrounding
an attack?)
● Training gap (how do we prepare professionals for the tests and the real world?)
● Certification gap (what is the best framework for security certifications that integrates both
knowledge and skill while predicting the constraints of innate abilities on performance?)
● Support gap (how do we support the certified cyber-security elite with advanced problem
solving tools, communities of practice, canonical knowledge bases, and other
performance support tools?)
NBISE was formed to leverage the latest advances in performance assessment and
learning science toward the solution of one of the United States’ most critical workforce
shortages: cybersecurity professionals. NBISE’s mission, working with program participants, is
to analyze and design assessment instruments and practical challenges that are both fair and
valid, and to enhance confidence in skill measurement instruments as predictors of actual job
performance. In fulfilling this mission, NBISE is developing methodologies for defining and
measuring the factors that determine successful job performance. Because this project required
new techniques to support workforce development that both imparts knowledge and
includes the practice exercises that foster skill development, NBISE was selected to partner with
Pacific Northwest National Laboratory to conduct a three phase study for the Department of
Energy.
This report presents a cumulative analysis of the multi-phase study to
produce a comprehensive Job Performance Model of Smart Grid Cybersecurity. The first phase
will produce an exploratory job performance model based on a factor analysis of responses to a
Job Analysis Questionnaire (JAQ), culminating in the Smart Grid Cybersecurity Job Analysis
Report. During this phase, critical incidents (Flanagan, 1954; Klein, Calderwood, & MacGregor,
1989) captured as a series of vignettes, or deconstructed stories (Boje, 2001; Tobey, 2007), of a
significant or potentially significant event will be transformed into a detailed list of goals,
objectives, responsibilities, and tasks for the functional and job roles involved in smart grid cyber
security. The second phase will seek to validate the exploratory model in laboratory simulation
studies of these critical incidents. Protocol analysis (Ericsson, 2006; Ericsson & Simon, 1980,
1993) and confirmatory factor analysis (T. A. Brown, 2006; Long, 1983) will be used to validate
the exploratory job performance model. The third phase will involve analyzing data as it is
generated in the previous phase to produce and validate a measurement model for calculating
potential performance on the job (Tobey, Reiter-Palmon, & Callens, forthcoming). Based on all
three phases a final report will provide guidance on the development of assessment instruments,
training modules, and simulation practice environments that may be used to accelerate
proficiency in smart grid cyber security jobs.
The sections below discuss the science behind the development of job performance
models, describe the method used to develop a job description report and task analysis, and
provide preliminary results for the definition of functional and job roles based on a job
classification analysis.
JOB ANALYSIS: EARLY ATTEMPTS AT PERFORMANCE
MODELING
Job analysis is a method by which we understand the nature of work activities by
breaking them down into smaller components (Brannick, Levine, & Morgeson, 2007;
McCormick, 1979). As the name implies, many job analyses focus primarily on the attributes of
work itself, and then link these work attributes to job-relevant knowledge, skills, abilities, and
other work-related characteristics including attitudes and motivation (KSAOs). Collectively, the
KSAOs represent the competencies required for a job. An individual employee would need to
possess these competencies to successfully perform the work (Shippmann et al., 2000). The
purpose of the job analysis is to provide detailed information about the job that will guide
various aspects related to managing performance such as the development of training materials,
testing for selection and competency evaluation, and developmental plans.
Information about the job may be gained from focus groups, surveys, and interviews of
job incumbents, supervisors, and others who are familiar with the day-to-day task requirements of
the job. Sampling subject matter experts (SMEs) with different levels of expertise and who work
in different organizational settings allows for increased generalizability across domains
(Morgeson & Dierdorff, 2011).
Historically, work-oriented job analyses began by using general work activities (GWAs)
as a framework to capture position-specific job tasks. These GWAs are broad collections of
similar work activities such as “interacting with computers” or “getting information” (Jeanneret,
Borman, Kubisiak, & Hanson, 1999). GWAs were designed to be applicable to all work domains
and allow for comparisons across dissimilar jobs (Reiter-Palmon, Brown, Sandall, Buboltz, &
Nimps, 2006). Work tasks, on the other hand, are more specific to a given occupation than are
GWAs. For instance, most occupations share the GWA of “interact with computers;” an
associated task of “analyze output of port scanners” would only be required of a rather limited
subset of jobs. Typical job analyses yield between 30 and 250 work tasks that are somewhat
unique to a specific industry (Brannick & Levine, 2002). After describing and evaluating work
tasks, the focus of many job analyses then shifts to the KSAOs that are needed to complete job
tasks. Like GWAs, KSAOs are standardized to facilitate generalizability across job domains. As
such, KSAOs may be appropriate when comparing across jobs, but more specific work goals
may be necessary to adequately capture worker requirements within a given occupation. It is
important to note that in many cases only one component (tasks or KSAOs) may be the focus of
the job analysis. However, obtaining both types of information is critical for some uses of job
analysis and for the flexibility of use.
FROM JOB ANALYSIS TO COMPETENCY MODELING
Compared to the “historical snapshot of work” produced by a job analysis, competency
modeling is considered a more employee-focused examination of working conditions because it
actively links employee behaviors to business goals (Sanchez & Levine, 2009). Competency
models “typically include(s) a fairly substantial effort to understand an organization's business
context and competitive strategy and to establish some direct line-of-sight between individual
competency requirements and the broader goals of the organization” (Shippmann et al., 2000, p.
725). By focusing on clearly defined business goals, managers are then in a position to
distinguish between levels of performance in goal attainment (Campion et al., 2011; Parry,
1996).
In practice, there can be significant overlap between job analysis and competency
modeling. For example, Campion and his associates (2011) suggest that competencies be
developed using job analysis procedures such as observations, interviewing, and focus groups.
Similarly, applied research by Lievens, Sanchez, and De Corte (2004) showed that providing
subject matter experts (SMEs) with the tasks derived from a job analysis enhanced the quality of
the competencies they generated.
Despite this progress in the development of job analysis and competency modeling
techniques, concerns continue to arise about their ability to capture the relevant requirements of a
job necessary to accurately predict performance. These concerns are especially pronounced in
highly technical jobs involving hands-on skills (Arvey, Landon, Nutting, & Maxwell, 1992).
Current methodologies have frequently failed to capture dynamic aspects of a job, especially
those involving integration across job roles and the interpersonal processes involved in such
collaborative work (Sanchez & Levine, 2001; Schmidt, 1993; Schuler, 1989). Since existing
methods focus primarily on GWAs rather than the goals and objectives of a job, they may fail to
identify the practices that differentiate levels of expertise and performance (Offermann &
Gowing, 1993). In summary, research suggests that innovations are necessary to increase the
depth and complexity of these models to match the increasing complexity of today’s jobs
(Smit-Voskuijl, 2005). In a recent summary of best practices in competency modeling, Campion,
et al. (2011) identified 20 critical innovations that would address deficiencies in identifying,
organizing, presenting, and using competency information (see Figure 1).
Figure 1: Best Practices in Competency Modeling
ADAPTING JTCA MODELS TO UNDERSTAND CYBERSECURITY
JOBS
There is perhaps no more complex and dynamic work environment than cybersecurity.
Further exacerbating the challenge of defining such a dynamic job is that little is known about
the competencies required to meet the new vulnerabilities introduced with the advent of the
smart grid. Extant research has focused mainly on cybersecurity policy or the technological
manifestations of these threats, rather than the individual competencies necessary to identify,
diagnose, and effectively respond to such threats. In a recent essay recounting the past thirty
years of cybersecurity, Ryan (2011, p. 8) argues that the thinking about computer security needs
to change. She says “it is critical that the security community embrace non-technical aspects as
part of the whole problem space… A focus on enterprise security goals rather than security
technologies would be a good start – when security is an architectural goal, there is less
temptation to try to bolt on exotic solutions focusing on tiny slivers of the technological
challenge. Instead, holistic and synergistic solutions must be developed. It is increasingly
important that we architect solutions that incorporate human brains, taking into account
intellectual property and human inadvertent activity.”
MITRE Corporation has developed a framework intended to improve the preparedness of
organizations to meet the challenges of the cybersecurity threat. In their description of this
framework, Bodeau, Graubart, & Greene (2010) propose a five-level model delineating the
strategies and objectives comprising effective cyber preparedness. The levels correspond to
“distinct break points in adversary capabilities, intent, and technical sophistication, as well as in
the operational complexity involved in an attack” (p. 2). This model suggests that developing a
secure cybersecurity posture requires development of capabilities in foundational defense,
critical information protection, responsive awareness, architectural resilience, and pervasive
agility. Raising organizational preparedness to match the level of threat entails increasing
cost and coordination. According to Bodeau et al. (2010, Table 2), based on
the level of adversary capabilities, organizations must:
1. Prepare for known external attacks and minor internal incidents;
2. Prevent unauthorized access to critical or sensitive information;
3. Deter adversaries from gaining a foothold in the organization’s information
infrastructure;
4. Constrain exfiltrations of critical data, continue critical operations, minimize damage
despite successful attacks from adversaries who have established a foothold;
and
5. Maintain operations on a continuing basis and adapt to current and future coordinated,
successful attacks, regardless of their origin.
However, typical of the few studies in this area, the model concludes with a listing of
technologies that can safeguard critical assets, rather than a specification of the competencies
needed by the cybersecurity workforce to address the growing threats to critical infrastructure.
In a study of the recent RSA attack, Binde, McRee, & O’Connor (2011) suggested that a
complex set of competencies are required. According to their findings, an effective threat
response must include the ability to identify phishing campaigns, recognize and block malicious
traffic, and monitor operating systems for attack signatures. However, they offered no guidance
as to the specific KSAOs that may support development of these abilities.
Frincke & Ford (2010) indicate why the development of a competency map for
cybersecurity professionals has been so difficult. Even the development of a simple depiction of
knowledge requirements is challenging. First, it is difficult or impossible to define a typical
practitioner. Second, it is not known how practitioners derive their knowledge -- from books or
on the internet through tweets or RSS feeds? Finally, it is unclear what differentiates
foundational knowledge from specialized knowledge. They conclude that a competency
framework must determine whether the knowledge needed is universal or changes based on role
and responsibility. For instance, a researcher trying to design a lab test of an advanced persistent
threat would need knowledge of past attacks in order to design a test that accurately
reproduces such an attack. A security expert, on the other hand, would need not only the basic
knowledge of past attacks but also the knowledge of how to detect an attack and mount the
right defenses in real time.
Therefore traditional job analysis or competency modeling based on surveys of job
incumbents may fail to fully capture the job of a smart grid cybersecurity specialist. As Campion
et al. (2011) have suggested, competency models in such a dynamic and ill-defined domain must
employ unique methods for eliciting job context, goals, objectives, tasks, and KSAOs across
different roles and proficiency levels. Accordingly, to fully understand and model success in
these jobs, NBISE is innovating many of the 20 areas shown in Figure 1. In this section, we will
describe some of these innovations used to develop the beginning of a Job Performance Model.
The JPM seeks to predict performance and assess aptitude necessary to not only understand past
performance, but also develop a profile of future abilities. Accordingly, the Job Performance
Model that will evolve from this Job, Task and Competency analysis will include:
● Context elicitation through vignettes and responsibilities
● PRISM method for eliciting goals and objectives (Tobey, 2007)
● Models that include technical, operational, and interpersonal skills
● Functional responsibility analysis that recognizes the collaboration that occurs among
roles, and the consequent overlap in task performance
● Competencies defined at novice, apprentice, journeyman, and expert levels for multiple
roles based on industry vernacular
● Exploratory factor analysis to develop a causal model of performance that can be
subsequently validated and used to support design of instructional modules, proficiency
and performance assessments, and lab exercises that facilitate converting knowledge into
skill
DEVELOPING A JOB PERFORMANCE MODEL
Developing a list of tasks or competencies necessary to perform a job requires a greater
depth of analysis than that identified as the current state-of-the-art in Figure 1. First, and perhaps
most important, is the transition from descriptive models of job performance to prescriptive or
predictive models. The traditional approaches tend to produce lists of job duties and KSAOs that
are sufficiently general to apply broadly but lack the detail necessary to predict, based on an
assessment instrument, how an individual will actually perform on the job. These techniques
tend towards a focus on frequent and important tasks, obtained through a factor analysis, as a
descriptive model of the responsibilities which must be executed by a job incumbent. An
exception is models that combine factor analyses with logistic or structural regression analysis.
While inductive in nature, this approach may identify and subsequently confirm a model of job
performance over two or more studies. Thus, one step towards moving from competency to job
performance modeling is to create a nomological network of factor relationships that fits the
patterns derived from a statistically valid sample of incumbents on the job. Figure 2 shows an
example of such a model for Operational Security Testing prepared recently by NBISE.
Figure 2: Example Job Performance Model
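To make this factor-analytic step concrete, the sketch below shows how an exploratory factor analysis might be run over Job Analysis Questionnaire ratings. It is only an illustration: the file name jaq_responses.csv, the five-factor count, and the oblimin rotation are our assumptions, not details taken from the study.

```python
# A minimal EFA sketch, assuming a hypothetical CSV of JAQ ratings
# (rows = respondents, columns = task items rated on 1-5 scales).
# Requires: pip install pandas factor_analyzer
import pandas as pd
from factor_analyzer import FactorAnalyzer

ratings = pd.read_csv("jaq_responses.csv")  # hypothetical export

# Extract five oblique factors; in practice the factor count would be
# chosen from scree plots or parallel analysis, not fixed in advance.
fa = FactorAnalyzer(n_factors=5, rotation="oblimin")
fa.fit(ratings)

# Tasks that load together suggest candidate factors for a nomological
# network like the one sketched in Figure 2.
loadings = pd.DataFrame(fa.loadings_, index=ratings.columns)
print(loadings.round(2))
```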
For example, Dainty, Cheng & Moore (2005) use this approach to develop a predictive
model of the construction project management job. Their study demonstrates that an important
benefit of such an approach is the creation of a parsimonious model of job performance. By
identifying the critical factors, a job performance model is able to focus training, skill
development, and assessment on the few variables which explain the most variance in job
performance. Consequently, investment in workforce development can become more targeted,
efficient, and effective in accelerating proficiency (Hoffman & Feltovich, 2010).
The transition from competency to job performance models is also facilitated by a more
detailed understanding of the building blocks of competence: knowledge, skill, and ability. Le
Deist and Winterton (2005) reviewed definitions of competence from around the world. While
the U.S. and the U.K. have primarily focused on functional or occupational definitions of
competence, which tend to conflate the definition of the KSA elements, other European countries
have moved towards a multi-dimensional conception of competency. For instance, France has
created the triptyque: a three-dimensional view separating knowledge (savoir), experience
(savoir faire), and behavior (savoir être). This multi-dimensional view is consistent with recent
development of an engineering model of learning (Hoffmann, 2011, p. 270) that defines:
knowledge as “retrievable information” created through learning; skill as “courses of action” (or
habituated behavior) created as a result of practice in applying knowledge; and ability as the
application of knowledge and skill to “novel and unusual situations” (thereby showing the
benefit of experience in adapting to unforeseen circumstances).
Similarly, Trafimow and Rice
(2008, 2009) recently proposed Potential Performance Theory to explain and provide
independent measures of knowledge (strategy), skill (consistency), and ability (potential). Based
on these, and other findings from study of expertise (Chi, Glaser, & Farr, 1988; Ericsson, 2004;
Ericsson & Charness, 1994; Ericsson, Charness, Feltovich, & Hoffman, 2006; Hoffman, 1992),
we have developed a tripartite competency framework in which knowledge, skill, and ability
provide distinct contributions to the development of mastery over time. As shown in Figure 3 a
job performance model using this framework may extend job task and competency analysis by
differentiating the expected performance of novices, apprentices, journeymen, and masters
through the values of three distinct but interacting variables: knowledge defined and measured
as a depth of understanding, skill defined and measured by the consistency by which
knowledge is applied, and ability defined and measured by the adaptation of skill to address
novel or unusual situations.
As expertise progresses from novice to master, the relative contribution of
knowledge, skill, and ability to performance changes. Also, the progression may take any of
several trajectories inside the “competency box” – a specialist may be positioned more towards
the upper front of the box, demonstrating deep understanding, consistently applied across many
projects. However, specialists are limited in the application of this knowledge and skill to a
narrow domain. A master may not have substantially greater knowledge than a journeyman, and
perhaps less than a specialist, but demonstrates skilled application of expertise across a broad set
of domains.
Figure 3: The Competency Box
Figure 4 below shows an example of using this framework to perform a Potential
Performance Analysis (PPA) adapted from the study conducted by Trafimow and Rice (2009). In
PPA, potential performance is assumed to be a function of the knowledge (strategy employed),
skill (consistency of knowledge application), and ability to adapt to new situations. Each of the
three dimensions is separately measured: knowledge is measured by the observed scores; skill is
measured by the consistency coefficient; and ability is measured by the rate of change (slope of
the line) in the True Score, or potential, over time. This study showed that knowledge, skill, and
ability are not simply separate components of competence. Each dimension provides a unique
and differential impact on the overall potential for an individual to perform the job over time.
Somewhat surprisingly, Trafimow and Rice found that an individual who has become skilled in
using a less effective strategy may underperform a less skillful person who can more easily adapt
to novel conditions. This can be seen by comparing the scores over time of Participant 17 to
Participant 10, as shown in Figure 4. Even though Participant 17 employed a less
effective strategy at the outset, a lower level of entrenched skill combined with a greater
ability to adapt to the new job over time enabled this individual to outperform a worker who
had a reasonably effective, skillfully applied strategy but who lacked the ability to adopt a
more effective one. Thus,
we propose that when defining a job performance model it is essential to identify those tasks in
which performance greatly differs among those with varying knowledge, skill and ability.
The traditional focus on importance and frequency of tasks is necessary but insufficient to
develop a predictive model of job performance.
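To illustrate how the three PPA dimensions could be separated in practice, the sketch below computes simple proxies from repeated binary task outcomes. The proxies (mean observed score for knowledge, inter-session outcome correlation for skill, and the slope of scores over time for ability) follow the definitions above but are our own simplifications, not Trafimow and Rice's exact estimators.

```python
import numpy as np

def ppa_measures(blocks):
    """Rough proxies for the three PPA dimensions.

    blocks: equal-length 0/1 arrays, one per session, holding outcomes
    on the same task items (each array needs some variance).
    """
    blocks = [np.asarray(b, dtype=float) for b in blocks]

    # Knowledge: depth of understanding, proxied by observed scores.
    observed = np.array([b.mean() for b in blocks])

    # Skill: consistency of knowledge application, proxied by the mean
    # correlation of item outcomes between consecutive sessions.
    skill = np.mean([np.corrcoef(a, b)[0, 1]
                     for a, b in zip(blocks, blocks[1:])])

    # Ability: adaptation over time, proxied by the slope of the scores.
    ability = np.polyfit(np.arange(len(observed)), observed, 1)[0]

    return observed.mean(), skill, ability

# Example: a performer whose scores rise across four sessions.
sessions = [np.array([1, 0, 0, 1, 0]), np.array([1, 0, 1, 1, 0]),
            np.array([1, 1, 1, 1, 0]), np.array([1, 1, 1, 0, 1])]
print(ppa_measures(sessions))
```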
Each of the detailed analyses presented above increases our understanding of the
antecedents to job performance, the factors which define the job and the dimensions which affect
behavior, but to predict performance we also need to understand how variance in performance is
evaluated. Some advanced job task and competency models (for example, Dainty et al., 2005)
have incorporated better factor analysis to improve their prescriptive value. We have yet to see
one which combines advanced factor analysis with multiple methods for assessing the
differential impact of knowledge, skill and ability. However, even doing so would only improve
the definition and measurement of the independent variables comprising a performance model.
To determine aptitude and proficiency to perform a job, we must accurately measure variance in
the dependent variable – job performance – as well.
Figure 4: Potential Performance Analysis (PPA) SOURCE: Trafimow & Rice (2009)
Robert Mislevy and his colleagues have been studying methods for assessing and tutoring
technical workers for more than thirty years (for examples, see Behrens, Collison, & DeMark,
2006; Mislevy, 1994, 2006; Mislevy & Bock, 1983; Mislevy, Steinberg, & Almond, 1999;
Williamson et al., 2004). This work has culminated in the development of an evidence-centered
design approach to the development of training and assessment systems that has recently been
adopted by Cisco Systems to increase the predictive and prescriptive value of their Cisco
Networking Academy program (Behrens, Mislevy, DiCerbo, & Levy, 2010). Central to the
success of this program, deployed in over 160 countries, is the clear understanding and modeling
of the context of the job: “the ways people use [their competencies] and the kinds of situations
they use [them] in” (Behrens et al., 2010, p. 11). These and other studies increasingly show the
context-sensitive nature of expertise. Consequently, we propose that a job performance model
must explicitly explore how tasks and KSAO usage differs by scenario, what job roles perform
these functions, and how they interact. Finally, we seek to extend traditional job task and
competency analysis by defining the set of goals, objective metrics, and performance against
these metrics that typify workers at different competency levels.
The next section will report our application of these new methods. We will begin with a
review of the composition of the SME panel, attendance and constituencies represented in each
focus group session, and changes made to the panel roster to improve participation or
representativeness of panel members. We will then describe the methods to be used during the
elicitation phases. This will be followed by a review of the four steps of job and task definition:
context definition, role definition, mission definition, and process definition, which collectively
form the basis for the Job Description Report. This report will be extended during the next phase
of the project during which the SME panel will elaborate the process definition to produce a
detailed list of tasks, methods, and tools. During the final phase of the exploratory Job
Performance Model development, the panel will define a set of knowledge, skills, and abilities
that are expected to determine performance on the job.
PANEL COMPOSITION
The initial Smart Grid Cybersecurity (SGC) panel included 28 subject matter experts
(SMEs), a panel chair, and a panel vice-chair. The panel (28 male; 2 female) is advised by the
NBISE and PNNL project team (3 male, 2 female) and five outside advisors (all male)
representing industry, government, and academic perspectives. The initial panel was formed with
nine members (32.1%) from the energy industry; eight members (28.6%) from the professional
services sector; seven members (25%) from technology vendors; three members (10.7%) from
academic or corporate research organizations; and one representative (3.6%) from government.
The selection of panelists was based on their expertise in the relevant fields, availability of
sufficient time to commit to the project, and maintaining a diverse representation of the
interested stakeholders. The panelists are also widely distributed geographically.
Since panelists are volunteers it is expected that their involvement may change over time.
Cybersecurity professionals are in high demand, and the calls on their time are
unpredictable. Previous SME panels have seen participation rates drop dramatically after the first
focus sessions, often with more than 50% attrition as the volunteers find that their primary work
activities will no longer permit regular attendance on weekly panel calls or the
contributions between calls necessary to complete assigned activities. Accordingly, NBISE
maintains a list of alternates who may be added to the panel if participation rates fall
significantly.
Over the first four sessions five panel members withdrew from the panel, and two
alternates were added, bringing the active roster to 25 panel members. The greatest change occurred
in industry representation as year-end business planning and plant maintenance reduced time
available to participate in panel activities. Table 1 shows the changes in panel composition over
the first four sessions. At the conclusion of this period the panel comprised
six members (24%) from the energy industry; seven members (28%) from the professional
services sector; seven members (28%) from technology vendors; three members (12%) from
academic or corporate research organizations; and two representatives (8%) from government.
Table 1: Changes in panel composition
ELICITATION METHODOLOGY
Throughout the process, a collection of collaboration tools is used to facilitate the
thinking and contributions of the SME panel. The online collaboration environment has been
designed and configured based on decades of research on Group Decision Support
Systems (Nunamaker, Briggs, Mittleman, Vogel, & Balthazard, 1997). GDSS has been found to
dramatically increase the productivity and scale of processes similar to job performance
modeling which involves complex intellective tasks requiring collective intelligence (Briggs,
Vreede, & Nunamaker, 2003; Briggs, Vreede, Nunamaker, & Tobey, 2001). The GDSS tools are
embedded in a web portal that also includes a clear statement of the SME panel's purpose, the
steps in the Job Performance Model process, and links to the activities the panel is to complete
each week.
Typical of cycle-time reductions found in other uses of collaboration engineering
environments (Tobey, 2001), the process for eliciting job performance constructs can be
dramatically shortened, from months to weeks. Figure 5 below shows the timeline for the
preparation of a job, task and competency analysis using the traditional techniques developed for
the Office of Personnel Management (Reiter-Palmon et al., 2006). This process uses General
Work Activities to provide the context for task definitions which are analyzed using frequency
and criticality ratings. When used to develop the Job Description Report for Operational Security
Testing (Tobey et al., forthcoming), this approach required six months to produce a task list
suitable for inclusion in a Job Analysis Questionnaire. While this is much too long for a dynamic
profession such as cybersecurity, traditional approaches using face-to-face interviews by
industrial psychologists, rather than focus groups supported by a GDSS, have often taken years
to accomplish the same results.
Figure 5: O*NET Methodology Timeline
The elicitation method used in this study, and piloted during the development of a Job
Competency Model for Advanced Threat Response (Tobey, forthcoming) is shown in Figure 6.
The traditional elicitation process was altered by adding vignettes and more detailed mission
definition to provide scaffolding that could help spur the generation of task descriptors by panel
participants. The provision of increased structure in conjunction with revisions to the
collaboration infrastructure reported elsewhere (Tobey et al., forthcoming) appeared to support a
further acceleration of the process. In just six weeks a comprehensive task list was developed
suitable for surveying the profession. Moreover, while the modified O*NET process produced
approximately 120 tasks across two job roles, the JPM Process approach to be used for
developing the Smart Grid Cybersecurity Job Performance Model produced a list of 706 tasks
across four functional roles.
Figure 6: JPM Process Methodology Timeline
Appendix B lists the session objectives and participation of panel members for each of
the sessions conducted through the date of this report. In the next section we will present and discuss
the results of the first four sessions. The overall goal of these sessions was to produce a Job
Description Report that would identify the context, mission, roles, and processes involved in
smart grid cybersecurity. This job definition was then used to develop a Job Analysis
Questionnaire by elaborating and evaluating the most critical job responsibilities and tasks.
JOB DESCRIPTION REPORT
Job Classification
The iterative process described above begins at the abstract level typical of job
descriptions and individual development plans. Traditional job task analysis starts with a
taxonomy of general work activities (GWAs), but such high-level descriptors have been found to be
poor discriminators of jobs (Gibson, Harvey, & Harris, 2007). The result is a job description that
is frequently used to develop employment inventories and recruiting advertisements.
Competency modeling may extend these lists to produce guides for training and testing or
certification underlying individual/personal development plans. Existing job or functional role
taxonomies and definitions are consulted to identify areas of alignment or misalignment in
current conceptions of a job. Recruitment advertisements and performance evaluations are
examined for role definitions, responsibilities, and capabilities expected to be determinants of
performance. Finally, stories of critical incidents demonstrating either exemplary performance
(use cases) or errors and omissions (mis-use cases) are collected. Collectively, the job
descriptions, development plans, performance evaluations, and critical incident descriptions
establish the job context.
In job task analysis, the word incident does not denote simply an event requiring a response, as is
frequently the case in the cybersecurity domain. Instead, it represents a defining moment in
which differences in skill level show up in clearly identifiable outcomes of the action taken.
This may be an actual or a potential event, and includes not only sense-and-respond situations
but also proactive or sustaining events critical to achievement of goals and objectives. Hence, the
word incident here is more broadly defined. Accordingly, John Flanagan, the inventor of the
critical incident technique, defined an incident as:
any observable human activity that is sufficiently complete in itself to permit inferences
and predictions to be made about the person performing the act. To be critical, an
incident must occur in a situation where the purpose or intent of the act seems fairly
clear to the observer and where its consequences are sufficiently definite to leave little
doubt concerning its effects (Flanagan, 1954, p. 327).
We define a vignette as the collection of: a critical incident title or description; when the
incident occurs (frequency and/or action sequence); what happens during the incident (problem
or situation); who is involved (entities or roles); and where the incident might happen, now or in
the future (systems or setting). Further definition of a vignette might include why it is important
(severity or priority of response) and how the critical incident is addressed (method or tools that
might be used). A collection of vignettes and the associated job context forms the basis for
developing a Job Classification Report that may be used for comparison with other jobs or to
identify when an individual is performing the job as classified.
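Restated as a data structure, a vignette record carries the who, what, when, where, why, and how fields named above. The class below, and the example values in it, are our own illustration (only the "Data Leakage" label appears elsewhere in this report):

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Vignette:
    title: str                 # critical incident title or description
    when: str                  # frequency and/or action sequence
    what: str                  # problem or situation
    who: List[str]             # entities or roles involved
    where: str                 # systems or setting, now or in the future
    why: Optional[str] = None  # severity or priority of response
    how: Optional[str] = None  # methods or tools that might be used

# Hypothetical example
leak = Vignette(
    title="Data Leakage",
    when="Detected during routine log review",
    what="Sensitive operational data exfiltrated from a historian server",
    who=["security analyst", "control engineer"],
    where="utility control-network DMZ",
    why="High severity; regulatory reporting may be triggered",
)
```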
Job Roles and Responsibilities
The roles identified during the Job Classification step are categorized into functional
roles. The list of functional roles is discussed with, or ranked by, the panel of subject matter
experts (SMEs), who then select one or more functional roles to focus on for the remainder of the
modeling process. This selection of functional roles establishes an important boundary condition
for the Job Performance Model. A guide to the selection process may be the roles targeted by a
sponsoring organization or roles identified in an existing competency model, such as the NICE
Information Assurance Framework ("NICE Cybersecurity Workforce Framework," 2011) in the
cybersecurity profession.
During the next step, the SME panel is asked to develop a list of responsibilities for the
selected functional role(s) for each vignette. These responsibilities resemble the tasks defined
during a job task analysis or competency model, but in job performance models they represent
the starting point for decomposing a job into finer levels of detail. In
effect, the responsibilities align with job duties often listed in job descriptions or performance
evaluations. One fundamental difference between job performance modeling and previous
approaches is the use of multiple roles at this step in the process. Guided by the vignette
description, the panel defines responsibilities across the entire group of functional roles
determined by the panel to provide the role boundary for the job performance model process.
This approach enables elicitation of job overlap and the establishment of collaborative
requirements of the job where responsibilities are duplicated across functional roles.
During the final step for eliciting roles and responsibilities the SME panel collaborates on
developing a list of expected outcomes, both positive (best practices) and negative (errors and
omissions) for each role involved in each vignette. These outcomes can serve both to establish
learning objectives for training programs or situational judgment outcomes for assessment
instruments. In the former case, the mis-use cases (errors and omissions) are especially
important. By identifying likely errors a training program may be developed that enables "failing
forward" where common mistakes are addressed by appropriate remedial instruction modules
and practice exercises that guide the learner through a problem-based approach to deliberate
practice. Research has shown that deliberate practice is necessary to accelerate proficiency. In
the case of situational judgment test development, the mis-use cases can form a set of distractor
choices to ensure that the test taker has developed sufficient understanding, or is able to
demonstrate skilled performance during the Potential Performance Analysis described above.
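As a sketch of how the elicited outcomes could seed a situational judgment item, the code below pairs one best practice (the keyed answer) with mis-use cases as distractors. The structure and function are illustrative assumptions, not the study's instrument format:

```python
import random
from dataclasses import dataclass, field
from typing import List

@dataclass
class RoleOutcome:
    role: str
    vignette: str
    best_practices: List[str] = field(default_factory=list)    # use cases
    errors_omissions: List[str] = field(default_factory=list)  # mis-use cases

def sjt_item(outcome: RoleOutcome, n_distractors: int = 3) -> dict:
    """Draft a situational judgment item: one keyed answer drawn from the
    best practices, with mis-use cases serving as distractor choices."""
    keyed = random.choice(outcome.best_practices)
    distractors = random.sample(
        outcome.errors_omissions,
        min(n_distractors, len(outcome.errors_omissions)))
    choices = distractors + [keyed]
    random.shuffle(choices)
    return {"stem": outcome.vignette, "choices": choices, "keyed": keyed}
```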
Goals and Objectives
Measurement of job performance requires the establishment of a criterion which
determines the level of success achieved (Berk, 1980). Each job has a mission -- a primary set of
actions that are expected to produce valued results. These primary goals are often accomplished
through the pursuit of secondary goals, and secondary goals through other subsidiary goals,
collectively forming a goal hierarchy (Cropanzano, James, & Citera, 1993; Miller, Galanter, &
Pribram, 1960; Powers, 1973; Wicker, Lambert, Richardson, & Kahler, 1984). Consequently, to
establish a performance model it is important to elicit multiple levels of goals to be accomplished
in a job. Further, research has shown that each goal definition should specify clear measures of
performance across a broad range of varying degrees of effort and difficulty (Locke, Shaw,
Saari, & Latham, 1981). Accordingly, the SME panel is asked to contribute goal definitions that
indicate whether a goal is primary, secondary, or tertiary. For each goal an objective measure is
provided as the criterion by which performance will be assessed. Finally, specific criterion-based
outcomes are specified at five levels of performance based on the PRISM method of goal setting
(Tobey, Wanasika, & Chavez, 2007): Premier, Robust, Improved, Satisfactory, and Moot.
Responsibilities are then sorted into these goal categories to prepare for the next stage of the Job
Performance Modeling process.
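A minimal sketch of the resulting goal records follows; the goal, objective, and responsibility text come from the worked example in the Definitions section below, while the PRISM outcome wording is hypothetical:

```python
from dataclasses import dataclass, field
from typing import Dict, List

PRISM_LEVELS = ("Premier", "Robust", "Improved", "Satisfactory", "Moot")

@dataclass
class Goal:
    statement: str   # action to be accomplished
    tier: str        # "primary", "secondary", or "tertiary"
    objective: str   # measurable criterion of success
    prism_outcomes: Dict[str, str] = field(default_factory=dict)
    responsibilities: List[str] = field(default_factory=list)

log_review = Goal(
    statement="Analyze log files for signs of an attack or compromise",
    tier="primary",
    objective="Percentage of logs that are reviewed",
    prism_outcomes={  # hypothetical criterion-based outcomes
        "Premier": "All log sources reviewed within an hour of collection",
        "Satisfactory": "Critical log sources reviewed daily",
        "Moot": "Logs reviewed only after an incident is reported",
    },
    responsibilities=[
        "Ensure incident response and recovery procedures are tested regularly",
    ],
)
```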
TASK ANALYSIS
Task analysis in the development of a job performance model marks an important
departure from traditional approaches discussed above. Our method is based on recent advances
in cognitive task analysis methodology (Crandall, Klein, & Hoffman, 2006) which expand the
depth of incident descriptions that are critical to understanding and predicting job performance.
While improving the elicitation of critical knowledge and skills, this process usually requires
dozens of hours of interviewing and transcribing to develop a rich description. Thus, we need to
adapt this approach to the conditions facing smart grid cybersecurity professionals where the
context, challenges, and required responses change rapidly. Accordingly, we developed and
tested a new approach with the Smart Grid Cybersecurity panel based on group decision support
techniques that have been found to repeatedly and predictably produce substantial increases in
the quality and quantity of creative output in brainstorming groups, with cycle-time
reductions of 80% or more (Briggs et al., 2001; Nunamaker et al., 1997).
During the facilitated elicitation sessions, panel members begin by documenting
recollections of events or hypothetical situations facing smart grid cybersecurity professionals.
They elaborate these brief descriptions by identifying the goals that must be accomplished to
address each of these situations. Next, they develop a matrix of responsibilities and roles needed
to accomplish each goal. Following that the tasks that form the steps for fulfilling the
responsibility are enumerated for each functional role. Optionally, published methods or
software tools may be identified for each task. Consequently, the job performance model
represents the documentation of how the job is performed, rather than simply a description of job
responsibilities. The detailed list of tasks is then assembled into a survey, the Job Analysis
Questionnaire, to determine the relative frequency and importance of each task for individuals
at entry, intermediate, and master levels of performance.
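The ratings gathered this way can then be reduced to the criticality and differentiation scores reported later. The computation below is one plausible reading under our own assumptions; the column layout, file name, and formulas are illustrative, since the report does not specify them in this section:

```python
import pandas as pd

# Hypothetical JAQ export: one row per (task, expertise level) pair with
# mean frequency and importance ratings on 1-5 scales.
jaq = pd.read_csv("jaq_task_ratings.csv")
# assumed columns: task_id, level ("entry"|"intermediate"|"master"),
#                  frequency, importance

# Criticality: a simple frequency-by-importance product, averaged over levels.
criticality = (jaq["frequency"] * jaq["importance"]).groupby(jaq["task_id"]).mean()

# Differentiation: how far master ratings sit from entry ratings per task.
by_level = jaq.pivot(index="task_id", columns="level", values="importance")
differentiation = (by_level["master"] - by_level["entry"]).abs()

matrix = pd.DataFrame({"criticality": criticality,
                       "differentiation": differentiation})
print(matrix.sort_values("criticality", ascending=False).head(10))
```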
This section will describe the facilitation techniques used to support the SME panel
virtual sessions and weekly assignments. We will begin by reviewing definitions of key terms
and then outline the elicitation and analytical procedures for selecting goals, roles, and
responsibilities, and tasks. The next section will provide a brief overview of how this information
will be used to develop and administer a Job Analysis Questionnaire that will provide the data
for an exploratory factor model of job performance in the targeted job roles.
Definitions
In a review of task analysis methods, Schraagen (2006 p. 185) defines the word task as
"what a person is required to do, in terms of actions and/or cognitive processes, to achieve a
system goal." This definition implies several important constructs which need to be elicited from
subject matter experts to fully understand the factors impacting performance on the job. A
complete task definition would include detailed goals, objectives, and job responsibilities.
Finally, these statements, and those describing tasks, must be written specifically to highlight the
action verb that indicates the execution of the task. It is often the case, though not a requirement
of task analysis, that the action verbs used to describe goals and tasks align with Bloom's
taxonomy of action verbs (Anderson, Krathwohl, & Bloom, 2001; Bloom, 1956).
We define a goal as a statement that expresses an action that must be successfully
completed to accomplish the job mission, or to facilitate the accomplishment of another goal.
The goal objective is defined as the measurable outcome that establishes the criteria by which the
degree of success or effectiveness may be assessed. Job responsibilities are defined as action
statements which result in outcome states that may be monitored or assessed to determine if an
objective has been accomplished. Accordingly, responsibility statements use passive verbs, such as "ensure", "follow", or "obtain", that are not included in Bloom's taxonomy.
For example, the SME panel identified the goal "Analyze log files for signs of an attack
or compromise" with the objectives (criterion) of "Percentage of logs that are reviewed" and the
"Time required to review each source." Goal accomplishment could be monitored or assessed by
the job responsibility "Ensure incident response and recovery procedures are tested regularly."
The targeted outcome of this monitoring action would be the percentage of logs being reviewed
by these procedures, and the time required to conduct each review.
Finally, following the suggestion of Crandall, Klein and Hoffman (2006), we apply
methods for capturing the SME panelists' stories of cybersecurity events (e.g., Boje, 1991, 2001;
Tobey, 2008) to facilitate a deconstruction of the tacit understanding that experts have of the
relationships between goals, objectives, responsibilities and tasks. These methods, and the
studies upon which they are based, recognize that expert stories are living, dynamic interactive
constructions between the storytellers and their audience, in which the latter do not understand
the terse descriptions by which experts communicate with each other (Boje, 1991, 1995). They
are not simply holistic constructions requiring interpretation or translation (Czarniawska, 1997),
but are instead interconnecting fragments that are woven together into a complete perspective
only when in the presence of an interlocutor (Boje, 2008; Tobey, 2007). Consequently, in
addition to their definition as a classification of who, what, when, where, how or why events
occur (see Job Classification section above), vignettes are also a collection of story fragments
which SMEs construct collaboratively into a multifaceted depiction of an event or scenario
requiring skilled performance. These fragments then may be categorized into collections, or
master vignettes, which experts frequently label using terse phrases such as a "Network Attack"
or a "Data Leakage."
Role and Vignette Selection
The first critical decision that the SME panel must make is which job roles and which
master vignettes will become the focus of the Smart Grid Cybersecurity Job Performance Model
Panel. During the sessions in which the vignettes were identified and elaborated the panel
indicated which job roles are involved at each stage and step of the situational response process.
A frequency distribution of roles across vignettes can therefore assist in determining which job
roles are the most critical and consequently which vignettes (that heavily involve these job roles)
are most relevant for further analysis. Accordingly, we calculate the percentage of steps in which
a job role is involved for each of the master vignettes. Those roles which have the broadest
involvement across the vignettes will be candidates for selection. This information is presented
to the SME panel and they are asked to select, by a consensus of at least two-thirds of panel
members, the most critical roles which they believe should be the focus of the initial Job
Performance Model. Once the panel has made its selection, the master vignettes in which the
selected roles collectively have substantial involvement will be selected for further analysis.
Substantial involvement will be defined as a simple majority (equal to or greater than 50%,
rounded to the nearest decile) of steps in which the selected job roles are involved.
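The involvement calculation lends itself to a simple computation. The sketch below is a minimal Python illustration (the data structure, role names, and vignette names are hypothetical, not drawn from the panel's actual data or tooling) of the percentage-of-steps measure and the 50% retention rule:

    # Hypothetical input: each master vignette maps to a list of steps, and
    # each step lists the job roles involved in it.
    vignettes = {
        "Network Attack": [{"Intrusion Analyst", "Security Operations"},
                           {"Incident Response"},
                           {"Intrusion Analyst"}],
        "Data Leakage":   [{"Security Operations"},
                           {"Incident Response", "Intrusion Analyst"}],
    }

    def involvement(vignette_steps, role):
        """Percentage of steps in which a given role is involved."""
        hits = sum(1 for step in vignette_steps if role in step)
        return 100.0 * hits / len(vignette_steps)

    # Candidate roles are those with the broadest involvement across vignettes.
    roles = {r for steps in vignettes.values() for step in steps for r in step}
    for role in sorted(roles):
        avg = sum(involvement(s, role) for s in vignettes.values()) / len(vignettes)
        print(f"{role}: {avg:.0f}% average step involvement")

    # After the panel selects roles by two-thirds consensus, a vignette is
    # retained when the selected roles collectively appear in a simple
    # majority (>= 50%, rounded to the nearest decile) of its steps.
    selected = {"Intrusion Analyst", "Incident Response"}
    for name, steps in vignettes.items():
        pct = 100.0 * sum(1 for s in steps if s & selected) / len(steps)
        if round(pct, -1) >= 50:
            print(f"Retain '{name}' ({pct:.0f}% of steps involve selected roles)")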
Goal Selection
In the Job Description Report section above we briefly described the panel process for
developing a list of goal definitions that guide the elicitation of responsibilities for each job role.
A complete detailing of the responsibilities and tasks necessary to accomplish the goals of all
smart grid cybersecurity functions would be far beyond the scope and resources of this project.
Therefore, although a broad list of goals will facilitate establishing clear boundaries for the smart
grid cybersecurity profession, we need to focus on a few select critical goals to guide the
modeling of job performance. In order for this initial job performance model to have the greatest impact, it is desirable that the selected goals be those necessary to effectively address the largest number of master vignettes identified by the panel. Consequently, the SME panel individually assigns the
goals elicited during a previous session to the list of master vignettes which involve the selected
job roles. We select for further analysis those goals which the panelists rank as important. The
importance ranking is based on a majority percentage of panelists (rounded to the nearest decile)
indicating that the goal was related to successful performance in at least three master vignettes.
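The selection rule can likewise be expressed as a short filter. The following sketch assumes a hypothetical structure in which each panelist has mapped each goal to the master vignettes it supports (all names are illustrative):

    # Hypothetical input: for each panelist, the set of master vignettes to
    # which they assigned each goal.
    assignments = {
        "panelist_1": {"Analyze logs": {"Network Attack", "Data Leakage",
                                        "Phishing Incidents"},
                       "Patch systems": {"Security Testing"}},
        "panelist_2": {"Analyze logs": {"Network Attack", "Phishing Incidents",
                                        "Substation/SCADA Attacks"},
                       "Patch systems": {"Network Attack", "Security Testing"}},
    }

    goals = {g for by_goal in assignments.values() for g in by_goal}
    n = len(assignments)
    for goal in sorted(goals):
        # Count panelists relating the goal to at least three master vignettes.
        votes = sum(1 for by_goal in assignments.values()
                    if len(by_goal.get(goal, ())) >= 3)
        pct = round(100.0 * votes / n, -1)   # rounded to the nearest decile
        if pct >= 50:
            print(f"Select goal: {goal} ({pct:.0f}% of panelists)")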
Elicitation of Responsibilities for the Selected Roles and Goals
The selected roles and the vignette steps in which they are involved can now be used to
assist the SME panel members in brainstorming a list of responsibilities associated with each
goal using the VivoWorks VivoInsight[1] idea generation tool. The VivoInsight collaboration tool
includes a feature called Follow Me which enables a facilitator to synchronize participant
displays in a virtual session. Moving down the goal list one at a time, the facilitator presents an
entry screen to each participant to elicit a list of responsibilities. The goal statement is displayed
at the top of the screen. The selected job roles and the vignette steps are shown on the left to
prompt idea generation. A real-time feed is shown below an entry box where the participant may type a new responsibility statement associated with the focal goal while easily viewing the contributions of others.

[1] VivoInsight and SiteSpace are Software-as-a-Service programs which are the intellectual property of VivoWorks, Inc., which has approved reference to its copyrighted and trademarked products in this report.
Task Creation
The creation of a list of tasks necessary to fulfill each responsibility is facilitated by the
VivoInsight Task Creation tool. The facilitator uses the Follow Me function to present the same
responsibility to all panelists at the top of their respective screens. In addition to facilitator
instructions at the start of the activity, a help video is provided to guide the panelists through the
creation of tasks associated with each responsibility. Figure 7 shows an example slide from this
video. The task elicitation activity begins with the selection of an action verb from the Bloom
taxonomy (Anderson et al., 2001; Bloom, 1956) that matches an idea for a task that needs to be
performed to fulfill this responsibility. After selecting an action verb, a list of current tasks using that verb or its synonyms is shown in the left-hand pane. If the task is already listed, simply
clicking on the box next to the task description will add it to the live feed section below the entry
box. If the task is not listed, the remaining portion of the description after the leading action verb
may be typed into the entry box and the add button clicked to add it to the live feed (see Figure 8).
Once all the tasks necessary to fulfill this responsibility have been added to the live feed, the
panelists may assign each task to any job role or roles they believe should be involved in
executing this task. Clicking the submit circle at the end of the row records their role
assignments. An example of the screen filled with existing tasks awaiting role assignments is
shown in Figure 9.
Figure 7: VivoInsight help video showing how to select an action verb
Figure 8: VivoInsight help system showing how to add a new task
Figure 9: Completed task entry awaiting role assignment
JOB ANALYSIS QUESTIONNAIRE
The Job Analysis Questionnaire (JAQ) is the primary data collection method for
developing a theoretical model of job performance in three smart grid cybersecurity roles:
Security Operations, Incident Response, and Intrusion Analysis. Our review of both the smart
grid and job analysis literature suggests this is the first comprehensive analysis of smart grid
cyber security tasks. The task statements contained in the JAQ will be evaluated by nominated
subject matter experts to determine those tasks that are most critical to perform and those tasks
which best differentiate between the performance of individuals possessing basic, intermediate,
and advanced skills.
The results of the JAQ will be used in several ways. First, by identifying the most critical
and differentiating tasks we can better target workforce development programs and investments
to accelerate the proficiency of cyber security professionals working on the smart grid. Second,
the results will be provided to organizations distributing the survey to their members and affiliates, enabling them to compare the responses of their community of practitioners to the overall
population – highlighting areas where differences may indicate unique requirements or emphasis
for effective job performance. Third, survey results will be published to the entire community of
smart grid cyber security practitioners to guide individual development plans and self-assessment
of skill areas. Fourth, human resource managers in organizations employing smart grid cyber
security professionals can utilize the results to prepare competency maps for purposes of
recruitment, workforce planning and development, and performance evaluation. Fifth, the results
will support the development of simulation systems to facilitate the transfer of knowledge into
skill by practicing the most critical and differentiating tasks. Sixth, the results will inform the
development of new technology tools that may lower the skill requirement to perform certain
critical tasks that lend themselves to automation. By guiding the skilled performance of novice or
apprentice practitioners, these technologies would free up valuable expert resources to focus on
the more difficult or novel problems. Finally, and perhaps most important, the results will inform
development of formative assessments which can be used to identify aptitude, skill profiles, and
potential performance of individuals for specific jobs. These assessments will enable improved
career paths, team composition, and targeting of learning and practice interventions necessary to
secure and defend the smart grid.
A three-phase purposive sampling strategy combined with random selection of survey
groups is used to improve the likelihood of achieving a representative sample of subject matter
experts with experience in one or more of the targeted job roles.
Phase one obtains respondents through organizations related to smart grid cybersecurity. Phase two will send
reminders through these organizational channels but also add individuals who can complete the
entire JAQ (i.e., all 37 survey groups) to ensure adequate sample size and to address grossly
unequal distribution of responses across the three job roles. Phase three will identify other
channels through which the JAQ can be distributed to address any remaining concerns regarding
sample size or representation.
During each phase a respondent begins by choosing from the list of the targeted job roles
(i.e., security operations, incident response, and intrusion analysis) the one that most closely
relates to their current position or experience. Upon selecting a role, they are taken to a
demographic survey of ten questions (see Appendix for a listing of these questions). The
respondent next receives sequential access to a random selection of three survey pages
containing 14 task statements. The respondent may pause the survey at any time and an email
will be sent to them allowing them to continue the survey where they left off. Once they have
completed the three required task statement survey pages, they have the option to continue answering more of the 37 survey pages or to exit the JAQ.
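For illustration, the page-assignment step might be implemented as follows (a minimal sketch; the function and variable names are ours, and the actual survey platform's implementation is not described in this report):

    import random

    SURVEY_GROUPS = list(range(1, 38))   # 37 pages of 14 task statements each
    REQUIRED_PAGES = 3

    def assign_pages(respondent_id, seed=None):
        """Randomly order the survey pages for one respondent; the first
        three are required, the rest are optional continuations."""
        rng = random.Random(seed if seed is not None else respondent_id)
        pages = SURVEY_GROUPS[:]
        rng.shuffle(pages)
        return pages[:REQUIRED_PAGES], pages[REQUIRED_PAGES:]

    required, optional = assign_pages("respondent_042")
    print("Required pages:", required)
    print("Optional pages available:", len(optional))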
A pilot test of this process using both full and partial responses to survey pages was
conducted prior to the beginning of the first phase. The purpose of the pilot JAQ was to review and
verify the survey administration process and to evaluate the instructions, task statements, and
other survey text. SME panel members and select individuals at Pacific Northwest National Lab
and Sandia National Lab were recruited to participate in the pilot JAQ. Results of their analysis
are provided in the Appendix along with the list of finalized task statements and a sample section
of the survey showing the rating system used to evaluate the task statements.
THE COMPETENCY GRID
In the Developing a Job Performance Model section above, we outlined a
multidimensional framework for understanding an individual’s development and position along a
learning trajectory from novice to master. This framework, which we called The Competency
Box, was summarized in Figure 3 (reproduced below).
The purpose of this section is to
propose a new method for analyzing job analysis data that will facilitate mapping of an
individual’s competency development,
as well as the tasks that best demonstrate such
development, to positions within the Competency Box.
Figure 3: The Competency Box
We will begin by reviewing a brief history of the development of competence and
intelligence theory and resulting testing protocols. This review will demonstrate how the
Competency Box may resolve long-standing disputes over the identification and development of
expertise. Our goal is to develop a technique that can address three important constraints to
determining the capabilities of the smart grid cybersecurity workforce:
1. Job performance has not been highly correlated with existing industry measures of competence (Center for Strategic and International Studies, A Human Capital Crisis in Cybersecurity: Technical Proficiency Matters).
2. Cyber threats are constantly changing and the practice of cybersecurity is still emerging (Assante & Tobey, 2011). Thus, measures of past performance are not good indicators of future potential to perform.
3. Given the emergent nature of the workforce, development is of greater importance than selection. We therefore need formative measures that can help to identify those with the aptitude to excel, and can also be used to validate the interventions that shorten learning curves and help individuals reach their maximum potential as quickly as possible.
Deriving a measurable definition of intelligence and competence
Since the development of intelligence testing at the turn of the 20th century, there has
been a continual debate in the literature over the level of test specificity required to adequately
predict job performance. On the one side are those who believe a single general measure of
intelligence, labeled “g” by Spearman (1904), is all that is necessary to predict performance
(e.g., Ree & Earles, 1991, 1993; Ree, Earles & Teachout, 1994). The other side argues that g is
an inaccurate or insufficient measure (e.g., Sternberg & Wagner, 1993; Carroll, 1993; Ackerman,
1996).
Recently, Scherbaum, Goldstein, Yusko, Ryan & Hanges (2012) published a review of
the applicability of this research to aptitude and performance assessment. They concluded that
nearly a century of research had: 1) failed to produce a clear definition of intelligence and where it applies; 2) produced theoretical and measurement models that are inconsistent with recent findings from cognitive psychology and neuroscience; and 3) limited its application to esoteric
domains of verbal or quantitative work that do not reflect the need for adaptive, integrative and
analytical reasoning required by today’s workplace environment.
Scherbaum et al. (2012) suggest a redirection of the debate is needed. We must
understand not only the factors that enable intelligent decision-making, but how to develop
discrete skills in environments characterized by high volatility, uncertainty, complexity and
ambiguity (VUCA; Johansen, 2007). The smart grid cybersecurity domain is an excellent
example of a VUCA environment. According to Senge (1990) the skills most needed in VUCA
settings are: critical thinking, metacognition, balanced inquiry and advocacy, and double-loop
learning. In conclusion, Scherbaum et al. (2012: 139) argued: "we need to have a clear
understanding of what intelligence is in terms of a definition and [improved] delineation of the
domain” in which intelligence tests can be fruitfully applied.
In a rebuttal to the Scherbaum et al. article, Lievens and Reeve (2012) argue that the
authors misunderstood the purpose of a general intelligence measure. It was not intended to be a
specific measure. In our terms, intelligence is synonymous with the entire Competency Box. In
this light, Spearman’s g can be viewed as a reliable, aggregate measure of the overall state of
competence. Spearman’s g is not intended to indicate how, why, or for how long someone will
be positioned at a specific point within the Competency Box. It is not a measure of development;
it is a measure of achievement at a point in time. Accordingly, since g is a blunt measure, it may
be a poor predictor of future performance that explains little variance in job success (Sternberg,
1996).
To gain further explanatory power, we must develop indicators of discrete levels of
knowledge, skill or ability. Therefore, Lievens and Reeve (2012) note that resolution to the
intelligence debate requires that we understand that intelligence is multidimensional. It is best
viewed not as a construct but rather as a causal network in which knowledge, skill and ability
interact in varying ways to produce a result under specific conditions. Lievens and Reeve
conclude that we must begin by understanding the subcomponents of general intelligence: fluid
intelligence and crystallized intelligence.
Cattell (1963, 1971, 1987) was the first to decompose general intelligence into these two
subcomponents. He defined fluid intelligence as the ability to learn and encode historical events
or information. The key term here is “ability” as fluid intelligence is meant to be a fundamental
component that is necessary to excel in many domains. Fluid intelligence is what makes a person
able to perform a task. Crystallized intelligence develops from the base of fluid intelligence as
habits are formed through repeated operations. These habits eliminate the need for the “insightful perception” (Cattell, 1943) that is the hallmark of fluid intelligence. Accordingly, crystallized
intelligence relates to the skill with which actions are performed, rather than the ability that
enables that skill to be applied in a new domain.
Carroll (1993) and Ackerman (1996) expanded upon Cattell’s decomposition to suggest
additional factors that could be components of overall intelligence. Carroll presented a taxonomy
of abilities based on an extensive factor analysis. His work demonstrated the need for a more
differentiated analysis and corresponding definition of ability and skill. Ackerman combined
Cattell and Carroll’s work to produce a four factor model that differentiated between: processing,
personality, interests, and knowledge (PPIK). Similar to Cattell, we can adapt Ackerman’s model
to better understand the Competency Box structure and how it may help resolve the ongoing debate about what intelligence (or competence) is and how to measure it.
Knowledge in Ackerman’s PPIK theory clearly maps to the knowledge dimension of the
Competency Box - measuring the degree of understanding one has gained about a specific
domain. Processing relates to abilities as they facilitate perceptual speed, memory-span, spatial
rotation and other generalizable capabilities that enable transfer of knowledge and skill to an
unknown or novel situation. The skill dimension of the Competency Box is most aligned with
personality in Ackerman’s theory. In this dimension, habituated activities generate consistent
behavior underlying characteristic performance by which others ascribe a personality profile to
someone. Finally, the role of interest is one area in which the Competency Box differs somewhat
with Ackerman’s model. Based on recent evidence from cognitive neuroscience studies
(discussed below), we propose that interest is not actually a dimension of intelligence. Instead,
interest represents the state of arousal that underlies the activation of all three dimensions of the
Competency Box -- determining whether ability, knowledge, or skill are enacted in a particular
situation (Tobey, 2007, 2010; Tobey & Benson, 2009; Tobey & Manning, 2009; Tobey,
Manning & Nash, 2010).
The Competency Box model is aligned with cognitive psychology research on skill
acquisition and competent performance. Around the same time as Ackerman was composing his
theory, Anderson (1993) theorized that two components of competence, declarative knowledge
and procedural skill, involve different cognitive functions and therefore should be measured
independently. Proctor and Vu (2006) reviewed evidence from laboratory studies related to this
theory. They reported that during skill acquisition a “hierarchy of habits” forms resulting in
consistent performance as a result of practice. These skills are differentiated from the algorithmic
approach taken in performing tasks on the basis of what we might call mere knowledge. With
sufficient and deliberate practice (Ericsson, 1996), knowledge is converted into “holistic
representations” (Proctor & Vu: 269) that become automatically retrieved when needed. The
Competency Box model suggests that these patterns of memory formations are the instantiation
of a skill in neural form, and their automatic execution ensures consistent performance that can
be measured.
Cognitive neuroscience studies show support for the Competency Box dimensions and
separate measurement of knowledge, skill, and ability, as well as the activating mechanism of
arousal. For instance, Markowitsch (2000) found that the location of activity in the brain shifts
with time and practice. During this shift neural networks form which connect the diverse regions
of the brain involved in the task. Other studies have found that these networks are restructured
over time to maximize flexibility while minimizing the distance and energy required for neuronal
communication (Bassett & Bullmore, 2006). As the neural network coalesces, links form
between higher and lower brain centers (Joel, 1999) which may trigger behavior outside
conscious awareness. In the end, these networks may become so highly optimized that a
behavioral sequence, the stepwise activation of the entire neural network, may be triggered by a
single neuron based on release of a “command” hormone (Kim, Zitman, Galizia, Cho, & Adams,
2006). In turn, these command hormones are controlled by a part of the brain usually associated
with motor or “non-cognitive” functions, but which was found to be activated during conditioned
and intuitive responses (Lieberman, 2000). In a previous study, the author of this report
discovered that unconscious, instantaneous execution of neural patterns -- called a thinkLet
(Tobey, 2001) -- could be identified with behavioral and physiological methods.
In summary, skilled performance is distinguishable both from mere knowledge of a task and from the ability to adapt knowledge or skill to address a novel domain.
The Competency Box model assumes that fluid intelligence tests measure abilities, while crystallized intelligence tests must be devised to separately measure declarative knowledge and procedural skills. The former may
be measured using traditional proficiency tests, but the latter requires a test of judgment,
decision-making, and action choices typical of situational judgment tests (McDaniel, Morgeson,
Finnegan, Campion & Braverman, 2001; McDaniel & Whetzel, 2005) and performance-based
tests.
Thus, in answer to the call made by Scherbaum et al. (2012), the Competency Box: 1) provides a clear and measurable differentiation between the three proposed dimensions of knowledge, skill, and ability, as well as the conditions under which they become activated; 2) offers constructs and measures that are consistent with recent discoveries in cognitive science; and 3) can be applied to the practical pursuit of developing talent by distinguishing the role of knowledge, skill, ability, and motivation as factors in predicting performance. Next, we will discuss how this
model can be used to analyze the Job Analysis Questionnaire data to facilitate categorization of
tasks to produce a Criticality-Differentiation Matrix that can identify the indicators of
fundamental and differential competence.
Criticality‐Differentiation Matrix
The primary lesson learned from intelligence studies is that competency is
multidimensional. Competency is more than just ability, which is adequately measured by
intelligence tests. Competency is more than mere knowledge, which is adequately measured by
proficiency tests, such as many of the current cybersecurity certification exams. To complete the picture, competency measurement requires the identification of fundamental and differentiating skills -- those activities which determine the threshold of performance that all must pass, and those activities that are performed differently, with substantively different outcomes, by someone with more expertise than by someone with less skill. This
section will describe our approach to determining which tasks may be assessed to best identify
these critical and differentiating skills.
Early development of competency studies recognized the importance of identifying
threshold and differentiating tasks. In Boyatzis’ (1982) landmark study of managerial
competence, he discovered clusters of behaviors that were expected of anyone entering the field
(i.e., threshold performance) and other clusters of behaviors that differentiated excellent
managers from those with less managerial expertise. Similar to the conclusions of Scherbaum et
al (2012) and Senge (1990) discussed above, Boyatzis found that master managers had
developed skill in systems thinking and pattern recognition. However, Boyatzis’s study showed
that master managers have more than cognitive skills. His data suggested two additional forms of
competence: emotional intelligence and social intelligence (see also Goleman, 1995, 2006). In
his most recent review of this work, Boyatzis (2008: 8) summarized the characteristics of
behaviors that might be monitored or assessed to indicate development of these threshold and
differentiating skills. These tasks should be:
● Behaviorally observable
● Differentially supported by neural circuits (i.e., it should be possible to measure
knowledge and skill separately)
● Related to specific job outcomes (or goals)
● Sufficiently different from other constructs or indicators
● Demonstrate convergent and discriminant validity as indicators of a skill
Spencer and Spencer (1993: 15) summarized and extended the work of Boyatzis and
created clearer definitions for threshold and differentiating competencies:
● Threshold Competencies. These are the essential characteristics (usually knowledge or
basic skills, such as the ability to read) that everyone in a job needs to be minimally
effective but that do not distinguish superior from average performers.
● Differentiating Competencies. These factors distinguish superior from average
performers.
We constructed the Job Analysis Questionnaire to meet all five of these criteria, though
further research is required before asserting predictive and construct validity (see Implications
for Future Research section below for a discussion of this issue). Following the guidance of
Mansfield (1996) on the development of multiple-job competency models, we obtained ratings
for each task at three levels of proficiency: novice, intermediate, and expert. The collected
ratings of frequency and importance at each of these three levels of expertise enable the creation
of two measures, criticality and differentiation. These measures may be combined to categorize
tasks that should be strongly related to job performance, reflect current ground truth (Assante
and Tobey, 2011), and can be assessed to determine the position and development path within
the Competency Box for individuals or teams.
The criticality of a task is defined as the product of arithmetic means of frequency and
importance across all levels of expertise. The differentiation of a task is determined by the slope of criticality scores across the three expertise levels, signifying how frequently a person with a given skill level must be involved and how important the task is for determining the performer's skill level. We define
fundamental tasks as those that are rated as highly critical but show little differentiation across
these three levels. Performance on these tasks is essential and should be considered minimal
entrance requirements for the field. We define differentiating tasks as those that exhibit both high
criticality and high differentiation scores.
The result of this analysis is a 2 x 2 matrix that we call the Criticality-Differentiation
Matrix (CDM) shown in Figure 10. Quadrant 1 shown in black contains those tasks that have
low criticality and low differentiation. These tasks might be labeled “inhibitors” because they actually inhibit determination of competence. Measuring performance on these tasks would
likely attenuate difference scores between individuals as they would constrain variance in test
scores. This is not to suggest that these tasks should not be performed, simply that they are not
good candidates for development of assessment instruments. Quadrant 2 shown in blue contains
those tasks that are low in criticality but high in differentiation. These tasks might be labeled
“esoterica” because while the methods used or results gained by experts differ significantly from
novices, they are likely to make a trivial difference in overall job performance. The final two
quadrants are the most relevant for our further analysis. Quadrant 3 shown in green would list the
Fundamental Tasks. Quadrant 4 shown in red would list the Differentiating Tasks.
Figure 10: Criticality‐Differentiation Matrix
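To make the construction of the matrix concrete, the sketch below derives both measures from hypothetical JAQ ratings and assigns each task to a quadrant. The cut points separating "high" from "low" are illustrative assumptions; the report does not specify the thresholds used to split the axes.

    # Hypothetical JAQ ratings: mean frequency and importance for each task at
    # the novice, intermediate, and expert levels (in that order).
    ratings = {
        "Review firewall logs":     {"freq": [4.2, 4.1, 4.0], "imp": [4.5, 4.4, 4.3]},
        "Reverse-engineer malware": {"freq": [1.2, 2.4, 3.9], "imp": [1.5, 3.0, 4.6]},
        "File status reports":      {"freq": [2.0, 2.1, 2.0], "imp": [1.8, 1.9, 2.0]},
    }

    def criticality(task):
        """Product of the arithmetic means of frequency and importance across levels."""
        f, i = ratings[task]["freq"], ratings[task]["imp"]
        return (sum(f) / len(f)) * (sum(i) / len(i))

    def differentiation(task):
        """Magnitude of the slope of per-level criticality from novice to expert."""
        f, i = ratings[task]["freq"], ratings[task]["imp"]
        per_level = [f[k] * i[k] for k in range(3)]
        return abs(per_level[-1] - per_level[0]) / 2.0  # rise over run across 3 levels

    CRIT_CUT, DIFF_CUT = 7.0, 2.0   # illustrative cut points; not specified in the report
    quadrants = {(False, False): "Quadrant 1: Inhibitors",
                 (False, True):  "Quadrant 2: Esoterica",
                 (True, False):  "Quadrant 3: Fundamental Tasks",
                 (True, True):   "Quadrant 4: Differentiating Tasks"}
    for task in ratings:
        key = (criticality(task) >= CRIT_CUT, differentiation(task) >= DIFF_CUT)
        print(f"{task}: {quadrants[key]}")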
The CDM guides development of a theoretical Job Performance Model by suggesting
those tasks which are best performed by individuals who are novices (Quadrant 1), proficient
(Quadrant 2), competent (Quadrant 3), and expert (Quadrant 4). This, of course, would be a very
crude depiction of the work of smart grid cybersecurity practitioners across the three target job
roles. Accordingly, once a sufficient sample has provided input through the Job Analysis Questionnaire, an exploratory factor analysis of the tasks will be conducted to identify the task
clusters which are predicted by the respondents to explain the performance of those at varying
levels of expertise.
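As an illustration of this planned analysis step, the sketch below fits an exploratory factor model to a made-up respondents-by-tasks matrix using scikit-learn; the report does not prescribe a particular tool, and the matrix dimensions and number of factors shown are arbitrary.

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # Hypothetical respondents-by-tasks matrix of task ratings.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 42))        # 200 respondents, 42 task statements

    fa = FactorAnalysis(n_components=5, random_state=0)
    fa.fit(X)

    # Tasks loading most heavily on each factor suggest candidate task clusters.
    loadings = fa.components_             # shape: (5 factors, 42 tasks)
    for k, row in enumerate(loadings):
        top = np.argsort(np.abs(row))[::-1][:3]
        print(f"Factor {k + 1}: top-loading tasks {top.tolist()}")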
The next section reviews the results of our research to date. We begin with the results of
the literature review followed by the outcomes of the SME panel discussions on vignettes,
process, goals, and tasks involved in smart grid cybersecurity work. We conclude the results
section with the status of our analysis of the Job Analysis Questionnaire responses and the plans
to create a Criticality-Differentiation Matrix and Job Performance Model for the three smart grid
cybersecurity job roles.
LITERATURE REVIEW
The NBISE and PNNL research teams, with help from the panel advisors and panel members, assembled the preliminary list of documents shown in Table 2. These documents will
be consulted by the panel members to support their collaboration during the elicitation sessions.
Links to URLs or copies of files are made available through the panel portal site in a public
Evernote notebook entitled SGC Panel Literature. The initial bibliography will be expanded
during the term of the project.
Table 2: Initial Bibliography
Anderson, R., & Fuloria, S. (2010). Who Controls the off Switch? Paper presented at the 2010 1st IEEE International
Conference on Smart Grid Communications (SmartGridComm), Gaithersburg, MD.
Bartels, G. (2011). Combating smart grid vulnerabilities. Journal of Energy Security.
Baumeister, T. (December 2010). Literature Review on Smart Grid Cyber Security, from
http://csdl.ics.hawaii.edu/techreports/10-11/10-11.pdf
Boyer, W. F., & McBride, S. A. (2009). Study of Security Attributes of Smart Grid Systems- Current Cyber Security
Issues (U.S. Department of Energy Office of Electricity Delivery and Energy Reliability, Trans.). Idaho Falls, ID: Idaho National
Laboratory Critical Infrastructure Protection/Resilience Center.
Cavoukian, A., Jules, P., & Christopher, W. (2009). SmartPrivacy for the smart grid: embedding privacy into the design
of electricity conservation. Toronto, Ontario: Information and Privacy Commissioner, Ontario, Canada.
Cyber Security Coordination Task Group. (2009). Smart grid cyber security strategy and requirements (Draft. ed.).
Gaithersburg, MD: U.S. Dept. of Commerce, National Institute of Standards and Technology.
Dagle, J. (2009). Summary of Cyber Security Issues in the Electric Power Sector. Ft. Belvoir: Defense Technical
Information Center.
Dan, G., & Sandberg, H. (2010). Stealth Attacks and Protection Schemes for State Estimators in Power Systems. Paper
presented at the 2010 1st IEEE International Conference on Smart Grid Communications (SmartGridComm), Gaithersburg, MD.
Davis, C. M., Tate, J. E., Okhravi, H., Grier, C., Overbye, T. J., & Nicol, D. (2006, 17-19 Sept. 2006). SCADA Cyber
Security Testbed Development. Paper presented at the 38th North American Power Symposium (NAPS 2006) Carbondale, IL.
Depuru, S. S. S. R., Wang, L., & Devabhaktuni, V. (2011). Smart meters for power grid: Challenges, issues, advantages
and status. Renewable & Sustainable Energy Reviews, 15(6), 2736-2742. doi: 10.1016/j.rser.2011.02.039
Dong, W., Yan, L., Jafari, M., Skare, P., & Rohde, K. (2010). An Integrated Security System of Protecting Smart Grid
against Cyber Attacks. Paper presented at the 2010 Innovative Smart Grid Technologies (ISGT), Gaithersburg, MD.
Fadlullah, Z. M., Fouda, M. M., Kato, N., Xuemin, S., & Nozaki, Y. (2011). An early warning system against
malicious activities for smart grid communications. Network, IEEE, 25(5), 50-55. doi: 10.1109/MNET.2011.6033036
Fouda, M. M., Fadlullah, Z. M., Kato, N., Rongxing, L., & Xuemin, S. (2011). Towards a light-weight message
authentication mechanism tailored for Smart Grid communications. Paper presented at the IEEE Conference on Computer
Communications Workshops, Shanghai, China.
Gustavsson, R., & Stahl, B. (2010). The empowered user-The critical interface to critical infrastructures. Paper
presented at the 2010 5th International Conference on Critical Infrastructure (CRIS), Beijing, China.
Hamlyn, A., Cheung, H., Mander, T., Wang, L., Yang, C., & Cheung, R. (2008). Computer Network Security
Management and Authentication of Smart Grids Operations. Paper presented at the 2008 IEEE Power & Energy Society General
Meeting, Pittsburgh, PA. http://b-dig.iie.org.mx/BibDig/P09-0134/PESGM2008-001547.PDF
Hiskens, I. A., & Akke, M. (1999). Analysis of the Nordel power grid disturbance of January 1, 1997 using trajectory
sensitivities. IEEE Transactions on Power Systems, 14(3), 987-994.
Holmgren, A. J., Jenelius, E., & Westin, J. (2007). Evaluating Strategies for Defending Electric Power Networks
Against Antagonistic Attacks. IEEE Transactions on Power Systems, 22(1), 76-84. doi: 10.1109/TPWRS.2006.889080
Idaho National Laboratory. (2011). Vulnerability analysis of energy delivery control systems. Idaho Falls, ID: U.S.
Department of Energy.
Inc., K. (2010). The U.S. Smart Grid Revolution: Smart Grid Workforce Trends 2011 (pp. 37): The GridWise Alliance.
Iyer, S. (2011). Cyber Security for Smart Grid, Cryptography, and Privacy. International Journal of Digital Multimedia
Broadcasting, 2011. doi: 10.1155/2011/372020
Kim, T. T., & Poor, H. V. (2011). Strategic Protection Against Data Injection Attacks on Power Grids. IEEE
Transactions on Smart Grid, 2(2), 326-333. doi: 10.1109/TSG.2011.2119336
Kosut, O., Liyan, J., Thomas, R. J., & Lang, T. (2010). On Malicious Data Attacks on Power System State Estimation.
Paper presented at the 2010 45th International Universities Power Engineering Conference (UPEC 2010), Cardiff, Wales.
Ling, A. P. A., & Masao, M. (2011). Selection of Model in Developing Information Security Criteria on Smart Grid
Security System. Paper presented at the 2011 IEEE 9th International Symposium on Parallel and Distributed Processing with
Applications Workshops (ISPAW), Los Alamitos, CA.
Naone, E. (2010, August 2). Hacking the Smart Grid. Technology Review.
National SCADA Test Bed. (2009). Study of security attributes of smart grid systems: Current cyber security issues.
Idaho Falls, ID: INL Critical Infrastructure Protection/Resilience Center.
Office of the Information & Privacy Commissioner of Ontario, Hydro One, GE, IBM, & TELVENT. (2011).
Operationalizing privacy by design the Ontario smart grid case study. Toronto, Ontario: Information and Privacy Commissioner
of Ontario.
Pearson, I. L. G. (2011). Smart grid cyber security for Europe. Energy Policy, 39(9), 5211-5218. doi:
10.1016/j.enpol.2011.05.043
Qian, H., & Blum, R. S. (2011). New hypothesis testing-based methods for fault detection for smart grid systems. Paper
presented at the 2011 45th Annual Conference on Information Sciences and Systems (CISS 2011), Baltimore, MD.
Reddi, R. M. (2010). Real time test bed development for power system operation, control and cybersecurity. Master's
thesis, Mississippi State University, Mississippi State, MS Retrieved from
http://library.msstate.edu/etd/show.asp?etd=etd-11112010-175544 Available from OCLC WorldCat database.
Robles, R. J., & Tai-hoon, K. (2010). Communication Security for SCADA in Smart Grid Environment. Paper presented
at the 9th WSEAS International Conference on Data Networks, Communications, Computers (DNCOCO 2010), Athens, Greece.
Salmeron, J., Wood, K., & Baldick, R. (2004). Analysis of electric grid security under terrorist threat. IEEE
Transactions on Power Systems, 19(2), 905-912.
Scarfone, K., Grance, T., & Masone, K. (2008). Computer security incident handling guide: Recommendations of the
National Institute of Standards and Technology. (800-61 Revision 1). Gaithersburg, MD: Computer Security Division,
Information Technology Laboratory, National Institute of Standards and Technology Retrieved from
http://csrc.nist.gov/publications/nistpubs/800-61-rev1/SP800-61rev1.pdf.
Sheldon, F. T., & Okhravi, H. (2010). Data Diodes in Support of Trustworthy Cyber Infrastructure. Oak Ridge, TN:
Oak Ridge National Laboratory.
Simmhan, Y., Kumbhare, A. G., Baohua, C., & Prasanna, V. (2011). An Analysis of Security and Privacy Issues in
Smart Grid Software Architectures on Clouds. Paper presented at the 2011 IEEE 4th International Conference on Cloud
Computing (CLOUD 2011), Los Alamitos, CA.
Smart Grid Information Clearinghouse. (2011). Deployment Experience, from
http://www.sgiclearinghouse.org/Deployment
So, H. K. H., Kwok, S. H. M., Lam, E. Y., & King-Shan, L. (2010). Zero-configuration Identity-based Signcryption
Scheme for Smart Grid. Paper presented at the 2010 1st IEEE International Conference on Smart Grid Communications
(SmartGridComm), Gaithersburg, MD.
Sugwon, H., & Myongho, L. (2010). Challenges and Direction toward Secure Communication in the SCADA System.
Paper presented at the 2010 8th Annual Communication Networks and Services Research Conference (CNSR), 11-14 May 2010,
Los Alamitos, CA, USA.
Ten, C.-W., Liu, C.-C., & Govindarasu, M. (2008). Cyber-vulnerability of power grid monitoring and control systems.
Paper presented at the Proceedings of the Fourth Annual Workshop on Cyber Security and Information Intelligence Research,
Oak Ridge, TN. http://powercyber.ece.iastate.edu/publications/CSIIR-extended.pdf
The Energy Sector Control Systems Working Group. (2011). Roadmap to Achieve Energy Delivery Systems
Cybersecurity (pp. 80).
The Smart Grid Interoperability Panel – Cyber Security Working Group. (August 2010). Guidelines for Smart Grid Cyber Security: Vol. 1, Smart Grid Cyber Security Strategy, Architecture, and High-Level Requirements. (NISTIR 7628). Gaithersburg, MD: National Institute of
Standards and Technology Retrieved from
https://www.evernote.com/shard/s66/res/307ff759-2782-4e63-990a-2b438a01574b/nistir-7628_vol1.pdf.
The Smart Grid Interoperability Panel – Cyber Security Working Group. (August 2010). Guidelines for Smart Grid Cyber Security: Vol. 2, Privacy and the Smart Grid. (NISTIR 7628). Gaithersburg, MD: National Institute of Standards and
Technology Retrieved from
https://www.evernote.com/shard/s66/res/307ff759-2782-4e63-990a-2b438a01574b/nistir-7628_vol1.pdf.
The Smart Grid Interoperability Panel – Cyber Security Working Group. (August 2010). Guidelines for Smart Grid Cyber Security: Vol. 3, Supportive Analyses and References. (NISTIR 7628). Gaithersburg, MD: National Institute of Standards
and Technology Retrieved from http://csrc.nist.gov/publications/nistir/ir7628/nistir-7628_vol3.pdf.
United States Government Accountability Office. (2011). Electricity Grid Modernization: Progress Being Made on
Cybersecurity Guidelines, but Key Challenges Remain to be Addressed. (GAO-11-117). Retrieved from
http://www.gao.gov/products/GAO-11-117.
Wang, T.-K., & Chang, F.-R. (2011, 26-28 May 2011). Network Time Protocol Based Time-Varying Encryption System
for Smart Grid Meter. Paper presented at the 2011 Ninth IEEE International Symposium on Parallel and Distributed Processing
with Applications Workshops (ISPAW), Busan, South Korea.
Wang, Y. (2011). sSCADA: securing SCADA infrastructure communications. International Journal of Communication
Networks and Distributed Systems, 6(1), 59-78. doi: 10.1504/ijcnds.2011.037328
Wang, Y., Ruan, D., Xu, J., Wen, M., & Deng, L. (2010). Computational Intelligence Algorithms Analysis for Smart
Grid Cyber Security. In Y. Tan, S. Y. H. & T. K. C. (Eds.), Advances in Swarm Intelligence, Pt 2, Proceedings (Vol. 6146, pp.
77-84).
Zhuo, L., Xiang, L., Wenye, W., & Wang, C. (2010, Oct. 31 2010-Nov. 3 2010). Review and evaluation of security
threats on the communication networks in the smart grid. Paper presented at the Military Communications Conference
(MILCOM 2010), San Jose, CA.
Preliminary list of job roles
In addition to the literature above, PNNL researchers assembled a library of job
requisition and recruitment advertisements for roles expected to play a part in smart grid
cybersecurity. The job roles listed in Table 3 include a broad range of levels and departmental
affiliations, such as analyst, consultant, engineer, researcher, supervisor, and manager. The
appendix includes copies of the job descriptions listed below.
Integrating the job roles and classification with the NICE IA Framework
The National Initiative for Cybersecurity Education (NICE), led by the National Institute of Standards and Technology (NIST) and the Department of Homeland Security (DHS), is working to establish an operational, sustainable, and continually improving cybersecurity education program for the nation, promoting sound cyber practices that will enhance the nation’s security. The initiative comprises over twenty federal departments and agencies, and its work products are to serve as a resource to both the public and private sectors.
Table 3: Preliminary list of job roles

Job Title                                            Classification
Manager of Technology                                Manager
IT Development Supervisor                            Supervisor
Information Security Risk Analyst III                Analyst
Network Security Analyst                             Analyst
Senior Software Security Analyst                     Analyst
Smart Grid Senior Manager – Professional Services    Consultant
Smart Grid Consultant                                Consultant
Protection Emphasis Engineer                         Engineer
Substation SCADA Integration Engineer                Engineer
SCADA Protocol Engineer                              Engineer
Smart Grid Security Engineer                         Engineer
Integrative Security Assessment Researcher           Researcher
There are four components under the NICE initiative, but the work of the Smart Grid
Cybersecurity (SGC) panel is relevant to component three, titled “Cybersecurity Workforce
Structure”. The lead agency for component three is DHS, which is coordinating its efforts
through the National Cyber Security Division (NCSD). The goal is to define cybersecurity jobs and strategies for attraction, recruitment, retention, and career paths. The component contains the following Sub-Component Areas (SCAs):
● SCA1 – Federal Workforce (led by the Office of Personnel Management)
● SCA2 – Government Workforce (non-Federal, led by DHS)
● SCA3 – Private Sector Workforce (led by the Small Business Administration, Department of Labor, and NIST)
The initial work product under this initiative includes a DRAFT framework document
that enumerates cybersecurity functional roles across the government and extending into the
private sector. This document leveraged work performed by OPM and others to identify
cybersecurity roles and responsibilities across the federal government. OPM surveyed
approximately 50,000 federal employees and their supervisors to establish a high-level view of
cyber responsibilities and to construct a cybersecurity specific competency model. The
development of an overarching framework has been difficult as cybersecurity encompasses a breadth of disciplines, involving law enforcement investigations, intelligence community analysis, IT design, engineering, operations, and cyber defense.
The NICE effort will serve as a focal point for existing and future cybersecurity
workforce development initiatives. The program outputs will certainly drive future development
efforts and will likely shape cybersecurity roles over time. The SGC project team has been
monitoring and engaging with NICE program leadership and activities to align our work and
help build upon this nation-wide effort. The SGC project has partnered with the NICE organizers
to improve capabilities and effectiveness of cybersecurity professionals and specifically to align
the needs of smart grid cybersecurity roles and responsibilities with this evolving standard
competency framework.
The starting point for the alignment begins by evaluating the current draft of the NICE
Cybersecurity Specialties Framework. The framework captures 31 high-level cybersecurity
specialties across seven categories. The categories include:
● Securely Provision
● Operate and Maintain
● Support
● Protect and Defend
● Investigate
● Operate and Collect
● Analyze
The scope of the SGC project places a focus on operational cybersecurity job roles
employed by utilities. This focus draws on cybersecurity specialties that fall primarily under the
‘Operate and Maintain’ and ‘Protect and Defend’ categories. The structure of the Framework
document identifies and describes a specialty and provides sample job titles. The document
further describes applicable job tasks and lists high-level competencies and KSAs.
It is important to note that job roles shown in Table 3 do not necessarily equate to job
titles or functional roles defined in the NICE Framework, and several roles may be represented
by one specific employee/job position. In addition to the job roles identified in the literature
review, panel leadership has suggested that the following roles may also be associated with smart
grid operational cybersecurity:
● Advanced Meter Security Specialist (Platform Specialist)
● Security Administrator (certificate management, etc.)
● Security Architect (many architects are involved and consulted in new technology
deployments on operational matters)
● Network Security Specialists
● Security Operations Specialists
● Incident Response Specialist/Analyst
● Intrusion Analyst
● Penetration Tester/Red Team Technician
● Risk/Vulnerability Analyst
● Telecommunications Engineer
● Reverse Engineer
● Meter or Field Device Technician
The project team reviewed cybersecurity specialties in the following NICE Framework
categories: Securely Provision; Operate and Maintain; Protect and Defend; and Investigate. The
review compared Job Performance Panel member input and literature review results against the
current description of the cybersecurity specialties to include the corresponding task,
competencies, and KSA fields. The review included a broader list of utility smart-grid-related job roles, knowing that the panel would select three to four operational job roles to focus its work.
The greatest alignment appears to be with the cybersecurity specialties by category shown in
Table 4.
SGC Panel Members extended the NICE Framework by adding roles and responsibilities
that must work closely with cybersecurity specialties to address cybersecurity issues within the
context of an organization. Panel members identified job roles within legal and public relations
that consistently interface with cybersecurity functional roles to manage security. The SGC
project team will provide that feedback to NICE representatives and share the overall alignment
review.
It is also important to note that SGC Project team members have joined the Industrial
Control Systems Joint Working Group (ICSJWG) Workforce Development Subgroup to help
align the current framework with the broader ICS community and continue to provide relevant
input and monitor the progression of the framework.
Table 4: Mapping SGC Job Roles to the NICE IA Framework
Securely Provision – Specialty areas concerned with conceptualizing, designing, and
building secure IT systems, with responsibility for some aspect of the system’s development.
Operate and Maintain – Specialty areas responsible for providing the support,
administration, and maintenance necessary to ensure effective and efficient IT system
performance and security.
Protect and Defend – Specialty areas responsible for the identification, analysis, and
mitigation of threats to internal IT systems or networks.
VIGNETTES
The SME Panel developed a list of vignettes using VivoInsight, which supports anonymity for participants, parallel input by participants, and shifting perspectives of the
participants through display of others’ insights. These features of Group Decision Support
Systems (GDSS) are frequently shown to be critical for maximizing the productivity and
creativity of problem-solving and brainstorming groups (Nunamaker et al., 1997). In the current
study, the panel had identified a total of 109 vignettes in twenty minutes, though a few had been
created prior to the session by the panel leadership. This represents an idea generation rate of
over 5 vignettes per participant. A previous study (Tobey, forthcoming) using the Groupmind
IdeaSet online collaboration system which also supports anonymity and parallel input, but where
perspective-shifting was constrained by the display design, had shown idea generation rates of
2.5 vignettes per participant during a 20-minute brainstorming session. The user interface of
VivoInsight tool selected for this panel was specifically designed to draw attention to new
contributions through highlighting the contributions by other panel members within the focal
frame of each panel participant. This may explain the significant increase in creative production
which occurred.
These vignettes were then categorized by the type of response required and analyzed to
determine those most critical to determining smart grid cybersecurity job performance. Table 5
lists those vignettes which are related to maintaining a secure posture through operational
excellence. Table 6 lists those vignettes which are related to effective response to a threat or
vulnerability.
These vignette listings were included in a survey to determine the criticality of the
vignette. Criticality was defined by how frequently the issue arose (5-point Likert scale ranging
from Never to Constantly) and/or how severe a problem it posed or how high a priority it was
that the issue was resolved (5-point Likert scale ranging from Not a Problem to Critical
Problem). Items were determined to be frequent or severe if they had an average rating greater
than 3.5. These highly critical vignettes were used to develop a job description by defining the
roles, mission, responsibilities, and processes necessary to perform effectively in these critical
situations.
Twenty-six of the thirty panel members responded to the survey, providing a response rate of 86.7%. However, seven of these responses were incomplete and had to be removed from the
results, resulting in 19 survey responses. Interrater agreement was calculated using the aWG
index (R. D. Brown & Hauenstein, 2005). The panel responses were found to exhibit a moderate
level of agreement (aWG = 0.556) according to LeBreton and Senter (2008, Table 3). Overall
frequency estimates showed slightly higher levels of interrater agreement (FREQaWG = 0.582)
than estimates of severity (SEVaWG = 0.53). There was slightly greater agreement on
non-critical vignettes (FREQaWG = 0.604; SEVaWG = 0.552) than critical vignettes (FREQaWG = 0.574; SEVaWG = 0.521), but this difference was not significant (p > .4 in both
cases).
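For readers who wish to reproduce the agreement statistics, the sketch below implements the single-item aWG index as we understand the Brown and Hauenstein (2005) formulation; the exact variant and the aggregation across vignettes used in this study are not detailed here, so treat the code and the example ratings as illustrative.

    def awg(item_ratings, low=1, high=5):
        """Single-item aWG interrater agreement (after Brown & Hauenstein, 2005):
        1 minus twice the observed variance divided by the maximum variance
        obtainable at the observed mean. Assumes at least two judges and a mean
        away from the scale endpoints (otherwise max_var is zero)."""
        k = len(item_ratings)
        mean = sum(item_ratings) / k
        var = sum((r - mean) ** 2 for r in item_ratings) / (k - 1)  # sample variance
        max_var = ((high + low) * mean - mean ** 2 - high * low) * k / (k - 1)
        return 1.0 - (2.0 * var) / max_var

    # Example: 19 usable responses rating one vignette's frequency on a 1-5 scale.
    responses = [4, 4, 3, 5, 4, 4, 3, 4, 5, 4, 3, 4, 4, 5, 4, 3, 4, 4, 4]
    print(f"aWG = {awg(responses):.3f}")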
Table 5: Operational Excellence Vignettes
Table 6: Threat or Vulnerability Response Vignettes
The survey results indicated that 30 vignettes should be archived as they were not critical
to maintaining an effective smart grid security posture. The results were reviewed with the panel
during the next step in the job performance modeling process to define the context for
developing a Smart Grid Cybersecurity Job Description. During this review it was determined
that an additional 10 vignettes should be included in the critical list. This final list of critical
vignettes was sorted by the program manager, the panel chairperson, and the panel
vice-chairperson, into 13 master vignettes (see Table 7).
While on average there were 6.85 vignettes that served as examples of each master
vignette, the number of these example scenarios per master vignette ranged from a low of 3 to a
high of 15. The master vignette Network Attacks had the largest number of example vignettes.
This was followed by Threat and Vulnerability Management with 11 example vignettes. The
master vignettes with the fewest example scenarios were Access Control Maintenance,
Phishing Incidents, and Security Testing; each of these had only three examples.
Each of these thirteen master vignettes was loaded into a knowledge exchange (KE) tool
that is available at all times through the VivoWorks SiteSpace portal system. The panelists were
asked to: 1) Discuss whether the examples listed shared a common set of roles and methods
(steps); 2) Describe any notable differences; and 3) Suggest additional examples necessary to
fully define this class of situations or indicate whether an example should not be included in this
classification. The unstated purpose of these discussions was to help the panel evolve a common understanding regarding the scope of the job description and exploratory performance model
they would be developing. Panelists clearly picked up on the need for setting these boundaries as
can be seen in the anonymous posts below provided in the discussion of the Phishing Incidents
and Data Leakage and Theft vignettes:
PHISHING INCIDENTS
While this is a real threat and occuring [sic] all the time and with greater frequency - is this too far off the
main topic for defining key roles for Smart Grid? This is a much larger issue and relates to much more
than just Smart Grid.
DATA LEAKAGE AND THEFT
Similar to the phishing vignette, how far removed from direct Smart Grid experience are some of these
items? These are all significant issues, but some of them need to be dealt with at different levels and/or
more holistically and not just as it relates to Smart Grid. Loss of PII is a much larger problem. I am
only making the comment to fully understand the actual scope of defining these roles.
Similarly, a dialogue regarding the Substation/SCADA Attacks vignettes questioned whether
some of the examples were too broad and whether additional examples were necessary to
address smart-grid-specific components, such as substation controls:
These vignettes appear to be specifically targetted [sic] at malware or improper physical controls. What
about substation specific concerns; e.g. how does a vulnerability or intrusion into a substation control
affect that substation, or how does it affect networked substations, etc.
Important question as system health or status will ultimately impact grid operation decisions and
supporting technology decisions. Real time response decisions are very difficult in operational systems.
The coordination, planning, and communication is essential when considering actions that may impact
operational systems.
Table 7: Master Vignettes
The examples given don't really fit the main topic here.
Poisoned Timing Data Input to Disrupt Synchronization: This example can happen at multiple
places.
False data input into power system state estimation model: Usually, we run the state estimation at
the utility control center.
Rogue devices with wireless communications enabled are being place in multiple substations and
accessed remotely by attacker to probe the substation infrastructure and communications links
back to the Utility. It is unknown how many substations are affected: It is unclear whether this
example is referring to AMI, or wireless SCADA?
Compromised device has maliciously embedded hardware or chipset. Can apply to any equipment
placed in the power system.
zero day attack - new malware detected on control system components. This is too broad and can
apply to any equipment controlling the power system.
Physical security vulnerability: LAN ports in common areas in Office premises/ Sub-stations/
Datacenter allow access to anyone connecting to that port. Also applies to different domains.
Each Job Performance Panel must achieve a working consensus on the degree of
specificity and scope necessary to fully articulate the critical performance factors that will
determine success on the job. While the examples above suggest the panel was concerned about
too broad a scope, other discussions focused on whether some master vignettes had merged
together important events that may need to be analyzed separately. The vignettes therefore
appear to serve as important artifacts that may help the panel develop a richer and more robust
articulation of the practices through which expertise and skill development affect security
posture and effective response. While the ultimate test will come in future JPM steps, where the
panel develops the list of tasks, methods, and tools, a good example of how vignettes serve as an
impetus to richer understanding is the dialogue around one of the examples categorized under
Encryption Attacks. The discussion ranges from whether examples are appropriately categorized
to whether the scope needs to go beyond internal staff to include vendors and other external
parties:
This vignette seems out of place: Running security tests on equipment and software that might brick [sic],
reset, or crash a system [188]. Rather than an encryption attack, I think this is a vulnerability-management
vignette.
Different vendor networks probably have very different models for handling encryption keys and the PKI in
general. These differences will result in very different understandings of what is possible and what is not,
and how damaging a key compromise may be… If a vendor has no way of revoking keys the loss of keys is
a major problem. If a vendor has well exercised tools for key revocation, then loss of keys is a simple
matter of executing the key replacement procedure and carrying out the revocation procedure… I suspect
there is a great deal of vendor specific details here.
Some of the current solutions have components that are managed by third-party organizations.
Understanding how to manage these situations is going to be critical. Implementors [sic] need to
understand how third-party services impact their deployed encryption technologies. They also need to be
training on how to interact with these third-party organizations so that these concerns are outlined during
acquisition.
The next sections discuss further elaboration of these master vignettes by the panel. The
panel began by defining a list of process stages in addition to the three common stages that all
vignettes share: preconditions, onset, and conclusion. They then reviewed and expanded the list
of job roles and assigned these roles to each process stage. Finally, they defined a set of goals
and objectives that may be used to assess performance in the thirteen master vignettes.
Collectively, these definitions form the job description for smart grid cybersecurity.
Further elaboration of the master vignettes will occur over the remainder of the first
phase of the project. For each stage of the master vignette process, the panel will detail the plot
and storyline of the vignette. This will include:
● What may happen: The situational conditions (tools, systems, software, data asset,
identities, products, monitoring, alerting and response systems) and key concepts (what
prior knowledge is required)
● Where it may happen: The physical location (e.g., office, virtual, plant floor, data center),
virtual location (layer of the stack, e.g., network, OS, application) and organizational
level or area (e.g., department, division, workgroup, center)
● Why it may happen: Breakdown of root cause (specific events) or why actions are taken.
● How it may happen: The decisions, procedures, options, common errors or best practices.
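One possible way to capture these four dimensions, together with the process stages and role
assignments discussed in the next section, is sketched below in Python. The field names and the
example entry are illustrative only and do not represent the project's actual data schema.

    from dataclasses import dataclass, field
    from typing import List

    @dataclass
    class VignetteStage:
        name: str                                   # e.g., "preconditions", "onset", "conclusion"
        roles: List[str] = field(default_factory=list)

    @dataclass
    class MasterVignette:
        title: str
        what: List[str] = field(default_factory=list)   # situational conditions, key concepts
        where: List[str] = field(default_factory=list)  # physical, virtual, organizational locations
        why: List[str] = field(default_factory=list)    # root causes or reasons actions are taken
        how: List[str] = field(default_factory=list)    # decisions, procedures, common errors
        stages: List[VignetteStage] = field(default_factory=list)

    # A skeletal, hypothetical entry for one of the thirteen master vignettes.
    phishing = MasterVignette(
        title="Phishing Incidents",
        where=["office", "virtual"],
        stages=[VignetteStage("preconditions", ["Security Operations"]),
                VignetteStage("onset", ["Incident Response"]),
                VignetteStage("conclusion", ["Security Operations"])],
    )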
PROCESS STEPS DEFINING FUNCTIONAL RESPONSIBILITIES
The panel began elaborating the master vignettes by listing the stages through which each
vignette would likely progress, along with the roles that would need to be involved at each stage
(see Table 8). Each vignette includes three stages by default: preconditions, onset, and
conclusion. The precondition stage occurs just prior to the organization becoming aware that the
vignette has begun to play out, and describes monitoring or other situation awareness functions
intended to identify when a vignette has become active. The onset stage begins upon the
occurrence of the critical event that signals the vignette has begun. The conclusion stage occurs
upon resolution of the vignette and describes ongoing maintenance of post-vignette conditions.
Table 8: Master Vignette Process Stages
JOB ROLE INVOLVEMENT IN MASTER VIGNETTES
The panel members analyzed each of the above stages and assigned job roles to each
stage. An analysis was then performed to determine the nominal (total) and relative (percentage
of all assignments) number of assignments made for each job role. Table 9 lists each of the job
roles and the number of stages within each vignette assigned to that job role. The list is sorted by
the total number of stages in which each job role appeared. The top ten roles by nominal count
of assignments collectively represent nearly half of all role assignments. Table 10 lists the
relative involvement of roles in the vignettes. This analysis suggests which roles may be the
most critical to analyze further, as they may have greater responsibility for the overall
effectiveness of cybersecurity operations. The SME panel reviewed these analyses and
unanimously agreed that further development of the performance model should focus on the
three job roles of Security Operations, Incident Response, and Intrusion Analysis.
Once the focal job roles were identified, an analysis was performed to determine the
master vignettes in which the selected roles collectively have substantial involvement. Based on
the majority involvement rule described in the method section above, the decision was made to
eliminate 5 of the 13 master vignettes from further consideration. The remaining master
vignettes in which the three job roles have significant involvement are: AMI Attacks; Client
Side Attacks; Encryption Attacks; Network Attacks; Substation/SCADA Attacks; Network
Separation and Attack Paths; Phishing Incidents; and Incident Response and Log Management.
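The involvement analysis itself is simple to reproduce. The Python sketch below tallies nominal
and relative role involvement from stage-level assignments and applies one plausible reading of
the majority involvement rule, namely that the focal roles must account for more than half of a
vignette's role assignments. That threshold is an assumption here, and the assignments shown are
illustrative rather than the panel's actual data.

    from collections import Counter

    # Hypothetical stage-to-role assignments for two master vignettes, standing
    # in for the panel data summarized in Tables 9 and 10.
    assignments = {
        "AMI Attacks": [
            ("preconditions", "Security Operations"),
            ("onset", "Intrusion Analysis"),
            ("onset", "Incident Response"),
            ("conclusion", "Security Operations"),
        ],
        "Data Leakage and Theft": [
            ("preconditions", "Training specialist"),
            ("onset", "IT manager"),
            ("conclusion", "Security Operations"),
        ],
    }

    FOCAL_ROLES = {"Security Operations", "Incident Response", "Intrusion Analysis"}

    for vignette, stage_roles in assignments.items():
        roles = [role for _, role in stage_roles]
        nominal = Counter(roles)                # total assignments per role (cf. Table 9)
        total = sum(nominal.values())
        relative = {r: n / total for r, n in nominal.items()}  # share per role (cf. Table 10)
        focal_share = sum(relative.get(r, 0.0) for r in FOCAL_ROLES)
        # Assumed rule: retain the vignette only when the focal roles hold a
        # majority of all stage assignments.
        print(f"{vignette}: focal share = {focal_share:.0%}, retained = {focal_share > 0.5}")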
Table 9: Nominal Number of Job Roles per Master Vignette
Table 10: Percentage of Role Involvement in Each Master Vignette
GOALS AND OBJECTIVES FOR ASSESSING PERFORMANCE
The final component of a Job Description is a list of goals that the panel believes must be
accomplished for effective performance, and the criterion, i.e., the objective measure, that will
be used to assess such performance. The panel identified a total of 108 goals and sorted them
into three categories: primary, secondary, and tertiary. Primary goals must be accomplished to
achieve the organizational mission. Secondary and tertiary goals must be accomplished to
successfully achieve a higher-level goal. This sorting resulted in a list of 27 primary goals (see
Table 11). Each goal description was elaborated using the PRISM method for goal setting
(Tobey et al., 2007) to produce a set of outcome indicators that may later be used in assessments
and certification exams to determine the relative level of performance.
The SME panel ranked the importance of each goal to achieving an effective response
during each of the eight selected master vignettes. Table 12 lists the seven goals determined to
be most important. Table 13 shows the PRISM definitions for these goals, which were edited to
be consistent with the Bloom taxonomy of action verbs.
Table 11: Primary Goals
Table 12: Important Goals
Table 13: PRISM Definition for Important Goals
TASK ANALYSIS
Data collection from the Job Analysis Questionnaire (JAQ) will continue into the second
phase of the project. This report provides the preliminary results from data collected through
May 22, 2012. In this section we review information collected about those expressing interest in
and responding to the JAQ, followed by a brief summary of the trends that appear to be
developing in these responses. In the next section we report a very preliminary set of findings
regarding the development of the Criticality-Differentiation Matrix (CDM), which, as discussed
above, will be the primary analytical tool, along with an exploratory factor analysis of the JAQ
responses, in developing the Smart Grid Cybersecurity Job Performance Model.
Participants and Respondents
As of the closing date for this report, 129 people had responded to the JAQ (100 male; 23
female; and 6 not reported). Response rates are difficult to measure for internet surveys, but the
design of the VivoSurvey™ system did enable us to track the number of people who expressed
interest in the JAQ by clicking on the link in their email and accessing the introductory landing
page on the survey system website. From this landing page the interested participant could elect
to participate by clicking on a link to access the demographic survey page. We created two
response rate statistics based on this information. First, we calculated the number of people
expressing interest as a percentage of the total number of invites sent through the various
channels distributing the JAQ. However, there was much duplication in these lists, so we include
in the calculation of interest only those channels that had at least one respondent access the
survey site. We refer to the resulting statistic as the Participation Rate, which was 7% as of May
22, 2012. Second, we calculated the number completing the JAQ demographic page as a
percentage of the total number expressing interest in participating. We refer to this statistic as
the Response Rate. The response rate as of May 22, 2012 was 43.6%.
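A minimal sketch of these two statistics follows. The invite and interest counts are hypothetical,
reconstructed approximately from the rates quoted in this section, and the channel names are
illustrative.

    # Only channels with at least one visitor to the survey site count toward
    # the participation rate, as described above. Counts are hypothetical.
    invites = {"trade associations": 3559, "utilities and related": 741}
    interested = {"trade associations": 114, "utilities and related": 182}

    active = [c for c, n in interested.items() if n > 0]
    participation_rate = sum(interested[c] for c in active) / sum(invites[c] for c in active)

    completed_demographics = 129  # respondents completing the demographic page
    response_rate = completed_demographics / sum(interested.values())

    # Yields roughly 7% participation and 43.6% response, matching the rates
    # reported above.
    print(f"participation = {participation_rate:.1%}, response = {response_rate:.1%}")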
The participation rate and the response rate were monitored throughout the administration
of the JAQ and continue to be an important source for targeting groups most likely to participate.
Based on these data, we conducted three waves of survey administration. The first wave
was broadly disseminated to approximately 18 trade associations and organizations representing
a large number of potential respondents. However, the participation rate from this group was
only 3.2%. Consequently, during Waves 2 and 3 of the survey administration our efforts were
concentrated on utilities and related organizations, which sent far fewer invites but seemed to be
receiving much higher rates of expressed interest and response. This second group represents
only 741 of the approximately 4,300 invitees, but had a participation rate to date of 25%.
Furthermore, the response rate from this group has been strong, with over 62% of participants
becoming respondents to the JAQ. We have begun to narrow our focus further within this group,
with webinars planned for select utilities that have committed to have their staff complete the
entire JAQ in one or more sittings.
Demographic Survey Responses
The demographic survey collected basic information about the respondents that will be
useful for post-hoc comparative analyses of the JAQ data. Table 14 shows that the respondents
were well distributed across age groups. Similarly, the respondents come from organizations of
various sizes, though the largest share work at large organizations with 10,000 or more
employees (see Table 15). The most common job title is Cyber security analyst, but that title
represents only 28% of the respondents, as shown in Table 16. Respondents indicated that they
have held their position an average of 4.84 years (s.d. = 4.68). Finally, Table 17 indicates that
our objective of reaching those with a broad range of experience was achieved, with nearly
equal representation from the Competent, Proficient, and Expert categories in cyber security. Of
note, the distribution of smart grid operations experience shows a similar structure but is skewed
toward the early stages of development, reflecting the emergent nature of this field.
Table 14: Distribution of Respondents

Age Group       Percentage of Respondents
21-30           9%
31-40           27%
41-50           28%
51-60           22%
Over 60         4%
Not reported    10%
Table 15: Size of Respondent Organization

Number of Employees    Percentage of Respondents
Less than 10           2%
10-99                  11%
100-999                15%
1,000-4,999            16%
5,000-9,999            5%
10,000+                43%
Unreported             8%
Table 16: Job Titles of Respondents

Job Title                                    Percentage of Respondents
Control systems engineer (CT01)              5.84%
Control systems operator (CT02)              0.73%
Control systems manager (CT03)               1.46%
Training specialist (CT04)                   2.19%
IT Executive (CT18)                          2.19%
IT manager (CT05)                            4.38%
IT professional (CT06)                       16.06%
IT systems administrator (CT07)              3.65%
Network engineer (CT08)                      9.49%
Intrusion analysis staff (CT11)              5.84%
Intrusion analysis manager (CT12)            2.19%
Incident handling staff (CT13)               5.11%
Incident handling manager (CT14)             2.92%
Cyber security analyst (CT15)                28.47%
Cyber security operations staff (CT09)       10.22%
Cyber security operations manager (CT10)     5.11%
Cyber security manager (CT16)                10.95%
Cyber security executive (CT17)              6.57%
Other                                        20.44%
Table 17: Levels of Experience

How would you classify your level of expertise in the cyber security field?        Percentage
Novice: minimal knowledge, no connection to practice (LE1)                          2.92%
Beginner: working knowledge of key aspects of practice (LE2)                        14.60%
Competent: good working and background knowledge of the area (LE3)                  24.09%
Proficient: depth of understanding of discipline and area of practice (LE4)         24.82%
Expert: authoritative knowledge of discipline and deep tacit understanding
  across area of practice (LE5)                                                     23.36%
No answer                                                                           10.22%

What level of familiarity do you have with smart grid operations?                   Percentage
Novice: minimal knowledge, no connection to practice (LE1)                          19.71%
Beginner: working knowledge of key aspects of practice (LE2)                        26.28%
Competent: good working and background knowledge of the area (LE3)                  26.28%
Proficient: depth of understanding of discipline and area of practice (LE4)         10.22%
Expert: authoritative knowledge of discipline and deep tacit understanding
  across area of practice (LE5)                                                     8.03%
No answer                                                                           9.49%
Ratings of Frequency and Importance by Level of Expertise
The main body of the JAQ is the task statement ratings. As explained in the Competency
Grid method review section, our goal is to collect sufficient ratings to support inferences
regarding the criticality and differentiation of each task in determining the need for, and the
factors impacting, performance of individuals with varying levels of expertise. As explained in
the method section above, a number of survey pages must be submitted to obtain a complete
JAQ submission. As of the closing date of this report, between 10 and 17 responses had been
received for each task in the JAQ. While this number of responses is far below that needed to
effectively analyze and make inferences from the data, we report some trends that will be
interesting to monitor as data collection continues during the second phase of the project.
Table 18 provides a brief overview of the responses collected through the closing date of
this report. It is important to emphasize that the sample size is currently far too low to conclude
anything from these statistics, so they are provided simply to suggest trends that may be worth
monitoring as further data are collected. The variance in the current data set is quite broad, as
demonstrated by the low average agreement using the aWG index (Brown & Hauenstein, 2005).
Currently, when combining ratings across all expertise levels, only 3 items show sufficient
agreement in frequency ratings to suggest that a consensus is emerging. Further, though it may
be premature to draw conclusions, the ratings suggest that the panel did an excellent job of
identifying tasks that are essential to job performance. As shown in Figure 11, the ratings are
highly skewed but resemble a normal distribution within the upper bounds of the rating scale.
Table 18: Overall Trends
An analysis of the ratings by expertise level, shown in Table 19, suggests further
interesting trends are developing. First, it appears that the instructions for rating items are
effective, as the ratings increase as the level of expertise rises. This should be expected because
individuals with lower-level skills should not be as frequently involved, and the tasks they
perform should be less important in determining the overall skill level of the performer. Also
interesting is that the agreement indices vary substantially across the levels of expertise. As
reported above, overall levels of consensus on ratings are very low. However, agreement on the
frequency ratings for novice and intermediate levels of expertise is much higher. This may
reflect the ease with which people understand the limitations of the early stages of expertise
development, while being less clear about when experts are required. Alternatively, this artifact
might have arisen because the participants are themselves in the early stage of expertise
development in smart grid operations, and hence may be more familiar with the roles played by
those with developing expertise. Finally, it is also important to note that criticality scores are
trending in the expected direction, with the criticality of tasks performed by novices rated much
lower than the criticality of those performed by individuals with higher levels of expertise.
Figure 11: Histogram of Frequency Ratings. The y-axis is the number of items rated in the
range listed on the x-axis.
Table 19: Summary of Preliminary JAQ Results by Level of Expertise
DIFFERENTIATING PERFORMANCE ON CRITICAL TASKS
Based on the responses received to date, we can begin to suggest tasks that should
become the focus for Phase II activities and further analysis by the SME panel. While additional
data collection is necessary to fully analyze the results, the distribution of responses has been
sufficient to develop an initial Criticality-Differentiation Matrix of tasks. In the method section
above we defined criticality as the product of the arithmetic means of frequency and importance
across all levels of expertise. We also defined differentiation as the slope of criticality scores
across expertise levels, signifying the frequency with which a person of a given skill level must
be involved and the importance of that task for determining the performer's skill level.
Fundamental tasks were defined as those that are rated as highly critical but show little
differentiation across these three levels. Performance on these tasks is essential and should be
considered a minimal entrance requirement for the field. Finally, differentiating tasks are those
that exhibit both high criticality and high differentiation scores.
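To make these definitions concrete, the following Python sketch scores a single task. The
product-of-means definition of criticality and the slope definition of differentiation come from
the text above; the least-squares fit, the assumption of three rated expertise levels, and the
example ratings and cutoffs are ours.

    import numpy as np

    def cdm_scores(mean_freq_by_level, mean_imp_by_level):
        """Criticality and differentiation for one task, given mean frequency
        and importance ratings per expertise level (assumed novice -> expert)."""
        freq = np.asarray(mean_freq_by_level, dtype=float)
        imp = np.asarray(mean_imp_by_level, dtype=float)
        # Criticality: product of the arithmetic means of frequency and
        # importance across all levels of expertise.
        criticality = freq.mean() * imp.mean()
        # Differentiation: slope of per-level criticality scores across the
        # expertise levels, fit here by least squares.
        per_level = freq * imp
        differentiation = np.polyfit(np.arange(len(per_level)), per_level, 1)[0]
        return criticality, differentiation

    def classify(criticality, differentiation, crit_cut, diff_cut):
        """Quadrant logic: highly critical tasks are fundamental when they show
        little differentiation and differentiating otherwise. Cutoffs (e.g.,
        the top three deciles used below) are supplied by the analyst."""
        if criticality < crit_cut:
            return "not critical"
        return "differentiating" if differentiation >= diff_cut else "fundamental"

    # Example: a task rated increasingly frequent and important at higher levels.
    crit, diff = cdm_scores([2.0, 3.5, 4.5], [3.0, 4.0, 4.8])
    print(classify(crit, diff, crit_cut=10.0, diff_cut=1.5))  # "differentiating"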
Table 20 lists the scores for criticality and differentiation by decile. The range of scores
is encouraging and permits experimentation with cutoff scores for the development of the
quadrant analysis. For now, given the small sample size, we have elected to focus on those tasks
found in the top three deciles. The preliminary list of Fundamental Tasks is shown in Table 21,
and the Differentiating Tasks are listed in Table 22. We expect that these lists will change as
data are collected, but they provide a good starting point for the initial panel discussions to take
place during the first step of Phase II of this project. The lists below are ordered by task ID
because insufficient data are available to prepare reliable weights for these tasks at this time.
Table 20: Criticality and Differentiation Scores by Decile
Table 21: Preliminary Fundamental Tasks (Ordered by Task ID)

Task ID  Task Description
9103     Analyze available logs and note gaps and time periods.
9111     Verify that all systems are logging to a central location.
9116     Understand incident response process and initiate incident handling according to documented policies and procedures.
9117     Identify and filter out false positives; if determined to be an incident, assign to incident handler.
9137     Analyze individual threat activity by correlating with other sources to identify trends.
9149     Implement intrusion prevention/detection solution.
9150     Understand the selected Security Event and Information Management tool.
9183     Understand the company's incident response process and procedures.
9191     Understand incident response, notification, and log handling requirements of business.
9200     Identify repeat incidents involving the same person or persons, systems, or adversaries.
9201     Prioritize systems within your network to determine which ones are of High, Moderate, or Low impact value.
9244     Report vulnerabilities to staff and stakeholders.
9254     Configure vulnerability scanners to operate safely and effectively in the targeted environment.
9259     Assess whether network scan results are real or false positives.
9262     Review vulnerability scan results.
9263     Test all vulnerability scanners for modes or configurations that would be disruptive to the communication paths and networks being tested and host communication processing, looking for possible conflicts that may result in negative operational impacts.
9265     Analyze vulnerability reports.
9268     Coordinate assessment of any target systems with System Owners ahead of time.
9270     Develop a scanning plan and make sure all network operations staff and key stakeholders are consulted and notified about the timing of test initiation.
9276     Review assessment results in accordance with defined risk categorization model.
9295     Communicate timing and schedule of scans.
9298     Coordinate efforts with the vendor to develop an understanding of the component and security implications.
9304     Understand how phishing attacks can adversely impact web-based management applications.
9314     Alert end users of potential risks and vulnerabilities that they may be able to mitigate.
9318     Understand environment (culture, staff) to create a better relationship for transmitting delicate and sometimes poorly understood information.
9319     Monitor industry groups and forums to stay up to date on the latest security vulnerabilities related to smart grid components.
9331     Identify threat actors.
9342     Identify sources of targets to scan.
9361     Review log files for signs of intrusions and security events.
9363     Develop and/or procure a data logging and storage architecture that scales and is fast enough to be useful for analysis.
9399     Coordinate with other departments to ensure that routine business operations are not affected during testing.
9406     Identify all systems that may be affected by testing.
9430     Verify all devices are being submitted to Security Information and Event Management for full network visibility.
9538     Communicate changes to user security tools and information regarding identified events and incidents.
9544     Monitor for new systems installed on the network.
9556     Communicate with the vendor to ensure you are registered to receive updates.
9572     Implement solution to identify new devices connecting to the network(s).
9575     Understand the data classification strategies that are in place.
9595     Maintain a prioritized list of critical resources.
9597     Maintain or be able to access a list of assigned system owners.
9604     Maintain incident data repository and analyze data and metrics regarding types of incidents, frequency, and systems impacted.
9606     Review past incidents to determine if host security solutions and logs are providing data that can identify an event.
9610     Report the attack Tactics, Techniques, and Procedures used in the last 6 months against the organization.
9611     Review tool configurations and target configurations to reduce false positives based on historic information.
9619     Develop a periodic verification process to ensure that the assets are logging in alignment with the intended operational architecture.
9628     Scan all affected systems to ensure the patch or mitigations are present and the risk associated with the vulnerability has been reduced as expected.
9629     Test all identified mitigations or patches to make sure they remove or mitigate the vulnerability as expected with no negative impacts.
9632     Identify security incidents that require training or awareness for users and security staff.
9633     Develop mitigations based on incidents analyzed and recommend improvements in security capabilities or tools as appropriate.
9640     Analyze the intrusion by looking for the initial activity and all follow-on actions of the attacker.
9641     Collect images of affected system for further analysis before returning the system to an acceptable operational state.
9674     Document all actions taken to contain systems.
9690     Assess what configuration settings result in capturing the required information for monitoring.
9701     Monitor all systems that were suspected or confirmed as being compromised during an intrusion/incident.
9703     Review running processes to determine if incident response successfully removed malware.
9708     Develop and publicize ways to distinguish between routine system errors and malicious activities.
9709     Protect classified or proprietary information related to the event, but release general incident information to stakeholders.
9710     Review incident response actions to ensure actions were taken properly.
9711     Monitor systems that were affected and the entire sub-network for activity associated with the attack.
9717     Monitor security news and intelligence sources to include vendor web pages for vulnerability disclosures, incident announcements, and knowledge briefs.
9718     Communicate with vendors about a vulnerability or incident in order to understand risk and devise a mitigation strategy.
9720     Decide what mitigations should be implemented on remote connections.
9722     Understand company policies and procedures for downloading and installing third-party software.
9725     Access company policies to verify that the software being downloaded is allowed.
9729     Scan systems in an attempt to detect the use of unacceptable software.
9750     Define reports on the current patch and update status of all security tools and identify any variances against vendor releases.
9751     Establish a systems and tools patching program and schedule.
9755     Document current patch levels and updates before use in critical situations.
9781     Sign up for vendor notifications and alerts.
9785     Maintain a current list of stakeholders' contact information and link this information to notification requirements.
9791     Monitor for unauthorized access to tools and data.
9802     Define security events and incidents with evaluation criteria.
9807     Develop Security Event and Information Management rule sets to detect documented event classes for each monitored system.
9808     Communicate warning signs of security events to internal stakeholders.
9809     Collect observed attacker Tactics, Techniques, and Procedures from available sources to include Information Sharing and Awareness Councils, peer utilities, and government sources.
9819     Analyze the security incident and identify defining attributes.
9831     Escalate findings to appropriate personnel to review event and ensure accuracy of false-positive findings.
9849     Report the time of discovery for all reportable events and incidents and the time of notification.
9850     Verify that all reported events and incidents were handled in compliance with the reporting requirements.
9859     Understand desired outcome as well as purpose of assessment so that the solution can be configured appropriately.
9860     Test the vulnerability assessment solution in a development environment to see if desired results are achieved.
9861     Implement monitoring system that meets design criteria.
9878     Minimize spread of the incident by ensuring contaminated systems cannot communicate to systems outside of the network boundary.
Table 22: Preliminary Differentiating Tasks (Ordered by Task ID)

Task ID  Task Description
9129     Review known intrusion Tactics, Techniques, and Procedures and observables to assist in profiling log events and capture event information that may relate to known signatures.
9192     Understand the basic components of an incident response process (Prepare, Identify, Contain, Eradicate, Recover, Lessons Learned).
9267     Develop a prioritized list of critical resources.
9338     Understand NERC CIP and audit requirements.
9348     Understand how to run Wireshark and tcpdump.
9414     Review all internal incidents to stay up to date on current threats and determine the best way to analyze them.
9491     Monitor vulnerability reports.
9527     Update database of device configurations upon changes to configurations.
9577     Understand data classification levels and how to identify such levels with assets.
9605     Review incidents over time to determine lessons learned or how to better align security tools.
9625     Assess the risk ratings of the vulnerability based on the technical information, how the technology is deployed, and the importance of the systems.
9627     Implement vulnerability mitigations in accordance with the plan, to include patches or additional security controls.
9634     Define how systems were initially compromised, how the attack progressed, and what observables were available for detection and response.
9649     Monitor security tool providers for updates and patches for tools that are in use.
9712     Report closing of the incident and all incident response processes that were followed.
9719     Monitor all logs associated with third parties accessing your systems; this may require a manual review against historic use profiles.
9749     Maintain a list of approved security tools and their approved patch levels.
9783     Maintain knowledge of reporting requirements associated with systems.
9857     Develop a standardized process to ensure appropriate steps are taken during and after an event occurs.
9877     Minimize spread of the incident by ensuring contaminated systems are monitored.
CONCLUSION
The National Board of Information Security Examiners has developed a new approach to
job task and competency analysis that is intended to identify the knowledge, skills, and abilities
(KSAs) necessary to successfully perform the responsibilities of three smart grid cybersecurity
job roles: security operations, intrusion analysis, and incident response. The overall goal of this
first phase of the project was to develop the frameworks and models necessary to promote
advances in workforce development to better prepare entry-level candidates for new jobs in the
field. The report will contribute to the development of valid curricula and assessments that
markedly improve the knowledge, skill, and ability of all practitioners involved in smart grid
cybersecurity job functions, across the continuum from novice to expert.
The results of the preliminary analysis of job description data have demonstrated the
value of shifting away from traditional job analysis and competency models, which tend to
define jobs at a high level suitable only for descriptive modeling. We proposed a new approach
based on an integrative, multidimensional theory of human performance, called the Competency
Box. This theory proposes that discrete definition and measurement of knowledge, skill, and
ability enables better understanding of learning curves and individual or team positioning within
the competency development space. Accordingly, a new approach was developed to elicit the
tasks that serve as indicators of progression within the Competency Box.
This project seeks to make important contributions to the science of competency
modeling and to the practice of smart grid cybersecurity competence assessment and
development:
1. Develop innovations in modeling techniques that may predict the potential of individuals
or teams to meet the future demands of a dynamic threat landscape
2. Develop innovations in the elicitation and definition of tasks that can dramatically
shorten the time required to create and maintain detailed competency models to facilitate
the inclusion of ground truth regarding vulnerabilities, adversary strategy and tactics, and
best practices for detection and defense
3. Develop a method to produce multiple role competency models that facilitate the creation
of interrelated competency profiles to support the maturation of organizational
competence in smart grid cybersecurity teams
The primary objective of the initial project phase was to use this theory to derive a job
performance model (JPM). Unlike prior descriptive models, JPMs are intended to support
predictive inferences through the development of a factor model. A second objective of this
initial effort was to develop competency models that could extend these inferences to predict
future performance, rather than simply providing an assessment of prior accomplishments or
producing normative lists of what a job should entail. Accordingly, we developed innovations in
both the evaluation and analysis of tasks, and a method for determining whether a task is
fundamental or differentiating based on the results of a detailed Job Analysis Questionnaire
(JAQ) in which practitioners of varying backgrounds and experience rate the frequency and
importance of tasks performed by individuals with varying levels of expertise.
The tasks listed in the JAQ were identified through a series of brainstorming sessions
with a group of 30 subject matter experts with broad representation across industry, academia,
and government. The development of a structured elicitation protocol enabled the SME panel to
generate increasing layers of detail in just a few weeks, resulting in detailed lists of job
behaviors, processes, and goals supporting effective response in real-world scenarios, or
vignettes. Moreover, the process substantially reduced the cycle time for competency modeling,
while grounding the model in the current truths regarding vulnerabilities, adversary tactics, and
effective defense techniques. Consequently, we expect the resulting job performance model to
achieve higher fidelity than previous approaches. Finally, we argue that this ground truth
approach to expertise development may help to ensure that education, training, practice, and
assessments are aligned with the current threat landscape and current best practices for detection
and mitigation techniques.
Throughout this process, cybersecurity in the smart grid environment was presumed to
involve several functional and job roles. The panel of subject matter experts (SMEs) identified
over 100 situations in which such roles may be involved and described how their individual
responsibilities related to each other throughout the work practice. A rich description of the
three targeted job roles emerged from this preliminary work, based on an innovative process of
job model elicitation using vignettes to capture exemplary performance or mis-use cases
involving errors or omissions. Thus, another important contribution of job performance models
may be the development of team assessments. These models may also foster better
understanding of the soft skills necessary to initiate, manage, and excel in the collaborative
problem solving activities typical of smart grid cybersecurity. Finally, the catalog of
fundamental and differentiating job tasks across multiple job roles may foster the creation of
shared libraries for curricula, assessments, and lab exercises that will help develop an adaptive
and versatile workforce and identify career path options for individuals based on their
competency profiles.
The next step will be to conduct a practice analysis in Phase II of the project to guide
selection from the list of fundamental and differentiating tasks. These tasks will then be further
elaborated using cognitive task and protocol analysis. The primary outcome of this effort should
be the development and validation of a set of proficiency and situational judgment item pools,
as well as simulation configurations that may be used to establish the construct and predictive
validity of these items. The confirmatory analysis performed during this phase will prepare the
material necessary to develop a potential performance analysis that can distinguish the
contributions of knowledge, skill, and ability factors in producing effective smart grid
cybersecurity job performance.
IMPLICATIONS FOR WORKFORCE DEVELOPMENT
The preliminary results of the Job Analysis Questionnaire suggest that the design of
smart grid cybersecurity development interventions would benefit from a focus on critical
incidents (Flanagan, 1954). Unlike the traditional definition of incidents in the cybersecurity
domain, critical incidents are defining moments in which differences in skill level are notable in
clearly identifiable outcomes of the actions taken. By integrating assessments, further defined
below, that capture progress against the job mission, modules can be designed that assess the
discrete contributions of knowledge, skill, and ability in producing performance outcomes.
Our results suggest that in developing smart grid cybersecurity education and training
courses or cyber competition content, the focus should be directed towards the methods or
behavioral sequences explicit or implicit in the responsibilities and tasks to be performed.
Moreover, an important outcome that emerged from this study is the importance of focusing on
the right tasks and the right learning objectives -- those that are fundamental or differentiating --
based on assessment data that indicate the positioning of an individual along the progression
from novice to master. Our SME panels informed us that while a single task may occur across
multiple vignettes and multiple job roles, the sequence and method of task execution may be
substantially altered by context. Thus, an important implication of our initial study for
workforce development is the need for situational judgment tests that can assess whether an
individual has selected the right task, using the right method, at the right time. During the next
phase of the project we will elaborate on the normative models developed in the current study.
We will also ask the SME panels to identify the mis-use cases that can form a set of distractor
choices to ensure that the test taker has developed sufficient understanding, or is able to
demonstrate skilled performance when faced with the distraction or distress that accompanies
simulated or real-world smart grid cybersecurity activities.
Perhaps most important, the research on developing job performance models has shown
the value of deliberate practice for accelerating proficiency. Thus, training or future
development of assessment instruments should be designed as small, interchangeable modules
that increase understanding in performing a single task, allowing modules to be combined in
varying ways to target specific areas of responsibility associated with each job role. Practice
exercises and assessment instruments should be aligned with each training module to facilitate
deliberate practice, which has been shown to accelerate proficiency across a broad range of
expertise domains. Finally, both proficiency (knowledge) and performance (skill) tests should be
included that test not only for understanding but also for the ability to apply the new knowledge
in familiar and unfamiliar settings. In so doing, the course will support the progression of
competency from novice to master in the least time possible.
IMPLICATIONS FOR FUTURE RESEARCH
This has been a preliminary study intended to describe the tasks that indicate
performance in three job roles involved in smart grid cybersecurity. Our intent was to establish
the foundation for expanding research into developing a holistic approach to cybersecurity
workforce development. We have previously proposed a model for accelerating expertise
development described as Ground Truth Expertise Development (GTED; Assante & Tobey,
2011).
Future research is needed to develop a more complete and predictive model of both
potential performance and competence development. This will require factor analysis of the data
collected through the Job Analysis Questionnaire to produce a construct and measurement
model that demonstrates convergent and discriminant validity. The resulting factor model will
then need to be confirmed through critical incident studies. The results of these studies should
inform the development of new proficiency and performance assessments aligned with the tasks
found to be strong indicators of job performance. Finally, these assessments should be validated
as predictors of scores in cyber challenges, competitions, and simulations in which the indicator
tasks are being performed. Figure 12 summarizes these steps and suggests both individual and
organizational benefits that may be derived from completion of this framework.
Figure 12: Future Research Program
Future research is also needed regarding how Job Performance Models can be
maintained over time, incorporating changes in ground truth on a recurring basis. We need to
better understand the dynamics of cybersecurity and its implications for smart grid deployment
and operation. Future research should study the diffusion of knowledge related to new threats,
vulnerabilities, and best practices. We need to better understand the factors that facilitate broad
dissemination, reduce sharing of ineffective practices that may lead to maladaptive skill
development, and encourage adoption across the affected organizations.
Perhaps most important is the need to better understand the dynamics of movement
within the Competency Box. Future research should include experimental and field studies to
identify benchmarks and factors influencing the shape, slope, and length of learning curves. We
need better understanding of how learning curves might differ between and among fundamental
and differentiating tasks, and therefore across job roles.
We need better understanding of team competence. Future research might therefore seek
to answer questions such as: How do the factors that influence team competence differ from
those that influence development of individual competence? How do varying competence profile
configurations of team members affect team performance? How does variance in knowledge,
skill, and ability of team members collaborating in accomplishing a task affect team
performance? Finally, future research may want to explore the role of motivation, arousal, and
the configuration of skills that are labeled as traits in producing individual performance and
affecting team outcomes.
These, and many more questions, become possible to answer with discrete measurement
of the three core dimensions of competence.
LIMITATIONS OF THE STUDY
Due to the preliminary nature of the study findings, there are many limitations which
could be addressed by future phases of the project or through other research. We cannot review
a comprehensive list of limitations in the space provided. However, three limitations are
notable, involving cautions on making inferences from our small sample to date, lessons learned
from SME panel participation, and coordination requirements for obtaining broader industry
support.
First, and perhaps most important, a sufficient sample to support inferential analysis has
yet to be obtained. Therefore, no conclusions should be drawn regarding the definition of
fundamental or differentiating tasks, job performance model factors, or ratings of specific task
statements. Lacking an evidence-based list of fundamental and differentiating tasks, future panel
activity will need to consider a broader list of tasks in determining the focal areas for conducting
a critical incident analysis. Therefore, to make these panel sessions most effective, we strongly
encourage continued diligence in obtaining responses to the JAQ, and production of incremental
updates to the CDM analysis as the panel progresses. Finally, it may prove valuable to have
other cybersecurity experts who may not have extensive control systems experience respond to
the JAQ; since many of the tasks are not specific to smart grid deployments, an adequate sample
may be obtained quickly.
Second, SME panel participation, as discussed in several progress reports (see
Appendix), has not met our expectations. Consequently, the elicitation of information on the job
context and description, and the development of the Job Analysis Questionnaire, may have been
biased by an insufficiently representative group of SMEs. During the next phase of the project it
is imperative that we create a longer list of prospective panel members so that replacements can
be readily made for panel members who cannot fulfill their participation commitments. We
should consider assigning a person with strong connections to the participant pool the specific
role of recruitment coordinator, with a quota established for obtaining and maintaining SME
participation rates in panel sessions and activities. Further, while anonymity is essential for
public surveys, future panel activities, where appropriate, should be attributed to facilitate
tracking progress and sending reminders to members who have not completed their assignments.
Third, the experience in gathering data from the Job Analysis Questionnaire suggests that
unless more resources and time are allocated to coordination, results may be biased by skewed
participation rates across industry groups. The initial wave of JAQ invitations was appropriately
broad and comprehensive but produced an insufficient participation rate. Accordingly, two
additional waves were initiated that targeted invitations to utility organizations, which provided
much higher participation and response rates. However, this purposive sampling approach may
bias the results by overweighting responses obtained through disproportionate participation
from a few organizations in the sample. During Phase II there will be a need to obtain industry
participation in validating panel opinions (similar to the JAQ but with much-reduced content),
forming a subject pool for the experimental studies of work practices, and forming a subject
pool for a field study of work practices. The experience in Phase I suggests that recruitment of
these participants should begin well in advance of their being needed, that a much larger group
than needed should be identified, and that any and all factors that could inconvenience a
participant should be addressed if possible. Finally, a project resource with strong ties to the
industry (perhaps a PNNL representative) should be accountable to the research team for
developing the recruitment plan and managing a quota for participation that ensures a sufficient
and representative sample for each activity.
REFERENCES
Anderson, L. W., Krathwohl, D. R., & Bloom, B. S. (2001). A taxonomy for learning, teaching, and
assessing: A revision of Bloom's taxonomy of educational objectives (Complete ed.). New York:
Longman.
Arvey, R. D., Landon, T. E., Nutting, S. M., & Maxwell, S. E. (1992). Development of physical ability tests
for police officers: A construct validation approach. Journal of Applied Psychology, 77(6),
996‐1009.
Assante, M. J., & Tobey, D. H. (2011). Enhancing the cybersecurity workforce. IEEE IT Professional, 13(1),
12‐15.
Behrens, J. T., Collison, T., & DeMark, S. (2006). The seven Cs of a comprehensive assessment: Lessons
learned from 40 million classroom exams in the Cisco Networking Academy Program. In S. L.
Howell & M. Hricko (Eds.), Online assessment and measurement: Case studies from higher
education, K‐12, and corporate (pp. 229‐245). Hershey, PA: Information Science Pub.
Behrens, J. T., Mislevy, R. J., DiCerbo, K. E., & Levy, R. (2010). An evidence centered design for learning
and assessment in the digital world (CRESST Report 778). Los Angeles: University of California,
National Center for Research on Evaluation, Standards, and Student Testing (CRESST).
Berk, R. A. (1980). Criterion-referenced measurement: The state of the art. Baltimore: Johns Hopkins
University Press.
Binde, B. E., McRee, R., & O'Connor, T. J. (2011). Assessing outbound traffic to uncover advanced
persistent threat. SANS Technology Institute.
Bloom, B. S. (1956). Taxonomy of educational objectives: The classification of educational goals (1st ed.).
New York: Longmans, Green.
Bodeau, D. J., Graubart, R., & Fabius‐Greene, J. (2010). Improving Cyber Security and Mission Assurance
via Cyber Preparedness (Cyber Prep) Levels. Proceedings of the 2010 IEEE Second International
Conference on Social Computing, 1147‐1152.
Boje, D. M. (1991). The storytelling organization: A study of story performance in an office‐supply firm.
Administrative Science Quarterly, 36, 106‐126.
Boje, D. M. (1995). Stories of the storytelling organization: A postmodern analysis of Disney as
Tamara‐land. Academy of Management Journal, 38(4), 997‐1035.
Boje, D. M. (2001). Narrative Methods for Organizational and Communication Research. London: Sage
Publications.
Boje, D. M. (2008). Storytelling Organizations. London: Sage Publications.
Brannick, M. T., & Levine, E. L. (2002). Job analysis: Methods, research, and applications for human
resource management in the new millennium. Thousand Oaks, CA: Sage Publications.
Brannick, M. T., Levine, E. L., & Morgeson, F. P. (2007). Job and work analysis: Methods, research, and
applications in human resource management. Los Angeles: Sage Publications.
Briggs, R. O., Vreede, G.‐J. d., & Nunamaker, J. F. J. (2003). Collaboration engineering with ThinkLets to
pursue sustained success with group support systems. Journal of Management Information
Systems, 19(4), 31‐64.
Briggs, R. O., Vreede, G.‐J. d., Nunamaker, J. F. J., & Tobey, D. H. (2001). ThinkLets: Achieving
predictable, repeatable patterns of group interaction with group support systems (GSS).
Proceedings of the 34th Annual Hawaii International Conference on System Sciences, 1057‐1065.
Brown, R. D., & Hauenstein, N. M. A. (2005). Interrater agreement reconsidered: An alternative to the
rwg indices. Organizational Research Methods, 8(2), 165‐184.
Brown, T. A. (2006). Confirmatory factor analysis for applied research. New York: Guilford Press.
Campion, M. A., Fink, A. A., Ruggenberg, B. J., Carr, L., Phillips, G. M., & Odman, R. B. (2011). Doing
competencies well: Best practices in competency modeling. Personnel Psychology, 64(1),
225‐262.
Chi, M. T. H., Glaser, R., & Farr, M. J. (1988). The nature of expertise. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Crandall, B., Klein, G. A., & Hoffman, R. R. (2006). Working minds: A practitioner's guide to cognitive task
analysis. Cambridge, MA: MIT Press.
Cropanzano, R. S., James, K., & Citera, M. (1993). A goal hierarchy model of personality, motivation and
leadership. Research in Organizational Behavior, 15, 1267‐1322.
Czarniawska, B. (1997). Narrating the organization: Dramas of institutional identity. Chicago: University
of Chicago Press.
Dainty, A. R. J., Cheng, M.‐I., & Moore, D. R. (2005). Competency‐based model for predicting
construction project managers’ performance. Journal of Management in Engineering, 21(1), 2‐9.
Ericsson, K. A. (2004). Deliberate practice and the acquisition and maintenance of expert performance in
medicine and related domains. Academic Medicine, 79(10), S70‐S81.
Ericsson, K. A. (2006). Protocol analysis and expert thought: Concurrent verbalizations of thinking during
experts' performance on representative tasks. In K. A. Ericsson, N. Charness, P. J. Feltovich & R.
R. Hoffman (Eds.), The Cambridge handbook of expertise and expert performance (pp. 223‐241).
Cambridge, UK: Cambridge University Press.
Ericsson, K. A., & Charness, N. (1994). Expert performance: Its structure and acquisition. American
Psychologist, 49(8), 725‐747.
Ericsson, K. A., Charness, N., Feltovich, P. J., & Hoffman, R. R. (2006). The Cambridge Handbook of
Expertise and Expert Performance. Cambridge, UK: Cambridge University Press.
Ericsson, K. A., & Simon, H. A. (1980). Verbal reports as data. Psychological Review, 87(3), 215‐251.
Ericsson, K. A., & Simon, H. A. (1993). Protocol Analysis: Verbal reports as data, Revised edition.
Cambridge, MA: MIT Press.
Flanagan, J. C. (1954). The critical incident technique. Psychological Bulletin, 51(4), 327-358.
Frincke, D. A., & Ford, R. (2010). Building a better boot camp. IEEE Security & Privacy, 8(1), 68‐71.
Gibson, S. G., Harvey, R. J., & Harris, M. L. (2007). Holistic versus decomposed ratings of general
dimensions of work activity. Management Research News, 30(10), 724‐734.
Hoffman, R. R. (1992). The psychology of expertise: Cognitive research and empirical AI. New York:
Springer‐Verlag.
Hoffman, R. R., & Feltovich, P. J. (2010). Accelerated proficiency and facilitated retention:
Recommendations based on an integration of research and findings from a working meeting.
Mesa, AZ: Air Force Research Laboratory.
Hoffmann, M. H. W. (2011, April 4‐6). Fairly certifying competences, objectively assessing creativity.
Paper presented at the Global Engineering Education Conference (EDUCON), Amman, Jordan.
Jeanneret, P. R., Borman, W. C., Kubisiak, U. C., & Hanson, M. A. (1999). Generalized work activities. In
N. Peterson & M. D. Mumford (Eds.), An occupational information system for the 21st Century:
The development of O*NET (pp. 101‐121). Washington, DC: American Psychological Association.
Klein, G. A., Calderwood, R., & MacGregor, D. (1989). Critical decision method for eliciting knowledge.
IEEE Transactions on Systems, Man & Cybernetics, 19, 462‐472.
Le Deist, F. D., & Winterton, J. (2005). What is competence? Human Resource Development
International, 8(1), 27‐46.
LeBreton, J. M., & Senter, J. L. (2008). Answers to 20 questions about interrater reliability and interrater
agreement. Organizational Research Methods, 11(4), 815‐852.
Lievens, F., Sanchez, J. I., & De Corte, W. (2004). Easing the inferential leap in competency modelling:
The effects of task related information and subject matter expertise. Personnel Psychology,
57(4), 881‐904.
Locke, E. A., Shaw, K. N., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance:
1969‐1980. Psychological Bulletin, 90, 125‐152.
Long, J. S. (1983). Confirmatory Factor Analysis: A preface to LISREL. Beverly Hills, CA: Sage.
McCormick, E. J. (1979). Job analysis: Methods and applications. New York: AMACOM.
Miller, G. A., Galanter, E., & Pribram, K. H. (1960). Plans and the Structure of Behavior. New York: Henry
Holt and Company.
Mislevy, R. J. (1994). Evidence and inference in educational assessment. Psychometrika, 59(4), 439‐483.
Mislevy, R. J. (2006). Cognitive psychology and educational assessment. In R. L. Brennan (Ed.),
Educational measurement (4th ed., pp. 257‐305). Westport, CT: Praeger Publishers.
Mislevy, R. J., & Bock, R. (1983). BILOG: Item and test scoring with binary logistic models. Mooresville,
IN: Scientific Software.
Mislevy, R. J., Steinberg, L. S., & Almond, R. G. (1999). On the roles of task model variables in assessment
design. Los Angeles: National Center for Research on Evaluation, Standards, and Student
Testing, Center for the Study of Evaluation.
Morgeson, F. P., & Dierdorff, E. C. (2011). Work analysis: From technique to theory. In S. Zedeck (Ed.),
APA handbook of industrial and organizational psychology (1st ed., pp. 3‐41). Washington, DC:
American Psychological Association.
NICE Cybersecurity Workforce Framework. (2011, September 20). Retrieved October 30, 2011,
from http://csrc.nist.gov/nice/framework/
Nunamaker, J. F. J., Briggs, R. O., Mittleman, D. D., Vogel, D. R., & Balthazard, P. A. (1997). Lessons from
a dozen years of group support systems research: A discussion of lab and field findings. Journal
of Management Information Systems, 13(3), 163‐207.
Offermann, L., & Gowing, M. (1993). Personnel selection in the future: The impact of changing
demographics and the nature of work. In N. Schmitt & W. C. Borman (Eds.), Personnel selection
in organizations (1st ed., pp. 385‐417). San Francisco: Jossey‐Bass.
Parry, S. (1996). The quest for competencies. Training, 33, 48‐54.
Powers, W. T. (1973). Behavior: The control of perception. Chicago: Aldine.
Reiter‐Palmon, R., Brown, M., Sandall, D. L., Buboltz, C., & Nimps, T. (2006). Development of an O*NET
web‐based job analysis and its implementation in the US Navy: Lessons learned. Human
Resource Management Review, 16(3), 294‐309.
Ryan, J. J. C. H. (2011). Cyber security: The mess we’re in and why it’s going to get worse. In L. Hoffman
(Ed.), Developing cyber security synergy (pp. 37‐45). Washington, D.C.: Cyber Security Policy and
Research Institute, The George Washington University.
Sanchez, J. I., & Levine, E. L. (2001). The analysis of work in the 20th and 21st centuries. In N. Anderson,
D. S. Ones, H. Sinangil & C. Viswesvaran (Eds.), Handbook of industrial, work, and organizational
psychology (pp. 70‐90). London: Sage.
Sanchez, J. I., & Levine, E. L. (2009). What is (or should be) the difference between competency
modeling and traditional job analysis? Human Resource Management Review, 19(2), 53‐63.
Schmidt, F. L. (1993). Personnel psychology at the cutting edge. In N. Schmitt & W. C. Borman (Eds.),
Personnel selection in organizations (1st ed.). San Francisco: Jossey‐Bass.
Schraagen, J. M. (2006). Task analysis. In K. A. Ericsson, N. Charness, P. J. Feltovich & R. R. Hoffman
(Eds.), The Cambridge handbook of expertise and expert performance (pp. 185‐201). Cambridge,
UK: Cambridge University Press.
Schuler, H. (1989). Some advantages and problems of job analysis. In M. Smith & I. T. Robertson (Eds.),
Advances in selection and assessment (pp. 31‐42). Oxford: John Wiley & Sons.
Shippmann, J. S., Ash, R. A., Battista, M., Carr, L., Eyde, L. D., Hesketh, B., . . . Sanchez, J. I. (2000). The
practice of competency modeling. Personnel Psychology, 53(3), 703‐740.
Smit‐Voskuijl, O. (2005). Job analysis: Current and future perspectives. In A. Evers, N. Anderson & O.
Smit‐Voskuijl (Eds.), The Blackwell handbook of personnel selection (pp. 27‐46). Malden, MA:
Blackwell Publishing.
Tobey, D. H. (2001). COTS‐Based Systems: Automating Best Practices. Paper presented at the USC Center
for Software Engineering Annual Research Review, Los Angeles, CA.
Tobey, D. H. (2007). Narrative's Arrow: Story sequences and organizational trajectories in founding
stories. Paper presented at the Standing Conference on Management and Organizational
Inquiry, Las Vegas, NV.
Tobey, D. H. (2008). Storying crisis: What neuroscience can teach us about group decision making.
Proceedings of the Southwest Academy of Management.
Tobey, D. H. (forthcoming). A competency model of advanced threat response. ATR Working Group
Report NBISE‐ATR‐11‐02. Idaho Falls, ID: National Board of Information Security Examiners.
Tobey, D. H., Reiter‐Palmon, R., & Callens, A. (forthcoming). Predictive Performance Modeling: An
innovative approach to defining critical competencies that distinguish levels of performance.
OST Working Group Report. Idaho Falls, ID: National Board of Information Security Examiners.
Tobey, D. H., Wanasika, I., & Chavez, C. I. (2007). PRISMA: A goal‐setting, alignment and performance
evaluation exercise. Paper presented at the Organizational Behavior Teachers Conference,
Pepperdine University, Malibu, CA.
Trafimow, D., & Rice, S. (2008). Potential Performance Theory (PPT): A general theory of task
performance applied to morality. Psychological Review, 115(2), 447‐462.
Trafimow, D., & Rice, S. (2009). Potential performance theory (PPT): Describing a methodology for
analyzing task performance. Behavior Research Methods, 41(2), 359‐371.
Wicker, F. W., Lambert, F. B., Richardson, F. C., & Kahler, J. (1984). Categorical goal hierarchies and
classification of human motives. Journal of Personality, 52(3), 285‐305.
Williamson, D. M., Bauer, M., Steinberg, L. S., Mislevy, R. J., Behrens, J. T., & DeMark, S. F. (2004). Design
rationale for a complex performance assessment. International Journal of Testing, 4(4), 303‐332.
APPENDICES
APPENDIX A: JOB DESCRIPTIONS
Job descriptions are an excellent source of job classification data. Accordingly, this
appendix will include sample job descriptions from recruitment advertisements across the
industries that are involved in the smart grid. These advertisements were collected from firms in
the energy industry, professional services firms, and technology vendors. Further information on
the selection of these descriptions may be obtained from Pacific Northwest National Laboratory.
APPENDIX B: PANEL OBJECTIVES AND ATTENDANCE
Session 1 (Oct 25/27):
Objectives:
Discussions of best practices and job roles
Vignette Title Definition: Brainstorming situations that require cybersecurity skills
Vignette Synopsis Description
For Session 1, attendance was 86% of total panel membership. Industry members made up the
largest share of attendees at 33%, followed by service representatives at 29%.
PANEL ATTENDANCE: 24
Service        7    29.17%
Government     1     4.17%
Industry       8    33.33%
Vendor         5    20.83%
Research       3    12.50%
Session 2 (Nov 1/3):
Objectives:
Review results to date (Progress Report)
Vignette survey (Q&A Mode)
Knowledge exchange posts on roles (Conversation mode)
Vignette Detail Elaboration (Conversation mode)
Define actions steps and roles involved for each vignette
Session 2 saw a drop in participation from the previous week, with 61% of members attending.
Industry representatives remained the largest segment of participants, while participation by the
panel's service representatives declined sharply.
PANEL ATTENDANCE: 17
Service        2    11.76%
Government     1     5.88%
Industry       8    47.06%
Vendor         4    23.53%
Research       2    11.76%
Session 3 (Nov 8/10):
Objectives:
Functional Role Validation
Role Categorization (Vote)
Goal Generation (Brainstorming)
Goal Sorting (Vote)
Session 3 saw a significant drop, to 50% of panel members participating. The greatest decrease
by sector was among the industry representatives, with only three of nine members attending.
We believe this is due to increased end-of-year activity in the energy industry.
PANEL ATTENDANCE: 14
Service        4    28.57%
Government     1     7.14%
Industry       3    21.43%
Vendor         4    28.57%
Research       2    14.29%
Session 4 (Nov 14/16):
Objectives:
Vignette Elaboration
Goal categorization
Primary goal review
Goal Elaboration
Goal / Vignette Categorization
Categorize Job Roles into Functional roles
Session 4 showed a slight uptick in attendance, to 64% of panel membership, with additional
industry participation and new panel representation in both the government and vendor sectors.
Industry participation was still depressed by end-of-year activity and was further reduced by a
conflict with NERC's GridEx exercise.
PANEL ATTENDANCE: 18
Service        4    22.22%
Government     2    11.11%
Industry       5    27.78%
Vendor         5    27.78%
Research       2    11.11%
APPENDIX C: COMMENTS FROM JAQ PILOT
General Comments:
Modifications to be made to general system components:
PNNL #1: Saving the website stores clear text passwords in email and the URL
SLE: Smart Grid is spelled multiple ways (e.g., Smartgrid, Smart Grid, smart grid)--need
to determine which is correct and do a global search and replace.
LRO: Survey needs a general grammar, spelling and acronym check. Some questions
have a period at the end and some do not
Modifications to be made to landing page or survey instructions:
SLE: In the instructions, change "how important is it" to "how important it is"
Maria Hayden: “In the Smartgrid Cybersecurity environment: For each statement below
please indicate how WOULD frequently each task be performed…” should read “… please
indicate how frequently each task WOULD be performed.”
Craig Rosen: 1. The question of importance is hard to discern. Some people will read
this as asking for the importance of the task itself, not its importance relative to being completed
by a person at a given skill level.
The next point that is ambiguous is that one person's judgement of why performing a
specific task at a specific skill level is important is based on what they value as important: I may
rate "importance" as extremely important because my goal is to have that person perform that
task multiple times to train them and turn them into an expert (which would be the GOAL I
conjure up in my thinking about this). However, if we're talking about the relative importance
of a given task to meet a specific GOAL (which is what I think is being asked), like reducing
risk exposure to the Smart Grid, then that makes more sense, but it isn't clear.
(Hope that makes sense.)
[Related comments suggest a need to clarify what is meant by importance.]
Maco Stewart: First page of survey: This is a three-part survey that seeks to determine the
critical job tasks for Indident Response in the Smart Grid environment: --should be “Incident.”
Maco Stewart: “For each task listed below indicate how important it is to be skilled at
accomplishing the task and how frequent each task would be performed by a person with that
level of expertise.”—should be “frequently.”
Maco Stewart: Not specified how often you have to save; lost my first pass through and
had to begin again.
Bora Akyol: I completed one survey and then it immediately pushed me to a different
survey which I clicked out of. [Need to clarify that this is normal behavior.]
LRO: I am wondering if all R* questions are really required to have an answer? If so,
perhaps there should be an “N/A” or “No opinion” answer. I realize that an answer is highly
desirable, but sometimes it may not really be an option. I do like that the survey verifies that
questions have been answered.
Nic Ziccardi: I completed the second part of the incident response survey. I don't know if
this is by design, but very few of the tasks contained in that part applied to the role of incident
response analyst. [Need to explain that if a task is not applicable for this job role it should be
rated as low frequency and low importance. This will also address the lack of an "N/A" option
noted above, which is deliberate, to avoid someone simply accepting that answer as a default.]
Anthony David Scott: Recommend inserting instructions that define the different radio
button fields. Although somewhat intuitive, it would be helpful.
Maria Hayden: Does “expert” mean boss? Unclear. I assumed yes. Some tasks are far
more administrative than analytical, and should therefore be expert/boss tasks instead of novice
tasks. [Need to clarify what is meant by novice, intermediate, and expert -- use the demographic
definitions provided by Dr. Greitzer.]
PNNL Reviewer (AR): Change the instruction text: "In the Smartgrid Cybersecurity
enviornment" to "In the ideal Smartgrid Cybersecurity Environment." Add text that states "In
other words, how important is it that you employ a person at the listed level of expertise to do
this task?"
PNNL Reviewer (AR): "answered questions that were incomprehensible with all "Never"
and all "Unimportant" as a flag that they can be left blank.
ACTION: Suggest this as a strategy for survey takers in the instructions, along with
indicating further information in a comment at the end of the survey.
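Because the survey deliberately lacks an "N/A" option, this all-"Never"/all-"Unimportant"
convention will need to be detected during data cleaning. The following is a minimal
illustrative sketch only (in Python), assuming a hypothetical CSV export in which each task
contributes paired frequency and importance columns; the file name and column-naming
scheme are assumptions, not the pilot's actual export format.

    import csv

    # Hypothetical export: one row per respondent; each task contributes
    # paired columns such as "R1-9638_freq" and "R1-9638_imp" holding the
    # Likert labels used in the pilot.
    FREQ_FLAG = "Never"
    IMP_FLAG = "Unimportant"

    def flagged_tasks(row):
        """Return task IDs marked all-Never/all-Unimportant, i.e. tasks the
        respondent flagged as not applicable under the suggested convention."""
        flagged = []
        for col, value in row.items():
            if col.endswith("_freq") and value == FREQ_FLAG:
                task = col[:-len("_freq")]
                if row.get(task + "_imp") == IMP_FLAG:
                    flagged.append(task)
        return flagged

    with open("jaq_pilot_responses.csv", newline="") as f:  # hypothetical file
        for respondent in csv.DictReader(f):
            skipped = flagged_tasks(respondent)
            if skipped:
                print(respondent.get("id", "?"), "flagged:", ", ".join(skipped))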
Modifications to be considered if system permits the change:
SLE: Recommend putting a vertical line between the FREQUENCY bubbles and the
IMPORTANCE bubbles. Also, in the instructions, bold or emphasize in some way the words
"frequency" and "importance."
Jason Christopher: Can enter "---" into "number only" fields. [Verify whether this is
modifiable on the Lime platform.]
PNNL Reviewer (AR): "saving of survey session in secon/Path 2 complains about
uncompleted questions (afraid to hit that button without first moving to a fresh page). Sections 1
and 3 seem to work fine with saving in mid-stream.
ACTION: Change "full survey" to be conducted as multiple individual surveys. Is
there a way to indicate how many they have left to complete? Question for Adam.
Modifications that will not be made because they represent planned or expected
behavior:
Jason Christopher: [redundancy discussed below enables verification of consistent
responses and improves factor analysis; see the sketch following these items.]
-Many of the "communicate to stakeholders" requirements can become just one
requirement
-There are also many "find false positive" requirements that can be condensed into one
requirement.
-Escalate findings to... requirements are also redundant.
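The consistency verification that this redundancy enables can be as simple as correlating
responses to the paired items: strongly correlated redundant items support combining them,
while weak correlation flags careless responding. A minimal sketch, using made-up ratings in
place of the actual paired "escalate findings" items (statistics.correlation requires Python 3.10
or later):

    from statistics import correlation  # available in Python 3.10+

    # Made-up numeric responses (1-5) by the same respondents to two
    # redundant "escalate findings" items.
    item_a = [4, 5, 3, 4, 2, 5]
    item_b = [4, 4, 3, 5, 2, 5]

    # A value near 1.0 supports treating the items as consistent measures of
    # one requirement; a low value flags inconsistent responding.
    print(round(correlation(item_a, item_b), 2))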
Maria Hayden: I think having “importance” separated for all three levels of experience is
redundant. Either the activity is important or it isn’t. Otherwise we’re just going to have people
clicking the same dots on both frequency and importance!
[The responses across the levels of expertise are useful in abstracting the differentiation
of task performance and are a primary determinant of the theoretical model.]
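To illustrate how the per-level ratings feed that differentiation analysis, the sketch below
computes a simple differentiation score per task: mean importance as rated for experts minus
mean importance as rated for novices. The task IDs, numeric coding, and ratings shown are
hypothetical; the actual model may weight or combine the levels differently.

    from statistics import mean

    # Hypothetical coded ratings: task ID -> expertise level -> list of
    # importance ratings (1 = Unimportant ... 5 = Extremely important).
    ratings = {
        "R1-9638": {"novice": [2, 3, 2], "expert": [5, 4, 5]},
        "R1-9194": {"novice": [3, 3, 4], "expert": [3, 4, 3]},
    }

    def differentiation(task):
        """Mean expert-level importance minus mean novice-level importance;
        large positive values suggest the task differentiates skill levels."""
        levels = ratings[task]
        return mean(levels["expert"]) - mean(levels["novice"])

    for task in ratings:
        print(task, round(differentiation(task), 2))

A near-zero score (like the second task above) matches the intuition that some tasks are equally
important at every level; it is the spread of these scores across tasks that the differentiation
analysis exploits.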
Pre-Survey Demographic Questions
Demographics 001
What is your gender?
Please choose only one of the following:
Female
Male
SLE: Recommend removing this question
Action: Move to end, data is important for demographic reporting of the results
Demographics 002
What is your age?
Please choose only one of the following:
Under 20
21-30
31-40
41-50
51-60
LRO: Change to What is your age range?
SLE: Recommend removing this question. If you do not remove questions 1 & 2, move
to the end and add an age range for folks over 60.
Action: Move to end, data is important for demographic reporting of the results
Demographics 003
What is your present job position?
LRO: Change to What is your present job title
Please write your answer here:
Action: Edit
Demographics 004
How long have you held this position? (Years):
Please write your answer here:
Demographics 005
How would you classify your level of expertise in your current position?
Please choose only one of the following:
Novice
Advanced Beginner
Competent
Proficient
Expert
Dreyfus’ (1986) original model of skill acquisition:
Novice: minimal knowledge, no connection to practice
Advanced beginner: working knowledge of key aspects of practice
Competent: good working and background knowledge of the area
Proficient: depth of understanding of discipline and area of practice
Expert: authoritative knowledge of discipline and deep tacit understanding across area of
practice
SLE: Change "Advanced Beginner" to "Beginner" or "Some Experience"
Action: Edit
Demographics 006
How many people report directly to you?
Please write your answer here:
Demographics 007
What was your position before you took this job?
Please write your answer here:
Demographics 008
How long did you hold the prior position?
Please write your answer here:
Demographics 009
How many years of experience do you have in the cyber security field?
Please write your answer here:
Demographics 010
How would you classify your level of expertise in the cyber security field?
Please choose only one of the following:
Novice
Advanced Beginner
Competent
Proficient
Expert
SLE: Replace "Advanced Beginner" with one of the suggestions above.
Action: Edit
Demographics 011
What is the size of your organization?
LRO: I would focus on the size of the organization in employee count rather than dollar
amount:
Fewer than 100 employees
100 - 1000
1000 - 5000
Greater than 5000
Please choose only one of the following:
Less than $100 million
$100 million - $999.99 million
$1 billion - $4.99 billion
Greater than $5 billion
Action: Retain; organization size based on employee count tends to favor organizations
with low capital intensity
Group 1
Group 1 -9638
Collect all data necessary to support incident analysis and response (Task ID:
R1-9638) *
Group 1 -9818
Map activities observed in the network to systems to help establish the baseline.
(Task ID: R1-9818) *
Maria Hayden: Change “to help establish the baseline” to “to help map a healthy system
or known/authorized connections.”
SLE: Edit: "Map observed network activities to systems..."
Action: Edit, combining the above
Group 1 -9824
Convert collected behavioral data and flow information to a usable baseline. (Task
ID: R1-9824) *
LRO: Convert collected behavioral data and flow information to a usable baseline. I
was confused by this question. I assumed the first time it was referring to network flow and
behavioral data, but on closer inspection I thought this may be referring to something else
Action: Delete
Group 1 -9186
Review event correlation; e.g. looking in baseline data to determine the type and
frequency of the event during normal operations. (Task ID: R1-9186) *
SLE: Edit: "Review event correlation (for example, look at baseline data to determine
type and frequency of events during normal operations).
Action: Edit
Group 1 -9640
Analyze the intrusion looking for the initial activity and all follow-on actions of the
attacker (Task ID: R1-9640) *
SLE: Insert "by" between "intrusion" and "looking"
Action: Edit
Group 1 -9637
Assign an incident response manager for all incidents (Task ID: R1-9637) *
Group 1 -9641
Collect images of impacted system for further analysis before returning the system
to a known good and operational state (Task ID: R1-9641) *
Maria Hayden: “known good and operational state” should be reworded
SLE: replace "impacted" with "affected"
Action: Edit, per SLE and "to an acceptable operational state"
Group 1 -9639
Communicate incident information and updates to impacted users, administrators,
and security staff and request additional information that may support analysis and
response actions (Task ID: R1-9639) *
SLE: Replace "impacted" with "affected"
Action: Edit
Group 1 -9642
Establish a repository for all incident related information that is indexed and
cataloged with assigned incident numbers for easy retrevial (Task ID: R1-9642) *
LRO: Change to Establish/Update a repository for all incident related information that is
indexed and cataloged with assigned incident numbers for easy retrieval
SLE: Edits of LRO's suggestion: "Establish or update a repository for all incident-related
information and index and catalog this information with assigned incident numbers for easy
retrieval."
Action: Edit
Group 1 -9643
Test incident storage repository to make sure it functioning properly and can only
be accessed by authorized personnel (Task ID: R1-9643) *
LRO: Change to Test incident storage repository to make sure it is functioning properly
and can only be accessed by authorized personnel
Maco Stewart: Test incident storage repository to make sure it functioning properly and
can only be accessed by authorized personnel—should be “it is.”
Action: Edit, per LRO
Group 1 -9644
Verify incident or case files are complete and manage properly by the assigned
incident manager (Task ID: R1-9644) *
PNNL #1: Verify incident or case files are complete and manage(d) properly by the
assigned incident manager
LRO: Change to Verify incident or case files are complete and managed properly by the
assigned incident manager
Action: Edit per LRO
Group 1 -9137
Analyze individual threat activity by correlating with other sources to identify
trends (Task ID: R1-9137) *
Group 1 -9819
Analyze the security incident and identify defining attriubtes (Task ID: R1-9819) *
PNNL #1: Analyze the security incident and identify defining attributes
Maco Stewart: Analyze the security incident and identify defining attriubtes—should be
“attributes.”
Maco Stewart: R2-9819 *Analyze the security incident and identify defining attriubtes
(sic) and R2-9821 *Decide the best category for the security incident based on its
attributes—could be combined into “Analyze and categorize the security incident based on its
attributes.”
Action: Edit per Stewart, deleting 9821
Group 1 -9821
Decide the best category for the security incident based on its attributes (Task ID:
R1-9821) *
Action: Delete
Group 2
Group 2 -9709
Protect classified or proprietary information related to the event, but release
general incident information to stakeholders (Task ID: R1-9709) *
Group 2 -9825
Report security incident classification (category selected) to management and
record in incident management system (Task ID: R1-9825) *
Group 2 -9770
Communicate incident response plan and team members among stakeholders to
ensure they understand commitment and responsibilities when team is stood up (Task ID:
R1-9770) *
Maco Stewart: Communicate incident response plan and team members among
stakeholders to ensure they understand commitment and responsibilities when team is stood
up—might be “Communicate incident response plan and team member roles to stakeholders and
team members to ensure that they . . . .”
Action: Edit
Group 2 -9364
Communicate with other analysts to team work larger incidents (Task ID:
R1-9364) *
PNNL #1: Does this mean coordinates teams of analysts?
Maco Stewart: Communicate with other analysts to team work larger incidents—might
be “Communicate with other analysts to work as a team on larger incidents.”
Action: Edit
Group 2 -9579
Coordinate notification strategies with other units, such as Compliance (Task ID:
R1-9579) *
Group 2 -9180
Coordinate reactive and proactive repsonses (Task ID: R1-9180) *
Maco Stewart: Coordinate reactive and proactive repsonses –should be responses.
Action: Edit
Group 2 -9613
Coordinate with compliance to make all regulator required security incident reports
in compliance with the standard (Task ID: R1-9613) *
LRO: Change to Coordinate with Compliance Group to make all regulator required
security incident reports compliant with the standard; Probably want to define the standard above
Action: Edit, change "the standard" to "standards"
Group 2 -9772
Develop an incident response program / plan. (Task ID: R1-9772) *
Maco Stewart: Develop an incident response program plan.—Either put a period after
every statement or delete any periods after any of such statements for consistency.
SLE: You need to retain and add periods because some of the statements contain more
than one sentence.
Action: Edit remove period (search and replace and reduce all statements to a single
sentence)
Group 2 -9768
Develop incident response plan and team (Task ID: R1-9768) *
Maco Stewart: R2-9768 *Develop incident response plan and team—to avoid
redundancy with the preceding “R2-9772 *Develop an incident response program plan.” you
might want to have this read “Develop a detailed incident response action plan and team roles”
or something like that.
Action: Edit, using first suggestion
Group 2 -9697
Document call trees and reporting and coordinating procedures to all parties (Task
ID: R1-9697) *
Group 2 -9779
Document stakeholders that must be contacted for each affected system in an
incident (Task ID: R1-9779) *
Group 2 -9109
Identify known information and event details prior to the analytical effort. Make
sure all analyst have the information and that a sequence of events has been created and
will be added to during the analytical process. (Task ID: R1-9109) *
LRO: Change to Identify known information and event details prior to the analytical
effort. Ensure analysts have the necessary information and that a sequence of events has been
established and maintained throughout the analytical process
Maco Stewart: Identify known information and event details prior to the analytical effort.
Make sure all analyst have the information and that a sequence of events has been created and
will be added to during the analytical process.—should be “analysts.”
Action: Edit, combine suggestions
Group 2 -9771
Identify people resources by skills, expertise, and authorized roles to support future
analytical efforts (Task ID: R1-9771) *
LRO: Do we care when the analytical efforts occur? Suggest changing to Identify people
resources by skills, expertise, and roles to support analytical efforts
Action: Edit
Group 2 -9780
Maintain a knowledge of professional resoures within the organization (Task ID:
R1-9780) *
SLE: Edit: "Maintain knowledge of professional resources within the organization."
Action: Edit
Group 3
Group 3 -9773
Maintain analytical resource POC information (Task ID: R1-9773) *
LRO: Not sure that this is different from R1-2-9771. Also, I suggest never using
acronyms unless necessary. Spell out POC.
Action: Delete
Group 3 -9777
Maintain professional credentials and networking relationships with professional
organizations. (Task ID: R1-9777) *
Group 3 -9122
Prioritize alerting after analysis into pre-defined buckets. (Task ID: R1-9122) *
LRO: Not sure I understand this question. I think it means to prioritize security alerts
into predefined categories. Perhaps change to Prioritize alert into pre-defined priorities.
Jason Christopher: Needs revision
Maria Hayden: isn’t clearly written
Action: Edit, "Prioritize alerts into predefined categories"
Group 3 -9778
Recognize dissenting analytical opinions (Task ID: R1-9778) *
Jason Christopher: Needs revision
Action: Edit, "Recognize dissenting opinions among analysts"
Group 3 -9769
Select a team of internal experts that should be consulted (Task ID: R1-9769) *
LRO: I assume this is in reference to an incident response team. Perhaps change
to Establish a team of internal intrusion detection experts for second-tier incident response
Jason Christopher: Needs revision
Action: Edit, per LRO
Group 3 -9775
Test the incident response program / plan. (Task ID: R1-9775) *
Group 3 -9774
Train staff on the incident response program / plan. (Task ID: R1-9774) *
Group 3 -9776
Update the incident response program / plan based on feedback from testing. (Task
ID: R1-9776) *
SLE: Suggested revision: "Update the incident response program/plan based on testing
results."
Action: Edit
Group 3 -9706
Identify source of all infections or successful attacks (Task ID: R1-9706) *
LRO: Change to Identify the source of all infections or successful attacks
Jason Christopher: the use of the word "all" is statistically a scary thought!
Action: Edit, per LRO removing "all"
Group 3 -9701
Monitor all systems that were suspect or confirmed during an intrusion/incident
(Task ID: R1-9701) *
LRO: Change to Monitor all systems that were suspected or confirmed compromised
during an intrusion/incident
Action: Edit, adding "as being" between "confirmed" and
"compromised"
Group 3 -9704
Report eradication status and success to include confidence to management (Task
ID: R1-9704) *
LRO: Seems like this one is still on the incident response train of thought. If so, suggest
changing to Report incident response status to management
Maco Stewart: Report eradication status and success to include confidence to
management—one wonders what this means. My best guess: “Include confidence levels of
eradication and status in reports to management.”
Jason Christopher: not sure of the use of "confidence" in this sentence
Action: Edit, "Report incident response status to management, including confidence
levels for eradication actions"
Group 3 -9703
Review running processes to determine if clean up efforts removed suspect
software/code (Task ID: R1-9703) *
LRO: Change to *Review running processes to determine if incident response
sucessfully removed malware
Action: Edit, correcting spelling
Group 3 -9707
Train users if they were an unwitting party in a successful attack (Task ID:
R1-9707) *
LRO: Change to Train users in phishing identification and malware distribution methods
Jason Christopher: Why wait until after a user was unwittingly part of an
attack? Training should happen before.
Action: Edit per LRO
Group 3 -9686
Analyze response actions and performance of response team members against the
plan (Task ID: R1-9686) *
LRO: Change to Analyze incident response team actions and performance of team
members against the incident response plan
Maco Stewart: R2-9686 *Analyze response actions and performance of response team
members against the plan should include the later-appearing R2-9685 *Monitor response actions
and performance and compare to the plan. If you want to keep both, move monitor before
analyze.
Action: Edit
Group 4
1 Group 4 - 9684
Decide a response plan for the incident and assign expected actions and time to
implement (Task ID: R1-9684) *
LRO: Change to Determine a response plan for the incident and assign actions and
deadlines
SLE: Change to "Develop a response plan for the incident and assign actions and
deadlines."
Action: Edit
2 Group 4 - 9688
Identify impacts occurring from response actions and consider timeliness of
response efforts (Task ID: R1-9688) *
3 Group 4 - 9685
Monitor response actions and performance and compare to the plan (Task ID:
R1-9685) *
LRO: Change to Monitor incident response actions and performance to compare to the
plan
SLE: Rewrite "Monitor incident response performance and actions and compare them to
the incident response plan."
Action: Edit
4 Group 4 - 9687
Understand deviations from the plan or unanticipated actions that were necessary
(Task ID: R1-9687) *
PNNL: As written, this was not a clear task description
LRO: Change to Understand necessary deviations or unanticipated actions from the
incident response plan
Action: Edit per LRO
5 Group 4 - 9830
Analyze reoccurring activity that is not flagged as a security event and troubleshoot
likely cause (Task ID: R1-9830) *
6 Group 4 - 9832
Coordinate watch rotation turn over so that no active event analysis is dropped
between team changes (Task ID: R1-9832) *
SLE: Change "turn over" to "turnover"
Action: Edit
7 Group 4 - 9829
Review event logs and alerts to ensure they have all been processed and categorized
(Task ID: R1-9829) *
Maco Stewart: Review event logs and alerts to ensure they have all been processed and
categorized—should probably be “Review event logs and alerts to ensure as much as possible
that they have been processed and categorized.”
Action: Edit
8 Group 4 - 9361
Review log files for signs of intrusions and security events. (Task ID: R1-9361) *
9 Group 4 - 9259
Verify results are real or false positive (Task ID: R1-9259) *
LRO: Change to Verify network scan results are real or false positives
SLE: Change to "Assess whether network scan results are real or false positives."
Action: Edit
10 Group 4 - 9206
Communicate with external agencies such as law enforcement, ICS-CERT, and
DOE regarding incident reports (Task ID: R1-9206) *
11 Group 4 - 9849
Report the time of discovery for all reportable events and incidents and the time of
notification (Task ID: R1-9849) *
12 Group 4 - 9851
Understand why a failure to report properly occurred (Task ID: R1-9851) *
Jason Christopher: Needs clarification
ACTION: Delete
13 Group 4 - 9621
Develop escalation process and procedures for systems that have not yet been
determined to be authorized. (Task ID: R1-9621) *
Maco Stewart: Develop escalation process and procedures for systems that have not yet
been determined to be authorized.—unclear, “systems that have not yet been determined to be
authorized”; do you mean “network activity that has not been shown to be authorized”?
Jason Christopher: Needs clarification
Action: Edit, per Stewart's suggested ending
14 Group 4 - 9430
Verify all devices are being submitted to SIEM for full network visibility (Task ID:
R1-9430) *
LRO: Spell out acronyms: Verify all devices are being submitted to Security Information
and Event Management for full network visibility.
Action: Edit
Group 5
1 Group 5 - 9696
Collect necessary information for inclusion in the communications plan (Task ID:
R1-9696) *
2 Group 5 - 9694
Communicate with business management to identify additional parties that should
be included in communication and response plans (Task ID: R1-9694) *
3 Group 5 - 9698
Maintain an accurate list of POCs (Task ID: R1-9698) *
Maco Stewart: R2-9698 *Maintain an accurate list of POCs is a repetition of previous
content
Action: Delete
4 Group 5 - 9700
The communication plan and make changes as appropriate (Task ID: R1-9700) *
LRO: Change to Establish the communication plan and make changes as appropriate
Maco Stewart: The communication plan and make changes as appropriate—do you mean
Review the communication plan and make changes as appropriate?
Nic Ziccardi: R2-9700 is vague. Should probably state ""review"" and make changes.
Jason Christopher: lacks a verb.. create the communications plan, perhaps?
Maria Hayden: not a complete thought
Action: Edit, using Stewart's language
5 Group 5 - 9695
Understand changes to organizations and the business to identify stakeholders to be
included in the communications plan (Task ID: R1-9695) *
6 Group 5 - 9699
Verify communication plan parties and contact information with all parties at an
appropriate frequency (Task ID: R1-9699) *
LRO: Change to Verify communication plan and contact information with all parties at
an appropriate frequency
Action: Edit
7 Group 5 - 9169
Test implementation with a testing regime to include events that should trigger
alerts based on how the monitor has been configured. (Task ID: R1-9169) *
PNNL #1: Regime => Government rule; Regimen => Systematic process
LRO: Change to Test the SIEM (Security Information and Event Management)
implementation with alert triggers based on how the monitor has been configured
Jason Christopher: Needs clarification
Action: Edit
8 Group 5 - 9591
Test incident response system and planning remains effective against the latest
attacker methodologies and tools (Task ID: R1-9591) *
9 Group 5 - 9412
Test IR specialists to verify they maintain a current understanding of threats and
how to analyze (Task ID: R1-9412) *
LRO: Change to Test IR (Information Response) specialists to verify they maintain a
current understanding of threats and how to analyze
Jason Christopher: is about training, not testing, per se.
Action: Edit, per LRO
10 Group 5 - 9676
Test remediated systems and the effectiveness of containment measures (Task ID:
R1-9676) *
11 Group 5 - 9592
Test to verify that there is a correct flow of intrusion events to incident cases and
that there is a coordinated response between IRS, IA, and SOS stakeholders (Task ID:
R1-9592) *
PNNL: More likely to be performed by other organizations such as quality assurance
LRO: Define acronyms
Action: Edit per LRO
12 Group 5 - 9208
Analyze actions for known malicous events and activity from spreading or
providing opportunities for an attacker to move, close down command and control
channels, recommend blocking outside IPs,etc. to contain the incident. (Task ID:
R1-9208) *
LRO: Change to Analyze signatures of known malicious events and determine if they
have been eliminated to contain the incident (i.e., eliminate opportunities for an attacker to
move, close down command and control channels, block access to outside IPs, etc.)
Maco Stewart: Analyze actions for known malicous events and activity from spreading or
providing opportunities for an attacker to move, close down command and control channels,
recommend blocking outside IPs,etc. to contain the incident.—too many elements in this one,
unclearly presented.
Nic Ziccardi: R2-9208 is poorly worded and overly broad. It appears to be focused on
executing containment strategies, but the detective steps mentioned are unclear.
Action: Edit, create multiple tasks:
Analyze actions indicating malicious events may be spreading or providing
opportunities for an attacker to move. [from 9208]
Analyze actions indicating malicious events that provide opportunities for an
attacker to close down command and control channels. [from 9208]
Analyze actions indicating malicious events to determine strategy for blocking
outside IPs to contain an incident. [from 9208]
13 Group 5 - 9666
Analyze logs and system information to determine systems impacted by an attacker
and actions taken by the attacker (Task ID: R1-9666) *
SLE: Edit: "...to determine which systems have been affected by an attacker and what
actions were taken by the attacker."
Action: Edit
14 Group 5 - 9677
Analyze technical and business impacts as a result of the incident (Task ID:
R1-9677) *
SLE: Edit: "Analyze the incident's technical and business impacts."
Action: Edit
Group 6
1 Group 6 - 9670
Assign response team members to touch systems within the containment boundary
and collect data for analysis (Task ID: R1-9670) *
SLE: Reword: "Assign response team members to collect data for analysis from
systems within the containment boundary."
Action: Edit
2 Group 6 - 9668
Communicate the boundary around impacted systems being contained (Task ID:
R1-9668) *
SLE: Change "impacted" to "affected"
Action: Edit
3 Group 6 - 9187
Coordinate help desk to identify user complaints that may be related to the
investigated event (Task ID: R1-9187) *
LRO: Change to Coordinate with the Help Desk to identify user complaints that may be
related to the investigated event
Action: Edit
4 Group 6 - 9680
Coordinate with outside parties to determine if containment efforts are successful
(for example, call FBI confirm the C&C channel has been closed) (Task ID: R1-9680) *
LRO: Spell out acronyms: Command and Control rather than C&C
Maco Stewart: should be “to confirm” or “and confirm.”
Action: Edit to include both changes
5 Group 6 - 9673
Coordinate with system owners when contained systems and determine impact of
proposed actions (Task ID: R1-9673) *
LRO: Change to Coordinate containment with system owners and determine impact of
proposed actions
Maco Stewart: do you mean “after identifying affected systems”?
Action: Edit to include both changes
6 Group 6 - 9675
Decide if the incident needs to be re-rated and revaluate the response plan based on
containment efforts (Task ID: R1-9675) *
Jason Christopher: Needs clarification
SLE: Replace "Decide" with "Determine" and replace "revaluate" (or "revaluation")
with "re-evaluate" (or "re-evaluation")
Action: Edit
7 Group 6 - 9667
Define the boundary around suspect systems to minimize the spread and impact of
an identified security incident (Task ID: R1-9667) *
8 Group 6 - 9674
Document all actions taken to contain systems (Task ID: R1-9674) *
9 Group 6 - 9683
Document all external communications (Task ID: R1-9683) *
10 Group 6 - 9669
Ensure contained systems are being monitor or cannot communicate to systems
outside of the boundary to minimize spread of the incident (Task ID: R1-9669) *
PNNL #1: Ensure contained systems are being monitor(ed) or cannot communicate to
systems outside of the boundary to minimize spread of the incident
LRO: Change to Minimize spread of the incident by ensuring contaminated(?) systems
are monitored or cannot communicate to systems outside of the network boundary
Maco Stewart: should be “being monitored.”
Maria Hayden: “monitor” = monitored. Generally confusing
Action: Edit by creating two tasks using LRO language and splitting the "or"
statement:
Minimize spread of the incident by ensuring contaminated systems are monitored
Minimize spread of the incident by ensuring contaminated systems cannot
communicate to systems outside of the network boundary
11 Group 6 - 9671
Establish boundaries or shut down infected systems (Task ID: R1-9671) *
12 Group 6 - 9679
Identify appropriate parties to participate in the response to include legal,
communications, and others (Task ID: R1-9679) *
LRO: Change to *Identify appropriate parties to participate in the incident response
including legal, communications, and others
Action: Edit
13 Group 6 - 9682
Maintain asset management information during containment process (Task ID:
R1-9682) *
14 Group 6 - 9681
Monitor performance of incident response staff (Task ID: R1-9681) *
Group 7
1 Group 7 - 9678
Report business and technical impacts of the incident and response activities (Task
ID: R1-9678) *
2 Group 7 - 9672
Report to security management and system owners when systems have been
successfully contained (Task ID: R1-9672) *
3 Group 7 - 9856
Conduct security drills that incorporate the latest threats and vulnerabilities in the
scenarios (Task ID: R1-9856) *
4 Group 7 - 9128
Alert operators to events occurring so that they may increase system logging or
retain logs where normally such logs may be simply lost due to system storage constraints.
(Task ID: R1-9128) *
SLE: Change "where" to "when"
Action: Edit
5 Group 7 - 9401
Analyze test results to ensure systems functioning nominally (Task ID: R1-9401) *
SLE: Insert "are" between "systems" and "functioning"
Action: Edit
6 Group 7 - 9495
Attend incident response scenarios. (Task ID: R1-9495) *
PNNL #1: Does this mean attend as in a presentation?; Does this mean “attend to” as in
give attention to generate?
LRO: Change to Attend incident response scenarios planning meetings
Maria Hayden: You can’t “attend” a scenario. Reword.
Action: Delete, not a task that someone can be skilled at performing. Consider
deleting all "attend" tasks.
7 Group 7 - 9397
Develop a schedule for testing elements of the incident response plan and
organizations involved in the process (Task ID: R1-9397) *
8 Group 7 - 9407
Develop incident report template to be used when reporting the final status of an
incident response. (Task ID: R1-9407) *
9 Group 7 - 9214
Develop incident response scenarios. (Task ID: R1-9214) *
10 Group 7 - 9622
Develop schedule, test plans, evaluation criteria, and sign-off for evaluating test
success and/or failure. (Task ID: R1-9622) *
11 Group 7 - 9398
Document all incident response exercises and test (Task ID: R1-9398) *
Maco Stewart: should be “and tests.”
Action: Edit
12 Group 7 - 9409
Document gaps and outcomes to multiple parties to improve process and
procedures. (Task ID: R1-9409) *
13 Group 7 - 9400
Document shortcomings and lessons learned from IR exercises and formulate action
plans to ensure they're corrected as rapidly as possible. (Task ID: R1-9400) *
LRO: Spell out acronyms
Action: Edit
14 Group 7 - 9126
Escalate analysis findings in accordance with defined plan. (Task ID: R1-9126) *
Jason Christopher: Needs clarification
Action: Retain
Group 8
1 Group 8 - 9405
Maintain a set of packaged scenarios with injects and data to exercise the response
process (Task ID: R1-9405) *
2 Group 8 - 9139
Maintain documented procedures for analyzing logs and handling log archive (Task
ID: R1-9139) *
3 Group 8 - 9343
Maintain technical competence using industry tools for attacks (i.e. backtrack)
(Task ID: R1-9343) *
SLE: Insert comma after "i.e."
Action: Edit
4 Group 8 - 9408
Report internal and external incident stakeholders involved during and after
incident response. (Task ID: R1-9408) *
LRO: Change to Report to internal and external incident stakeholders involved during
and after incident response
Maco Stewart: should be “Report to . . . .”
Action: Edit per LRO
5 Group 8 - 9403
Report status to maagement at defined stages of response per procedure. (Task ID:
R1-9403) *
PNNL #1: Report status to maagement at defined stages of response per procedure
Action: Edit, correct spelling
6 Group 8 - 9116
Understand incident response process and initiate incident according to policies and
procedures. (Task ID: R1-9116) *
LRO: Change to Understand incident response process and initiate incident handling
according to documented policies and procedures
Action: Edit
7 Group 8 - 9191
Understand incident response, notification and log handling requirements of
business (Task ID: R1-9191) *
SLE: Insert comma after "notification"
8 Group 8 - 9239
Develop attack scenarios that might be used to intrude upon systems and networks
and use table top excercises to guage how personnel might respond in these situations.
(Task ID: R1-9239) *
Maco Stewart: should be “exercises to gauge.”
SLE: Also make "table top" one word
Action: Edit
9 Group 8 - 9106
Analyze logs by correlating all suspect systems (Task ID: R1-9106) *
10 Group 8 - 9354
Analyze system configuration (for systems under attack') by correlating with the
alerts generated to determine if the alert is real or if the IDS box is gone fishing. (Task ID:
R1-9354) *
PNNL #1: Analyze system configuration (for systems under attack') by correlating with
the alerts generated to determine if the alert is real or if the IDS box is gone fishing
LRO: Change to Analyze compromised system’s configuration by determining if the
Intrusion Detection System alert is real
Maco Stewart: remove apostrophe
Nic Ziccardi: R2-9354 contains lazy language.
Action: Edit per LRO
11 Group 8 - 9134
Report what was analyzed and the list of flagged events, key findings, issues, actions
taken (Task ID: R1-9134) *
12 Group 8 - 9351
Review logs, network captures, and traces. (Task ID: R1-9351) *
13 Group 8 - 9240
Update security tools (SEIM, IDS/IPS, Firewalls) with information pertinent to net
tools or attacks. (Task ID: R1-9240) *
LRO: Spell out acronyms; Change to Update security tools (SEIM, IDS/IPS, Firewalls)
regularly with information pertinent to network tools or attacks
Action: Edit
14 Group 8 - 9565
Configure alerts to monitor for old signatures and failed updates (Task ID:
R1-9565) *
Group 9
1 Group 4 - 9248
Collect data from proxies and email systems to profile events involving malicous links or
attachments and try to correlate to business process and assets (Task ID: R1-9248) *
Maco Stewart: should be “malicious.”
Action: Edit
2 Group 4 - 9204
Decide on a subjective and/or objective measure to determine the likelihood that an event
is an incident. (i.e. a confidence factor.) (Task ID: R1-9204) *
SLE: Insert comma after "i.e." and delete colon after "factor"
Action: Edit
3 Group 4 - 9284
Develop correlation methods, so that you can associated identified vulnerabilties with
events identified by your security monitoring solution (IDS, SIEM, etc). (Task ID: R1-9284) *
LRO: Change to Develop correlation methods to associate identified vulnerabilities with
events identified by security monitoring solutions (IDS, SIEM, etc)
Maco Stewart: should be “so that you can associate identified vulnerabilities . . . .”
SLE: I prefer Lori's edits
Action: Edit
4 Group 4 - 9135
Develop procedures for anomalies in the logs that can not be immediately identified as
known threats, etc. (Task ID: R1-9135) *
LRO: Change to Develop procedures for addressing anomalous events in the logs
that cannot be immediately identified as known threats, etc.
Action: Edit
5 Group 4 - 9121
Prioritize suspect log entries and preserve on master sequence of events list (Task ID:
R1-9121) *
6 Group 4 - 9124
Identify systems not logging or components that are blind spots. (Task ID: R1-9124)
7 Group 4 - 9184
Collect a sequence of events and continue to added information based in the investigation
process (Task ID: R1-9184) *
PNNL #1: Collect a sequence of events and continue to added information based in the
investigation process
LRO: Change to Collect a sequence of events and continue to add information based in
the investigation process
Maco Stewart: should be “continue to add information based on the . . . .”
Action: Edit, using LRO language
8 Group 4 - 9607
Verify alert thresholds and incident response procedures result in capturing enough data
to support incident analysis and response efforts (Task ID: R1-9607) *
SLE: Insert "that" between "Verify" and "alert"
9 Group 4 - 9661
Analyze all events and correlate to incidents if applicable (Task ID: R1-9661) *
Jason Christopher: Needs clarification
Action: Delete
10 Group 4 - 9658
Assign the incident to a category or type (Task ID: R1-9658) *
Maco Stewart: would be better as “R2-9658 *Assign the incident to a category or type if
possible”
Action: Edit
11 Group 4 - 9659
Decide an incident rating is also calculated based on the potential severity and impact of
the incident (Task ID: R1-9659) *
PNNL #1: Decide an incident rating is also calculated based on the potential severity and
impact of the incident
LRO: Change to Determine an incident rating calculated on the potential severity and
impact of the incident
Jason Christopher: this may be a grammar issue
Maco Stewart: might be “Determine an incident rating based upon the potential . . . .”
Action: Edit, using LRO language
12 Group 4 - 9657
Decide if an event meets the criteria to be investigated and opened as an incident (Task
ID: R1-9657) *
SLE: Replace "Decide" with "Determine"
13 Group 4 - 9655
Decide that the event is applicable to your organization (Task ID: R1-9655) *
Jason Christoper: if it's an event that impacts my organization, doesn't that make it
applicable?
Action: Retain, may be a differentiating task
SLE: Edit: "Determine if the event..."
14 Group 4 - 9194
Define incident (Task ID: R1-9194) *
Jason Christopher: Needs clarification
Action: Delete
SLE: I would retain as the definition varies from organization to organization.
Group 10
1 Group 4 - 9660
Document closure of all incidents (Task ID: R1-9660) *
2 Group 4 - 9656
Document events that do not meet the criteria will be logged and no further action will be
taken with the event (Task ID: R1-9656) *
LRO: Change to Document that no action will be taken for events that have been logged
but do not meet incident response criteria
Action: Edit
3 Group 4 - 9662
Document the activity being evaluated as an event (Task ID: R1-9662) *
4 Group 4 - 9665
Report all events being investigated to security management (Task ID: R1-9665) *
5 Group 4 - 9663
Review incident criteria (Task ID: R1-9663) *
Jason Christopher: Needs clarification
Action: Retain, might differentiate skill levels
6 Group 4 - 9654
Verify that the event has occurred (Task ID: R1-9654) *
7 Group 4 - 9664
Verify the event meets the criteria for further investigation (Task ID: R1-9664) *
SLE: Insert "that" between "Verify" and "the"
Action: Edit
8 Group 4 - 9356
Collect all necessary expertise in one physical or virtual room (Task ID: R1-9356) *
Jason Christopher: seems to be more of a management role
SLE: Shouldn't this indicate the purpose of collecting the expertise--e.g., "to evaluate the
event" or whatever the purpose is?
Action: Retain and edit, may differentiate skill levels
9 Group 4 - 9189
Develop an incident tracking mechanism to classify and track all security incidents
(Task ID: R1-9189) *
10 Group 4 - 9190
Open an event ticket to track the potential incident. (Task ID: R1-9190) *
11 Group 4 - 9113
Open event tickets and notify interested parties when a probable event occurs and track
the event as it unfolds. (Task ID: R1-9113) *
12 Group 4 - 9136
Identify and properly respond to situations where log management applications may be
attacked or compromised. (Task ID: R1-9136) *
SLE: replace "where" with "in which"
Action: Edit
13 Group 4 - 9588
Test the incident response procedure/plan periodically to ensure correct workflow and
functionality (Task ID: R1-9588) *
14 Group 4 - 9192
Understand the basic components of an incident response process (Prepare, Identify,
Contain, Eradicate, Recover, Lessons Learned) (Task ID: R1-9192) *
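As an illustration only (not part of the JAQ), the six phases named in this task map naturally
onto an ordered type that an incident tracking mechanism such as the one described in R1-9189
could use. The Python sketch below is a hypothetical encoding, not a prescribed
implementation:

    from enum import IntEnum

    class IRPhase(IntEnum):
        """Conventional incident response phases, in order."""
        PREPARE = 1
        IDENTIFY = 2
        CONTAIN = 3
        ERADICATE = 4
        RECOVER = 5
        LESSONS_LEARNED = 6

    def advance(current: IRPhase) -> IRPhase:
        """Move an incident to the next phase; refuse to advance past the end."""
        if current is IRPhase.LESSONS_LEARNED:
            raise ValueError("incident already at final phase")
        return IRPhase(current + 1)

    print(advance(IRPhase.IDENTIFY).name)  # CONTAIN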
Group 11
1 Group 4 - 9203
establish clear metrics that distinguish types of incidents. Users can then correctly
categorize incidents (Task ID: R1-9203) *
PNNL #1: establish clear metrics that distinguish types of incidents. Users can then
correctly categorize incidents
Maco Stewart: need initial capital E
SLE: I recommend that you ADD periods rather than omitting them.
Action: Edit, and add periods only if this can be automated; otherwise it might require too
much time
2 Group 4 - 9589
Document updates to incident response procedure/plan (Task ID: R1-9589) *
3 Group 4 - 9808
Communicate warning signs of security events to internal stakeholders (Task ID:
R1-9808) *
4 Group 4 - 9802
Define security events and incidents with evaluation criteria (Task ID: R1-9802) *
5 Group 4 - 9804
Define what constitutes a security event for a system (Task ID: R1-9804) *
SLE: Is this needed? It is more specific than #4 but very similar. Perhaps they can be
combined?
Action: Delete
6 Group 4 - 9803
Develop escalation procedures to take an event to an incident (Task ID: R1-9803) *
SLE: Isn't this backwards? I think of an "incident" as being smaller than an "event."
Edits: "Develop procedures to escalate an incident to an event." (or the other way around if I'm
wrong.)
Action: Retain, events do not require action, but incidents do.
7 Group 4 - 9785
Maintain a current list of stakeholders contact information and cross listing with respect
to notification requirements (Task ID: R1-9785) *
SLE: "stakeholders" should be possessive (stakeholders'). I don't understand the
sentence from "and cross listing" on. Should it say something like "and link this information to
notification requirements"?
Action: Edit
8 Group 4 - 9806
Test security staff with drills to determine if events and incidents are being properly
characterized (Task ID: R1-9806) *
9 Group 4 - 9708
Develop and publicize ways to distinguish between routine system errors and malicious
activities (Task ID: R1-9708) *
10 Group 4 - 9826
Document logic behind why event was determined to be false. (Task ID: R1-9826) *
SLE: Insert "an" between "why" and "event"
11 Group 4 - 9831
Escalate findings to appropriate personnel to review event and ensure accuracy of false
positive findings. (Task ID: R1-9831) *
SLE: Edit: "false-positive"
Action: Edit
12 Group 4 - 9117
Identify and filter out false positive; if vetted to be an incident assign to incident handler
(Task ID: R1-9117) *
SLE: Edit "...false positives; if determined to be an incident, assign to incident handler."
Action: Edit
13 Group 4 - 9719
Monitor all logs associated with third party accessing your systems, this may require a
manual review against historic use profiles (Task ID: R1-9719) *
SLE: Change comma after "systems" to a semicolon.
Action: Edit
14 Group 4 - 9327
Implement penetration testing and vulnerability assessments to improve incident
identification. (Task ID: R1-9327) *
Group 12
1 Group 4 - 9318
Understand environment (culture, personnel) to create a better
relationship for transmitting delicate and sometimes poorly understood information. (Task ID:
R1-9318) *
SLE: Change "personnel" to "staff"
Action: Edit
2 Group 4 - 9814
Escalate breaches of contract by vendor to management and legal team
(Task ID: R1-9814) *
SLE: Edit: "Escalate vendor breach of contract to..."
Action: Edit
3 Group 4 - 9786
Develop role based access control matrix (Task ID: R1-9786) *
SLE: Change to "role-based"
Action: Edit
4 Group 4 - 9784
Ensure accurate information is collected and included in reports (Task
ID: R1-9784) *
5 Group 4 - 9783
Maintain knowledge of reporting requirements associated with systems.
(Task ID: R1-9783) *
6 Group 4 - 9200
Identify repeat incidents either involving the same person(s), systems,
or adversaries. (Task ID: R1-9200) *
SLE: Reword: "Identify repeat incidents involving the same person or persons, systems,
or adversaries."
Action: Edit
7 Group 4 - 9604
Maintain incident data repository and analyze data and metrics
regarding types of incidents, frequency, systems impacted (Task ID: R1-9604) *
SLE: Insert "and" after the final comma.
Action: Edit
8 Group 4 - 9605
Review incidents over time to determine lessons learned or how to
better align security tools (Task ID: R1-9605) *
9 Group 4 - 9857
Develop a standardized process to ensure appropriate steps are taken
during and/or after an event occurs. (Task ID: R1-9857) *
SLE: Doesn't seem like an and/or thing to me. I'd say "during and after an event..."
Action: Edit
10 Group 4 - 9711]Monitor systems that had been impacted and entire sub-network for
activity associated with the attack (Task ID: R1-9711) *
SLE: Reword: "systems that were affected and the entire..."
Action: Edit
11 Group 4 - 9712]Report closing of the incident and all incident response processes
were followed (Task ID: R1-9712) *
SLE: Edit: "processes that were followed."
Action: Edit
12 Group 4 - 9710]Review incident response actions to ensure actions were taken
properly (Task ID: R1-9710) *
SLE: Edit: "to ensure proper actions were taken."
Action: Edit
13 Group 4 - 9855]Review incident response process and validate each action item is
being handled appropriately. (Task ID: R1-9855) *
LRO: 9855 seems to be identical to question above
Action: Delete
14 Group 4 - 9791]Monitor for unauthorized access to tools and data. (Task ID:
R1-9791) *
Group 13
1 Group 4 - 9610]Report the attack TTPs used in the last 6mo against the organization
(Task ID: R1-9610) *
LRO: Spell out acronyms and change to "Report the attack TTPs (used in the last
6 months against the organization)"
Nic Ziccardi: define acronym
Action: Edit
2 Group 4 - 9181]Develop working theories of the attack and look for correlated evidence
to support or reject the working theories. (Task ID: R1-9181) *
Nic Ziccardi: define acronym
Action: Edit
3 Group 4 - 9202]Document the incident response activities to determine positive and
negative results from actions and security controls. These should be the starting point for
Lessons Learned discussions and follow-on preparation activities. (Task ID: R1-9202) *
4 Group 4 - 9129]Review known intrustion TTPs and observables to assist in profiling
log events and capture event information that may relate to known signatures. (Task ID:
R1-9129) *
Maco Stewart: should be “intrusion.”
Nic Ziccardi: define acronym
Action: Edit, combining recommendations
5 Group 4 - 9304]Understand how phishing attacks can adversely impact web-based
management applications. (Task ID: R1-9304) *
6 Group 4 - 9119]Verify log analysis findings through alternate means such as local log
storage or affected system state/configuration (Task ID: R1-9119) *
7 Group 4 - 9130]Document process and analysis improvements. (Task ID: R1-9130) *
Jason Christopher: Needs clarification
Action: Delete
8 Group 4 - 9634]Define how systems were initially compromised and how the attack
progressed and what observables were available for detection and response (Task ID: R1-9634) *
9 Group 4 - 9633]Develop mitigations based on incidents analyzed and recommend
improvements in security capabilities or tools as appropriate (Task ID: R1-9633) *
10 Group 4 - 9632]Identify security incidents that require training or awareness for users
and security staff (Task ID: R1-9632) *
11 Group 4 - 9635]Implement lessons learned from the analysis of material incidents
(Task ID: R1-9635) *
12 Group 4 - 9636]Test the security staff and deployed solutions against scenarios
developed from incidents with significant lessons learned (Task ID: R1-9636) *
13 Group 4 - 9114]maintain chain of custody and integrity of log files if to be used by
law enforcement at a later date (Task ID: R1-9114) *
PNNL #1: maintain chain of custody and integrity of log files if to be used by law
enforcement at a later date
Maco Stewart: capital “M” at the beginning
SLE: Additional edit: "integrity of log files if they are to be..."
Action: Edit, combine edits
14 Group 4 - 9197]Develop a chain of custody and consider forensic images if needed as
the investigation progresses (Task ID: R1-9197) *
LRO: Change to Develop a chain of custody process and consider forensic images if
needed as the investigation progresses
SLE: Additional edit: "chain-of-custody process"
Action: Edit
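As an aid to interpreting tasks R1-9114 and R1-9197, a chain of custody for log files is typically supported by recording a cryptographic digest of each file at collection time so later tampering can be detected. The sketch below uses only the Python standard library; the manifest filename is a hypothetical choice:

    import hashlib
    import json
    import os
    import time

    def record_custody_hash(path, manifest="custody_manifest.jsonl"):
        """Append a SHA-256 digest and UTC timestamp for a log file to a
        custody manifest so its integrity can be verified later."""
        digest = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(65536), b""):
                digest.update(chunk)
        entry = {
            "file": os.path.abspath(path),
            "sha256": digest.hexdigest(),
            "recorded_at_utc": time.strftime("%Y-%m-%dT%H:%M:%SZ", time.gmtime()),
        }
        with open(manifest, "a") as m:
            m.write(json.dumps(entry) + "\n")
        return entry["sha256"]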
Group 14
1 Group 4 - 9232]Identify 3rd party vendors who specialize in remediation of security
penetrations and forensics. You should identify 2 or 3 external resources who you may place on
retainer and who can be brought in, during quickly if something were to occur. (Task ID:
R1-9232) *
SLE: Edits: "Identify third-party vendors.... You should identify 2 or 3 external
resources that can be placed on retainer and brought in quickly if an incident occurs."
Action: Edit, and delete second sentence which modifies but does not
change the action required.
2 Group 4 - 9112]Maintain access control permissions to log files (Task ID: R1-9112) *
3 Group 4 - 9797]Collect proper approvals before individuals are granted access to tools
and data. (Task ID: R1-9797) *
4 Group 4 - 9796]Define authorized staff for specific security tools and data sources
(Task ID: R1-9796) *
5 Group 4 - 9800]Develop roles and responsibilities that can be implemented through
RBAC and authorization group memberships (Task ID: R1-9800) *
SLE: Spell out RBAC
Action: Edit
6 Group 4 - 9789]Establish process to provide authorization for tool use and credentials
to access tools (Task ID: R1-9789) *
7 Group 4 - 9790]Maintain centralized RBAC Lists for all security tools (Task ID:
R1-9790) *
SLE: Spell out RBAC; change "Lists" to "lists"
Action: Edit
8 Group 4 - 9299]Access an up to date Smart Grid inventory and asset list. (Task ID:
R1-9299) *
PNNL #1: Access and up to date Smart Grid inventory and asset list
SLE: PNNL #1's edit does not make sense. It should either be "Access an
up-to-date..." or "Access and update..."
Action: Edit, to "Access a current Smart Grid inventory and asset list"
9 Group 4 - 9529]Attend change management meetings and represent security in the
change management process. (Task ID: R1-9529) *
10 Group 4 - 9548]Attend change management to identify systems that have not been
authorized. (Task ID: R1-9548) *
SLE: Insert "meetings" after "management"
Action: Delete, attendance is not a skill
11 Group 4 - 9822]Collect change management information to automatically update
baseline. (Task ID: R1-9822) *
12 Group 4 - 9526]Collect existing device configurations. (Task ID: R1-9526) *
13 Group 4 - 9110]Develop base scenario and publish results to show what the log files
would/should look like without attack or compromise (Task ID: R1-9110) *
SLE: What is a "base scenario"?
Action: Retain, vernacular may be needed to distinguish those with relevant
knowledge
14 Group 4 - 9702]Test all security controls or changes that were implemented during a
response (Task ID: R1-9702) *
Group 15
1 Group 4 - 9827]Verify security monitoring systems and management systems are
working and providing expected coverage (Task ID: R1-9827) *
SLE: Insert "that" between "Verify" and "security"
Action: Edit
2 Group 4 - 9178]Analyze security device and application configurations for technical
impacts (e.g. network congestion) (Task ID: R1-9178) *
SLE: Insert comma after "e.g."
Action: Edit
3 Group 4 - 9151]Configure system against the baseline configuration manual. (Task ID:
R1-9151) *
SLE: "Configure system in compliance with the baseline configuration manual"?
Action: Edit
4 Group 4 - 9152]Coordinate with network operations and system adminstrators to plan
for the implementation and schedule required outages or notifications during the deployment.
(Task ID: R1-9152) *
SLE: "administrators" is misspelled. Suggested rewording: "implementation and
scheduling of" (not sure if this is right, but the statement goes awry here).
Action: Edit
5 Group 4 - 9159]Coordinate with other departments to properly prepare for additional
resources required by the security monitoring solution (i.e. network, database, access
management, etc). (Task ID: R1-9159) *
SLE: Insert comma after "i.e."; "etc" needs a period at the end.
Action: Edit
6 Group 4 - 9550]Coordinate with project managers to understand current and future
projects that will install systems. (Task ID: R1-9550) *
7 Group 4 - 9166]Coordinate with system administrators to reboot hosts or restart
necessary processes after the software or device has been installed to ensure the monitoring
solution is online and functioning (Task ID: R1-9166) *
8 Group 4 - 9620]Develop an approval workflow for accountability, traceability, and
reporting. (Task ID: R1-9620) *
9 Group 4 - 9434]Develop configuration manuals on all custom solutions. (Task ID:
R1-9434) *
10 Group 4 - 9551]Document certification and accreditation (baseline configuration,
vulnerability assessment, authorization to operation) (Task ID: R1-9551) *
11 Group 4 - 9175]Document deployment information in company asset management
systems (Task ID: R1-9175) *
12 Group 4 - 9332]Identify deployment risks including technological, geographic, and
privacy related. (Task ID: R1-9332) *
13 Group 4 - 9553]Review As Left configurations prior to Go Live (Task ID: R1-9553) *
LRO: Make terms “as left” and “Go Live” more generic
Action: Delete, unclear
14 Group 4 - 9543]Review checklist for implementing a device or system for necessary
sign-offs (Task ID: R1-9543) *
LRO: Make term “sign-offs” more generic. Perhaps “approvals”
Action: Edit
Group 16
1 Group 4 - 9552]Review deployment plans and as planned configurations (Task ID:
R1-9552) *
PNNL #1: Review deployment plans and as planned configurations
Action: Edit
2 Group 4 - 9545]Schedule implementation with impacted business owners and IT
support staff (Task ID: R1-9545) *
SLE: Replace "impacted" with "affected"
Action: Edit
3 Group 4 - 9546]Test implementation with planned configurations to determine any
deployment issues (Task ID: R1-9546) *
4 Group 4 - 9176]Test the installation against the functional and performance
requirements. (Task ID: R1-9176) *
5 Group 4 - 9541]Verify health status of host security tools (Task ID: R1-9541) *
6 Group 4 - 9441]Verify operating systems, services and applications are hardened in
conjunction with regulatory guidance. (Task ID: R1-9441) *
SLE: Reword: "Verify that operating systems, services, and applications are hardened in
compliance with regulatory guidance."
Action: Edit
7 Group 4 - 9549]Verify operator and implementer procedures require acknowledgement
of authorization prior to implementing (Task ID: R1-9549) *
SLE: Insert "that" between "Verify" and "operator"; correct spelling is
"acknowledgment"
Action: Edit
8 Group 4 - 9630]Update all asset management systems with deployed mitigations,
configuration changes, or patches and versions (Task ID: R1-9630) *
9 Group 4 - 9296]Decide on retirement of solutions that cannot handle abnormal network
traffic. (Task ID: R1-9296) *
SLE: What exactly is being decided on? Maybe this would be better stated as either
"Determine if solutions that cannot handle abnormal network traffic should be retired" or "Retire
solutions that cannot..."
Action: Edit using "determine"
10 Group 4 - 9612]Review closed tickets for false positives for unacceptable results
(Task ID: R1-9612) *
11 Group 4 - 9844]Review network topologies, composition, and activity to determine
security tool needs (Task ID: R1-9844) *
12 Group 4 - 9645]Test security operations staff in the planning and execution of security
operations and tools (Task ID: R1-9645) *
13 Group 4 - 9845]Test tools against existing operational environments to determine
ability to handle stress, loads, and operate as advertized (Task ID: R1-9845) *
SLE: Reword (and correct spelling): "...ability to handle stress and loads and to operate
as advertised."
Action: Edit
14 Group 4 - 9795]Test using latest attack methodologies that security tool systems and
data cannot be accessed by unauthorized internal or external entities (Task ID: R1-9795) *
SLE: Rewrite: "Using the latest attack methodologies, test to ensure that security tool..."
Action: Edit, but place the "using the" at the end to maintain parallel form
of leading verb
Group 17
1 Group 4 - 9341]Maintain a security configuration/coverage map of tools used across
the enterprise (Task ID: R1-9341) *
2 Group 4 - 9173]Analyze monitoring solution to determine if newer technology better
accomplishes the mission (Task ID: R1-9173) *
3 Group 4 - 9278]Analyze which systems are being regularly scanned and which systems
are being missed (Task ID: R1-9278) *
4 Group 4 - 9433]Assign significance to custom SIEM rules for unknown event types
(Task ID: R1-9433) *
SLE: Spell out acronym. Note that I sometimes see this as SIEM and other times as
SEIM. Just spell it out everywhere and you avoid the problem.
Action: Edit
5 Group 4 - 9352]Configure alert rules for SIEM solution to automate alerts. (Task ID:
R1-9352) *
SLE: Spell out acronym.
Action: Edit
6 Group 4 - 9255]Configure assets IP address and pertinent meta-data (Task ID:
R1-9255) *
SLE: change "meta-data" to "metadata"
Action: Edit
7 Group 4 - 9105]Configure rules for SIEM tools to capture and flag events known to be
intrusion indicators. (Task ID: R1-9105) *
SLE: spell out SIEM
Action: Edit
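Rule syntax differs across SIEM products, so the following is only a sketch of the matching logic that task R1-9105 asks for; the event fields and indicator values (including the TEST-NET example address) are hypothetical:

    # Sketch of the logic behind a SIEM flagging rule; real products
    # express this in their own rule languages.
    INTRUSION_INDICATORS = {
        ("auth_failure", "admin"),         # repeated admin logon failures
        ("outbound_conn", "203.0.113.7"),  # hypothetical known-bad address
    }

    def flag_event(event):
        """Return True when an event matches a known intrusion indicator."""
        return (event.get("type"), event.get("detail")) in INTRUSION_INDICATORS

    events = [
        {"type": "outbound_conn", "detail": "203.0.113.7"},
        {"type": "auth_success", "detail": "operator"},
    ]
    flagged = [e for e in events if flag_event(e)]  # first event is flagged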
8 Group 4 - 9131]Configure SIEM rules and alerts for unsupported devices such as those
used in the Smart Grid and AMI. (Task ID: R1-9131) *
SLE: spell out SIEM and AMI
Action: Edit
9 Group 4 - 9156]Configure system technical policies that set thresholds and parameters
for monitoring. (Task ID: R1-9156) *
10 Group 4 - 9432]Develop custom SIEM parsers for unknown event types (Task ID:
R1-9432) *
SLE: spell out SIEM
Action: Edit
11 Group 4 - 9345]Establish a test lab where tools can be practiced and learned. (Task
ID: R1-9345) *
12 Group 4 - 9593]Maintain an asset inventory of both hardware and software. Link this
inventory to other security tools. (Task ID: R1-9593) *
13 Group 4 - 9431]Review healthy log collection metrics to understand baseline from
which to measure normal performance. (Task ID: R1-9431) *
14 Group 4 - 9429]Review SLAs/OLAs to understand expected thresholds. (Task ID:
R1-9429) *
LRO: Spell out acronyms
Action: Edit
Group 18
1 Group 4 - 9348]Understand how to run wireshark and tcpdump (Task ID: R1-9348) *
Nic Ziccardi: Too specific; speak about possessing knowledge of specific tools.
Action: Retain, may differentiate skill levels
2 Group 4 - 9150]Understand the selected SEIM tool. (Task ID: R1-9150) *
Nic Ziccardi: Too specific; speak about possessing knowledge of specific tools.
Action: Retain, may differentiate skill levels
SLE: Spell out SEIM
Action: Edit
3 Group 4 - 9293]Understand the work effort required to plug the solution in to custom or
specific software and hardware. (Task ID: R1-9293) *
SLE: Delete "work" and change "in to" to "into"
Action: Edit
4 Group 4 - 9111]Verify that all systems are logging to a central location. (Task ID:
R1-9111) *
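One way to make the verification in task R1-9111 concrete is a set comparison between the asset inventory and the hosts actually observed in the central log store. Both input lists below are hypothetical; a minimal sketch:

    # Sketch: find inventoried hosts with no entries in the central log store.
    # Host names are hypothetical placeholders.
    asset_inventory = {"hmi-01", "rtu-07", "meter-gw-03", "scada-hist-02"}
    hosts_seen_in_central_logs = {"hmi-01", "scada-hist-02"}

    silent_hosts = asset_inventory - hosts_seen_in_central_logs
    for host in sorted(silent_hosts):
        print(f"No central log entries observed for {host}; investigate.")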
5 Group 4 - 9103]Analyze available logs and note gaps and time periods (Task ID:
R1-9103) *
6 Group 4 - 9420]Analyze system logs for NTP synchronization anomaly messages (Task
ID: R1-9420) *
SLE: Spell out NTP
Action: Edit
7 Group 4 - 9594]Attend staff and planning meetings to stay on top of policy and
procedure developments (Task ID: R1-9594) *
8 Group 4 - 9104]Configure your security log management tool to sort and filter data in a
manner that us best suited for the event being analyzed. (Task ID: R1-9104) *
SLE: Change "us" to "is"
Action: Edit
9 Group 4 - 9428]Implement a DS/NTP capability for environments where a secure
connection to a root time stamp authority is required (Task ID: R1-9428) *
SLE: spell out DS/NTP
Action: Edit
10 Group 4 - 9531]Maintain change management records for systems that are are
operated. (Task ID: R1-9531) *
PNNL #1: Maintain change management records for systems that are are operated
SLE: Change "operated" to "operational"?
Action: Edit
11 Group 4 - 9108]Maintain log file storage and archive older events. (Task ID: R1-9108) *
12 Group 4 - 9527]Update database of device configurations upon changes to
configurations. (Task ID: R1-9527) *
13 Group 4 - 9421]Verify NTP server is utilizing UTC format to avoid time zone issues
(Task ID: R1-9421) *
SLE: Spell out acronyms and change "utilizing" to "using"
Action: Edit
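NTP timestamps are defined on the UTC timescale, so a check like task R1-9421 usually amounts to confirming that downstream systems interpret the server's time as UTC. Purely for illustration, a minimal SNTP query in Python; the server name is an assumption and any reachable NTP server would do:

    import datetime
    import socket
    import struct

    def sntp_utc_time(server="pool.ntp.org"):
        """Query an NTP server and return its transmit time as a UTC datetime.
        Minimal SNTP client for illustration only; no error handling."""
        ntp_to_unix = 2208988800  # seconds between 1900-01-01 and 1970-01-01
        packet = b"\x1b" + 47 * b"\x00"  # LI=0, version 3, client mode
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
            s.settimeout(5)
            s.sendto(packet, (server, 123))
            data, _ = s.recvfrom(48)
        transmit = struct.unpack("!12I", data)[10] - ntp_to_unix
        return datetime.datetime.fromtimestamp(transmit, datetime.timezone.utc)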
14 Group 4 - 9359]Decide where to install security monitoring solutions such that the
overall expense is minimized and the coverage is maximized (Task ID: R1-9359) *
Group 19
1 Group 4 - 9566]Develop procedure to perform manual updates (Task ID: R1-9566) *
2 Group 4 - 9568]Develop procedure to respond to failed alerts (Task ID: R1-9568) *
3 Group 4 - 9569]Document procedures for configuring monitoring solutions to correctly
to obtain vendor software and signature updates. (Task ID: R1-9569) *
PNNL (AR): Mixed up question
Action: Edit - remove "to" after "correctly"
4 Group 4 - 9562]Monitor the monitoring solution regularly to ensure vendor software
and signature updates are being downloaded correctly. (Task ID: R1-9562) *
5 Group 4 - 9567]Monitor vendor notifications for updates to software and signatures and
compare against deployed versions. (Task ID: R1-9567) *
6 Group 4 - 9563]Review daily, weekly and monthly reports for systems that are not
updating / are out of baseline with the rest of the system population. (Task ID: R1-9563) *
7 Group 4 - 9325]Review system security architecture and governance for new system
extensions. (Task ID: R1-9325) *
8 Group 4 - 9558]Review updates and version and confirm with vendor (Task ID:
R1-9558) *
9 Group 4 - 9559]Schedule update timelines for existing and new solutions (Task ID:
R1-9559) *
10 Group 4 - 9557]Subscribe to vendor publishings relevant to the product line at hand.
(Task ID: R1-9557) *
SLE: Replace "publishings" with "publications"
Action: Edit
11 Group 4 - 9560]Test functionality after update to ensure system is operating (Task ID:
R1-9560) *
12 Group 4 - 9571]Test to ensure that automatic updates occur securely. (Task ID:
R1-9571) *
13 Group 4 - 9570]Train procedures for configuring monitoring solutions to correctly to
obtain vendor software and signature updates. (Task ID: R1-9570) *
SLE: Rewrite: "Train staff on the procedures for configuring monitoring solutions to correctly
obtain vendor software and signature updates."
Action: Edit
14 Group 4 - 9564]Update monitoring solution with vendor software and signature
updates manually. (Task ID: R1-9564) *
SLE: Edit: "Manually update...signature updates."
Action: Edit
Group 20
1 Group 4 - 9561]Verify configuration against procedures (Task ID: R1-9561) *
2 Group 4 - 9618]Convert (and parse) unknown asset log formats to compatible log
format for given monitoring solution. (Task ID: R1-9618) *
3 Group 4 - 9574]Define which devices require logging and what level of detail logs need
to be configured for. (Task ID: R1-9574) *
4 Group 4 - 9142]Develop a centralized logging system (Task ID: R1-9142) *
5 Group 4 - 9619]Develop a periodic verification process to ensure that the assets are
logging in alignment with the operational intended architecture. (Task ID: R1-9619)*
SLE: Change "operational intended" to "intended operational"
Action: Edit
6 Group 4 - 9363]Develop and/or procure a data logging and storage architecture that
scales and is fast enough to be useful for analysis (Task ID: R1-9363) *
7 Group 4 - 9573]Develop procedure to categorize systems for monitoring (Task ID:
R1-9573) *
SLE: Change "Develop procedure" to either "Develop a procedure" or "Develop procedures"
Action: Edit
8 Group 4 - 9422]Identify holes in NTP structure system-wide (Task ID: R1-9422) *
SLE: Spell out NTP
Action: Edit
9 Group 4 - 9342]Identify sources of targets to scan. (Task ID: R1-9342) *
PNNL (AR): Don't know what is being asked here
Action: Retain; those familiar with vulnerability scanning may be more
familiar with this statement.
10 Group 4 - 9572]Implement solution to identify new devices connecting to the
network(s) (Task ID: R1-9572) *
11 Group 4 - 9145]Maintain a list of components that can direct logs to a central logging
system, and components that cannot. Configure a method of collecting forensic data from
systems that cannot. (Task ID: R1-9145) *
12 Group 4 - 9263]Test all vulnerability scanners for modes or configurations that would
be disruptive to the communication paths and networks beinq tested and host communication
processing looking for possible conlicts that may result in negative operational impacts (Task ID:
R1-9263) *
PNNL #1: Test all vulnerability scanners for modes or configurations that would be disruptive to
the communication paths and networks being tested and host communication processing looking for
possible con(f)licts that may result in negative operational impacts
Action: Edit, create two tasks, splitting the "and"
13 Group 4 - 9418]Test server periodically to make sure NTP service is operating (Task
ID: R1-9418) *
SLE: Spell out NTP
Action: Edit
14 Group 4 - 9581]Coordinate with administrators from other departments (i.e.
networking, operating systems, servers) to identify strengths and weaknesses in the
organization's logging implementations. (Task ID: R1-9581) *
SLE: Insert comma after "i.e."
Action: Edit
Group 22
1 Group 4 - 9143]Implement Web content filtering (Task ID: R1-9143) *
SLE: Change "Web" to "web"
Action: Edit
2 Group 4 - 9606]Review past incidents to determine if host security solutions and logs
are providing data that can identify an event (Task ID: R1-9606) *
3 Group 4 - 9270]Develop a scannning plan and make sure all network operations and
key stakeholders to inlude operations has input and understands when testing activity is
underway and who can be contacted if issues are noticed (Task ID: R1-9270) *
PNNL #1: Develop a scanning plan and make sure all network operations and key
stakeholders to include operations has input and understands when testing activity is underway
and who can be contacted if issues are noticed
SLE: This one is very difficult to understand, but here's an editing attempt: "Develop a
scanning plan and make sure all network operations staff and key stakeholders have a say in and
understand when testing is underway. Staff and stakeholders also must know who can be
contacted if issues arise."
Action: Edit to correct spelling and simplify: "Develop a scanning plan and make sure
all network operations staff and key stakeholders are consulted and notified about the timing of
test initiation."
4 Group 4 - 9295]Communicate timing and schedule of scans. (Task ID: R1-9295) *
5 Group 4 - 9807]Develop SEIM rule sets to detect documented event classes for each
monitored system (Task ID: R1-9807) *
SLE: Spell out SEIM
Action: Edit
6 Group 4 - 9801]Ensure Events are well defined (Task ID: R1-9801) *
SLE: lowercase "Events"
Action: Edit
7 Group 4 - 9538]Communicate changes to user security tools and information regarding
identified events and incidents (Task ID: R1-9538) *
8 Group 4 - 9828]Change existing system logic to prevent the same false positive from
occurring again in the future. (Task ID: R1-9828) *
SLE: Change "from occurring again in the future" to "from reoccurring."
Action: Edit
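As a sketch of the kind of system-logic change task R1-9828 describes, a vetted false positive can be recorded in a suppression list that the alerting path consults before re-alerting. The rule identifier, host name, and event shape below are hypothetical:

    # Sketch: suppress alerts previously vetted as false positives.
    SUPPRESSED = {
        # (rule_id, source_host): reason recorded when the analyst vetted it
        ("R-1042", "hmi-01"): "Scheduled config backup misread as exfiltration",
    }

    def should_alert(rule_id, event):
        """Return False for alerts already vetted as false positives."""
        return (rule_id, event.get("host")) not in SUPPRESSED

    assert not should_alert("R-1042", {"host": "hmi-01"})
    assert should_alert("R-1042", {"host": "rtu-07"})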
9 Group 4 - 9611]Review tool configurations and target configurations to reduce false
positives based on historic information (Task ID: R1-9611) *
10 Group 4 - 9725]Access company policies to verify that the software being
downloaded is allowed. (Task ID: R1-9725) *
11 Group 4 - 9730]Ensure third-party software is tested prior to installation (Task ID:
R1-9730) *
12 Group 4 - 9734]Establish a sandbox in which experimental software may be installed
and analyzed for malevolent behavior. (Task ID: R1-9734) *
13 Group 4 - 9736]Implement technology that will create inventories/database of the
software installed for offline analysis (Task ID: R1-9736) *
14 Group 4 - 9729]Scan systems regularly in an attempt to detect the use of unacceptable
software. (Task ID: R1-9729) *
SLE: Change to "regularly to try to detect..."
Action: Edit, delete "regularly" to avoid biasing judgments of frequency (search for
this word in remainder of tasks)
Group 23
1 Group 4 - 9723]Search existing list of acceptable software prior to installing. (Task ID:
R1-9723) *
2 Group 4 - 9722]Understand company policies and procedures for downloading and
installing third-party software (Task ID: R1-9722) *
3 Group 4 - 9715]Search asset management system to collect a list of all system vendors
for prioritized technology (Task ID: R1-9715) *
4 Group 4 - 9720]Decide what mitigations should be implemented on remote connections
(Task ID: R1-9720) *
5 Group 4 - 9268]Coordinate assessment of any target systems w/System Owners ahead
of time (Task ID: R1-9268) *
SLE: Change "w/System Owners" to "with system owners"
Action: Edit
6 Group 4 - 9273]Develop a deconfliction profile for company planned and executed
scans with log analysis (Task ID: R1-9273) *
7 Group 4 - 9597]Maintain or be able to access a list of assigned system owners (Task
ID: R1-9597) *
8 Group 4 - 9254]Configure vulnerability scanners to operate in the targeted environment
in a safe and effective manner (Task ID: R1-9254) *
SLE: Edit: "Configure vulnerability scanners to operate safely and effectively in the
targeted environment."
Action: Edit
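Scanner configuration options are product-specific; the fragment below only illustrates, with generic and hypothetical option names, the kinds of settings that make the scanning in task R1-9254 safe for a fragile operational environment:

    # Sketch of conservative scanner settings for a sensitive environment.
    # Option names are generic illustrations, not any product's actual API.
    scan_profile = {
        "targets": ["10.0.5.0/24"],         # hypothetical assessment scope
        "excluded_hosts": ["10.0.5.10"],    # device known to fail under scans
        "max_packets_per_second": 50,       # throttle to limit congestion
        "dangerous_checks_enabled": False,  # skip exploit-style probes
        "scan_window": ("01:00", "04:00"),  # coordinated with operations
    }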
9 Group 4 - 9858]Review best practices and standards documentation to determine
appropriate configuration settings. (Task ID: R1-9858) *
10 Group 4 - 9860]Test the vulnerability assessment solution against a dev environment
to see if desired results are achieved. (Task ID: R1-9860) *
SLE: Change "against a dev" to "in a development"
Action: Edit
11 Group 4 - 9859]Understand desired outcome as well as purpose for assessment so that
the solution can be configured appropriately. (Task ID: R1-9859) *
SLE: Change "purpose for" to "purpose of"
Action: Edit
12 Group 4 - 9748]Configure security tools to automatically apply patches and apply
updates. (Task ID: R1-9748) *
13 Group 4 - 9754]Configure signatures for host and network based IPS to ensure
optimal configuration and reduce likelihood of business disruption (Task ID: R1-9754)*
14 Group 4 - 9746]Create policy/procedures for how to patch tools (Task ID: R1-9746) *
Group 24
1 Group 4 - 9744]Define criticality levels for all tool types and identify security tools as
some of the most critical security tools that need to be patched and updated properly. (Task ID:
R1-9744) *
SLE: Change to "identify security tools as among the most critical tools that..."
Action: Edit
2 Group 4 - 9750]Define repots on the current patch and update status of all security tools
and identify any variances against vendor releases. (Task ID: R1-9750) *
PNNL #1: Define repots on the current patch and update status of all security tools and
identify any variances against vendor releases
SLE: repots? Really?
Action: Retain, industry vernacular
3 Group 4 - 9755]Document current patch levels and
updates before use in critical situations (Task ID: R1-9755) *
4 Group 4 - 9743]Ensure the Configuration Management Database (CMDB) is up to date
with tool versions (Task ID: R1-9743) *
5 Group 4 - 9753]Ensure updates and patches are tested prior to implementation (Task
ID: R1-9753) *
6 Group 4 - 9751]Establish a systems and tools patching program and schedule. (Task
ID: R1-9751) *
7 Group 4 - 9739]Identify current patch level of security tools (Task ID: R1-9739) *
8 Group 4 - 9740]Identify primary support resources for each of the production tools to
ensure team members understand their responsibilities (Task ID: R1-9740) *
9 Group 4 - 9757]Implement replica production (i.e. LAB) environment for testing of
patches prior to production release. (Task ID: R1-9757) *
SLE: Insert comma after "i.e."
Action: Edit
10 Group 4 - 9749]Maintain a list of approved security tools and their approved patch
levels (Task ID: R1-9749) *
11 Group 4 - 9649]Monitor security tool providers for updates and patches for tools that
are in use (Task ID: R1-9649) *
12 Group 4 - 9738]Monitor security tool vendors for updates and patches (Task ID:
R1-9738) *
13 Group 4 - 9745]Monitor vendor feeds for published patches (Task ID: R1-9745) *
14 Group 4 - 9213]Review latest penetration test tools. (Task ID: R1-9213) *
Group 25
1 Group 4 - 9752]Review signatures (for the tools that use them) to determine
applicability once implemented (Task ID: R1-9752) *
2 Group 4 - 9756]Schedule periodic reviews to determine when patches and updates are
required (Task ID: R1-9756) *
3 Group 4 - 9781]Sign up for vendor notifications and alerts (Task ID: R1-9781) *
4 Group 4 - 9782]Test toolset upgrades against old version to ensure new patches doesn't
adversely affect results or impair performance (Task ID: R1-9782) *
PNNL #1: Test toolset upgrades against old version to ensure new patches doesn't(don’t)
adversely affect results or impair performance
Action: Edit
5 Group 4 - 9742]Understand the process by which security tools are updated before use.
(Task ID: R1-9742) *
6 Group 4 - 9747]Verify versions of security tools periodically against vendors latest
release version or review exception for not updating the software (Task ID: R1-9747)*
SLE: Edits: Periodically verify versions of security tools against vendors'...
Action: Edit, delete periodically so as not to bias frequency judgment
(search for periodically in every statement and remove)
7 Group 4 - 9690]Decide what configuration settings results in capturing the required
information for monitoring (Task ID: R1-9690) *
SLE: Replace "Decide" with "Determine" and change "results" to "result"
Action: Edit
8 Group 4 - 9689]Identify logging and monitoring capability of deployed devices (Task
ID: R1-9689) *
9 Group 4 - 9426]Implement a reference time source to remove external dependencies for
NTP (Task ID: R1-9426) *
SLE: spell out NTP
Action: Edit
10 Group 4 - 9861]Implement monitoring system that meets design criteria (Task ID:
R1-9861) *
11 Group 4 - 9691]Implement necessary communications and repository to receive data
(Task ID: R1-9691) *
12 Group 4 - 9834]Implement procedural and technical controls to ensure logs are
maintained for expected period of time per policy. (Task ID: R1-9834) *
13 Group 4 - 9523]Prioritize critical systems for monitoring (Task ID: R1-9523) *
14 Group 4 - 9693]Test the data repository to ensure it remains online and able to receive
data (Task ID: R1-9693) *
SLE: change to "...remains online and is able"
Action: Edit
Group 26
1 Group 4 - 9692]Verify system is reporting the expected information based on the
configurations (Task ID: R1-9692) *
SLE: Change to "Verify that the system..."
Action: Edit
2 Group 4 - 9599]Coordinate with system owners to modify schedule based on work or
operational evolutions that impact security scanning (Task ID: R1-9599) *
SLE: Change to "...or operational changes that affect security scanning."
Action: Edit
3 Group 4 - 9260]Define scope of systems and system exclusions for vulnerability
testing. (Task ID: R1-9260) *
4 Group 4 - 9598]Review scanning schedule results for anomalies (Task ID: R1-9598) *
5 Group 4 - 9609]Coordinate with smart grid suppliers to confirm settings and scans for
their equipment (Task ID: R1-9609) *
6 Group 4 - 9279]Coordinate with vendors running scanners aganist their equipment to
get a technical practice or relevant information to develop your scanning program (Task ID:
R1-9279) *
SLE: Should it be "on" instead of "against"? I would delete "a technical practice or
relevant"
Action: Edit
7 Group 4 - 9144]Understand the resources and processes used by the security
monitoring tool, identify constraints, impacts to host or network systems, required configurations
to develop an implementation plan (Task ID: R1-9144) *
SLE: Rewrite: "Understand the resources and processes used by the security monitoring
tool; and identify constraints, impacts to host or network systems, and required configurations to
develop an implementation plan."
Action: Edit
8 Group 4 - 9601]Verify system processes or state that are authorized for smart grid
components with the vendor to identify unauthorized processes (Task ID: R1-9601)*
SLE: Rewrite: "To identify unauthorized processes, verify with the vendor the system
process or states that are authorized for smart grid components."
Action: Edit, to maintain parallel form put the "to phrase" at the end of
the sentence
9 Group 4 - 9436]Subscribe to security benchmark libraries (CIS, etc.). (Task ID:
R1-9436) *
SLE: Spell out CIS
Action: Edit, or delete "CIS, etc." as unnecessary
10 Group 4 - 9765]Configure the security monitoring solution so that it provides a list of
hosts that are being monitored and cross reference that with the asset inventory in place. (Task
ID: R1-9765) *
SLE: should be "cross-reference"
Action: Edit
11 Group 4 - 9763]Coordinate an assessment of the current monitoring solutions
coverage with a third part. (Task ID: R1-9763) *
Maco Stewart: should be party.
Action: Edit
12 Group 4 - 9760]Coordinate an assessment to test the effectiveness and coverage of
security monitoring tools. (Task ID: R1-9760) *
13 Group 4 - 9140]Coordinate with administrators from other departments (i.e.
networking, operating systems, servers) to identify strengths and weaknesses in the
organization's logging implementations. (Task ID: R1-9140) *
SLE: Insert a comma after "i.e."
Action: Edit
14 Group 4 - 9160]Identify metrics that will be used to show performance of monitoring
solution (Task ID: R1-9160) *
Group 27
1 Group 4 - 9766]Implement a process and technology to re-test effectiveness after each
system update (Task ID: R1-9766) *
2 Group 4 - 9837]Configure log management systems and other log repositories to
maintain logs for documented period of time per policy and then older files. (Task ID:
R1-9837) *
SLE: I don't understand how "and then older files" connects to the rest of the statement.
Rewrite.
Action: Edit, remove "and then..."
3 Group 4 - 9833]Document in policy appropriate timeframes for document storage.
(Task ID: R1-9833) *
SLE: Rewrite: "Document in policy the appropriate length of time to store documents."
Action: Edit
4 Group 4 - 9835]Review security operating procedures and policy for data storage
requirements (Task ID: R1-9835) *
SLE: Suggested edit: "Periodically review..."
Action: Retain; maintain parallel form of leading verb, and use of
"periodically" could bias response regarding frequency
5 Group 4 - 9839]Schedule log management system and other log repositories to purge
data that is older than the documented retention period. (Task ID: R1-9839) *
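A minimal sketch of the scheduled purge in task R1-9839, using only the Python standard library; the archive path and the 365-day retention period are hypothetical and would come from the documented policy:

    import os
    import time

    def purge_expired_logs(archive_dir, retention_days):
        """Delete files in archive_dir older than the retention period and
        return the names removed, for audit logging by the caller."""
        cutoff = time.time() - retention_days * 86400
        removed = []
        for entry in os.scandir(archive_dir):
            if entry.is_file() and entry.stat().st_mtime < cutoff:
                os.remove(entry.path)
                removed.append(entry.name)
        return removed

    # Example under a hypothetical 365-day retention policy:
    # purge_expired_logs("/var/log/archive", retention_days=365)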
6 Group 4 - 9228]Test in a sandbox new and potentially malicious tools appropriately.
(Task ID: R1-9228) *
7 Group 4 - 9838]Test storage periods by calling up events and incidents logged by the
security operations team (Task ID: R1-9838) *
8 Group 4 - 9836]Verify event/incident categorization to make sure associated data is
being stored for the appropriate period (Task ID: R1-9836) *
9 Group 4 - 9146]Implement application (layer 7) firewalls (Task ID: R1-9146) *
10 Group 4 - 9798]Implement DLP system for security tool systems and data (Task ID:
R1-9798) *
SLE: Spell out DLP
Action: Edit
11 Group 4 - 9302]Implement penetration tests on deployed components (Task ID:
R1-9302) *
12 Group 4 - 9439]Implement the multiple (layered) solution control options for
mitigation. (Task ID: R1-9439) *
13 Group 4 - 9256]Implement vulnerability scan (Task ID: R1-9256) *
14 Group 4 - 9437]document any changes made to the OS, etc. for look-back
opportunities should something malfunction (Task ID: R1-9437) *
SLE: Rewrite: "To trace possible causes of a system malfunction, document any changes
made to the operating system, etc." Replace "etc." with the things they should document.
Action: Edit, to maintain parallel form: "Document... operating system or
other system components... to trace possible causes of a system malfunction"
Group 28
1 Group 4 - 9792]Document system configuration and access control (Task ID:
R1-9792) *
2 Group 4 - 9794]Implement controls to prevent unauthorized access tools and data.
(Task ID: R1-9794) *
3 Group 4 - 9182]Communicate with user and system owners as part of due dilligence
(Task ID: R1-9182) *
SLE: Communicate WHAT?
Action: Delete
4 Group 4 - 9321]Develop an asset inventory of both hardware and software. Link this
inventory to other security tools. (Task ID: R1-9321) *
5 Group 4 - 9315]Develop technical libraries for all protocols inuse and noted security
issues (Task ID: R1-9315) *
SLE: "in use" (need a space). I think "noted" should be "note"?
Action: Edit
6 Group 4 - 9172]Establish Operational Level Agreements and/or Service Level
Agreements where appropriate. (Task ID: R1-9172) *
7 Group 4 - 9148]Identify business, contractual, SLA and legal requirements that can be
met by monitoring solution (Task ID: R1-9148) *
SLE: spell out SLA
Action: Edit
8 Group 4 - 9344]Understand how specific tools accomplish their results (i.e. nmap,
nessus, metasploit, etc.) - what methods and protocols are used. (Task ID: R1-9344)*
SLE: Rewrite: Understand how specific tools (e.g., nmap, nessus, metasploit)
accomplish their results (i.e., what methods and protocols are used).
Action: Edit
9 Group 4 - 9320]Understand the ANSI C12 Standards (i.e. C12.18, C12.19, C12.21,
C12.22) (Task ID: R1-9320) *
PNNL (AR): Means nothing to me, hard to determine levels of importance
SLE: Insert comma after "i.e."
Action: Retain
10 Group 4 - 9292]Update network deployments to
segragate systems that cannot handle vulnerability scans.
(Task ID: R1-9292) *
PNNL #1: Update network deployments to segregate systems that cannot handle
vulnerability scans
Action: Edit
11 Group 4 - 9335]Identify the interdependencies between the data network and the
power system including fault isolation and protection (Task ID: R1-9335) *
SLE: Change to "inter-dependencies" and insert a comma between "system" and
"including"
Action: Edit
12 Group 4 - 9155]Identify stakeholders which would be users of monitoring solution
and their unique requirements (Task ID: R1-9155) *
SLE: Change "which" to "who"
Action: Edit
13 Group 4 - 9648]Document procedures for the successful and proper use of all security
tools with a special attention to constraints (Task ID: R1-9648) *
14 Group 4 - 9650]Review security operations procedures for tool use and current
versions (Task ID: R1-9650) *
Group 29
1 Group 4 - 9847]Maintain a list of all required reporting requirements to include what is
reported, how it is to be reported and in what time frame (Task ID: R1-9847) *
SLE: Need a comma before "and". Also, "in what time frame" is ambiguous. Suggested
change "how it is to be reported, and when it is to be reported (e.g., within 24 hours)"
Action: Edit
2 Group 4 - 9850]Verify all reported events and incidents were handled in compliance
with the reporting requirements (Task ID: R1-9850) *
SLE: Change to "Verify that all..."
Action: Edit
3 Group 4 - 9823]Ensure appropriate groups and personnel provide details on current
security posture for their respective areas. (Task ID: R1-9823) *
SLE: Change "personnel" to "staff"
Action: Delete, "ensure" is a responsibility verb not a skill verb.
4 Group 4 - 9339]Communicate risks to internal stakeholders (Task ID: R1-9339) *
5 Group 4 - 9313]Document risk and impact analysis of SG components for management
- ensure business impact has been included (Task ID: R1-9313) *
SLE: Reword: "Document risk and impact analysis, including business impact, of Smart
Grid components for management."
Action: Edit
6 Group 4 - 9338]Understand NERC CIP and audit requirements (Task ID: R1-9338) *
7 Group 4 - 9525]Implement policy enforcement tool (Task ID: R1-9525) *
8 Group 4 - 9530]Report exceptions to company configuration management policy and
standards (Task ID: R1-9530) *
9 Group 4 - 9805]Review a sampling of events to determine if they were properly
characterized (Task ID: R1-9805) *
10 Group 4 - 9731]Monitor software installed on end-points for compliance the company
policy (Task ID: R1-9731) *
SLE: Insert "with" between "compliance" and "the"
Action: Edit
11 Group 4 - 9726]Monitor software utilized in the infrastructure and correlate it to a list
of acceptable software. (Task ID: R1-9726) *
SLE: Change "utilized" to "used"
Action: Edit
12 Group 4 - 9716]Verify contracts require vendors to provide proper notice of a security
breech or incident that may impact the security of your organization's systems (Task ID:
R1-9716) *
SLE: Change to "Verify that contracts" and replace "impact" with "affect"
Action: Edit
13 Group 4 - 9493]Report risk in accordance with defined risk categorization model
(Task ID: R1-9493) *
14 Group 4 - 9310]Communicate with suppliers and inventory the component supply
chain pipeline process. (Task ID: R1-9310) *
SLE: This is unclear. Possible edit: "Inventory the component supply chain pipeline
process and document it for suppliers."
Action: Edit
Group 30
1 Group 4 - 9813]Establish appropriate language within contracts such that a vendor is
required to notify you if they are breached, their system or solutions are compromised, and/or
they have a significant security issue which could directly affect you. (Task ID: R1-9813) *
SLE: Reword: "Ensure that contracts require vendors to notify you if they are breached,
their system or solutions are compromised, and/or they have a significant security issue that
could directly affect you."
Action: Edit, change to a verb connoting skill: "Review contracts to ensure
vendors will notify..."
2 Group 4 - 9812]Establish metrics for vendors to assess compliance with the contract
with respect to notifications (Task ID: R1-9812) *
SLE: Reword: Establish metrics for vendors to assess compliance with notification
requirements in the contract.
Action: Edit
3 Group 4 - 9767]Communicate results of independent security review (Task ID:
R1-9767) *
SLE: Communicate to whom?
Action: Edit, adding "to system stakeholders"
4 Group 4 - 9762]Report findings of the independent review to management (Task ID:
R1-9762) *
5 Group 4 - 9764]Schedule an independant review and verification after the security
monitoring solution has been implemented. (Task ID: R1-9764) *
PNNL #1: Schedule an independant review and verification after the security monitoring
solution has been implemented
Action: Edit
6 Group 4 - 9209]Communicate with external stakeholders (LEO, PR, Legal, IT,
Marketing) when necessary to understand regulatory requirement and breach notifications (Task
ID: R1-9209) *
SLE: Spell out acronyms.
Action: Edit
7 Group 4 - 9793]Coordinate with internal audits to audit security tool use (Task ID:
R1-9793) *
8 Group 4 - 9788]Review rights to tools and data on a defined frequency to ensure access
is appropriate. (Task ID: R1-9788) *
SLE: Should "rights" be "permissions"?
Action: Retain
9 Group 4 - 9799]Verify access control priv are working as designed (Task ID:
R1-9799) *
PNNL #1: Verify access control priv are working as designed
Action: Edit, spell out privileges
10 Group 4 - 9787]Verify tool access and logs for authorized use (Task ID: R1-9787) *
11 Group 4 - 9842]Test security staff on how to access and what is covered by company
policies and technical standards (Task ID: R1-9842) *
SLE: Unclear. Suggested rewording: "Test security staff on access procedures, company
policies, and technical standards for accessing systems."
Action: Edit
12 Group 4 - 9714]Verify you have read and understand how to access policies and
standards for refresher (Task ID: R1-9714) *
SLE: "Verify that staff have read..."?
Action: Edit
13 Group 4 - 9578]Develop policy to determine which critical systems are to be
monitored and to what level (Task ID: R1-9578) *
SLE: Change to "and at what level"
Action: Edit
14 Group 4 - 9576]Develop policy to ensure critical systems are to be monitored (Task
ID: R1-9576) *
SLE: Change to "critical systems are monitored"
Action: Edit
Group 31
1 Group 4 - 9577]Understand data classification levels and how to identify such levels
with assets. (Task ID: R1-9577) *
2 Group 4 - 9575]Understand data classification strategies in place. (Task ID: R1-9575) *
SLE: Reword: "Understand the data classification strategies that are in place."
Action: Edit
3 Group 4 - 9728]Communicate company policy for download and installing third-party
software. (Task ID: R1-9728) *
SLE: Change "download" to "downloading"
Action: Edit
4 Group 4 - 9721]Develop a policy which requires system administrators to follow
company procedures related to download and install third-party software. (Task ID: R1-9721) *
SLE: Reword: "Develop a policy that requires system administrators to follow company
procedures for downloading and installing third-party software."
Action: Edit
5 Group 4 - 9732]Establish a basis or requirement for third party software before use.
(What business purpose does it satisfy or why is it needed) (Task ID: R1-9732) *
SLE: Reword: "Establish a basis or requirement for third-party software before use (e.g.,
what business purpose does it satisfy, why is it needed, etc.)."
Action: Edit
6 Group 4 - 9735]Require personnel to sign acceptable use document agreeing that they
will follow company policies for downloading and installing third-party software (Task ID:
R1-9735) *
SLE: Change "personnel" to "staff"
Action: Edit
7 Group 4 - 9733]Require personnel to submit the use of proposed third party software to
a governance body for review and approval. (Task ID: R1-9733) *
SLE: Change to "Require staff to submit a justification for use of third-party software
to..."
Action: Delete; "require" is not a verb that connotes skill. Search for
other "require" verbs.
8 Group 4 - 9841]Access company policies and technical standards (Task ID: R1-9841) *
SLE: This is unclear. Should it be "Provide staff access to ..." Or do you mean
something like "Periodically assess company..."?
Action: Delete; "access" is not a verb that connotes skill. Search for other
"access" verbs.
9 Group 4 - 9843]Communicate whenever an action is covered by a policy or technical
standard (Task ID: R1-9843) *
SLE: I have no idea what this means. Please reword.
Action: Delete
10 Group 4 - 9848]Develop a process by which staff must acknowledge they have read
and understand all applicable policies and procedures. (Task ID: R1-9848) *
11 Group 4 - 9846]Ensure policies, procedures, etc are in a well known and easy to
access location. (Task ID: R1-9846) *
SLE: Edit: "Ensure policies, procedures, etc. are in a well-known and easy-to-access
location."
Action: Delete - "ensure" is a responsibility not a task that connotes skill
(search and delete other ensure actions)
12 Group 4 - 9713]Review policies and standards that apply to work area (Task ID:
R1-9713) *
SLE: Recommend you change to "Periodically review policies and standards that apply
to specific work areas."
Action: Retain; "periodically" does not add information needed to determine
the frequency and importance of this skill, and in fact might bias the
frequency response since it is periodic. The periodicity is an empirical,
not a normative, question.
13 Group 4 - 9522]Analyze cost of monitoring solution vs. features of of each solution to
ensure maximum ROI. (Task ID: R1-9522) *
SLE: Delete extra "of." Spell out "ROI."
Action: Edit
14 Group 4 - 9226]Establish a budget to handle the scope of an incident that might have
the worst possible impact on your infrastructure and ensure that it is available in the case that
something occurs. (Task ID: R1-9226) *
SLE: Change end of sentence to "available in case an incident occurs."
Action: Edit
Group 32
1 Group 4 - 9141]Analyze market options for SIEM tools. (Task ID: R1-9141) *
Action: Edit, spell out acronym
2 Group 4 - 9761]Develop relationships with vendor partners who specialize in this
testing. (Task ID: R1-9761) *
3 Group 4 - 9758]Define scope of an independent review and budget necessary resources
(Task ID: R1-9758) *
4 Group 4 - 9647]Collect information about the security tools employed by the
organization (Task ID: R1-9647) *
5 Group 4 - 9646]Review security operations staff performance in the execution of their
duties (Task ID: R1-9646) *
6 Group 4 - 9817]Scan systems to establish baseline (Task ID: R1-9817) *
7 Group 4 - 9410]Identify training material and information sources regarding cyber
attacks and techniques (Task ID: R1-9410) *
SLE: Change "material" to "materials"
Action: Edit
8 Group 4 - 9220]Identify training opporunities that teach methodologies associated with
current attack tools such as CEH training and select personnel involved in incident response to
take such training. (Task ID: R1-9220) *
SLE: "opportunities" is misspelled. Spell out "CEH." Change "personnel" to "staff."
Action: Edit
9 Group 4 - 9705]Ensure all attack vectors have been analyzed, closed, and cleaned
appropriately (Task ID: R1-9705) *
10 Group 4 - 9810]Analyze attacker Tactics, Techniques, and Procedures (TTPs) and
deconstruct in order to evaluate the effectiveness of your protective measures, detection
capability, and inform staff through awareness and exercises (Task ID: R1-9810) *
SLE: Reword: "To evaluate the effectiveness of your protective measures and detection
capability, analyze attacker Tactics, Techniques, and Procedures and subsequently inform staff
and provide exercises."
Action: Retain to maintain parallel form of leading verb, but remove text after
"protective measures"
11 Group 4 - 9809]Collect observed attacker tactics, techniques, and procedures (TTPs)
from available sources to include ISACs, peer utilities, government sources (Task ID:
R1-9809) *
SLE: "tactics, techniques, and procedures" is initial capped everywhere else. No need to
put the acronym in. Spell out ISACs.
Action: Edit
12 Group 4 - 9306]Collect the most recent (or predicted future) threats into
comprehensive list to disseminate to all employees (Task ID: R1-9306) *
SLE: insert "a" between "into" and "comprehensive." Change "employees" to "staff."
Action: Edit
13 Group 4 - 9305]Collect vendor KBs and DOE and DHS generated testing reports of
known vulnerabilities to specific smart grid components. Supplement that information with open
source reporting and internal red teaming or table top assessments. (Task ID: R1-9305) *
SLE: "table top" should be one word.
Action: Edit, spell out knowledge bases
14 Group 4 - 9820]Develop a heat map to illustrate
current security posture at a high-level for executive
consumption. (Task ID: R1-9820) *
SLE: Reword: Develop a heat map to illustrate current high-level security posture for
executive consumption.
Action: Edit
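For task R1-9820, a posture heat map can be produced with standard plotting libraries; the sketch below assumes Matplotlib and NumPy are available, and the sites, domains, and scores are hypothetical placeholder data:

    import matplotlib.pyplot as plt
    import numpy as np

    # Hypothetical posture scores: rows are sites, columns are domains,
    # 1 = good through 3 = needs attention.
    domains = ["Monitoring", "Patching", "Access Control", "Incident Response"]
    sites = ["Substation A", "Substation B", "Control Center"]
    scores = np.array([[2, 3, 1, 2],
                       [1, 1, 2, 3],
                       [3, 2, 2, 1]])

    fig, ax = plt.subplots()
    im = ax.imshow(scores, cmap="RdYlGn_r")  # red = worse, green = better
    ax.set_xticks(range(len(domains)))
    ax.set_xticklabels(domains, rotation=30, ha="right")
    ax.set_yticks(range(len(sites)))
    ax.set_yticklabels(sites)
    fig.colorbar(im, label="Risk level")
    fig.tight_layout()
    plt.show()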
Group 33
1 Group 4 - 9811]Identify observables that flow from particular attacker Tactics,
Techniques, and Procedures (TTPs) to optimize your security monitoring capabilities (Task ID:
R1-9811) *
2 Group 4 - 9275]Identify external scanning needs than an internal scanner may not
adequately be able to assess (Task ID: R1-9275) *
PNNL #1: Duplicate
Action: Delete
3 Group 4 - 9547]Identify external scanning needs that an internal scanner may not
adequately be able to assess (Task ID: R1-9547) *
SLE: Reword: "... that an internal scanner may not be able to adequately assess"
Action: Edit
4 Group 4 - 9544]Monitor for new systems installed on the network. (Task ID:
R1-9544) *
5 Group 4 - 9402]Report test completion providing a summary of what was tested with
results to management (Task ID: R1-9402) *
SLE: Reword: "At test completion, provide to management a summary of what was
tested and what the results were."
Action: Edit, but to remain consistent with leading verb: "Report
summary of test results to management"
6 Group 4 - 9425]Scan against configuration anomalies. (Task ID: R1-9425) *
SLE: Reword?: "Scan for configuration anomalies."
Action: Edit
7 Group 4 - 9444]Scan for gaps in system configuration against a benchmark
configuration manual. (Task ID: R1-9444) *
8 Group 4 - 9554]Scan internal and external networks for new and unauthorized systems.
(Task ID: R1-9554) *
PNNL #1: Duplicate
Action: Delete
9 Group 4 - 9555]Scan internal and external networks for new and unauthorized systems.
(Task ID: R1-9555) *
10 Group 4 - 9624]Assign a technical POC for vulnerability remediation and assistance
(Task ID: R1-9624) *
SLE: spell out POC
Action: Edit
11 Group 4 - 9625]Decide the risk ratings of the vulnerability based on the technical
information and how the technology is deployed/importance of the systems (Task ID:
R1-9625) *
SLE: Change "Decide" to "Determine" Change "deployed/importance" to "deployed and
the"
Action: Edit
12 Group 4 - 9626]Develop appropriate mitigations after consulting with the
vendor/integrators and internal system owners (Task ID: R1-9626) *
SLE: Reword: "After consulting with the vendor or integrators and internal system
owners, develop appropriate mitigations."
Action: Retain to maintain parallel form with leading verb.
13 Group 4 - 9623]Document all vulnerability information alerts or disclosures that apply
to deployed technology and note the time and responsible party to develop the risk picture and
initiate workflow (Task ID: R1-9623) *
14 Group 4 - 9627]Implement vulnerability mitigations in accordance with the plan to
include patches or additional security controls (Task ID: R1-9627) *
Group 34
1 Group 4 - 9628]Scan all impacted systems to ensure the patch or mitigations are present
and the risk associated with the vulnerability has been reduced as expected (Task ID: R1-9628) *
SLE: Change "impacted" to "affected"
Action: Edit
2 Group 4 - 9629]Test all identified mitigations or patches to make sure they remove or
mitigate the vulnerability as expected with no negative impacts (Task ID: R1-9629)*
3 Group 4 - 9326]Analyze vulnerabilities for business impact (Task ID: R1-9326) *
4 Group 4 - 9603]Develop a method to characterize vulnerabilities and score them to
determine risk (Task ID: R1-9603) *
SLE: Reword: To determine risk, develop a method to characterize and score
vulnerabilities
Action: Retain in order to keep parallel form of initial verb
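As a sketch of the characterize-and-score method task R1-9603 calls for, the fragment below combines a base severity with exploitability and exposure factors; the weights and field names are hypothetical illustrations, loosely in the spirit of CVSS-style scoring:

    # Sketch: rank vulnerabilities by a simple composite risk score.
    # Weights and fields are hypothetical illustrations.
    EXPLOIT_FACTOR = {"none": 0.2, "proof_of_concept": 0.5, "weaponized": 1.0}

    def risk_score(vuln):
        """Combine base severity with exploitability and exposure."""
        exposure = 1.0 if vuln["internet_facing"] else 0.4
        return round(vuln["base_severity"] * EXPLOIT_FACTOR[vuln["exploit"]]
                     * exposure, 2)

    vulns = [
        {"id": "V-1", "base_severity": 9.8, "exploit": "weaponized",
         "internet_facing": True},
        {"id": "V-2", "base_severity": 7.5, "exploit": "none",
         "internet_facing": False},
    ]
    ranked = sorted(vulns, key=risk_score, reverse=True)  # V-1 ranks first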
5 Group 4 - 9294]Develop a process for scoring the risk associated with identified
vulnerabilities that takes into account how exploitable they are to develop priotization
recommendations for mitigation (Task ID: R1-9294) *
SLE: Reword: To prioritize mitigation recommendations, develop a process for scoring
the risk associated with identified vulnerabilities. This process should include assessment of how
exploitable the vulnerabilities are.
Action: Retain in order to keep parallel form of initial verb, but remove
"that takes into account how exploitable they are" which is a modifying phrase
that does not add significantly to the definition of what skill is required.
6 Group 4 - 9229]Develop a process to prioritize and create job tickets for analysis and
distribution of information to specific receipents (Task ID: R1-9229) *
SLE: Change to "Develop a process to create and prioritize..." Correct spelling of
"recipients."
Action: Edit
7 Group 4 - 9314]Alert END users of potential risks and vulnerabilities that they may be
able to mitigate (Task ID: R1-9314) *
SLE: "Alert end users..."
Action: Edit
8 Group 4 - 9399]Coordinate with other departments to ensure that routine business
operations are not impacted during testing (Task ID: R1-9399) *
SLE: Change "impacted" to "affected"
Action: Edit
9 Group 4 - 9404]Develop a RACI (Responsible, Accountable, Consulted, Informed)
matrix to ensure all roles clearly understand their responsibilities in the testing process. (Task ID:
R1-9404) *
10 Group 4 - 9406]Identify all systems that may be affected by testing (Task ID:
R1-9406) *
11 Group 4 - 9331]Identify threat actors (Task ID: R1-9331) *
12 Group 4 - 9244]Report vulnerabilities to fellow staff and stakeholders (Task ID:
R1-9244) *
SLE: Delete "fellow"
Action: Edit
13 Group 4 - 9298]Coordinate efforts with the vendor to develop an understanding of the
component and security implications. (Task ID: R1-9298) *
14 Group 4 - 9596]Coordinate with external governments on threat intelligence (Task ID:
R1-9596) *
Group 35
1 Group 4 - 9853]Communicate new threats or newly discovered vulnerabilities to the
entire security operations staff (Task ID: R1-9853) *
2 Group 4 - 9614]Develop threat awareness content that can be included in security
awareness and outreach efforts (Task ID: R1-9614) *
3 Group 4 - 9319]Monitor industry groups and forums so that you are able to hear the
latest on security vulnerabiltieis related to smart grid security components. (Task ID: R1-9319) *
PNNL #1: Monitor industry groups and forums so that you are able to hear the latest on
security vulnerabiltieis related to smart grid security components
SLE: Reword as "Monitor industry groups and forums to stay up to date on the latest
security vulnerabilities related to smart grid components."
Action: Edit
4 Group 4 - 9815]Monitor intelligence sources for information that may show that a
vendor you are working with may have been compromised. (Task ID: R1-9815) *
SLE: Replace "may show" with "indicates"
Action: Edit
5 Group 4 - 9852]Test security staff against current understanding of threats and
disclosed vulnerabilities (Task ID: R1-9852) *
SLE: Reword this question--I'm not sure what it means. I suggest something like "Test
security staff to assess understanding of current threats and vulnerabilities"
Action: Edit
6 Group 4 - 9854]Train security operations staff when significant changes in threat or
vulnerability have occured (Task ID: R1-9854) *
SLE: misspelling, should be "occurred"
7 Group 4 - 9416]Alert external government entities with new intelligence (Task ID:
R1-9416) *
8 Group 4 - 9212]Attend community knowledge sharing events, such as one or two
choice conferences. (Task ID: R1-9212) *
SLE: Hyphenate "knowledge sharing" (knowledge-sharing); delete "choice"
9 Group 4 - 9252]Develop a threat analaysis testing environment and sandbox where
TTPs can be analyzed and considered (Task ID: R1-9252) *
PNNL #1: Develop a threat analaysis testing environment and sandbox where TTPs can
be analyzed and considered
SLE: Spell out "TTPs"
Action: Edit; correct spelling and spell out acronym
10 Group 4 - 9333]Develop attack trees of attack vectors against vulnerable systems
(Task ID: R1-9333) *
11 Group 4 - 9225]Develop possible attack techniques against specific technologies and
implementations in your smart grid deployments (Task ID: R1-9225) *
12 Group 4 - 9413]Identify sources of intelligence to use for threat analysis (Task ID:
R1-9413) *
13 Group 4 - 9615]Review threat tables and conduct analysis of existing incident
response data (Task ID: R1-9615) *
14 Group 4 - 9267]!9373 Develop a prioritized list of critical resources. (Task ID:
R1-9267) *
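Tasks 9252 and 9333 above presume a working representation of attack trees. A minimal sketch in Python, assuming a nested-dictionary encoding in which leaf nodes are the concrete attack vectors (all node names are illustrative assumptions):

    # Hypothetical attack-tree sketch for Task 9333; enumerating the leaves
    # yields the concrete attack vectors against a vulnerable system.
    attack_tree = {
        "goal": "Disrupt AMI head-end",
        "children": [
            {"goal": "Compromise vendor remote access", "children": []},
            {"goal": "Phish operations staff", "children": [
                {"goal": "Harvest operator credentials", "children": []},
            ]},
        ],
    }

    def leaves(node):
        # Yield leaf goals, i.e., the vectors to monitor and defend against.
        if not node["children"]:
            yield node["goal"]
        for child in node["children"]:
            yield from leaves(child)

    print(list(leaves(attack_tree)))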
Group 36
1 Group 4 - 9205]Analyze events against industry sharing initiatives to identify
anomalies/possible events (Task ID: R1-9205) *
2 Group 4 - 9489]Analyze vendor KBs and DOE and DHS generated testing reports of
known vulnerabilities to specific smart grid components (Task ID: R1-9489) *
SLE: Spell out KBs
Action: Edit
3 Group 4 - 9265]Analyze vulnerability reports (Task ID: R1-9265) *
4 Group 4 - 9491]Monitor vulnerability reports (Task ID: R1-9491) *
5 Group 4 - 9262]Review vulnerability scan results. (Task ID: R1-9262) *
6 Group 4 - 9595]Maintain a prioritized list of critical resources (Task ID: R1-9595) *
7 Group 4 - 9307]Collect issues to identify trends with particular vendors or
manufactures (Task ID: R1-9307) *
PNNL #1: Collect issues to identify trends with particular vendors or manufacturers
Action: Edit
8 Group 4 - 9556]Communicate with the vendor to ensure you are registered to receive
updates (Task ID: R1-9556) *
9 Group 4 - 9201]Prioritize systems within your network to determine which ones are of
the highest, Mod, Low impactvalue. (Task ID: R1-9201) *
PNNL #1: Prioritize systems within your network to determine which ones are of the
High, Moderate, or Low impact value
Action: Edit
10 Group 4 - 9276]Review assessment results in accordance with defined risk
categorization model. (Task ID: R1-9276) *
11 Group 4 - 9718]Communicate with vendor regarding a vulnerability, incident
announcement, incident to collect information to understand risk, enhance your monitoring
efforts, and devise a mitigation strategy (Task ID: R1-9718) *
SLE: Reword: "To understand risk, enhance monitoring efforts, and devise a mitigation
strategy, communicate with vendors about a vulnerability, incident announcement, incident to
collect information."
Action: Edit, but to remain consistent with starting verb, change to
"Communicate with vendors.... ... in order to understand risk..."
12 Group 4 - 9717]Monitor security news and intelligence sources to include vendor
webpages for vulnerability disclosures, incident announcements, and knowledge briefs (Task ID:
R1-9717) *
13 Group 4 - 9230]Communicate with research firms to keep abreast of new changes and
methodologies. (Task ID: R1-9230) *
14 Group 4 - 9301]Identify methods to detect vulnerabiltiies in smart grid components
with help from industry groups and thought leaders (Task ID: R1-9301) *
SLE: "vulnerabilities" is misspelled
Action: Edit
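Task 9201 above requires binning systems into High, Moderate, or Low impact value. A minimal sketch in Python, assuming a numeric impact value per system; the system names and threshold cutoffs are illustrative assumptions:

    # Hypothetical High/Moderate/Low categorization sketch for Task 9201.
    def impact_category(value):
        if value >= 7.0:
            return "High"
        if value >= 4.0:
            return "Moderate"
        return "Low"

    systems = {"historian": 8.2, "hmi-gateway": 5.5, "test-bench": 2.1}
    for name, value in sorted(systems.items(), key=lambda kv: kv[1], reverse=True):
        print(name, impact_category(value))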
Group 37
1 Group 4 - 9215]Identify sources for information regarding attacks, exploit capability
and tools, and newly discovered vulnerabilities. (Task ID: R1-9215) *
2 Group 4 - 9346]Review ICS-Cert, NERC and other source reports of attacks and
develop understanding of how the threats actually work against specific vulnerabilities (Task ID:
R1-9346) *
SLE: Insert comma after "NERC"
Action: Edit
3 Group 4 - 9211]Subscribe to appropriate industry security mailing lists (Task ID:
R1-9211) *
4 Group 4 - 9219]Subscribe to intelligence services and open source information
subscriptions to be awRe of events (Task ID: R1-9219) *
SLE: "awRe" should be "aware"
Action: Edit
5 Group 4 - 9222]Subscribe to various information sharing portals relevant to the content.
(Task ID: R1-9222) *
SLE: Hyphenate "information sharing" (information-sharing)
Action: Edit
6 Group 4 - 9316]Subscribe to vulnerability feeds and maintain information sharing
subscriptions. (Task ID: R1-9316) *
SLE: Should be "information-sharing"
Action: Edit
7 Group 4 - 9608]Verify assessment tool outputs contain all necessary data elements for
vulnerability analysis and risk determination (Task ID: R1-9608) *
SLE: Insert "that" between "Verify" and "assessment"
Action: Edit
8 Group 4 - 9492]Prioritize vulnerability scan results. (Task ID: R1-9492) *
9 Group 4 - 9600]Analyze vulnerabilities to determine risk based on how you have
deployed the technology and the likelihood for exploitation (Task ID: R1-9600) *
10 Group 4 - 9243]Develop contract language that requires your technology vendors and
service providers to provide information about vulnerabilities and threats to the technology you
purchase (Task ID: R1-9243) *
11 Group 4 - 9816]Map newly discovered vulnerabilities to equipment and vendors to
track compliance. (Task ID: R1-9816) *
12 Group 4 - 9759]Hire independent third-party auditor to assess/audit toolset coverage
and effectiveness (Task ID: R1-9759) *
13 Group 4 - 9602]Maintain a table of attack techniques that align with your deployed
technology and business processes (Task ID: R1-9602) *
14 Group 4 - 9253]Implement a honeypot and research the attacks it collects. (Task ID:
R1-9253) *
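Task 9816 above calls for mapping newly discovered vulnerabilities to equipment and vendors. A minimal sketch in Python, assuming an in-memory index keyed by vendor and model; all asset and advisory records are illustrative assumptions:

    # Hypothetical vulnerability-to-asset mapping sketch for Task 9816.
    from collections import defaultdict

    assets = [
        {"asset": "meter-001", "vendor": "VendorA", "model": "M100"},
        {"asset": "relay-042", "vendor": "VendorB", "model": "R7"},
    ]
    advisories = [{"vuln": "ADV-0001", "vendor": "VendorA", "model": "M100"}]

    by_platform = defaultdict(list)
    for a in assets:
        by_platform[(a["vendor"], a["model"])].append(a["asset"])

    # Each advisory resolves to the assets whose mitigation status must be tracked.
    for adv in advisories:
        print(adv["vuln"], "->", by_platform[(adv["vendor"], adv["model"])])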
Group 38
1 Group 4 - 9631]Analyze all intrusions to determine lessons learned and identify
required changes to security procedures, technology, or training (Task ID: R1-9631) *
2 Group 4 - 9616]Develop an attack technique table. (Task ID: R1-9616) *
3 Group 4 - 9231]Attend applicable training and/or sit the associated certification (Task
ID: R1-9231) *
SLE: what does "sit" mean?
Action: Delete (attendance is not a task one can perform with skill); search for and
delete all "Attend" tasks
4 Group 4 - 9233]Attend industry conferences and events such as Black Hat, etc. (Task
ID: R1-9233) *
Action: Delete (attendance is not a task one can perform with skill)
5 Group 4 - 9415]Coordinate presentations on latest threats to management and senior
management. (Task ID: R1-9415) *
6 Group 4 - 9411]Develop schedule to have all IR specialists complete training to refresh
and keep knowledge current (Task ID: R1-9411) *
SLE: Spell out IR
Action: Edit
7 Group 4 - 9414]Review all internal incidents for the purposes of staying current in
threats and how to best analyze (Task ID: R1-9414) *
SLE: Suggested rewording: "to stay up to date on current threats and determine the best
way to analyze them, review all internal incidents."
Action: Edit
8 Group 4 - 9235]Train information collection, analysis, and dissemination (Task ID:
R1-9235) *
PNNL (AR): What does that mean?
SLE: Suggested rewording: "Train staff on requirements and procedures for..."
Action: Edit
9 Group 4 - 9286]Train personnel on how to utilize the vulnerability scanning solution.
(Task ID: R1-9286) *
SLE: Reword: "Train staff on using vulnerability scanning."
Action: Edit
10 Group 4 - 9217]Train various security/attack/monitoring courses for all employees.
Mandate annual attendance to ensure widespread understanding and baseline requirements (Task
ID: R1-9217) *
SLE: Reword: "Develop various security/attack monitoring courses and require all
employs to attend training to ensure widespread understanding of baseline requirements."
Action: Edit
11 Group 4 - 9224]Attend or listen to presentation at security conferences such as Black
Hat, DEFCON, ShmooCon, and ToorCon (Task ID: R1-9224) *
SLE: "presentation" should be "presentations"
Action: Edit
12 Group 4 - 9586]Implement training for how to correlate alerts from normal
communication to abnormal communicate (Task ID: R1-9586) *
SLE: I have no idea what this is asking. Please reword.
Action: Delete
13 Group 4 - 9590]Train non-security team members (CXO, Legal, etc.) on how to
follow incident response procedure/plans (Task ID: R1-9590) *
14 Group 4 - 9183]Understand the company's incident response process and procedures.
(Task ID: R1-9183) *
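The attack technique table called for in Task 9616 (and maintained under Task 9602) can be kept as structured rows aligning techniques with deployed technology and observables. A minimal sketch in Python; the entries are illustrative assumptions, not panel output:

    # Hypothetical attack technique table sketch for Task 9616.
    attack_techniques = [
        {"technique": "Credential phishing", "target": "corporate email",
         "observable": "suspicious inbound URLs"},
        {"technique": "Firmware tampering", "target": "field devices",
         "observable": "unexpected firmware hash"},
    ]

    for row in attack_techniques:
        print(row["technique"], "|", row["target"], "|", row["observable"])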
Group 39
1 Group 4 - 9536]Analyze user behavior in stopping security services or use of the tools
and services (Task ID: R1-9536) *
2 Group 4 - 9617]Train Incident Response Team on the usage of the attack technique
table. (Task ID: R1-9617) *
3 Group 4 - 9652]Train new security staff and provide refresher training at required
intervals (Task ID: R1-9652) *
4 Group 4 - 9653]Verify all security staff have the necessary training and required
certifications or qualifications to operate tools (Task ID: R1-9653) *
5 Group 4 - 9241]Communicate with new staff or external stakeholders (Task ID:
R1-9241) *
6 Group 4 - 9727]Review and familiarize one with company policies and procedures for
downloading and installing third-party software (Task ID: R1-9727) *
SLE: Replace "one" with "new staff"
Action: Edit
7 Group 4 - 9497]Develop training sessions about attack techniques (Task ID:
R1-9497) *
8 Group 4 - 9245]Develop training sessions about attack tools (Task ID: R1-9245) *
9 Group 4 - 9496]Train other department s on attack tools (Task ID: R1-9496) *
SLE: Delete extra space in the word "departments"
Action: Edit
10 Group 4 - 9498]Train other departments on attack techniques (Task ID: R1-9498) *
11 Group 4 - 9350]Develop training materials for others on the team detailing specifics
behind current attack tools, technologies, and techniques to compromise systems and intrude
upon systems and networks (Task ID: R1-9350) *
SLE: Reword: "Develop training materials for other team members about current..."
Action: Edit
12 Group 4 - 9237]Review past events and lessons learned within your organization and
share that insight with an established plan. (Task ID: R1-9237) *
SLE: It's unclear what is meant here. Suggested rewording: "Review past events and
lessons learned within your organization and develop a plan based on those insights."
Action: Edit
13 Group 4 - 9651]Develop training for new operators and refresher training (Task ID:
R1-9651) *
SLE: Add some words (indicated in bold) "and refresher training for previously trained
staff."
Action: Edit
14 Group 4 - 9840]Train security staff on accessing policies and standards and topics
addressed (Task ID: R1-9840) *
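Tasks 9651 and 9652 above presume tracking of refresher-training intervals. A minimal sketch in Python, assuming a fixed 365-day interval; the staff names, dates, and interval are illustrative assumptions:

    # Hypothetical refresher-interval check for Task 9652.
    from datetime import date, timedelta

    REFRESH_INTERVAL = timedelta(days=365)
    last_trained = {"analyst-a": date(2011, 3, 1), "analyst-b": date(2012, 1, 15)}

    today = date(2012, 5, 1)
    due = [name for name, d in last_trained.items() if today - d >= REFRESH_INTERVAL]
    print("Refresher training due:", due)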
APPENDIX D: REVISED JAQ TASK LIST BASED ON PILOT TEST
TASK ID
TASK DESCRIPTION
9638
Collect all data necessary to support incident analysis and response.
9818
Map activities observed in the network to systems to help establish the baseline.
9186
9640
9637
Review event correlation (for example, look at baseline data to determine the type and frequency of
events during normal operations).
Analyze the intrusion by looking for the initial activity and all follow‐on actions of the attacker.
Assign an incident response manager for all incidents.
9641
Collect images of affected system for further analysis before returning the system to an acceptable
operational state.
9639
Communicate incident information and updates to affected users, administrators, and security staff and
request additional information that may support analysis and response actions.
9642
Establish or update a repository for all incident‐related information and index and catalog this
information with assigned incident numbers for easy retrieval.
9643
9644
9137
9819
Test incident storage repository to make sure it is functioning properly and can only be accessed by
authorized personnel.
Verify incident or case files are complete and managed properly by the assigned incident manager.
Analyze individual threat activity by correlating with other sources to identify trends.
Analyze the security incident and identify defining attributes.
9709
Protect classified or proprietary information related to the event, but release general incident
information to stakeholders.
9825
Report security incident classification (category selected) to management and record in incident
management system.
9770
9364
9579
9180
Communicate incident response plan and team member roles to stakeholders and team members to
ensure that they understand commitment and responsibilities when team is stood up.
Communicate with other analysts to work as a team on larger incidents.
Coordinate notification strategies with other units, such as Compliance.
Coordinate reactive and proactive responses.
9613
9772
9768
9697
9779
9109
9876
9771
9780
9777
9122
Coordinate with compliance to make all regulator-required security incident reports in compliance with
the standards.
Develop an incident response program / plan.
Develop a detailed incident response action plan and team roles.
Document call trees and reporting and coordinating procedures to all parties.
Document stakeholders that must be contacted for each affected system in an incident.
Identify known information to include event details and an accurate sequence of events.
Maintain a single sequence of events with change control throughout the incident investigation.
Identify people resources by skills, expertise, and roles to support analytical efforts.
Maintain knowledge of professional resources within the organization.
Maintain professional credentials and networking relationships with professional organizations.
Prioritize alerts into predefined categories.
9778
9769
9775
9774
9776
9706
Recognize dissenting opinions among analysts.
Establish a team of internal intrusion detection experts for second‐tier incident response.
Test the incident response program / plan.
Train staff on the incident response program / plan.
Update the incident response program/plan based on testing results.
Identify the source of infections or successful attacks.
9701
Monitor all systems that were suspected or confirmed as being compromised during an
intrusion/incident.
9704
9703
9707
Report incident response status to management, including confidence levels for eradication actions.
Review running processes to determine if incident response successfully removed malware.
Train users in phishing identification and malware distribution methods.
9686
9684
9688
Analyze incident response team actions and performance of team members against the incident
response plan.
Develop a response plan for the incident and assign actions and deadlines.
Identify impacts occurring from response actions and consider timeliness of response efforts.
9685
9687
9830
Monitor incident response performance and actions and compare them to the incident response plan.
Understand necessary deviations or unanticipated actions from the incident response plan.
Analyze recurring activity that is not flagged as a security event and troubleshoot likely cause.
9832
Coordinate watch rotation turnover so that no active event analysis is dropped between team changes.
9829
9361
9259
Review event logs and alerts to ensure as much as possible that they have been processed and
categorized.
Review log files for signs of intrusions and security events.
Assess whether network scan results are real or false positives.
9206
9849
Communicate with external agencies such as law enforcement, ICS‐CERT, and DOE regarding incident
reports.
Report the time of discovery for all reportable events and incidents and the time of notification.
9621
Develop escalation process and procedures for network activity that has not been shown to be
authorized.
9430
9696
Verify all devices are being submitted to Security Information and Event Management for full network
visibility.
Collect necessary information for inclusion in the communications plan.
9694
9700
Communicate with business management to identify additional parties that should be included in
communication and response plans.
Review the communication plan and make changes as appropriate.
9695
9699
Understand changes to organizations and the business to identify stakeholders to be included in the
communications plan.
Verify communication plan and contact information with all parties at an appropriate frequency.
9169
Test the SIEM (Security Information and Event Management) implementation with alert triggers based
on how the monitor has been configured.
9591
Test that the incident response system and planning remain effective against the latest attacker
methodologies and tools.
9412
9676
Test Incident Response specialists to verify they maintain a current understanding of threats and
how to analyze them.
Test remediated systems and the effectiveness of containment measures.
9592
Test to verify that there is a correct flow of intrusion events to incident cases and that there is a
coordinated response between Incident Response Specialist, Intrusion Analyst, and System Operations
Specialist stakeholders.
9873
Analyze actions indicating malicious events may be spreading or providing opportunities for an attacker
to move.
9874
Analyze actions indicating malicious events that provide opportunities for an attacker to close down
command and control channels.
9875
Analyze actions indicating malicious events to determine strategy for blocking outside IPs to contain an
incident.
9666
9677
Analyze logs and system information to determine which systems have been affected by an attacker and
what actions were taken by the attacker.
Analyze the incident's technical and business impacts.
9670
9668
Assign response team members to collect data for analysis from systems within the containment
boundary.
Communicate the boundary around affected systems being contained.
9187
Coordinate with the Help Desk to identify user complaints that may be related to the investigated event.
9680
Coordinate with outside parties to determine if containment efforts are successful (for example, call FBI
and confirm the Command and Control channel has been closed).
9673
Coordinate containment with system owners and determine impact of proposed actions after identifying
affected system.
9675
Assess if the incident needs to be re‐rated and re‐evaluate the response plan based on containment
efforts.
9667
9674
9683
9877
Define the boundary around suspect systems to minimize the spread and impact of an identified security
incident.
Document all actions taken to contain systems.
Document all external communications.
Minimize spread of the incident by ensuring contaminated systems are monitored.
9878
9671
Minimize spread of the incident by ensuring contaminated systems cannot communicate to systems
outside of the network boundary.
Establish boundaries or shut down infected systems.
9679
9682
9681
9678
Identify appropriate parties to participate in the incident response including legal, communications, and
others.
Maintain asset management information during containment process.
Monitor performance of incident response staff.
Report business and technical impacts of the incident and response activities.
9672
9856
Report to security management and system owners when systems have been successfully contained.
Conduct security drills that incorporate the latest threats and vulnerabilities in the scenarios.
9128
9401
Alert operators to events occurring so that they may increase system logging or retain logs where
normally such logs may be simply lost due to system storage constraints.
Analyze test results to ensure systems are functioning nominally.
9397
Develop a schedule for testing elements of the incident response plan and organizations involved in the
process.
9407
9214
Develop incident report template to be used when reporting the final status of an incident response.
Develop incident response scenarios.
9622
9398
9409
Develop schedule, test plans, evaluation criteria, and sign‐off for evaluating test success and/or failure.
Document all incident response exercises and tests.
Document gaps and outcomes to multiple parties to improve process and procedures.
9400
9126
9405
9139
9343
9408
9403
Document shortcomings and lessons learned from Incident Response exercises and formulate action
plans to ensure they're corrected as rapidly as possible.
Escalate analysis findings in accordance with defined plan.
Maintain a set of packaged scenarios with injects and data to exercise the response process.
Maintain documented procedures for analyzing logs and handling log archive.
Maintain technical competence using industry tools for attacks (e.g., BackTrack).
Report to internal and external incident stakeholders involved during and after incident response.
Report status to management at defined stages of response per procedure.
9116
9191
Understand incident response process and initiate incident handling according to documented policies
and procedures.
Understand incident response, notification, and log handling requirements of business.
9239
9106
Develop attack scenarios that might be used to intrude upon systems and networks and use tabletop
exercises to gauge how personnel might respond in these situations.
Analyze logs by correlating all suspect systems.
9354
9134
9351
Analyze compromised system’s configuration by determining if the Intrusion Detection System alert is
real.
Report what was analyzed and the list of flagged events, key findings, issues, and actions taken.
Review logs, network captures, and traces.
9240
9565
Update security tools (Security Event and Information Management, Intrusion Detection/Prevention
System, Firewalls) with information pertinent to network tools or attacks.
Configure alerts to monitor for old signatures and failed updates.
9248
Collect data from proxies and email systems to profile events involving malicious links or attachments
and try to correlate to business process and assets.
9204
Decide on a subjective and/or objective measure to determine the likelihood that an event is an incident
(i.e., a confidence factor).
9284
Develop correlation methods to associate identified vulnerabilities with events identified by security
monitoring solutions (Intrusion Detection System, Security Event and Information Management, etc.).
9135
9121
9124
9184
Develop procedures for addressing anomalous events in the logs that cannot be immediately identified
as known threats, etc.
Prioritize suspect log entries and preserve on master sequence of events list.
Identify systems not logging or components that are blind spots.
Collect a sequence of events and continue to add information based on the investigation process.
9607
9658
9659
9657
9655
9660
Verify that alert thresholds and incident response procedures result in capturing enough data to support
incident analysis and response efforts.
Assign the incident to a category or type if possible.
Assess an incident rating calculated on the potential severity and impact of the incident.
Assess if an event meets the criteria to be investigated and opened as an incident.
Assess if the event is applicable to your organization.
Document closure of all incidents.
9656
9662
9665
9663
9654
9664
Document that no action will be taken for events that have been logged but do not meet incident
response criteria.
Document the activity being evaluated as an event.
Report all events being investigated to security management.
Review incident criteria.
Verify that the event has occurred.
Verify that the event meets the criteria for further investigation.
9356
9189
9190
Assess whether all necessary expertise is available to address the problem (in one physical or virtual
room).
Develop an incident tracking mechanism to classify and track all security incidents.
Open an event ticket to track the potential incident.
9113
Open event tickets and notify interested parties when a probable event occurs and track the event as it
unfolds.
9136
9588
Identify and properly respond to situations in which log management applications may be attacked or
compromised.
Test the incident response procedure/plan to ensure correct workflow and functionality.
9192
Understand the basic components of an incident response process (Prepare, Identify, Contain, Eradicate,
Recover, Lessons Learned).
9203
9589
9808
9802
9803
Establish clear metrics that distinguish types of incidents. Users can then correctly categorize incidents.
Document updates to incident response procedure/plan.
Communicate warning signs of security events to internal stakeholders.
Define security events and incidents with evaluation criteria.
Develop procedures to escalate an event to an incident.
9785
9806
Maintain a current list of stakeholders' contact information and link this information to notification
requirements.
Test security staff with drills to determine if events and incidents are being properly characterized.
9708
9826
Develop and publicize ways to distinguish between routine system errors and malicious activities.
Document logic behind why an event was determined to be false.
9831
9117
Escalate findings to appropriate personnel to review event and ensure accuracy of false‐positive findings.
Identify and filter out false positives; if determined to be an incident, assign to incident handler.
9719
9327
Monitor all logs associated with third parties accessing your systems; this may require a manual review
against historic use profiles.
Implement penetration testing and vulnerability assessments to improve incident identification.
9318
9814
9786
9783
9200
Understand environment (culture, staff) to create a better relationship for transmitting delicate and
sometimes poorly understood information.
Escalate vendor breach of contract to management and legal team.
Develop role‐based access control matrix.
Maintain knowledge of reporting requirements associated with systems.
Identify repeat incidents involving the same person or persons, systems, or adversaries.
9604
9605
Maintain incident data repository and analyze data and metrics regarding types of incidents, frequency,
and systems impacted.
Review incidents over time to determine lessons learned or how to better align security tools.
9857
Develop a standardized process to ensure appropriate steps are taken during and after an event occurs.
9711
9712
9710
9791
Monitor systems that were affected and the entire sub‐network for activity associated with the attack.
Report closing of the incident and all incident response processes that were followed.
Review incident response actions to ensure actions were taken properly.
Monitor for unauthorized access to tools and data.
9610
Report the attack Tactics, Techniques, and Procedures used in the last 6 months against the
organization.
9181
Develop working theories of the attack and look for correlated evidence to support or reject the working
theories.
9202
Document the incident response activities to determine positive and negative results from actions and
security controls. These should be the starting point for Lessons Learned discussions and follow‐on
preparation activities.
9129
9304
Review known intrusion Tactics, Techniques, and Procedures and observables to assist in profiling log
events and capture event information that may relate to known signatures.
Understand how phishing attacks can adversely impact web‐based management applications.
9119
Verify log analysis findings through alternate means such as local log storage or affected system
state/configuration.
9634
Define how systems were initially compromised and how the attack progressed and what observables
were available for detection and response.
9633
9632
9635
Develop mitigations based on incidents analyzed and recommend improvements in security capabilities
or tools as appropriate.
Identify security incidents that require training or awareness for users and security staff.
Implement lessons learned from the analysis of material incidents.
9636
Test the security staff and deployed solutions against scenarios developed from incidents with significant
lessons learned.
9114
Maintain chain of custody and integrity of log files if they are to be used by law enforcement at a later
date.
9197
9232
9112
9797
9796
Develop a chain‐of‐custody process and consider forensic images if needed as the investigation
progresses.
Identify third‐party vendors who specialize in remediation of security penetrations and forensics.
Maintain access control permissions to log files.
Collect proper approvals before individuals are granted access to tools and data.
Define authorized staff for specific security tools and data sources.
9800
9789
9790
9299
9822
9526
Develop roles and responsibilities that can be implemented through Role-Based Access Controls and
authorization group memberships.
Establish process to provide authorization for tool use and credentials to access tools.
Maintain centralized Role-Based Access Control lists for all security tools.
Access a current smart grid inventory and asset list.
Collect change management information to automatically update baseline.
Collect existing device configurations.
9110
9702
Develop base scenario and publish results to show what the log files would/should look like without
attack or compromise.
Test all security controls or changes that were implemented during a response.
9827
Verify that security monitoring systems and management systems are working and providing expected
coverage.
9178
9151
Analyze security device and application configurations for technical impacts (e.g., network congestion).
Configure system in compliance with the baseline configuration manual.
9152
Coordinate with network operations and system administrators to plan for the implementation and
scheduling of required outages or notifications during the deployment.
9159
Coordinate with other departments to properly prepare for additional resources required by the security
monitoring solution (i.e., network, database, access management, etc.).
9550
Coordinate with project managers to understand current and future projects that will install systems.
9166
9620
9434
Coordinate with system administrators to reboot hosts or restart necessary processes after the software
or device has been installed to ensure the monitoring solution is online and functioning.
Develop an approval workflow for accountability, traceability, and reporting.
Develop configuration manuals on all custom solutions.
9551
9175
9332
9543
9552
9545
Document certification and accreditation (baseline configuration, vulnerability assessment, authorization
to operate).
Document deployment information in company asset management systems.
Identify deployment risks including technological, geographic, and privacy related.
Review checklist for implementing a device or system for necessary approvals.
Review deployment plans and as planned configurations.
Schedule implementation with affected business owners and IT support staff.
9546
9176
9541
Test implementation with planned configurations to determine any deployment issues.
Test the installation against the functional and performance requirements.
Verify health status of host security tools.
9441
Verify that operating systems, services, and applications are hardened in compliance with regulatory
guidance.
9549
Verify that operator and implementer procedures require acknowledgment of authorization prior to
implementing.
9630
9296
9612
9844
9645
Update all asset management systems with deployed mitigations, configuration changes, or patches and
versions.
Assess if solutions that cannot handle abnormal network traffic should be retired.
Review closed tickets for false positives for unacceptable results.
Review network topologies, composition, and activity to determine security tool needs.
Test security operations staff in the planning and execution of security operations and tools.
9845
Test tools against existing operational environments to determine ability to handle stress and loads, and
operate as advertised.
9795
9341
9173
9278
Test that security tool systems and data cannot be accessed by unauthorized internal or external
entities.
Maintain a security configuration/coverage map of tools used across the enterprise.
Analyze monitoring solution to determine if newer technology better accomplishes the mission.
Analyze which systems are being scanned and which systems are being missed.
9433
9352
9255
Assign significance to custom Security Event and Information Management rules for unknown event
types.
Configure alert rules for Security Event and Information Management solution to automate alerts.
Configure asset IP addresses and pertinent metadata.
9105
Configure rules for Security Event and Information Management tools to capture and flag events known
to be intrusion indicators.
9131
9156
9432
9345
Configure Security Event and Information Management rules and alerts for unsupported devices such as
those used in the smart grid and Advanced Metering Infrastructure.
Configure system technical policies that set thresholds and parameters for monitoring.
Develop custom Security Event and Information Management parsers for unknown event types.
Establish a test lab where staff can practice with and learn the tools.
9593
Maintain an asset inventory of both hardware and software. Link this inventory to other security tools.
9431
Review healthy log collection metrics to understand baseline from which to measure normal
performance.
9429
9348
9150
9293
9111
9103
Review Service Level Agreements/Operating Level Agreements to understand expected thresholds.
Understand how to run Wireshark and tcpdump.
Understand the selected Security Event and Information Management tool.
Understand the effort required to plug the solution into custom or specific software and hardware.
Verify that all systems are logging to a central location.
Analyze available logs and note gaps and time periods.
9420
Analyze system logs for Network Time Protocol synchronization anomaly messages.
9104
Configure your security log management tool to sort and filter data in a manner that is best suited for
the event being analyzed.
9428
9531
9108
9527
9421
Implement a Datum Secure/Network Time Protocol capability for environments where a secure
connection to a root time stamp authority is required.
Maintain change management records for systems that are operational.
Maintain log file storage and archive older events.
Update database of device configurations upon changes to configurations.
Verify the Network Time Protocol server is using Coordinated Universal Time (UTC) format to avoid time zone issues.
9359
9566
9568
Decide where to install security monitoring solutions such that the overall expense is minimized and the
coverage is maximized.
Develop procedure to perform manual updates.
Develop procedure to respond to failed alerts.
9569
Document procedures for configuring monitoring solutions to correctly obtain vendor software and
signature updates.
9562
Monitor the monitoring solution to ensure vendor software and signature updates are being
downloaded correctly.
9567
Monitor vendor notifications for updates to software and signatures and compare against deployed
versions.
9563
9325
9558
9559
9557
9560
9571
Review daily, weekly, and monthly reports for systems that are not updating or out of baseline with the
rest of the system population.
Review system security architecture and governance for new system extensions.
Review updates and versions and confirm with the vendor.
Schedule update timelines for existing and new solutions.
Subscribe to vendor publications relevant to the product line at hand.
Test functionality after update to ensure system is operating.
Test to ensure that automatic updates occur securely.
9570
9564
9561
Train staff on the procedures for configuring monitoring solutions to correctly obtain vendor software
and signature updates.
Manually update monitoring solution with vendor software and signature updates.
Verify configuration against procedures.
9618
9574
9142
Convert (and parse) unknown asset log formats to compatible log format for given monitoring solution.
Define which devices require logging and what level of detail logs need to be configured for.
Develop a centralized logging system.
9619
Develop a periodic verification process to ensure that the assets are logging in alignment with the
intended operational architecture.
9363
9573
9422
9342
Develop and/or procure a data logging and storage architecture that scales and is fast enough to be
useful for analysis.
Develop a procedure to categorize systems for monitoring.
Identify holes in Network Time Protocol structure system‐wide.
Identify sources of targets to scan.
9572
Implement solution to identify new devices connecting to the network(s).
9145
Maintain a list of components that can direct logs to a central logging system, and components that
cannot. Configure a method of collecting forensic data from systems that cannot.
9263
9418
Test all vulnerability scanners for modes or configurations that would be disruptive to the
communication paths, networks, and host communication processing being tested, looking for
possible conflicts that may result in negative operational impacts.
Test server to make sure Network Time Protocol service is operating.
9581
Coordinate with administrators from other departments (i.e., networking, operating systems, servers) to
identify strengths and weaknesses in the organization's logging implementations.
9157
9580
Develop reporting logic and work with security operations staff to configure how often, what
information, and what priorities are sent from monitoring tool alerts.
Develop standard communication procedure to use when writing rules.
9587
9582
9583
Establish baselines for setting incident alert levels in Security Event and Information Management
systems and periodically review and adjust the levels to ensure optimal monitoring.
Test system for performing according to desired functionality and configured policies.
Verify that configuration alert types and alerts are working.
9585
9280
Coordinate periodic testing of alerting mechanisms to ensure the methodology is functioning as
expected.
Develop baseline scanning as a part of Configuration Management policies and procedures.
9161
9288
Develop custom internal network monitoring tools (non‐vendor solution) to detect anomalies that
vendor tools would not be able to identify.
Develop custom scan rules to provide deeper scans or avoid problematic checks.
9163
Develop management interface view to maintain situational awareness of the monitoring tools' or
agents' health and operating conditions.
9258
9149
9289
9143
Identify metrics against which tools will be measured to ensure they are still meeting requirements
and goals.
Implement intrusion prevention/detection solution.
Implement secondary scanner should the initial scanner experience usage issues.
Implement web content filtering.
9606
Review past incidents to determine if host security solutions and logs are providing data that can identify
an event.
9270
9295
Develop a scanning plan and make sure all network operations staff and key stakeholders are consulted
and notified about the timing of test initiation.
Communicate timing and schedule of scans.
9807
Develop Security Event and Information Management rule sets to detect documented event classes for
each monitored system.
9538
9828
Communicate changes to user security tools and information regarding identified events and incidents.
Change existing system logic to prevent the same false positive from occurring.
9611
9725
Review tool configurations and target configurations to reduce false positives based on historic
information.
Access company policies to verify that the software being downloaded is allowed.
9734
Establish a sandbox in which experimental software may be installed and analyzed for malevolent
behavior.
9736
9729
9723
9722
9715
9720
9268
9273
9597
9254
Implement technology that will create inventories/database of the software installed for offline analysis.
Scan systems in an attempt to detect the use of unacceptable software.
Search existing list of acceptable software prior to installing.
Understand company policies and procedures for downloading and installing third‐party software.
Search asset management system to collect a list of all system vendors for prioritized technology.
Decide what mitigations should be implemented on remote connections.
Coordinate assessment of any target systems with System Owners ahead of time.
Develop a deconfliction profile for company planned and executed scans with log analysis.
Maintain or be able to access a list of assigned system owners.
Configure vulnerability scanners to operate safely and effectively in the targeted environment.
9858
Review best practices and standards documentation to determine appropriate configuration settings.
9860
Test the vulnerability assessment solution in a development environment to see if desired results are
achieved.
9859
9748
Understand desired outcome as well as purpose of assessment so that the solution can be configured
appropriately.
Configure security tools to automatically apply patches and apply updates.
9754
9746
Configure signatures for host- and network-based IPS to ensure optimal configuration and reduce
likelihood of business disruption.
Create policy/procedures for how to patch tools.
9744
Define criticality levels for all tool types and identify which security tools are among the most critical
to patch and update properly.
9750
9755
9751
9739
Define reports on the current patch and update status of all security tools and identify any variances
against vendor releases.
Document current patch levels and updates before use in critical situations.
Establish a systems and tools patching program and schedule.
Identify current patch level of security tools.
9740
Identify primary support resources for each of the production tools to ensure team members understand
their responsibilities.
9757
9749
9649
9738
9745
9213
9752
9756
9781
Implement replica production (i.e., LAB) environment for testing of patches prior to production release.
Maintain a list of approved security tools and their approved patch levels.
Monitor security tool providers for updates and patches for tools that are in use.
Monitor security tool vendors for updates and patches.
Monitor vendor feeds for published patches.
Review latest penetration test tools.
Review signatures (for the tools that use them) to determine applicability once implemented.
Schedule periodic reviews to determine when patches and updates are required.
Sign up for vendor notifications and alerts.
9782
9742
Test toolset upgrades against old version to ensure new patches don’t adversely affect results or impair
performance.
Understand the process by which security tools are updated before use.
9747
9690
9689
9426
9861
9691
Verify versions of security tools against the vendor's latest release version or review the exception for not
updating the software.
Assess what configuration settings result in capturing the required information for monitoring.
Identify logging and monitoring capability of deployed devices.
Implement a reference time source to remove external dependencies for Network Time Protocol.
Implement monitoring system that meets design criteria.
Implement necessary communications and repository to receive data.
9834
9523
9693
9692
Implement procedural and technical controls to ensure logs are maintained for expected period of time
per policy.
Prioritize critical systems for monitoring.
Test the data repository to ensure it remains online and is available to receive data.
Verify that the system is reporting the expected information based on the configurations.
9599
9260
9598
9609
9279
Coordinate with system owners to modify schedule based on work or operational changes that affect
security scanning.
Define scope of systems and system exclusions for vulnerability testing.
Review scanning schedule results for anomalies.
Coordinate with smart grid suppliers to confirm settings and scans for their equipment.
Coordinate with vendors running scanners on their equipment to develop your scanning program.
9144
Understand the resources and processes used by the security monitoring tool, and identify constraints,
impacts to host or network systems, and required configurations to develop an implementation plan.
9601
Verify with the vendor the system processes or states that are authorized for smart grid components.
9765
9763
9760
Configure the security monitoring solution so that it provides a list of hosts that are being monitored and
cross‐reference that with the asset inventory in place.
Coordinate an assessment of the current monitoring solution's coverage with a third party.
Coordinate an assessment to test the effectiveness and coverage of security monitoring tools.
9140
9160
9766
Coordinate with administrators from other departments (i.e., networking, operating systems, servers) to
identify strengths and weaknesses in the organization's logging implementations.
Identify metrics that will be used to show performance of monitoring solution.
Implement a process and technology to re‐test effectiveness after each system update.
9837
9833
9835
Configure log management systems and other log repositories to maintain logs for documented period
of time per policy.
Document in policy the appropriate length of time to store documents.
Review security operating procedures and policy for data storage requirements.
9839
9228
9838
Schedule log management system and other log repositories to purge data that is older than the
documented retention period.
Test new and potentially malicious tools appropriately in a sandbox.
Test storage periods by calling up events and incidents logged by the security operations team.
9836
9146
9798
9302
9439
9256
Verify event/incident categorization to make sure associated data is being stored for the appropriate
period.
Implement application (layer 7) firewalls.
Implement Data Leakage Prevention system for security tool systems and data.
Implement penetration tests on deployed components.
Implement the multiple (layered) solution control options for mitigation.
Implement vulnerability scan.
9437
9792
9794
Document any changes made to the operating system or other components to trace possible causes of a
system malfunction.
Document system configuration and access control.
Implement controls to prevent unauthorized access to tools and data.
9321
9315
9172
Develop an asset inventory of both hardware and software. Link this inventory to other security tools.
Develop technical libraries for all protocols in use and note security issues.
Establish Operational Level Agreements and/or Service Level Agreements where appropriate.
9148
Identify business, contractual, Service Level Agreement, and legal requirements that can be met by
monitoring solution.
9344
9320
9292
Understand how specific tools (e.g., nmap, nessus, metasploit) accomplish their results (i.e., what
methods and protocols are used).
Understand the ANSI C12 Standards (i.e., C12.18, C12.19, C12.21, C12.22).
Update network deployments to segregate systems that cannot handle vulnerability scans.
9335
9155
Identify the inter‐dependencies between the data network and the power system, including fault
isolation and protection.
Identify stakeholders who would be users of monitoring solution and their unique requirements.
9648
9650
Document procedures for the successful and proper use of all security tools with special attention to
constraints.
Review security operations procedures for tool use and current versions.
9847
Maintain a list of all reporting requirements to include what is reported, how it is to be
reported, and when it is to be reported (e.g., within 24 hours).
9850
9339
Verify that all reported events and incidents were handled in compliance with the reporting
requirements.
Communicate risks to internal stakeholders.
9313
9338
9525
9530
9805
9731
9726
Document risk and impact analysis, including business impact, of smart grid components for
management.
Understand NERC CIP and audit requirements.
Implement policy enforcement tool.
Report exceptions to company configuration management policy and standards.
Review a sampling of events to determine if they were properly characterized.
Monitor software installed on end‐points for compliance with the company policy.
Monitor software used in the infrastructure and correlate it to a list of acceptable software.
9716
Verify that contracts require vendors to provide proper notice of a security breach or incident that may
affect the security of your organization's systems.
9493
9310
Report risk in accordance with defined risk categorization model.
Inventory the component supply chain pipeline process and document it for suppliers.
9813
9812
9767
9762
Review contracts to ensure vendors will notify you if they are breached, their system or solutions are
compromised, and/or they have a significant security issue that could directly affect you.
Establish metrics for vendors to assess compliance with notification requirements in the contract.
Communicate results of independent security review to system stakeholders.
Report findings of the independent review to management.
9764
Schedule an independent review and verification after the security monitoring solution has been
implemented.
9209
9793
9788
9799
9787
Communicate with external stakeholders (Law Enforcement Organizations, Public Relations, Legal, IT,
Marketing) when necessary to understand regulatory requirements and breach notifications.
Coordinate with internal audits to audit security tool use.
Review access rights to tools and data on a defined frequency to ensure access is appropriate.
Verify access control privileges are working as designed.
Verify tool access and logs for authorized use.
9842
9714
9578
9576
9577
9575
9728
Test security staff on access procedures, company policies, and technical standards for accessing
systems.
Verify that staff have read and understand how to access policies and standards as a refresher.
Develop policy to determine which critical systems are to be monitored and at what level.
Develop policy to ensure critical systems are monitored.
Understand data classification levels and how to identify such levels with assets.
Understand the data classification strategies that are in place.
Communicate company policy for downloading and installing third‐party software.
9721
Develop a policy that requires system administrators to follow company procedures for downloading and
installing third‐party software.
9732
Establish a basis or requirement for third‐party software before use (e.g., what business purpose does it
satisfy, why is it needed, etc.).
9848
9713
Develop a process by which staff must acknowledge they have read and understand all applicable
policies and procedures.
Review policies and standards that apply to work area.
9522
Analyze cost of monitoring solution vs. features of each solution to ensure maximum Return on
Investment.
9226
9141
9761
9758
9647
9646
9817
9410
9220
Establish a budget to handle the scope of an incident that might have the worst possible impact on your
infrastructure and ensure that it is available in case an incident occurs.
Analyze market options for Security Event and Information Management tools.
Develop relationships with vendor partners who specialize in this testing.
Define scope of an independent review and budget necessary resources.
Collect information about the security tools employed by the organization.
Review security operations staff performance in the execution of their duties.
Scan systems to establish baseline.
Identify training materials and information sources regarding cyber attacks and techniques.
Identify training opportunities that teach methodologies associated with current attack tools.
9810
Analyze attacker Tactics, Techniques, and Procedures and deconstruct them in order to evaluate the
effectiveness of protective measures.
9809
Collect observed attacker Tactics, Techniques, and Procedures from available sources to include
Information Sharing and Awareness Councils, peer utilities, and government sources.
9306
Collect the most recent (or predicted future) threats into a comprehensive list to disseminate to all
employees.
9305
9820
Collect vendor knowledge bases and DOE/DHS-generated testing reports of known vulnerabilities to
specific smart grid components. Supplement that information with open source reporting and internal
red teaming or tabletop assessments.
Develop a heat map to illustrate current high‐level security posture for executive consumption.
9811
9547
9544
9402
9425
9444
9555
9624
Identify observables that flow from particular attacker Tactics, Techniques, and Procedures to optimize
your security monitoring capabilities.
Identify external scanning needs that an internal scanner may not be able to adequately assess.
Monitor for new systems installed on the network.
Report summary of test results to management.
Scan for configuration anomalies.
Scan for gaps in system configuration against a benchmark configuration manual.
Scan internal and external networks for new and unauthorized systems.
Assign a technical point of contact for vulnerability remediation and assistance.
9625
9626
Assess the risk ratings of the vulnerability based on the technical information, how the technology is
deployed, and the importance of the systems.
Consult with vendor or integrators and internal system owners to develop appropriate mitigations.
9623
Document all vulnerability information alerts or disclosures that apply to deployed technology and note
the time and responsible party to develop the risk picture and initiate workflow.
9627
Implement vulnerability mitigations in accordance with the plan to include patches or additional security
controls.
9628
Scan all affected systems to ensure the patch or mitigations are present and the risk associated with the
vulnerability has been reduced as expected.
9629
9326
9603
Test all identified mitigations or patches to make sure they remove or mitigate the vulnerability as
expected with no negative impacts.
Analyze vulnerabilities for business impact.
Develop a method to characterize vulnerabilities that includes risk scores.
9294
Develop a process for scoring the risk associated with identified vulnerabilities to support prioritization
of mitigation recommendations.
9229
9314
Develop a process to create and prioritize job tickets for analysis and distribution of information to
specific recipients.
Alert end users of potential risks and vulnerabilities that they may be able to mitigate.
9399
Coordinate with other departments to ensure that routine business operations are not affected during
testing.
9404
9406
Develop a RACI (Responsible, Accountable, Consulted, Informed) matrix to ensure all roles clearly
understand their responsibilities in the testing process.
Identify all systems that may be affected by testing.
9331
9244
Identify threat actors.
Report vulnerabilities to staff and stakeholders.
9298
9596
Coordinate efforts with the vendor to develop an understanding of the component and security
implications.
Coordinate with external governments on threat intelligence.
9853
9614
Communicate new threats or newly discovered vulnerabilities to the entire security operations staff.
Develop threat awareness content that can be included in security awareness and outreach efforts.
9319
Monitor industry groups and forums to stay up to date on the latest security vulnerabilities related to
smart grid components.
9815
9852
9854
9416
Monitor intelligence sources for information that indicates that a vendor you are working with may have
been compromised.
Test security staff to assess understanding of current threats and vulnerabilities.
Train security operations staff when significant changes in threat or vulnerability have occurred.
Alert external government entities with new intelligence.
9252
9333
Develop a threat analysis testing environment and sandbox where Tactics, Techniques, and Procedures
can be analyzed and considered.
Develop attack trees of attack vectors against vulnerable systems.
9225
9413
9615
9267
9205
Develop possible attack techniques against specific technologies and implementations in your smart grid
deployments.
Identify sources of intelligence to use for threat analysis.
Review threat tables and conduct analysis of existing incident response data.
Develop a prioritized list of critical resources.
Analyze events against industry sharing initiatives to identify anomalies/possible events.
9489
9265
9491
9262
9595
9307
9556
Analyze vendor Knowledge Bases and DOE- and DHS-generated testing reports of known vulnerabilities to
specific smart grid components.
Analyze vulnerability reports.
Monitor vulnerability reports.
Review vulnerability scan results.
Maintain a prioritized list of critical resources.
Collect issues to identify trends with particular vendors or manufacturers.
Communicate with the vendor to ensure you are registered to receive updates.
9201
9276
Prioritize systems within your network to determine which are of High, Moderate, or Low impact value.
Review assessment results in accordance with defined risk categorization model.
9718
Communicate with vendors about a vulnerability or incident in order to understand risk and devise a
mitigation strategy.
9717
9230
Monitor security news and intelligence sources to include vendor webpages for vulnerability disclosures,
incident announcements, and knowledge briefs.
Communicate with research firms to keep abreast of new changes and methodologies.
9301
Identify methods to detect vulnerabilities in smart grid components with help from industry groups and
thought leaders.
9215
Identify sources for information regarding attacks, exploit capability and tools, and newly discovered
vulnerabilities.
9346
9211
9219
9222
9316
Review ICS-CERT, NERC, and other source reports of attacks and develop an understanding of how the
threats actually work against specific vulnerabilities.
Subscribe to appropriate industry security mailing lists.
Subscribe to intelligence services and open source information subscriptions to be aware of events.
Subscribe to various information‐sharing portals relevant to the content.
Subscribe to vulnerability feeds and maintain information‐sharing subscriptions.
9608
9492
Verify that assessment tool outputs contain all necessary data elements for vulnerability analysis and risk
determination.
Prioritize vulnerability scan results.
9600
Analyze vulnerabilities to determine risk based on how you have deployed the technology and the
likelihood for exploitation.
9243
9816
9759
Develop contract language that requires your technology vendors and service providers to provide
information about vulnerabilities and threats to the technology you purchase.
Map newly discovered vulnerabilities to equipment and vendors to track compliance.
Hire independent third‐party auditor to assess/audit toolset coverage and effectiveness.
9602
9253
Maintain a table of attack techniques that align with your deployed technology and business processes.
Implement a honeypot and research the attacks it collects.
9631
9616
9415
Analyze all intrusions to determine lessons learned and identify required changes to security procedures,
technology, or training.
Develop an attack technique table.
Coordinate presentations on latest threats to management and senior management.
9411
Develop schedule to have all Incident Response specialists complete training to refresh and keep
knowledge current.
9414
9235
9286
Review all internal incidents to stay current on threats and determine the best way to analyze them.
Train staff on information collection, analysis, and dissemination.
Train staff on requirements and procedures for using vulnerability scanning.
9217
Develop various security/attack monitoring courses and require all employees to attend training to ensure
widespread understanding of baseline requirements.
9590
9183
9536
9617
9652
Train non‐security team members (CXO, Legal, etc.) on how to follow incident response procedure/plans.
Understand the company's incident response process and procedures.
Analyze user behavior involving the stopping of security services or misuse of tools and services.
Train Incident Response Team on the usage of the attack technique table.
Train new security staff and provide refresher training at required intervals.
9653
9241
Verify all security staff have the necessary training and required certifications or qualifications to operate
tools.
Communicate with new staff or external stakeholders.
9727
9497
9245
9496
9498
Review and familiarize new staff with company policies and procedures for downloading and installing
third‐party software.
Develop training sessions about attack techniques.
Develop training sessions about attack tools.
Train other departments on attack tools.
Train other departments on attack techniques.
9350
Develop training materials for other team members about current attack tools, technologies, and
techniques to compromise systems and intrude upon systems and networks.
9237
9651
9840
Review past events and lessons learned within your organization and develop a plan based on those
insights.
Develop training for new operators and refresher training for previously trained staff.
Train security staff on accessing policies and standards and topics addressed.
APPENDIX E: JAQ DEMOGRAPHIC QUESTIONS
1 [R0-001] How many employees work at your facility?
Please choose only one of the following:
Less than 10
10-99
100-999
1,000-4,999
5,000-9,999
10,000 or more
2 [R0-002] What job title best describes you?
Please choose all that apply:
Control systems engineer
Control systems operator
Control systems manager
Training specialist
IT Executive
IT manager
IT professional
IT systems administrator
Network engineer
Intrusion analysis staff
Intrusion analysis manager
Incident handling staff
Incident handling manager
Cyber security analyst
Cyber security operations staff
Cyber security operations manager
Cyber security manager
Cyber security executive
Other:
3 [R0-003] How long have you held this position? (Years):
Please write your answer here:
4 [R0-004] How many people report directly to you?
Please choose all that apply:
No direct reports
1-5
6-30
More than 30
5 [R0-005] What job title best describes the position you had prior to your
current job?
Please choose all that apply:
Control systems engineer
Control systems operator
Control systems manager
Training specialist
IT executive
IT manager
IT professional
IT systems administrator
Network engineer
Intrusion analysis staff
Intrusion analysis manager
Incident handling staff
Incident handling manager
Cyber security analyst
Cyber security operations staff
Cyber security operations manager
Cyber security manager
Cyber security executive
Other:
6 [R0-006] How would you classify your level of expertise in the cyber
security field?
Please choose only one of the following:
Novice: minimal knowledge, no connection to practice
Beginner: working knowledge of key aspects of practice
Competent: good working and background knowledge of the area
Proficient: depth of understanding of discipline and area of practice
Expert: authoritative knowledge of discipline and deep tacit understanding across area of
practice
7 [R0-007] What level of familiarity do you have with smart grid operations?
Please choose only one of the following:
Novice: minimal knowledge, no connection to practice
Beginner: working knowledge of key aspects of practice
Competent: good working and background knowledge of the area
Proficient: depth of understanding of discipline and area of practice
Expert: authoritative knowledge of discipline and deep tacit understanding across area of
practice
8 [R0-008] What level of familiarity do you have with smart grid cyber
security?
Please choose only one of the following:
Novice: minimal knowledge, no connection to practice
Beginner: working knowledge of key aspects of practice
Competent: good working and background knowledge of the area
Proficient: depth of understanding of discipline and area of practice
Expert: authoritative knowledge of discipline and deep tacit understanding across area of
practice
9 [R0-009] What is your gender?
Please choose only one of the following:
Female
Male
10 [R0-010] What is your age?
Please choose only one of the following:
Under 20
21-30
31-40
41-50
51-60
Over 60
APPENDIX F: JAQ START PAGE
APPENDIX G: SAMPLE JAQ SURVEY STATEMENTS
May 30, 2012 Status Report
Smart Grid Cybersecurity Job Analysis Questionnaire
We continue to receive JAQ responses from Wave 2 and 3 panel members. There has been no change in
Wave 1 activity. The following is a detailed breakdown of response activity since our last report,
concentrating on Waves 2 and 3.
● 2 new demographic pages were completed from May 23 – 29. (Table 1)
● 129 new completed JAQ task statements were rated from May 23 – 29. (Table 2)

Summary of responses through May 29th:
● 301 demographic pages were accessed, with 131 completed (a 44% response rate among those clicking on the demographic page). (Table 3)
● 8,299 completed JAQ task evaluations have been rated, with 4,902 (59%) classified under the Security Operations job role. (Table 4)
● Tables 3 & 5: 131 completed demographic pages have been submitted for Waves 2 & 3. Wave 3 utility organizations account for 79 pages, or 60% of the completed pages.
● Tables 4 & 6: 8,299 completed JAQ task evaluations have been rated for Waves 2 & 3. Wave 3 utility organizations account for 5,289 completed task evaluations, or 64% of the total.
● 123 total JAQ survey pages have been submitted for Wave 3 by utility organizations. By job role: 40 Incident Response, 11 Intrusion Analysis, and 72 Security Operations. (Table 6)
● 8,299 task evaluations, or 16 complete surveys, have been rated as of May 29, 2012, an increase of 129 task evaluations since May 23, 2012. Our goal is to rate 51,600 task evaluations, or 100 complete surveys. (516 task evaluations = 1 complete survey)
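For clarity, the survey-equivalence arithmetic used throughout these status reports can be restated directly. The short Python sketch below is illustrative only; the constants are the figures quoted above (516 task evaluations per complete survey; 14 and 43 task statements per page for Wave 1 and Waves 2/3 respectively).

# Convert JAQ task-evaluation counts into "equivalent complete surveys,"
# using the conversion quoted above (516 task evaluations = 1 complete survey).
ITEMS_PER_FULL_JAQ = 516
ITEMS_PER_PAGE = {"wave1": 14, "waves2_3": 43}  # task statements per survey page

def equivalent_surveys(task_evaluations: int) -> float:
    """Express a raw task-evaluation count as equivalent complete surveys."""
    return task_evaluations / ITEMS_PER_FULL_JAQ

def item_responses(pages: int, wave: str) -> int:
    """Convert completed survey pages into JAQ item responses for a wave."""
    return pages * ITEMS_PER_PAGE[wave]

print(round(equivalent_surveys(8_299), 2))  # 16.08 -- the "16 complete surveys" above
print(equivalent_surveys(51_600))           # 100.0 -- the stated goal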
Table 1: Total Respondents & Task Ratings Submitted by Week (Count of Survey ID)

Week          Total   Task Ratings
1             42      2,795
2             42      2,408
3             10      645
4             9       731
5             2       129
6             4       215
7             14      903
8             4       301
9             2       43
10            2       129
Grand Total   131     8,299
Table 2: Total Survey Pages & Task Ratings Submitted by Week (Count of Survey ID)

Week          Total   Task Ratings
1             65      2,795
2             56      2,408
3             15      645
4             17      731
5             3       129
6             5       215
7             21      903
8             7       301
9             1       43
10            3       129
Grand Total   193     8,299
INTERPRETATIVE ANALYSIS AND ACTION PLANS

● The number of submissions increased slightly. Speaking directly to the panel members appears to result in survey submissions; however, this approach takes time.
● Phase 1 report nearing completion. Data is being analyzed; text is in the final stage of editing.
● Email sent to SGC panel members on May 29th to confirm interest in participating in the Phase 2 panel. Of the replies received so far, all are committing to Phase 2.
● Reviewing strategy for panel recruitment. Will begin recruiting as soon as possible.
● Emails were sent to the SGC channels to verify distribution numbers. All outstanding numbers have been confirmed.
● Spoke to the SGC panel chair to discuss strategy for completing the remaining questionnaires. The main reason given for not finishing continues to be finding time to set aside for working on the questionnaire.
● Sent email to the panel members requesting them to complete their questionnaires by May 30th.

Action Plans:
● Finish Phase 1 report.
● Confirm remaining Phase 1 panelists for Phase 2.
● Begin active recruiting.
● Continue reaching out to the SGC panelists for information and ideas.
Table 3: Demographic Pages Submitted, Waves 2 & 3 (Count of Survey ID)

Org Code             No    Yes   Grand Total
2: SCE               1     2     3
3: NIST              1     –     1
5: IEIA              4     1     5
6: NATF              8     29    37
7: ES-ISAC           33    19    52
8: UNITE             2     1     3
10: NV Energy        1     2     3
11: EnergySec        –     2     2
13: EEI              –     1     1
14: Encari           5     –     5
18: APPA             2     2     4
23: TVA              2     3     5
24: PG&E             10    31    41
30: ARRA             8     7     15
32: ERCOT            3     3     6
34: Pacificorp       1     –     1
52: Ameren           1     5     6
59: LinkedIn         8     10    18
999: NBISE Website   16    2     18
Unknown              64    11    75
Grand Total          170   131   301
Table 4
JAQS Submitted
by Job Roles
Complete
(All)
Count of Survey ID
Column Labels
Incident
Response
Org Code
2: SCE
5: NIST
6: IEIA
7: NATF
8: UNITE
10: NV Energy
11: EnergySec
13: EEI
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
59: LinkedIn
999: NBISE Website
Unknown
Grand Total
1
9
6
1
12
Intrusion
Analysis
Security
Operations
3
2
9
30
9
2
1
15
1
2
3
3
3
57
3
1
1
4
22
1
2
1
1
3
27
8
1
9
11
8
114
Grand
Total
3
1
41
24
1
13
2
1
3
4
45
9
3
9
15
4
15
193
Table 5: Wave 3 Utilities – Demographic Survey Pages (Count of Org Code)

Org Code             No    Yes   Grand Total
10: NV Energy        1     2     3
11: EnergySec        –     2     2
13: EEI              –     1     1
14: Encari           5     –     5
18: APPA             2     2     4
23: TVA              2     3     5
24: PG&E             10    31    41
30: ARRA             8     7     15
32: ERCOT            3     3     6
34: Pacificorp       1     –     1
52: Ameren           1     5     6
59: LinkedIn         8     10    18
999: NBISE Website   16    2     18
Unknown              64    11    75
Grand Total          121   79    200
Table 6
Wave 3: Utilities
JAQs Submitted by
Job Role
Complete
Count of Survey ID
Org Codes
10: NV Energy
11: EnergySec
13: EEI
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
59: LinkedIn
999: NBISE Website
(All)
Job Role
Incident
Response
12
Intrusion
Analysis
2
1
15
1
2
3
3
3
1
1
Security
Operations
1
2
1
1
3
27
8
1
9
11
Grand
Total
13
2
1
3
4
45
9
3
9
15
4
6
Unknown
Grand Total
3
40
4
11
8
72
15
123
May 23, 2012 Status Report
Smart Grid Cybersecurity Job Analysis Questionnaire
SUMMARY OF RESPONSES
Activity continues to drop off. Tables 1 and 2 summarize activity to date. Notice
the similar pattern of responses. We can expect responses to drop off within two
weeks after a push to get panel members to complete their surveys.
● 2 new demographic pages were submitted from May 16 – 22. (Table 1)
● 1 new survey page was completed from May 16 – 22. (Table 2)
● 296 demographic pages have been accessed, with 129 pages completed. (Table 3)
● 190 JAQ pages have been rated. (Table 4)
● 198 demographic pages were accessed by utility organizations; 78 pages were completed. (Table 5)
● 121 task statement pages have been submitted by Wave 3 utilities; no change since the May 16th report. (Table 6)
● 8,170 task statements, equivalent to 15.83 complete surveys, have been rated as of May 22, 2012. Our goal is to rate 51,600 task statements, or 100 complete surveys. (516 task statements = 1 complete survey)

Table 1: Total Respondents & Task Ratings Submitted by Week (Count of Survey ID)

Week          Total   Task Ratings
1             42      2,795
2             42      2,408
3             10      645
4             9       731
5             2       129
6             4       215
7             14      903
8             4       301
9             2       43
Grand Total   129     8,170
Table 2: Total Survey Pages & Task Ratings Submitted by Week (Count of Survey ID)

Week          Total   Task Ratings
1             65      2,795
2             56      2,408
3             15      645
4             17      731
5             3       129
6             5       215
7             21      903
8             7       301
9             1       43
Grand Total   190     8,170
INTERPRETATIVE ANALYSIS AND ACTION PLANS

The number of submissions is clearly coming to a halt. It appears that submissions
increase right after a personal call to action to the panel. As mentioned in the last
update, it is very important to take an active role in reminding and engaging the
panel. A new call-to-action strategy has been initiated:
○ Michael A. continues to reach out to his contact list to find new potential panel recruits and opportunities for piloting future assessment tools.
○ Elizabeth P. has reached out to the 3 panel chairs. Calls are being scheduled.
○ David T. continues working on the Phase 1 reporting. Elizabeth will assist in the final edit process.
○ Phase II project plans have been approved.
○ Phase II kickoff meeting is tentatively scheduled for June 7th.
Table 3
Demographic
Pages Submitted
Waves 2 & 3
Count of Survey ID
Org Code
2: SCE
3: NIST
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
11: EnergySec
13: EEI
14: Encari
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
34: Pacificorp
52: Ameren
59: LinkedIn
Completed
No
1
1
4
8
31
2
1
5
2
2
10
8
3
1
1
8
Yes
2
1
29
18
1
2
1
1
2
3
31
7
3
5
10
Grand Total
3
1
5
37
49
3
3
1
1
5
4
5
41
15
6
1
6
18
999: NBISE Website
Unknown
Grand Total
15
64
167
2
11
129
17
75
296
Table 4
JAQS
Submitted by
Job Roles
Complete
Count of Survey
ID
Org Code
2: SCE
5: NIST
6: IEIA
7: NATF
8: UNITE
10: NV Energy
11: EnergySec
13: EEI
18: APPA
23: TVA
24: PG&E
30: ARRA
(All)
Column Labels
Incident
Response
1
9
5
1
12
1
15
Intrusion
Analysis
Security
Operations
3
2
9
30
9
2
1
1
1
1
3
27
8
3
Grand
Total
3
1
41
23
1
13
1
1
3
4
45
8
32: ERCOT
52: Ameren
59: LinkedIn
999: NBISE
Website
Unknown
Grand Total
2
3
1
3
1
3
55
4
22
1
9
11
3
9
15
4
8
113
15
190
Table 5
Wave 3: Utilities
Demographic Survey
Pages
Count of Org Code
Org. Code
10: NV Energy
11: EnergySec
13: EEI
14: Encari
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
34: Pacificorp
52: Ameren
59: LinkedIn
999: NBISE Website
Column Labels
No
1
5
2
2
10
8
3
1
1
8
15
Yes
2
1
1
2
3
31
7
3
5
10
2
Grand Total
3
1
1
5
4
5
41
15
6
1
6
18
17
Unknown
Grand Total
64
120
11
78
75
198
Table 6
Wave 3: Utilities
JAQs Submitted by
Job Role
Complete
Count of Survey ID
Org Codes
10: NV Energy
11: EnergySec
13: EEI
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
59: LinkedIn
999: NBISE Website
Unknown
Grand Total
(All)
Job Role
Incident
Response
12
Intrusion
Analysis
2
1
15
3
2
3
3
3
39
1
1
4
11
Security
Operations
1
1
1
1
3
27
8
1
9
11
8
71
Grand
Total
13
1
1
3
4
45
8
3
9
15
4
15
121
May 16, 2012 Status Report
Smart Grid Cybersecurity Job Analysis Questionnaire
SUMMARY OF RESPONSES
We continue to receive JAQ responses from Wave 2 and 3 panel members. There
has been no change in Wave 1 activity. The following is a detailed breakdown of
response activity since our last report, concentrating on Waves 2 and 3.
● 4 new demographic pages were completed from May 8 – 15. (Table 1)
● 7 new task survey pages were submitted from May 8 – 15. (Table 2)
● 278 demographic pages were accessed, with 127 completed. (Table 3)
● 189 JAQ pages have been submitted; 113 were classified under the Security Operations job role. (Table 4)
● Table 5 breaks out Wave 3 demographic submissions by utility. Out of 183 demographic pages accessed, 77 were completed.
● 121 total task statement pages have been submitted for Wave 3 by utilities. By job role: 39 Incident Response, 11 Intrusion Analysis, and 71 Security Operations. (Table 6)
● 8,127 task statements, or 15.75 complete surveys, have been rated as of May 14, 2012. Our goal is to rate 51,600 task statements, or 100 complete surveys. (516 task statements = 1 complete survey)
Table 1 and Table 2 (not reproduced)
INTERPRETIVE ANALYSIS & ACTION PLANS

The number of submitted demographic and JAQ survey pages decreased from May 8 – 15. We know
from preliminary telephone conversations with panel members that the main reason for not completing
the task statements revolves around the ability to find a chunk of uninterrupted time to complete the
survey. Implementing and maintaining an active outreach strategy with the panel members will help to
increase and maintain the level of panel participation. A more hands-on approach will help to keep
panel members engaged and get us closer to reaching the goal of 51,600 JAQ task statements rated.
● Michael continues to reach out to his contacts.
● Liz is actively calling SGC panel members, encouraging them to complete their version of the JAQ. She is also using the calls as opportunities to request updates on contact information, bios, general JAQ feedback, and interest in participating in Phase II.
● Liz will follow up with Michael’s LinkedIn contacts.
● The distribution channels will be revisited.
● The project plan for recruiting new panel members for Phase II is just getting underway.
● Reviewing the NBISE website for ways to attract and recruit new panel members.
● The JAQ webinar is in the design phase. It should be ready by next week, at which time it will be presented for review.
● The Phase II project plan and budget have been submitted for PNNL review and approval. Follow-up questions have been answered. We are awaiting the final contract documents from Kim Massie.
Table 3, Table 4, Table 5, and Table 6 (not reproduced)
May 9, 2012 Status Report
Smart Grid Cybersecurity Job Analysis Questionnaire
SUMMARY OF RESPONSES
We continue to receive JAQ responses from Wave 2 and 3 participants. There has been
no change in Wave 1 activity. The following graph shows an uptick in submissions. The
increased activity is due to reaching out to participants on an individual basis.
● Table 1 shows 14 new demographic pages were completed from May 2 – 8.
● Table 2 shows 21 new survey pages were submitted from May 2 – 8.
● Table 3 shows there have been 266 demographic pages accessed, with 123 completed.
● Table 4 shows that of the 182 JAQ pages submitted, 109 were classified under Security Operations.
● Tables 5 & 6 show the detail for the utility companies. PG&E has the highest submission rate of the utilities.
● Table 5 breaks out Wave 3 demographic submissions by utility. Of the 123 demographic pages submitted, 83 were from utilities, which is 67% of the total number received to date from Waves 2 & 3.
● Table 6 breaks out Wave 3 task statement submissions by utility. Of the 182 task survey pages, 115 were from utilities, which is 60.4% of the total number of survey pages received for Waves 2 & 3.
● Total number of task statements rated is 7,826. The goal for submission is 51,600.
Table 1 and Table 2 (not reproduced)
INTERPRETIVE ANALYSIS & ACTION PLANS
The final response rates will be determined in two ways. Response Rate 1 will be the total
number of landing page click-thrus divided by total number of invite emails distributed. This
response rate is expected to be 2 to 5 %. Response Rate 2 will be determined by the total
number of “Yes” demographic divided by the number of landing page click-thrus. This
response rate is expected to be 30%. Table 2 shows that 7,826 task statements have been
rated by the end of week 7.
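As a worked restatement of the two definitions above, the sketch below is illustrative only: it uses the 266 accessed and 123 completed demographic pages reported in this update, together with the 4,359 confirmed invites from the distribution table later in this appendix (several channel totals remained unconfirmed at the time of writing).

# Two response-rate definitions from this report, restated as code.
invites_sent = 4_359              # confirmed # sent, Wave 1/2 distribution table
landing_page_clickthrus = 266     # demographic pages accessed (this report)
yes_demographic_pages = 123       # completed ("Yes") demographic pages

rate_1 = landing_page_clickthrus / invites_sent            # expected 2 to 5%
rate_2 = yes_demographic_pages / landing_page_clickthrus   # expected ~30%

print(f"Response Rate 1: {rate_1:.1%}")  # 6.1% with these inputs
print(f"Response Rate 2: {rate_2:.1%}")  # 46.2% with these inputs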
● Michael has reviewed and identified potential participants through his personal
LinkedIn account. These contacts will be called.
● Elizabeth is in the process of reaching out to the JAQ SGC panel members to
ensure that they have completed their version of the JAQ and to gauge their interest
in participating in Phase II panels.
● Project planning is underway for recruiting new panel participants.
● David, Elizabeth, and Steve are putting together a webinar to support completion of
the JAQ by PG&E staff. We are in the process of confirming a date for this webinar.
● Phase 2 plan and budget have been submitted for review and approval. We are awaiting any additional questions.
Table 3, Table 4, Table 5, and Table 6 (not reproduced)
May 2, 2012 Status Report
Smart Grid Cybersecurity Job Analysis Questionnaire
SUMMARY OF RESPONSES
The chart below shows the benefit of introducing the iPad Giveaway and creating a larger
JAQ slice for Waves 2 and 3. The chart combines all responses received after the iPad
announcement. Notice that the pattern of responses is similar. We can expect roughly a
one-week half-life to responses, meaning that within two weeks of an invite responses drop
to near zero.
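The one-week half-life observation amounts to a simple geometric decay model of weekly responses. The sketch below is a minimal illustration of that assumption, seeded with the 42 first-week demographic pages from Table 1; it is not a fitted model.

# Geometric decay of weekly responses with an assumed one-week half-life.
def expected_responses(first_week: float, week: int, half_life_weeks: float = 1.0) -> float:
    """Expected responses in a given week (week 1 = first week after the invite)."""
    return first_week * 0.5 ** ((week - 1) / half_life_weeks)

for week in range(1, 5):
    print(week, round(expected_responses(42, week), 1))  # 42.0, 21.0, 10.5, 5.2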
INTERPRETATIVE ANALYSIS AND ACTION PLANS
● The response rate to the JAQ to date appears to be exceptional for surveys of this
population; however, it still falls below our objective of reaching 100+ fully completed JAQs.
Wave 2/3 has, as planned, produced a substantial increase in survey pages completed,
warranting extension of this approach to obtain a larger sample. Accordingly, we have
initiated several actions to improve the response rate over time:
○ Michael has spoken with project sponsors about the response pattern. They agreed
with our assessment that it is far above comparable studies, and want to provide
assistance to increase responses. They agreed that this should not stop progress on
Phase II, but could become a continuing task under this next phase of the project.
○ Elizabeth Pyle (NBISE’s new Panel Coordinator) has interviewed respondents
from PG&E who committed to take the entire JAQ but have completed only the
demographic page. Changes needed to the instructions, clarifying what is required
to complete the JAQ, have been made. David and Elizabeth spoke with Jamey
Sample (CISO of PG&E), who remains very supportive, indicating that he wants to
incorporate the findings into his personnel planning, evaluation, and compensation
system. We are scheduling two webinars to support completion of the entire JAQ by
his staff of 60 over the next few weeks.
○ Michael is contacting more utility companies to obtain commitments similar to PG&E
for complete submission of the JAQ.
○ David is working with PNNL scientific staff to devise a rigorous approach to
determining the sample size needed to infer discrimination of tasks as foundational
or differentiating in predicting job performance (a back-of-the-envelope power
sketch follows this list).
● The use of JAQ item response counts rather than survey pages in the above chart shows
the importance of distinguishing between Wave 1, with 14 task statements per survey page,
and Waves 2 and 3, with 43. Accordingly, we recommend that from here forward we report
results in terms of JAQ item response counts.
● The response pattern analysis (shown in the JAQ submission tables below) indicated the
greatest response is coming through direct appeals to utilities. These results support the
strategy of extending Wave 3 (targeting specific utility leaders) as a promising approach to
increasing the sample.
○ In the next report we will separately analyze these responses.
○ Such a targeted (or purposive) sample raises statistical analysis concerns that have
been discussed with the PNNL research team. We are currently exploring methods
for determining if the use of a purposive sample would bias the results.
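As a back-of-the-envelope companion to the sample-size question above, one standard route is a two-proportion power calculation, sketched below with only the Python standard library. The 70% versus 50% endorsement rates, the alpha, and the power target are illustrative assumptions, not figures from the study.

# Approximate respondents per group needed to distinguish two task endorsement
# proportions (e.g., a "differentiating" task endorsed by 70% of experts vs.
# 50% of novices), via the normal-approximation two-proportion z-test.
from math import ceil
from statistics import NormalDist

def n_per_group(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Sample size per group for a two-sided two-proportion z-test."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)   # critical value for alpha
    z_b = NormalDist().inv_cdf(power)           # critical value for power
    p_bar = (p1 + p2) / 2
    numerator = (z_a * (2 * p_bar * (1 - p_bar)) ** 0.5
                 + z_b * (p1 * (1 - p1) + p2 * (1 - p2)) ** 0.5) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

print(n_per_group(0.70, 0.50))  # about 93 respondents per group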
RESPONSE RESULTS
● 56 unique organizations (channels) asked to participate in distribution
● Wave 1 – began February 22
  14 task statements per page
  Demographic pages submitted – Total: 304; Incomplete: 165; Complete: 139
● Wave 2 & 3 – began March 21
  43 task statements per page
  Demographic pages submitted – Total: 225; Incomplete: 120; Complete: 105
Table 1: Comparison of Wave 2 & 3 to Wave 1 – Demographic Pages Submitted by Week

Week               Wave 2 & 3 Completed (43 items/page)   Wave 1 (14 items/page)
1                  42                                     90
2                  42                                     22
3                  10                                     4
4                  8                                      21
5                  1                                      2
6                  5                                      0
Current Week (7)   0                                      0
Grand Total        108                                    139
Table 2: Wave 2 & 3 – Survey Pages Submitted by Week (Goal: 100 Full JAQs)

Week               Total Completed   Equivalent JAQs
1                  39                3.25
2                  35                2.92
3                  5                 0.42
4                  7                 0.58
5                  1                 0.08
6                  1                 0.08
Current Week (7)   0                 0
Grand Total        88                7.33
Table 3: Total Complete JAQ Survey Pages Submitted by Job Role

                                          Wave 2/3   Wave 1
Total submitted for Sec Ops:              51         104
Total submitted for Incident Response:    34         19
Total submitted for Intrusion Analysis:   3          17
Total submitted:                          88         140

JAQ pages submitted per completed demographic page (Wave 2/3): 0.81
Table 4: Wave 1 Demographics – Demographic Pages Submitted by Channel

Channel       Incomplete   Complete   Grand Total
NIST          45           40         85
ES-ISAC       30           7          37
EnergySec     9            28         37
ARRA          17           11         28
N/A           25           1          26
SEMPRA        12           11         23
PGE           2            11         13
TVA           4            5          9
IEIA Forum    3            6          9
EEI           3            5          8
NATF          2            5          7
Encari        5            1          6
SCE           2            3          5
Gridwise      4            1          5
APPA          1            2          3
NV Energy     1            2          3
Grand Total   165          139        304
Table 5: Wave 1 JAQ Submissions
Questionnaire Pages Submitted by Job Role per Channel
JAQ Survey
Pages
Job Role
Channel
NIST
NV Energy
SEMPRA
EEI
PGE
EnergySec
NATF
ES-ISAC
IEIA Forum
ARRA
TVA
Grand Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
3
11
3
8
12
5
1
104
3
19
4
2
17
Grand
Total
37
8
10
6
10
31
3
8
12
9
6
140
Table 6: Wave 2 & 3 Demographics
Demographic Pages Submitted by Channel
Wave 2/3 Demographics Pages
Channel
2: SCE
3: NIST
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
13: EEI
14: Encari
34: Pacificorp
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Unknown
Grand Total
Representation from Wave 3
Incomplete
1
1
4
5
22
2
1
Complete
2
5
1
2
2
13
8
3
1
49
123
2
3
48
5
3
5
11
108
Grand Total
3
1
5
17
32
3
3
1
5
1
4
5
61
13
6
6
60
231
22
18%
63
58%
85
36%
1
12
10
1
2
1
6
Table 7: Questionnaire Pages Submitted by Job Role per Channel
JAQ Survey Pages
Channel
2: SCE
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Unknown
Grand Total
Job Role
Incident Response
Intrusion Analysis
Security Operations
Grand Total
2
1
3
2
1
12
9
1
1
13
1
1
1
34
2
1
12
3
1
12
2
1
40
2
2
5
5
88
1
3
1
1
26
2
1
5
3
51
April 25, 2012 Status Report
Status of Wave 1 Distribution
(14 task statements per page)
Questionnaire Pages Submitted by Job Role
Job Role
Org Code
NIST
NV Energy
SEMPRA
EEI
PGE
EnergySec
NATF
ES-ISAC
IEIA
Forum
ARRA
TVA
Grand
Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
8
Grand Total
37
8
10
6
10
31
3
8
12
5
1
3
4
2
12
9
6
104
19
17
140
3
11
3
Questionnaire
pages completed
per person
0.93
0.91
0.91
0.60
2.00
1.20
Demographics – Demographic Pages Submitted

Org Code      No    Yes   Grand Total
NIST          45    40    85
ES-ISAC       30    7     37
EnergySec     9     28    37
ARRA          17    11    28
N/A           25    1     26
SEMPRA        12    11    23
PGE           2     11    13
TVA           4     5     9
IEIA Forum    3     6     9
EEI           3     5     8
NATF          2     5     7
Encari        5     1     6
SCE           2     3     5
Gridwise      4     1     5
APPA          1     2     3
NV Energy     1     2     3
Grand Total   165   139   304
Questionnaire Pages – Responses per Week

Week   Demographics   Survey Sections
1      90             91
2      22             28
3      4              0
4      21             18
5      2              3
6      1              0
7      0              0
Status of Wave 2 & 3 Distribution
(43 task statements per page)
Count of Questionnaire Pages
Demographic
Survey
Count of Questionnaire Pages
Org Code
2: SCE
3: NIST
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
14: Encari
34: Pacificorp
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Unknown
Grand Total
Wave 3
Percent of column
Complete
No
Yes
1
1
4
5
22
2
1
5
1
2
2
13
8
3
1
49
120
22
18%
Grand Total
2
3
1
5
17
32
3
3
5
1
4
5
61
13
6
6
60
225
1
12
10
1
2
2
3
48
5
3
5
11
105
63
60%
85
38%
3
Questionnaire Pages Submitted by Job Role
JAQ Survey
Pages
Job Role
Org Code
2: SCE
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV
Energy
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Unknown
Grand Total
Intrusion Analysis
Incident
Response
1
3
1
1
Grand
Security Operations Total
2
2
1
9
12
1
2
1
12
1
13
1
1
1
33
1
3
JAQ Pages submitted per completed Demographic page
1
1
26
2
1
5
3
51
12
2
1
40
2
2
5
5
87
0.84
Equivalent Complete JAQs
7.25
4
(see next page for weekly
totals)
Demographic Pages Submitted by Week (Complete = Yes)

Week           Total   Comparison from Wave 1
1              42      90
2              42      22
3              10      4
4              8       21
5              1       2
Current Week   2       0
Grand Total    105     139

Survey Pages Submitted by Week (Complete = Yes)

Week           Total   Equivalent JAQs
1              39      3.25
2              35      2.92
3              5       0.42
4              7       0.58
5              1       0.08
Current Week   0       0
Grand Total    87      7.25
April 18, 2012 Status Report
Status of Wave 1 Distribution
(14 task statements per page)
Questionnaire Pages Submitted by Job Role
Job Role
Org Code
NIST
NV Energy
SEMPRA
EEI
PGE
EnergySec
NATF
ES-ISAC
IEIA
Forum
ARRA
TVA
Grand
Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
8
Grand Total
37
8
10
6
10
31
3
8
12
5
1
3
4
2
12
9
6
104
19
17
140
3
11
3
Questionnaire
pages completed
per person
0.93
0.91
0.91
0.60
2.00
1.20
1
Demographics
Demographic Pages Submitted
Org Code
NIST
ES-ISAC
EnergySec
ARRA
N/A
SEMPRA
PGE
TVA
IEIA Forum
EEI
NATF
Encari
SCE
Gridwise
APPA
NV Energy
Grand Total
Complete
No
45
30
9
17
25
12
2
4
3
3
2
5
2
4
1
1
165
Yes
40
7
28
11
1
11
11
5
6
5
5
1
3
1
2
2
139
Grand Total
85
37
37
28
26
23
13
9
9
8
7
6
5
5
3
3
304
Questionnaire Pages
RESPONSES PER WEEK
Week
1
2
3
4
5
6
7
Demographics
90
22
4
21
2
1
0
Survey Sections
91
28
0
18
3
0
0
2
Status of Wave 2 & 3 Distribution
(43 task statements per page)
Count of Questionnaire Pages
Demographic
Survey
Count of Questionnaire Pages
Org Code
2: SCE
3: NIST
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
14: Encari
34: Pacificorp
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Unknown
Grand Total
Wave 3
Percent of column
Complete
No
Yes
1
1
4
5
19
2
1
3
1
2
2
13
8
3
1
46
112
22
20%
Grand Total
2
3
1
5
17
29
3
3
3
1
4
5
61
12
6
6
56
215
1
12
10
1
2
2
3
48
4
3
5
10
103
63
61%
85
40%
3
Questionnaire Pages Submitted by Job Role
JAQ Survey
Pages
Job Role
Org Code
2: SCE
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV
Energy
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Unknown
Grand Total
Incident
Response
Intrusion Analysis
1
3
1
1
Security Operations
2
9
1
12
1
13
1
1
1
33
1
3
JAQ Pages submitted per completed Demographic page
1
1
26
2
1
5
3
51
Grand
Total
2
1
12
2
1
12
2
1
40
2
2
5
5
87
0.84
Equivalent Complete JAQs
7.25
4
(see next page for weekly
totals)
Demographic pages Submitted by Week
Complete
Yes
Total
42
42
10
8
1
103
Week
1
2
3
4
Current Week
Grand Total
Survey pages Submitted by Week
Complete
Yes
Week
1
2
3
4
Current Week
Grand Total
Total
39
35
5
7
1
87
Equivalent JAQs
3.25
2.92
0.42
0.58
0.08
7.25
April 12, 2012 Status Report
Status of Wave 1 Distribution
(14 task statements per page)
Questionnaire Pages Submitted by Job Role
Job Role
Org Code
NIST
NV Energy
SEMPRA
EEI
PGE
EnergySec
NATF
ES-ISAC
IEIA
Forum
ARRA
TVA
Grand
Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
8
Grand Total
37
8
10
6
10
31
3
8
12
5
1
3
4
2
12
9
6
104
19
17
140
3
11
3
Questionnaire
pages completed
per person
0.93
0.91
0.91
0.60
2.00
1.20
1
Demographics
Demographic Pages Submitted
Org Code
NIST
ES-ISAC
EnergySec
ARRA
N/A
SEMPRA
PGE
TVA
IEIA Forum
EEI
NATF
Encari
SCE
Gridwise
APPA
NV Energy
Grand Total
Complete
No
45
30
9
17
25
12
2
4
3
3
2
5
2
4
1
1
165
Yes
40
7
28
11
1
11
11
5
6
5
5
1
3
1
2
2
139
Grand Total
85
37
37
28
26
23
13
9
9
8
7
6
5
5
3
3
304
Questionnaire Pages
RESPONSES PER WEEK
Week
1
2
3
4
5
6
Demographics
90
22
4
21
2
1
Survey Sections
91
28
0
18
3
0
2
Status of Wave 2 & 3 Distribution
(43 task statements per page)
Count of Questionnaire Pages – Demographic Survey

Org Code         No   Yes   Grand Total
2: SCE           1    2     3
5: IEIA          4    1     5
6: NATF          8    24    32
7: ES-ISAC       16   10    26
8: UNITE         2    1     3
10: NV Energy    1    2     3
14: Encari       3    –     3
18: APPA         2    2     4
23: TVA          2    3     5
30: ARRA         8    3     11
32: ERCOT        3    3     6
34: Pacificorp   1    –     1
24: PG&E         10   31    41
52: Ameren       1    5     6
Grand Total      62   87    149
Questionnaire Pages Submitted by Job Role
Job Role
Org Code
2: SCE
05: IEIA
06: NATF
07: ES-ISAC
08: UNITE
10: NV Energy
18: APPA
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Grand Total
Incident Response Intrusion Analysis
1
4
1
1
12
1
14
1
1
1
1
19
2
1
5
46
10
1
30
Security Operations
2
2
Grand Total
2
1
19
2
1
12
2
1
29
2
2
5
78
April 4, 2012 Status Report
Status of Wave 1 Distribution
(14 task statements per page)
Questionnaire Pages Submitted by Job Role
Job Role
Org Code
NIST
NV Energy
SEMPRA
EEI
PG&E
EnergySec
NATF
ES-ISAC
IEIA Forum
ARRA
TVA
Grand Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
3
11
3
8
12
5
1
104
3
19
4
2
17
Grand
Total
37
8
10
6
10
31
3
8
12
9
6
140
Questionnaire
pages completed
per person
0.93
0.91
0.91
0.60
2.00
1.20
1
Demographics
Demographic Pages Submitted
Complete
Org Code
NIST
EnergySec
ES-ISAC
ARRA
N/A
SEMPRA
PG&E
TVA
IEIA Forum
EEI
NATF
Encari
SCE
Gridwise
APPA
NV Energy
Grand Total
No
45
9
29
17
23
12
2
4
3
3
2
4
2
4
1
1
161
Yes
40
28
7
10
1
11
11
5
6
5
5
1
3
1
2
2
138
Grand Total
85
37
36
27
24
23
13
9
9
8
7
5
5
5
3
3
299
Questionnaire Pages
RESPONSES PER WEEK
Week
1
2
3
4
5
Demographics
90
22
4
21
2
Survey Sections
91
28
0
18
3
2
Status of Wave 2 & 3 Distribution
(43 task statements per page)
Count of Questionnaire Pages
Demographic Survey
Count of Questionnaire Pages
Org Code
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
18: APPA
23: TVA
30: ARRA
32: ERCOT
24: PG&E
52: Ameren
Grand Total
Complete
No
4
8
13
2
1
2
2
8
3
10
1
54
Yes
1
24
7
1
2
2
3
3
3
30
3
79
Grand Total
5
32
20
3
3
4
5
11
6
40
4
133
Questionnaire Pages Submitted by Job Role
Count of
Questionnaire
Pages
Org Code
05: IEIA
06: NATF
07: ES-ISAC
08: UNITE
10: NV Energy
18: APPA
Job Role
Incident Response
1
3
1
1
12
Intrusion Analysis
Security Operations
1
1
Grand Total
1
14
18
1
1
12
1
2
3
23: TVA
24: PG&E
30: ARRA
32: ERCOT
52: Ameren
Grand Total
10
1
29
2
1
19
2
1
1
39
1
29
2
2
1
70
March 30, 2012 Status Report
Status of Wave 1 Distribution
(14 task statements per page)
Questionnaire Pages Submitted by Job Role
Job Role
Org Code
NIST
NV Energy
SEMPRA
EEI
PG&E
EnergySec
NATF
ES-ISAC
IEIA Forum
ARRA
TVA
Grand Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
3
11
3
8
12
5
1
104
3
19
4
2
17
Grand
Total
37
8
10
6
10
31
3
8
12
9
6
140
Questionnaire
pages completed
per person
0.93
0.91
0.91
0.60
2.00
1.20
1
Demographics
Demographic Pages Submitted
Complete
Org Code
NIST
EnergySec
ES-ISAC
ARRA
N/A
SEMPRA
PG&E
TVA
IEIA Forum
EEI
NATF
Encari
SCE
Gridwise
APPA
NV Energy
Grand Total
No
45
9
29
17
23
12
2
4
3
3
2
4
2
4
1
1
161
Yes
40
28
7
10
1
11
11
5
6
5
5
1
3
1
2
2
138
Grand Total
85
37
36
27
24
23
13
9
9
8
7
5
5
5
3
3
299
Questionnaire Pages
RESPONSES PER WEEK
Week
1
2
3
4
5
Demographics
90
22
4
21
2
Survey Sections
91
28
0
18
3
2
Status of Wave 2 & 3 Distribution
(43 task statements per page)
Count of Questionnaire Pages
Demographic Survey
Count of Questionnaire
Pages
Complete
Org Code
5: IEIA
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
18: APPA
23: TVA
30: ARRA
32: ERCOT
24: PG&E
52: Ameren
Grand Total
No
2
5
9
2
1
2
2
8
3
10
1
45
Yes
Grand Total
2
12
14
3
3
4
5
10
5
44
4
106
7
5
1
2
2
3
2
2
34
3
61
Percentage
of
Grand Total
1.89%
11.32%
13.21%
2.83%
2.83%
3.77%
4.72%
9.43%
4.72%
41.51%
3.77%
Questionnaire Pages Submitted by Job Role
Count of Questionnaire
Pages
Org Code
6: NATF
7: ES-ISAC
8: UNITE
10: NV Energy
18: APPA
23: TVA
30: ARRA
32: ERCOT
24: PG&E
52: Ameren
Grand Total
Job Role
Incident Response
1
1
1
12
Intrusion
Analysis
1
3
1
18
2
Security
Operations
7
1
1
2
1
19
1
32
Grand
Total
8
1
1
12
2
1
2
1
23
1
52
Percentage
of
Grand Total
15.38%
1.92%
1.92%
23.08%
3.85%
1.92%
3.85%
1.92%
44.23%
1.92%
Status of Distribution
Wave 1 and 2 Distribution
Channel Organization
NESCO (EnergySec members)
UNITE
NRECA
APPA
EEI
ESISAC (NERC)
AEIC
NATF Security Practices Group
NAESB
NASPI
EPRI
NIST CSWG
ISA99 committee mailing (NIST)
OpenSG Security
Smart Grid Security Blog
IEIA Forum
Southern Cal. Edison
NV Energy
Encari (LinkedIn group)
GWAC List
Individual
Individual
TVA
PG&E
Individuals (by Lori Ross O’Neil)
23 Channels
Original
Estimates
500
400
800
800
500
50
500
12
100
30
500
30
100
46
1
5
650
1
1
150
5176
Confirmed # Sent
610
39
unconfirmed totals
unconfirmed totals
600
unconfirmed totals
unconfirmed totals
unconfirmed totals
1000
unconfirmed totals
unconfirmed totals
750
500
unconfirmed totals
unconfirmed totals
52
1
5
650
unconfirmed totals
1
1
unconfirmed totals
150
unconfirmed totals
Point of Contact
Stacy Bresler
Mark Engles, member
Barry Lawson, Director
Nathan Mitchell
David Batz, Manager
Ben Miller
Len Holland, Director
Karl Perman
Rae McQuade, President
Alison Silverstein
Annabelle Lee
Marianne Swanson
James Gilsinn
Darren Highfill
Andy Bochman
John Allen
Gary Bell
Eric Ohlson
Steve Hamburg
Ron Melton
Joe Pereira
Robert B. Burke
Michael Tallent
Lori Ross O'Neil
4359
Wave 3 Distribution
Channel Organization
PG&E
SEMPRA
AEP
DTE Energy
Consumers Energy
ARRA SGIG
NetSectech
ERCOT Texas CIWG
Oncor
Pacificorp
Hydro One
First Energy
National Grid
NPCC
Anfield Group
Exelon
Hydro QB
TXU
IESO
NiSource - NIPSCO
Southern
Entergy
ConEd
Opwer
NRG
KCPL
Progress
Ameren
FPL
Alliant
ATC
Capital Power
PEPCO
33 Channels
Original
Estimates
400
400
Confirmed # Sent
60
20
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
400
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
10
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
unconfirmed totals
490
Point of Contact
Jamey Sample
Scott King
Jerry Freese
Lanse E LaVoy
Jim Beechey
Debbie Haught
Adam Lispon
Jim Brenton
Bill Muston
Bobby Timmons
Rene Bourassa
Brandon Bolon
Brian Betterton
Brian Hogue
Chris Humphries
Dan Hill
Dan Popwhich
Dave Andrews
David Dunn
Tim Conway
Don Roberts
Doug Mader
Jon lim
Lee Aber
Marty Sidor
Stephen Diebold
Ed Goff
Chuck Abel/Chris Sawall
Anne Pramod
Michael Churchill
Steve
Robert Johnson
Frank Dorrin
March 23, 2012 Status Report
JAQ Updates
1. Completed Wave 2 of JAQ distribution on Wednesday, March 21. (The list of entities it was sent to is attached as Attachment A.)
2. Began Wave 3 of JAQ distribution to utilities and others for full survey responses on Wednesday, March 21, to be completed by March 30. (The list of entities it was sent to is attached as Attachment B.)
Status of Wave 1 Distribution (prior to start of Wave 2)
Surveys Completed by Job Role
Org Code
NIST
NV Energy
SEMPRA
EEI
PG&E
EnergySec
NATF
ES-ISAC
IEIA Forum
ARRA
TVA
Grand Total
Complete
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Job Role
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
3
11
3
8
12
5
1
104
3
19
4
2
17
Grand
Total
37
8
10
6
10
31
3
8
12
9
6
140
Surveys completed
per person
0.93
0.91
0.91
0.60
2.00
1.20
1
Demographics
Survey ID Counts
Org Code
NIST
EnergySec
ES-ISAC
ARRA
N/A
SEMPRA
PG&E
TVA
IEIA Forum
EEI
NATF
Encari
SCE
Gridwise
APPA
NV Energy
Grand Total
Complete
No
45
9
29
16
23
12
2
4
3
3
2
4
2
4
1
1
160
Yes
40
28
7
10
1
11
11
5
6
5
5
1
3
1
2
2
138
Grand Total
85
37
36
26
24
23
13
9
9
8
7
5
5
5
3
3
298
Ranking of Difficulty of Survey by Respondents

Difficulty of Survey   Total Responses
Easy                   5
Not very difficult     13
Moderately difficult   10
Very difficult         2
Attachment A
Wave2 JAQ Channel Organizations
SCE
NIST
NIST
SCSBlog
IEIA Forum
IEIA Forum
NATF
ESISAC
UNITE
SG Security WG
NV Energy
EnergySec (Members)
EEI
GridWise(r) Architecture Council List
+ 2 individuals from ISO New England from the initial GWAC mailing
NRECA
APPA
AEIC
NAESB
NASPI
EPRI
Encari
Attachment B
Wave3 JAQ Utilities / Organizations
TVA
PG&E
SEMPRA
AEP
DTE Energy
Consumers Energy
ARRA SGIG
NetSectech
ERCOT
Oncor
Pacificorp
Hydro One
First Energy
National Grid
NPCC
Anfield Group
Exelon
Hydro QB
TXU
IESO
NiSource - NIPSCO
Southern
Entergy
ConEd
Opwer
NRG
KCPL
Progress
Ameren
FPL
Alliant
ATC
Capital Power
PEPCO
March 14, 2012 Status Report
SGC Panel Updates
1. A focus session of the SGC Panel was held on March 13 to determine and update the status of
members’ progress in the pilot of the JAQ.
   a. Of the 35 who started the survey, 11 have completed it in its entirety.
2. SGC Chair Justin Searle to follow up with the panel on the status of those unable to attend the
March 13 meeting and those still working to complete the survey.
3. Pilot of JAQ among panel to be complete by April 6.
4. Recruitment of Phase 2 of SGC Panel is planned: needs discussion.
5. NBISE suggests posting abstracts from interim reports to PNNL on its website: needs approval.
JAQ Updates
1. Wave 2 of JAQ distribution is planned for this week: needs discussion.
2. SGC Vice Chair Scott King suggested the utility representatives on the panel forward an invite to
their utility staff.
Survey Trends

RESPONSES PER WEEK

Week     Demographics   Surveys
Week 1   171            168
Week 2   43             44
Week 3   20             3
Surveys Completed by Job Role
Surveys
Job Role
Org Code
Complete
NIST
NV Energy
SEMPRA
EEI
PGE
EnergySec
NATF
ES-ISAC
IEIA Forum
Grand
Total
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Security
Operations
27
8
10
6
7
20
Incident
Response
2
Intrusion
Analysis
8
3
11
3
5
9
92
16
Grand
Total
37
8
10
6
10
31
3
5
9
Surveys completed
per person
0.93
4.00
0.91
1.20
0.91
1.11
0.75
1.00
3.00
119
1.04
11
Demographics – Survey ID Counts

Org Code      No    Yes   Grand Total   Potential Respondents   Response Rate
NIST          44    40    84            500                     16.8%
EnergySec     9     28    37            500                     7.4%
ES-ISAC       24    5     29            100                     29.0%
SEMPRA        12    11    23            60                      38.3%
PGE           2     11    13            60                      21.7%
EEI           3     5     8             500                     1.6%
IEIA Forum    3     3     6             46                      13.0%
NATF          2     4     6             12                      50.0%
SCE           2     3     5             60                      8.3%
Encari        4     1     5             500                     1.0%
Gridwise      4     1     5             100                     5.0%
NV Energy     1     2     3             5                       60.0%
Grand Total   110   114   224           2,443                   9.2%
SGC JAQ SURVEY Results as of March 7, 2012
Surveys Completed by Job Role
Surveys
Org Code
Job Role
Complete
NIST
NV Energy
SEMPRA
EEI
PGE
EnergySec
NATF
ES-ISAC
IEIA Forum
Grand
Total
Security
Operations
Incident
Response
Intrusion
Analysis
Grand
Total
27
8
10
6
7
20
2
8
37
8
10
6
10
31
3
5
9
Surveys
completed per
person
0.93
4.00
0.91
1.20
1.00
1.15
0.75
1.25
3.00
119
1.07
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
Yes
3
11
3
5
9
92
16
11
Demographics – Survey ID Counts

Org Code      No   Yes   Grand Total   Potential Respondents   Response Rate
NIST          44   40    84            500                     16.8%
EnergySec     8    27    35            500                     7.0%
SEMPRA        12   11    23            60                      38.3%
ES-ISAC       15   4     19            100                     19.0%
PGE           2    10    12            60                      20.0%
EEI           3    5     8             500                     1.6%
NATF          2    4     6             12                      50.0%
SCE           2    3     5             60                      8.3%
Encari        4    1     5             500                     1.0%
Gridwise      4    1     5             100                     5.0%
IEIA Forum    1    3     4             0                       –
NV Energy     1    2     3             800                     0.4%
Grand Total   98   111   209           3,192                   6.5%
Smart Grid Cybersecurity Panel Update
SGC Panel Job Analysis Survey – Status & Update
(Charts of SGC JAQ survey and demographic Survey ID counts.) 54% of respondents complete the landing page, job role selection, and the demographic page.
February 24, 2012 Status Report
1. The next meeting of the SGC Panel is scheduled for March 13.
2. IEIA Forum, Auckland, New Zealand trip (report attached).
3. Phase 2: all necessary information submitted – waiting for final questions or comments.
Please let us know if you require additional information.
4. Deliverables:
14.b. Pilot of JAQ - Submitted
● Pre-Survey Demographic Questions
● Additions to Demographics and Definitions
● Changes to Survey Page Top Text to match Importance Language
● Miscellaneous issues per PNNL addressed
● Changes to Survey items based on Panel and PNNL comments (PDF)
5. Public JAQ - To be submitted this week
Distributed JAQ through 26 Channels as of February 22.
Investigating: LinkedIn groups, direct elicitation of JAQ participation by utilities
● Changes to statements containing “Determine” to the use of “Assess”
● Further revision of demographics
● Grid Titles
● Personalized URLs
● Final Landing page
● Final Survey Page Instructions
● Template with links created
● Exit Page
● Miscellaneous Bugs and Fixes:
○ LANDING PAGE
  ● Change the job descriptions to be listed down the page alphabetically.
○ GROUPS
  ● Fix group numbering for single-digit group numbers, adding a leading zero so that they sort correctly.
  ● Add pause survey and send email to continue.
○ EXIT PAGE TEXT:
  ● Thank you for your smart grid Job Analysis Questionnaire submission.
  ● The additional survey should require only approximately 5 minutes of your time.
  ● Comments not working yet! The "Click here when finished commenting" submit returns back to the same page.
○ FINAL IMPORTANCE TEXT:
  Importance in this context means: How important is this task for determining if an individual has reached the target level of expertise (novice to expert)?
Smart Grid Cybersecurity Job Analysis (Phase I)
Task
Smart Grid Cybersecurity Job Analysis (Phase I))
(Create Panel Charter)
(Conduct literature review)
(Establish panel of SMEs)
Form advisory group
Identify prospective panel members
Select panel leaders
Select panel members
(Interview SMEs in focus group sessions)
Administrative
Prepare focus group sessions
Coordinate panel sessions
Respond to panel survey
Design standard panel meeting agenda
Conduct panel kickoff meeting
Panel Activities
SME Panel develops list of goals and objectives
SME Panel develops list of responsibilities
SME Panel sorts responsibilities into goals
(Prepare Job Definition Report)
Supervise collection of Job Definition information
Provide technology support for Job Definition sessions
Collect and organize job definition information from panel sessions
Revise alignment with IA Framework
Draft Job Definition Report
Review and finalize Job Definition Report
(Develop and pilot Job Analysis Questionnaire (JAQ))
SME Panel develops lists of tasks
SME Panel sorts tasks, methods, and tools into goals
Expected Finish
Comments
2/9/2012
Complete
Complete
Complete
Complete
Complete
Complete
Complete
12/12/2011
10/17/2011 Complete
12/15/2011
12/22/2011
12/27/2011
12/12/2011 Complete
11/26/2011
11/18/2011
11/22/2011
11/27/2011
12/2/2011
12/12/2011
1/24/2011
1/3/2012
1/10/2012
Smart Grid Cybersecurity Job Analysis (Phase I)
Task
Smart Grid Cybersecurity Job Analysis (Phase I))
(Create Panel Charter)
(Conduct literature review)
(Establish panel of SMEs)
(Interview SMEs in focus group sessions)
Administrative
Panel Activities
(Prepare Job Definition Report)
Supervise collection of Job Definition information
Provide technology support for Job Definition sessions
Collect and organize job definition information from panel sessions
Revise alignment with IA Framework
Draft Job Definition Report
Review and finalize Job Definition Report
(Develop and pilot Job Analysis Questionnaire (JAQ))
SME Panel develops lists of tasks
SME Panel sorts tasks, methods, and tools into goals
Expected Finish
Comments
2/9/12
Complete
Complete
Complete
12/12/11
10/17/11 Complete
12/6/11
12/12/11
11/26/11
11/18/11
11/22/11
11/27/11
12/2/11
12/12/11
1/3/12
12/13/11
12/20/11
Smart Grid Cybersecurity Job Analysis (Phase I)
Task
Owner
Expected Finish
Comments
(Conduct literature review)
David Tobey
Research published sources of job classification and description information
Research assistant
Determine JAQ sampling plan with sources
David Tobey
Interview advisory board members to document literature sources
David Tobey
(Develop Job Classification Report)
David Tobey
Supervise research of published material
Roni Reiter-Palmon
Collect and digest job classification and description literature
David Tobey
Align functional roles with IA Framework
David Tobey
Draft Job Classification Report
Roni Reiter-Palmon
Review and finalize Job Classification Report
David Tobey
(Outreach)
Conducted outreach briefing at Smart Grid West
Michael Assante
Conducted outreach briefing at NERC's Grid Security Conference
Michael Assante
10/30/2011
10/27/2011 Complete
10/27/2011
10/25/2011 Complete
10/21/2011
10/28/2011
10/27/2011
10/27/2011
10/29/2011
10/30/2011
Complete
Complete
Smart Grid Cybersecurity Job Analysis (Phase I)
Task
Owner
Expected Finish
Comments
(Conduct literature review)
David Tobey
Research published sources of job classification and description information
Research assistant
Determine JAQ sampling plan with sources
David Tobey
Interview advisory board members to document literature sources
David Tobey
(Develop Job Classification Report)
David Tobey
Supervise research of published material
Roni Reiter-Palmon
Collect and digest job classification and description literature
David Tobey
Align functional roles with IA Framework
David Tobey
Draft Job Classification Report
Roni Reiter-Palmon
Review and finalize Job Classification Report
David Tobey
(Select panel members)
Michael
Prepare Panel SiteSpace
Michael
Finalize panel candidate roster and resumes Docs collection
Michael
Review panelist candidate information and select 30 nominees and 10 alternates
Michael
Launch panel leadership into SGC Panel SiteSpace
Michael
Conduct panel kickoff meeting(s)
Lori
Determine schedule for weekly panel sessions
David Tobey
(Outreach)
Conducted outreach briefing at Smart Grid West
Michael Assante
Conducted outreach briefing at NERC's Grid Security Conference
Michael Assante
10/21/2011
10/24/2011
10/25/2011
10/25/2011
10/21/2011
10/27/2011
10/26/2011
10/26/2011
10/29/2011
10/30/2011
11/18/2011
10/19/2011
Complete
Complete
Alternate recruiting in progress
Complete
Complete
Complete
Complete
Smart Grid Cybersecurity Job Analysis (Phase I)
Task
Owner
Expected Finish
Comments
(Conduct literature review)
David Tobey
Research published sources of job classification and description information
Research assistant
Identify events and other outreach opportunities
Lori
Determine JAQ sampling plan with sources
David Tobey
Interview advisory board members to document literature sources
David Tobey
Interview advisory board members to document literature sources
Roni Reiter-Palmon
(Develop Job Classification Report)
David Tobey
Supervise research of published material
Roni Reiter-Palmon
Collect and digest job classification and description literature
David Tobey
Align functional roles with IA Framework
David Tobey
Draft Job Classification Report
Roni Reiter-Palmon
Review and finalize Job Classification Report
David Tobey
(Select panel members)
Michael
Prepare Panel SiteSpace
Michael
Finalize panel candidate roster and resumes Docs collection
Michael
Review panelist candidate information and select 30 nominees and 10 alternates
Michael
Launch panel leadership into SGC Panel SiteSpace
Michael
Conduct panel kickoff meeting(s)
Lori
10/21/2011
10/17/2011
10/12/2011 Complete
10/17/2011
10/17/2011
10/20/2011
10/21/2011
10/20/2011
10/18/2011
10/18/2011
10/21/2011
10/21/2011
11/18/2011
Complete
Complete
Alternate recruiting in progress
11/18/2011 Complete
10/12/2011 Scheduled for October 14 and 17
September 29, 2011 Status Report
Activities completed:
(Format will change)
Advisory Board:
Jamey Sample
John Allen
Bill Hunteman
Joel Garmon
Dr. Emmanuel Hooper
PG&E
IEIA Forum
DOE (Retired)
Wake Forest Baptist Medical Center (CISO)
Global Info Intel, and Harvard
Officers:
● Chair and Vice Chair have been fully provisioned and are to select panel membership by Friday, September 30.
● Letter to go out to panel candidates notifying them of the selection process and the chair and vice chair selection.
Chair
Vice Chair
Justin Searle
Scott King
InGuardians
Sempra
Members:
● Additional interest has been expressed in nominating candidates for inclusion on the panel.
Candidates for Panel:
Sandeep Agrawal - Neilsoft Limited
Bora Akyol - PNNL
Ron Ambrosio - IBM Research
Balusamy (Balu) Arumugam - Infosys
Andy Bochman - IBM, Smart Grid Security Blog, DOD Energy Blog
Ryan Breed - ERCOT
Jeff Bryner - PGE
Shawn Chandler - Portland General Electric
Ray Cline - Univ Houston
Art Conklin - Univ Houston
Steve Dougherty - IBM Global Technology Services
Ido Dubrawsky - Itron
Michael Echols - Salt River Project
Barbara Endicott Popovsky - University of Washington
Bjorn Frogner - Frogner Associates, Inc.
Ed Goff - Progress Energy
Steven Hanburg - Encari
Maria Hayden - Pentagon
Jesse Hurley - NAESB Board
Lisa Kaiser - DHS
Cliff Maraschino - Southern California Edison
James Mater - PG&E
Craig Miller - NRECA
Brennan O’Brien - BPA
Terry Oliver - BPA
James Pittman - Idaho Power
Mark Reed - Idaho Falls Power
Charles Reilly - SCADA Security & Compliance, So. Cal. Edison
Craig Rosen - PG&E
P. Sauer - Univ. Illinois / Trustworthy Cyber Infrastructure for the Power Grid (TCIPG) Center
William Sanders - Univ. Illinois / Trustworthy Cyber Infrastructure for the Power Grid (TCIPG) Center
Scott Saunders - SMUD
Chris Sawall - Ameren
Blake Scherer - Benton PUD
Anthony David Scott - Accenture
Jason Sonon - Net Privateer
Gale Sphantzer - Independent Consultant
Marty Stoddard - CVO Electrical Systems
Clay Storey - Avista
Kevin Tydings - SAIC
Mark Ward - PG&E
Don Weber - InGuardians
Mike Wenstrom - Mike Wenstrom Development Partners
Upcoming Activities:
Week of September 26: ● Panel Membership Decided
● Roster Finalized and Published
● Phase 1 Closed
● Operational Kickoff Meeting Scheduled
Week of October 3: ● Identify Source for JAQ
● Begin Documenting Literature Sources
● Prepare Job Classification Report
Upcoming Events/Opps for Outreach:
2
NERC GridSecCon
October 19
3
September 22, 2011 Status Report
Activities completed:
(Format will change)
Advisory Board:
● Advisory Board has been formed, is fully provisioned, and is in the process of selecting Chair and Vice Chair. Chair and Vice Chair to be finalized and notified by Monday, September 26.
Advisors:
Jamey Sample
John Allen
Bill Hunteman
Joel Garmon
Dr. Emmanuel Hooper
PG&E
IEIA Forum
DOE (Retired)
Wake Forest Baptist Medical Center (CISO)
Global Info Intel, and Harvard
Candidates being considered for Officer positions:
Scott King
Justin Searle
Michael Echols
Erich Gunther
Sempra
InGuardians
Salt River Project
Enerex
Members:
● Additional interest has been expressed in nominating candidates for inclusion on the panel.
● Chair and Vice Chair to select panel membership by Friday, September 30.
Candidates for Panel:
Sandeep
Bora
Ron
Balusamy (Balu)
Andy
Ryan
Jeff
Shawn
Ray
Agrawal
Akyol
Ambrosio
Arumugam
Bochman
Breed
Bryner
Chandler
Cline
Neilsoft Limited
PNNL
IBM Research
Infosys
IBM, Smart Grid Security Blog, DOD Energy Blog
ERCOT
PGE
Portland General Electric
Univ Houston
1
Art
Steve
Ido
Barbara
Bjorn
Ed
Steven
Maria
Jesse
Lisa
Cliff
James
Craig
Brennan
Terry
James
Mark
Charles
Craig
Conklin
Dougherty
Dubrowsky
Endicott Popovsky
Frogner
Goff
Hanburg
Hayden
Hurley
Kaiser
Maraschino
Mater
Miller
O’Brien
Oliver
Pittman
Reed
Reilly
Rosen
P.
Sauer
William
Scott
Chris
Blake
Anthony David
Jason
Gale
Marty
Clay
Kevin
Mark
Don
Mike
Sanders
Saunders
Sawall
Scherer
Scott
Sonon
Sphantzer
Stoddard
Storey
Tydings
Ward
Weber
Wenstrom
Univ Houston
IBM Global Technology Services
Itron
University of Washington
Frogner Associates, Inc.
Progress Energy
Encari
Pentagon
NAESB Board
DHS
Southern California Edison
PG&E
NRECA
BPA
BPA
Idaho Power
Idaho Falls Power
SCADA Security & Compliance, So. Cal. Edison
PG&E
Univ. Illinois / Trustworthy Cyber Infrastructure for
the Power Grid (TCIPG) Center
Univ. Illinois / Trustworthy Cyber Infrastructure for
the Power Grid (TCIPG) Center
SMUD
Ameren
Benton PUD
Accenture
Net Privateer
Independent Consultant
CVO Electrical Systems
Avista
SAIC
PG&E
InGuardians
Mike Wenstrom Development Partners
Upcoming Activities:
Week of September 26: ● Panel Membership Decided
● Roster Finalized and Published
● Phase 1 Closed
● Operational Kickoff Meeting Scheduled
Week of October 3: ● Identify Source for JAQ
● Begin Documenting Literature Sources
● Prepare Job Classification Report
September 14, 2011 Status Report
Activities completed:
(Format will change)
● Continuing Focused Outreach and Invitations
● Outreach Efforts
● Current Roster of Candidates:
Advisory Board:
Jamey Sample - PG&E
John Allen - IEIA Forum
Bill Hunteman - DOE (Retired)
Joel Garmon - Wake Forest Baptist Medical Center (CISO)
Dr. Emmanuel Hooper - Global Info Intel and Harvard
Officers:
Scott King - Sempra
Justin Searle - InGuardians
Michael Echols - Salt River Project
Erich Gunther - EnerNex
Neil Greenfield - AEP
Members:
Sandeep Agrawal - Neilsoft Limited
Bora Akyol - PNNL
Ron Ambrosio - IBM Research
Balusamy (Balu) Arumugam - Infosys
Andy Bochman - IBM, Smart Grid Security Blog, DOD Energy Blog
Ryan Breed - ERCOT
Jeff Bryner - PGE
Shawn Chandler - Portland General Electric
Ray Cline - Univ. of Houston
Art Conklin - Univ. of Houston
Steve Dougherty - IBM Global Technology Services
Ido Dubrowsky - Itron
Barbara Endicott-Popovsky - University of Washington
Bjorn Frogner - Frogner Associates, Inc.
Ed Goff - Progress Energy
Steven Hanburg - Encari
Maria Hayden - Pentagon
Jesse Hurley - NAESB Board
Lisa Kaiser - DHS
Cliff Maraschino - Southern California Edison
James Mater - PG&E
Craig Miller - NRECA
Brennan O’Brien - BPA
Terry Oliver - BPA
James Pittman - Idaho Power
Mark Reed - Idaho Falls Power
Charles Reilly - SCADA Security & Compliance, So. Cal. Edison
Craig Rosen - PG&E
Scott Saunders - SMUD
Chris Sawall - Ameren
Blake Scherer - Benton PUD
Anthony David Scott - Accenture
Jason Sonon - Net Privateer
Gal Sphantzer - Independent Consultant
Marty Stoddard - CVO Electrical Systems
Clay Storey - Avista
Kevin Tydings - SAIC
Mark Ward - PG&E
Don Weber - InGuardians
● Fully Provisioned Online System for Project Support
Upcoming Activities:
Week of September 12:
● Status Call on Program Management
(Wednesday, September 14 | 12:00 pm Eastern)
● Advisory Board Meeting
Week of September 19:
● Chair and Vice Chair selections will be made and candidates informed this week
● Chair and Vice Chair, with NBISE staff, to review and select candidates for panel membership
● Panel Membership Decided
● Operational Kickoff Meeting
Week of September 26:
● Identify Source for JAQ
● Begin Documenting Literature Sources
● Prepare Job Classification Report
September 7, 2011 Status Report
Activities completed:
(Format will change)
● Continuing Focused Outreach and Invitations
● Outreach Efforts
NESCO Staff Briefing
UNITE Security Directors Webinar Briefing
NAESB Discussion
DOE OE sent outreach note to all electricity sector industry associations
● NBISE is awaiting status on PNNL outreach efforts and standing by to support as necessary
Current Roster of Candidates:
Officers:
Scott King – Sempra
Justin Searle - InGuardians
Terry Oliver - BPA
Gal Sphantzer - Independent Consultant
Mike Mertz - PNM Resource
Ron Ambrosio - IBM Research
Members:
Craig Rosen - PG&E
Tony Suarez - TVA
Chris Sawall - Ameren
Andy Bochman - IBM, Smart Grid Security Blog, DOD Energy Blog
Sandeep Agrawal - Neilsoft Limited
Steve Dougherty - IBM Global Technology Services
Michael Echols - Salt River Project
Phil Slack - FPL
James Mater - QualityLogic
Mark Ward - PG&E
Jeff Bryner - PGE
Shawn Chandler - PGE
Erich Gunther - EnerNex
Scott Saunders - SMUD
Ed Goff - Progress Energy
Ryan Breed - ERCOT
Neil Greenfield - AEP
Don Weber - InGuardians
Lisa Kaiser - DHS
Dr. Emmanuel Hooper - Global Info Intel and Harvard
Cliff Maraschino - Southern California Edison
Charles Reilly - SCADA Security & Compliance, So. Cal. Edison
Bjorn Frogner - Frogner Associates, Inc.
Balusamy Arumugam (Balu) - Infosys
Steven Hanburg - Encari
Jesse Hurley - NAESB Board
James Pittman – Idaho Power
Advisory Board:
Jamey Sample - PG&E
John Allen - IEIA Forum
Bill Hunteman - DOE (Retired)
Jason Sonon - Net Privateer
Joel Garmon - Wake Forest Baptist Medical Center (CISO)
● Fully Provisioned Online System for Project Support
Upcoming Activities:
Week of September 5:
● Status Call on Program Management
(Wednesday, September 7 | 12:00 pm Eastern)
● Establish Advisory Board
● Advisory panel selections will be made and candidates informed early in the week of Sept. 5
● Will hold Welcome Call and engage new Advisory Board members in discussion of selecting Chair and Vice Chair
● Selection of Chair and Vice Chair
Week of September 12:
● Operational Kick-off
Week of September 23:
● Panel Membership Decided
● Identify Source for JAQ
● First Panel Session: Vignettes
September 1, 2011 Status Report
Activities completed:
(Format will change)
● Continuing Focused Outreach and Invitations
● Outreach Efforts
NESCO Staff Briefing
UNITE Security Directors Webinar Briefing
NAESB Discussion
DOE OE sent outreach note to all electricity sector industry associations
● NBISE is awaiting status on PNNL outreach efforts and standing by to support as necessary
Current Roster of Candidates:
Officers:
Scott King – Sempra
Justin Searle - InGuardians
Terry Oliver - BPA
Gal Sphantzer - Independent Consultant
Mike Mertz - PNM Resource
Ron Ambrosio - IBM Research
Members:
Craig Rosen - PG&E
Tony Suarez - TVA
Chris Sawall - Ameren
Andy Bochman - IBM, Smart Grid Security Blog, DOD Energy Blog
Sandeep Agrawal - Neilsoft Limited
Steve Dougherty - IBM Global Technology Services
Michael Echols - Salt River Project
Phil Slack - FPL
James Mater - QualityLogic
Mark Ward - PG&E
Jeff Bryner - PGE
Shawn Chandler - PGE
Erich Gunther - EnerNex
Scott Saunders - SMUD
Ed Goff - Progress Energy
Ryan Breed - ERCOT
Neil Greenfield - AEP
Don Weber - InGuardians
Lisa Kaiser - DHS
Dr. Emmanuel Hooper - Global Info Intel and Harvard
Cliff Maraschino - Southern California Edison
Charles Reilly - SCADA Security & Compliance, So. Cal. Edison
Bjorn Frogner - Frogner Associates, Inc.
Balusamy Arumugam (Balu) - Infosys
Steven Hanburg - Encari
Jesse Hurley - NAESB Board
Advisory Board:
Jamey Sample - PG&E
John Allen - IEIA Forum
Bill Hunteman - DOE (Retired)
Jason Sonon - Net Privateer
Joel Garmon - Wake Forest Baptist Medical Center (CISO)
● Fully Provisioned Online System for Project Support
Upcoming Activities:
Week of September 5:
● Status Call on Program Management
(Wednesday, September 7 | 12:00 pm Eastern)
● Establish Advisory Board
● Advisory panel selections will be made and candidates informed early in the week of Sept. 5
● Will hold Welcome Call and engage new Advisory Board members in discussion of selecting Chair and Vice Chair
● Selection of Chair and Vice Chair
Week of September 12:
● Operational Kick-off
Week of September 23:
● Panel Membership Decided
● Identify Source for JAQ
● First Panel Session: Vignettes