NIST SP 800-61
Handling an Incident
4.2. Using Collected Incident Data
Lessons learned activities should produce a set of objective and subjective data regarding each incident. Over time, the collected incident data should be useful in several capacities. The data, particularly the total hours of involvement and the cost,
may be used to justify additional funding of the incident response team. A study of incident characteristics may indicate systemic security weaknesses and threats, as well as changes in incident trends. This data can be put back into the risk assessment
process, ultimately leading to the selection and implementation of additional controls. Another good use of the data is measuring the success of the incident response team. If incident data is collected and stored properly, it should provide several
measures of the success (or at least the activities) of the incident response team. Incident data can also be collected to determine if a change to incident response capabilities causes a corresponding change in the team's performance (e.g., improvements
in efficiency, reductions in costs). Furthermore, organizations that are required to report incident information will need to collect the necessary data to meet their requirements. See Section 4 for additional information on sharing incident
data with other organizations.
Organizations should focus on collecting data that is actionable, rather than collecting data simply because it is available. For example, counting the number of precursor port scans that occur each week and producing a chart at the end of the year that
shows port scans increased by eight percent is not very helpful and may be quite time-consuming. Absolute numbers are not informative – understanding how they represent threats to the business processes of the organization is what matters. Organizations
should decide what incident data to collect based on reporting requirements and on the expected return on investment from the data (e.g., identifying a new threat and mitigating the related vulnerabilities before they can be exploited). Possible metrics
for incident-related data include:
- Number of Incidents Handled. Handling more incidents is not necessarily better – for example, the number of incidents handled may decrease because of better network and host security controls, not because of negligence by the incident
response team. The number of incidents handled is best taken as a measure of the relative amount of work that the incident response team had to perform, not as a measure of the quality of the team, unless it is considered in the context of other
measures that collectively give an indication of work quality. It is more effective to produce separate incident counts for each incident category. Subcategories also can be used to provide more information. For example, a growing number of incidents
performed by insiders could prompt stronger policy provisions concerning background investigations for personnel and misuse of computing resources and stronger security controls on internal networks (e.g., deploying intrusion detection software
to more internal networks and hosts).
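The per-category and per-subcategory counts described above can be sketched as follows. This is a minimal illustration, not part of SP 800-61; the record fields and category names are assumptions, and in practice the data would come from an incident tracking system.

```python
from collections import Counter

# Hypothetical incident records; field and category names are illustrative.
incidents = [
    {"id": 1, "category": "unauthorized access", "subcategory": "insider"},
    {"id": 2, "category": "malware", "subcategory": "worm"},
    {"id": 3, "category": "unauthorized access", "subcategory": "external"},
    {"id": 4, "category": "unauthorized access", "subcategory": "insider"},
]

# Separate counts per category, then per (category, subcategory) pair,
# so trends such as a growing number of insider incidents stand out.
by_category = Counter(i["category"] for i in incidents)
by_subcategory = Counter((i["category"], i["subcategory"]) for i in incidents)

print(by_category)
print(by_subcategory)
```

Counting at the subcategory level is what makes the data actionable: an 8% rise in total incidents says little, but a doubling of insider incidents points to specific controls.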
- Time Per Incident. For each incident, time can be measured in several ways:
– Total amount of labor spent working on the incident
– Elapsed time from the beginning of the incident to incident discovery, to the initial impact assessment, and to each stage of the incident handling process (e.g., containment, recovery)
– How long it took the incident response team to respond to the initial report of the incident
– How long it took to report the incident to management and, if necessary, appropriate external entities (e.g., US-CERT).
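The elapsed-time measures above can be computed from recorded event timestamps. The sketch below is illustrative only; the event names and timestamp values are assumptions, not fields prescribed by SP 800-61.

```python
from datetime import datetime

# Hypothetical timeline for a single incident; event names are assumptions.
timeline = {
    "incident_start": datetime(2024, 3, 1, 2, 15),
    "discovery":      datetime(2024, 3, 1, 9, 30),
    "initial_report": datetime(2024, 3, 1, 9, 45),
    "team_response":  datetime(2024, 3, 1, 10, 5),
    "containment":    datetime(2024, 3, 1, 14, 0),
    "recovery":       datetime(2024, 3, 2, 11, 0),
}

def elapsed_hours(start_key, end_key):
    """Elapsed time in hours between two recorded events."""
    delta = timeline[end_key] - timeline[start_key]
    return delta.total_seconds() / 3600

metrics = {
    "start_to_discovery":   elapsed_hours("incident_start", "discovery"),
    "report_to_response":   elapsed_hours("initial_report", "team_response"),
    "start_to_containment": elapsed_hours("incident_start", "containment"),
    "start_to_recovery":    elapsed_hours("incident_start", "recovery"),
}
print(metrics)
```

Capturing timestamps consistently for each handling stage is what allows these durations to be compared across incidents and over time.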
- Objective Assessment of Each Incident. The response to an incident that has been resolved can be analyzed to determine how effective it was. The following are examples of performing an objective assessment of an incident:
– Reviewing logs, forms, reports, and other incident documentation for adherence to established incident response policies and procedures
– Identifying which precursors and indicators of the incident were recorded to determine how effectively the incident was logged and identified
– Determining if the incident caused damage before it was detected
– Determining if the actual cause of the incident was identified, and identifying the vector of attack, the vulnerabilities exploited, and the characteristics of the targeted or victimized systems, networks, and applications
– Determining if the incident is a recurrence of a previous incident
– Calculating the estimated monetary damage from the incident (e.g., information and critical business processes negatively affected by the incident)
– Measuring the difference between the initial impact assessment and the final impact assessment (see Section 3.2.6)
– Identifying which measures, if any, could have prevented the incident.
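One of the objective measures above, the difference between the initial and final impact assessments, can be sketched as a per-dimension comparison. The impact dimensions and the 1-5 severity scale below are assumptions for illustration; organizations define their own rating schemes.

```python
def assessment_delta(initial, final):
    """Per-dimension difference between initial and final impact ratings.
    Positive values mean the incident was initially underestimated."""
    return {dim: final[dim] - initial[dim] for dim in final}

# Hypothetical ratings on an assumed 1-5 scale.
initial = {"functional_impact": 2, "information_impact": 1, "recoverability": 2}
final   = {"functional_impact": 4, "information_impact": 3, "recoverability": 2}

print(assessment_delta(initial, final))
```

Consistently large deltas suggest the initial assessment process needs improvement, since early impact estimates drive prioritization and escalation decisions.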
- Subjective Assessment of Each Incident. Incident response team members may be asked to assess their own performance, as well as that of other team members and of the entire team. Another valuable source of input is the owner of an attacked resource, who can indicate whether the incident was handled efficiently and whether the outcome was satisfactory.
Besides using these metrics to measure the incident response team's success, organizations may also find it useful to periodically audit their incident response programs to identify problems and deficiencies that can then be corrected. An incident response audit should evaluate at least the following:
- Incident response policies, plans, and procedures
- Tools and resources
- Team model and structure
- Incident handler training and education
- Incident documentation and reports
- The measures of success discussed earlier in this section.