SECURITY VULNERABILITY CATEGORIES IN MAJOR SOFTWARE SYSTEMS
Omar H. Alhazmi, Sung-Whan Woo, Yashwant K. Malaiya
Colorado State University
omar|woo|malaiya@cs.colostate.edu
ABSTRACT
The security vulnerabilities in software systems can be categorized by either cause or severity. Several software vulnerability datasets for major operating systems and web servers are examined. The goal is to identify the attributes of each category that can potentially be exploited for enhancing security. Linking a vulnerability type to a severity level can help us prioritize testing and develop more effective testing plans. Instead of an ad hoc security testing approach, testing can be directed to the vulnerabilities with higher risk. Modeling vulnerabilities by category can be used to improve the post-release maintenance and patching processes by providing estimates of the number of vulnerabilities of individual types and their severity levels. We also show that it is possible to apply vulnerability discovery models to individual categories, which can project the types of vulnerabilities to be expected in the near future.
KEYWORDS: Security, Vulnerability, Model,
Taxonomy.
1. Introduction
Software vulnerabilities are the major security threats
to networked software systems. A vulnerability is defined
as “a defect which enables an attacker to bypass security
measures” [1]. The vulnerabilities found are disclosed by
the finders using some of the common reporting
mechanisms available. The databases for the
vulnerabilities are maintained by organizations such as
National Vulnerabilities Database [2], MITRE
Corporation [3], Security Focus [4] and individual
software developers. The databases also record some of
the significant attributes of the vulnerabilities.
While there have been many studies attempting to identify causes of vulnerabilities and potential countermeasures, the development of systematic quantitative methods to characterize security has begun only recently. There has been an extensive debate comparing the security attributes of open-source and commercial software [5]. This debate demonstrates that rigorous quantitative modeling methods are needed for a careful interpretation of the data. The likelihood of a system being compromised depends on the probability that a newly discovered vulnerability will be exploited. Thus, the risk is better represented by the not-yet-discovered vulnerabilities and the vulnerability discovery rate rather than by the vulnerabilities that have been discovered in the past and remedied by patches. Possible approaches for a quantitative perspective on exploitation trends are discussed in [6], [7]. Probabilistic examinations of intrusions have been presented by several researchers [8], [9]. The vulnerability discovery process in operating systems has only recently been examined by Rescorla [10] and by Alhazmi and Malaiya [11], [12], [13].
It has been estimated that 10% of vulnerabilities cause 90% of exposures [14]. The risk posed by some vulnerabilities is minor, while for others it is quite significant. The exploitations of some vulnerabilities have been widely reported. The Code Red worm [15], which exploited a vulnerability in IIS (described in Microsoft Security Bulletin MS01-033, June 18, 2001), appeared on July 13, 2001, and soon spread worldwide among unpatched systems; it is an example of the exploitation of a very critical vulnerability.
The goal of this work is to assess the vulnerabilities of software systems using some of their main attributes, through two approaches: modeling and analysis. Modeling can be used to predict future vulnerabilities and their attributes. Analysis, on the other hand, can link vulnerability categories to severity levels, so that testers can design test cases targeting those vulnerabilities that are likely to be more critical. The analysis examines the distribution of vulnerabilities across categories and identifies the mistakes that have led to more vulnerabilities of a specific type or severity; this distribution can lead to better diagnosis of development flaws and mistakes.
Some studies have examined vulnerabilities and their discovery rates [11], [12], [13]. These studies treat vulnerabilities as equally important entities; in this study we expand the perspective to look into specific vulnerability categories and severity levels.
For categorizing vulnerabilities by type, several proposed taxonomies have aimed toward a universal taxonomy of vulnerabilities [16], [17]. These taxonomies have focused on goals such as mutual exclusiveness, exhaustiveness, and repeatability, qualities that would yield an unambiguous taxonomy. Vulnerability taxonomy is still an evolving area of research; nevertheless, one taxonomy has gained recognition from organizations such as the MITRE Corporation [3] and the National Vulnerability Database (NVD) managed by the National Institute of Standards and Technology (NIST), and has been shown to be acceptable [2].
The other important classification is by severity. Here also, many attempts have been made to provide objective classifications; one heuristic-based classification that considers several attributes of a vulnerability is the Common Vulnerability Scoring System (CVSS). CVSS has been adopted by the NIST NVD and provides a standard for vulnerability severity classification; it gives each vulnerability a score from 1 to 10, where a higher number indicates a more severe vulnerability [24].
2. Vulnerabilities Discovery Models
Use of reliability growth models is now common in software reliability engineering [18]; software reliability growth models (SRGMs) show that as bugs are found and removed, fewer bugs remain [19]. Therefore, the bug-finding rate gradually drops and the cumulative number of bugs eventually approaches saturation. Such growth models are used to determine when a software system is ready to be released and what future failure rates can be expected.
Table 1: The data sets used

System          Lines of code (millions)   OS or software type       Known vulnerabilities   Vulnerability density (per Ksloc)   Release date
Windows XP      40                         Commercial client         173                     0.0043                              Oct 2001
Windows 2000    35                         Commercial server         243                     0.00694                             Feb 2000
R H Linux 6.2   17                         Open-source server        118                     0.00694                             Mar 2000
Fedora          81                         Open-source server        143                     0.0019                              Nov 2003
IIS             N/A                        Commercial web server     121                     N/A                                 1995
Apache          0.227                      Open-source web server    94                      0.414                               1995
Vulnerabilities are a special class of defects that can permit circumvention of security measures. Several vulnerability discovery models were recently proposed by Anderson [5], Rescorla [10], and Alhazmi and Malaiya [11]. The applicability of these models to several operating systems was examined in [20]. The results show that while some of the models fit the data for most operating systems, others do not fit well or provide a good fit only during a specific phase.
Here, we investigate the applicability of a vulnerability discovery model to operating system and web server vulnerabilities classified by category or severity level. The model used is the time-based model proposed by Alhazmi and Malaiya [11]. This model has been found to fit datasets for several of the major Windows and Linux operating systems and the two major web servers [21], as determined by goodness of fit and other measures.
This model, referred to as the Alhazmi-Malaiya
Logistic model (AML), assumes that the rate of change of
the cumulative number of vulnerabilities is governed by
two factors, as given in Equation 1 below [11]. The first
factor declines as the number of remaining undetected
vulnerabilities declines. The other factor increases with
the time needed to take into account the rising share of the
installed base. The saturation effect is modeled by the first
factor. While it is possible to obtain a more complex
model, this model provides a good fit to the data, as
shown below. Let us assume that the vulnerability discovery rate is given by the differential equation

    \frac{d\Omega}{dt} = A\,\Omega\,(B - \Omega),        (1)

where Ω is the cumulative number of vulnerabilities, t is the calendar time, and initially t = 0. A and B are empirical constants determined from the recorded data. By solving the differential equation, we obtain

    \Omega(t) = \frac{B}{B\,C\,e^{-ABt} + 1},             (2)

where C is a constant introduced while solving Equation 1. Equation 2 gives us a three-parameter model given by the logistic function. In Equation 2, as t approaches infinity, Ω approaches B. Thus, the parameter B represents the total number of accumulated vulnerabilities that will eventually be found.
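The step from Equation 1 to Equation 2 follows by separation of variables and partial fractions; a brief sketch, using the same symbols, with the integration constant k absorbed into C:

```latex
\frac{d\Omega}{\Omega(B-\Omega)} = A\,dt
\;\Rightarrow\; \frac{1}{B}\left(\frac{1}{\Omega}+\frac{1}{B-\Omega}\right)d\Omega = A\,dt
\;\Rightarrow\; \frac{1}{B}\ln\frac{\Omega}{B-\Omega} = At + k
\;\Rightarrow\; \frac{\Omega}{B-\Omega} = e^{ABt+Bk}
\;\Rightarrow\; \Omega(t) = \frac{B}{1+e^{-ABt-Bk}} = \frac{B}{B\,C\,e^{-ABt}+1},
\qquad C = \frac{e^{-Bk}}{B}.
```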
AML shows that the vulnerability discovery rate increases at the beginning, reaches a steady rate, and then starts declining. Consequently, the cumulative number of vulnerabilities shows an increasing rate at the beginning as the system begins to attract an increasing share of the installed base. After some time, a steady rate of vulnerability finding yields a linear curve. Eventually, as the vulnerability discovery rate begins to drop, there is saturation due both to reduced attention and to a smaller pool of remaining vulnerabilities.
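To make the use of Equation 2 concrete, the following sketch fits the AML model to monthly cumulative vulnerability counts using nonlinear least squares. The month grid, the counts, and the starting guesses are hypothetical placeholders rather than the paper's datasets.

```python
# Illustrative sketch (not from the paper): fitting the AML model of Equation 2,
# Omega(t) = B / (B*C*exp(-A*B*t) + 1), to monthly cumulative vulnerability counts.
# The counts below are made-up placeholders; a real analysis would use NVD data.
import numpy as np
from scipy.optimize import curve_fit

def aml(t, A, B, C):
    """Cumulative vulnerabilities at time t (months since release), Equation 2."""
    return B / (B * C * np.exp(-A * B * t) + 1.0)

months = np.arange(36)  # t = 0 at the release date
cumulative = np.array([0, 1, 2, 4, 6, 9, 12, 16, 20, 25, 29, 34,
                       38, 43, 47, 51, 55, 58, 61, 64, 66, 68, 70, 71,
                       73, 74, 75, 76, 77, 77, 78, 78, 79, 79, 80, 80])

# Nonlinear least-squares fit of the three parameters; p0 is a rough initial
# guess with B set near the largest observed count.
(A, B, C), _ = curve_fit(aml, months, cumulative,
                         p0=[0.002, cumulative[-1], 1.0], maxfev=10000)
print(f"A={A:.5f}, B={B:.1f} (estimated eventual total), C={C:.3f}")
```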
Vulnerabilities are usually reported against calendar time, because it is easy to record vulnerabilities and link them to the time of discovery. This, however, does not take into consideration the changes occurring in the environment during the lifetime of the system. A major environmental factor is the number of installations, which depends on the share of the installed base of the specific system. It is much more rewarding to exploit vulnerabilities that exist in a large number of computers. Hence, it can be expected that a larger share of the effort going into the discovery of vulnerabilities, both in-house and external, would go toward a system with a larger installed base [11].
3. Vulnerabilities by Type and Severity
Classification
In this section we review the taxonomy that will be used for our analysis; we look into how vulnerabilities are classified by category and by severity level.
3.1 Modeling vulnerabilities by category
Vulnerability taxonomy is still an evolving area of research. Several taxonomies have been proposed [16], [17], [22], [23]; an ideal taxonomy should have such desirable properties as mutual exclusiveness, clear and unique definitions, repeatability, and coverage of all software vulnerabilities. These characteristics would provide an unambiguous taxonomy. Meanwhile, there is a taxonomy that has gained recognition from organizations such as the MITRE Corporation and the NIST NVD [2] and has been shown to be acceptable.
Vulnerabilities can be classified using schemes based
on cause, severity, impact and source, etc. In this analysis,
we use the classification scheme employed by the
National Vulnerability Database of the National Institute
of Standards and Technology. This classification is based
on the causes of vulnerabilities. The eight classes are as
follows [2], [4]:
1. Input Validation Error (IVE) (Boundary Condition Error (BCE), Buffer Overflow (BOF)): Such vulnerabilities include failure to verify incorrect input and read/write operations involving an invalid memory address.
2. Access Validation Error (AVE): These vulnerabilities
cause failure in enforcing the correct privilege for a user.
3. Exceptional Condition Error Handling (ECHE):
These vulnerabilities arise due to failures in responding to
unexpected data or conditions.
4. Environmental Error (EE): These vulnerabilities are
triggered by specific conditions of the computational
environment.
5. Configuration Error (CE): These vulnerabilities result
from improper system settings.
6. Race Condition Error (RC): These are caused by the
improper serialization of the sequences of processes.
7. Design Error (DE): These are caused by improper
design of the software structure.
8. Others: Includes vulnerabilities that do not belong to
the types listed above, sometimes referred to as
nonstandard.
Unfortunately, the eight classes are not completely
mutually exclusive. A proportion of vulnerabilities can
belong to more than one category. Because a vulnerability
can belong to more than one category, the summation of
all categories for a single software system may add up to
more than the total number of vulnerabilities.
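As a concrete, hypothetical illustration of the first class, the following sketch shows an input validation error in a small file-serving routine: the user-supplied name is not checked, so a path such as "../../etc/passwd" escapes the intended directory, while the second version validates the resolved path. This example is ours and is not drawn from the systems studied here.

```python
# Hypothetical Input Validation Error (IVE) illustration, not taken from the
# datasets in this paper: the file name arrives from an untrusted user.
from pathlib import Path

DOC_ROOT = Path("/srv/app/public").resolve()

def read_public_file_vulnerable(user_supplied_name: str) -> bytes:
    # Vulnerable: the input is never validated, so "../.." escapes DOC_ROOT.
    return (DOC_ROOT / user_supplied_name).read_bytes()

def read_public_file_checked(user_supplied_name: str) -> bytes:
    # Fixed: reject any name whose resolved path lies outside DOC_ROOT.
    target = (DOC_ROOT / user_supplied_name).resolve()
    if not target.is_relative_to(DOC_ROOT):
        raise ValueError("invalid file name")
    return target.read_bytes()
```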
3.2 Modeling Vulnerabilities by severity
The other important classification is by severity. Here also, many attempts have been made to provide objective classifications; one heuristic-based classification that considers several attributes of a vulnerability is the Common Vulnerability Scoring System (CVSS).
CVSS has been adopted by the NIST NVD [2] and provides a standard for vulnerability severity classification. It gives each vulnerability a score from 1 to 10, where a higher number indicates a more severe vulnerability [24]; the range 1-3.99 corresponds to low severity, 4-6.99 to medium severity, and 7-10 to high severity. The National Vulnerability Database of the National Institute of Standards and Technology describes the severity levels as follows [2]:
1. High Severity: This makes it possible for a
remote attacker to violate the security protection of a
system (i.e., gain some sort of user, root or application
account), or permits a local attack that gains complete
control of a system, or if it is important enough to have an
associated CERT/CC advisory or US-CERT alert.
2. Medium Severity: This does not meet the
definition of either “high” or “low” severity.
3. Low Severity: The vulnerability typically does not yield valuable information or control over a system, but rather gives the attacker information that may help him find and exploit other vulnerabilities, or the vulnerability is considered inconsequential for most organizations.
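The score-to-severity binning quoted above can be written down directly; a minimal sketch using the NVD thresholds (the function name is ours, not part of CVSS or the NVD):

```python
# Sketch of the NVD binning described above: CVSS scores of 1-3.99 map to low,
# 4-6.99 to medium, and 7-10 to high severity.
def severity_level(cvss_score: float) -> str:
    if not 0.0 <= cvss_score <= 10.0:
        raise ValueError("CVSS scores fall in the range 0-10")
    if cvss_score >= 7.0:
        return "high"
    if cvss_score >= 4.0:
        return "medium"
    return "low"

assert severity_level(9.3) == "high"
assert severity_level(5.0) == "medium"
assert severity_level(2.1) == "low"
```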
4. Modeling Vulnerabilities by Category and
Severity
In this section we examine the applicability of the Alhazmi-Malaiya vulnerability discovery model to the datasets for individual vulnerability types and severity levels. If the model is applicable to individual categories, estimates of future vulnerabilities of a specific category become possible, giving estimators more detailed vulnerability projections. Furthermore, if the model applies to severity levels, it may also be possible to predict the severity levels of future vulnerabilities.
Figure 1 and Figure 2 show how the model fits the datasets for Windows XP and Windows 2000 for the major individual vulnerability categories; the χ² goodness-of-fit test results are given in Tables 2 and 3. In this goodness-of-fit test, we have examined Access Validation Error (AVE), Exceptional Condition Handling Error (ECHE), Buffer Overflow (BOF), and Design Error (DE) vulnerabilities. The counts for the other categories are not statistically significant; therefore, we cannot accurately test the goodness of fit for those vulnerabilities (e.g., Race Condition, Environmental Error).
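A sketch of the χ² goodness-of-fit computation reported in Tables 2-6, under the simplifying assumption that each periodic cumulative count is compared with the fitted AML value at the same time point (the degrees-of-freedom convention and the inputs are ours, not taken from the paper):

```python
# Illustrative chi-square goodness-of-fit check for a fitted AML curve:
# observed = cumulative vulnerability counts, expected = AML values at the same
# time points computed from the fitted parameters (see the fitting sketch above).
import numpy as np
from scipy.stats import chi2

def chi_square_fit(observed, expected, n_params=3, alpha=0.05):
    observed = np.asarray(observed, dtype=float)
    expected = np.asarray(expected, dtype=float)
    mask = expected > 0                       # avoid dividing by zero early on
    statistic = np.sum((observed[mask] - expected[mask]) ** 2 / expected[mask])
    dof = mask.sum() - n_params - 1           # one common degrees-of-freedom choice
    critical = chi2.ppf(1.0 - alpha, dof)     # critical value at significance alpha
    p_value = chi2.sf(statistic, dof)         # P(X >= statistic)
    return statistic, critical, p_value

# The fit is not rejected when the statistic stays below the critical value
# (the P-values close to 1 in Tables 2-6 indicate exactly that).
```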
Table 2. Windows XP goodness-of-fit results (χ² critical = 67.50)

Vulnerability type   A         B        C       P-value   χ²
AVE                  0.00103   63.595   2.011   1         17.70871
ECHE                 0.00070   92.864   0.674   1         12.10664
BOF                  0.00080   93.578   0.348   1         14.82454
DE                   0.00285   28.984   0.195   1         13.08505
Figure 1. The plots for individual vulnerability category data (AVE, ECHE, BOF, DE) fitted to the model for Windows XP (cumulative vulnerabilities, Oct 2001 to Oct 2005).
Figure 2. The plots for individual vulnerability category data (AVE, ECHE, BOF, DE) fitted to the model for Windows 2000 (cumulative vulnerabilities, Feb 2000 to Nov 2005).
Figure 3. Plots for individual vulnerability category data (AVE, ECHE, BOF, DE) fitted to the model for Red Hat Fedora (cumulative vulnerabilities, Dec 2003 to Jun 2006).
Figure 3 above shows how the AML model fits the individual vulnerability categories for Red Hat Fedora. Table 4 shows the results when the χ² goodness-of-fit test is applied; it shows a significant fit for all vulnerability categories examined. Similarly, here we have only examined vulnerabilities with statistically significant data.
Table 3. Windows 2000 goodness-of-fit results (χ² critical = 92.81)

Vulnerability type   A        B        C       P-value   χ²
AVE                  0.0021   23.181   0.279   1         25.552
ECHE                 0.0010   51.954   0.189   0.999     38.630
BOF                  0.0006   93.862   0.495   1         19.620
DE                   0.0017   54.358   0.343   0.994     45.141
Table 4. Red Hat Fedora goodness-of-fit results (χ² critical = 44.99)

Vulnerability type   A        B        C        P-value   χ²
AVE                  0.1001   6.3065   172.35   1         3.502
ECHE                 0.0064   25.720   0.927    0.905     21.25
BOF                  0.0099   42.190   3.778    0.999     12.17
DE                   0.0075   37.438   2.1773   0.988     15.86
Figure 4 and Figure 5 show how the AML model fits the data separated by severity level. The figures clearly show s-shaped curves with different parameter values, indicating that low severity vulnerabilities are discovered at a faster rate; this also opens the possibility of predicting future vulnerabilities and their severity levels. Table 5 and Table 6 show that the fit was significant at α = 0.05, with a P-value > 0.999 and the χ² value significantly lower than the critical χ² in both tables.
Figure 4. Apache vulnerabilities by severity level (high, low) fitted to the AML model (cumulative vulnerabilities, Mar 1996 to Mar 2006).
Figure 5. IIS vulnerabilities by severity level (high, low) fitted to the AML model (cumulative vulnerabilities, Jan 1997 to Jan 2006).
Table 5. IIS goodness-of-fit results for data classified by severity (χ² critical = 133.2)

Severity   A         B      C       P-value   χ²
High       0.00176   38     0.999   1         28.2
Low        0.00127   77.9   1.21    0.999     53.5
Table 6. Apache goodness-of-fit results for data classified by severity (χ² critical = 144.3)

Severity   A         B       C      P-value   χ²
High       0.00156   27.00   1.00   1         42.1
Low        0.00248   18.00   1.76   1         15.7
5. Vulnerabilities Type and Severity
The goal of this analysis is to determine which vulnerability categories are more severe, so that testers can target vulnerabilities of high severity. We can identify the most severe vulnerability categories, so testers can design tests targeting a specific category of vulnerabilities.
The analysis includes a diverse group of software systems: Windows 98, Windows XP, Windows 2000, Red Hat Fedora, and the Apache and IIS web servers. The vulnerability data are from the National Vulnerability Database maintained by NIST. The market share data from Netcraft [25] were used.
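The type-versus-severity counts in Tables 7-11 amount to a cross-tabulation of per-vulnerability records; a minimal sketch of that bookkeeping follows (the record format and sample entries are hypothetical, not the actual NVD export):

```python
# Sketch: cross-tabulating vulnerability records by severity and category,
# as in Tables 7-11. The records below are hypothetical placeholders; a real
# analysis would parse the NVD entries for the product under study.
from collections import Counter

records = [
    {"category": "IVE-BOF", "severity": "High"},
    {"category": "ECHE",    "severity": "Low"},
    {"category": "DE",      "severity": "High"},
    {"category": "ECHE",    "severity": "Low"},
    {"category": "CE",      "severity": "Medium"},
]

counts = Counter((r["severity"], r["category"]) for r in records)
categories = sorted({r["category"] for r in records})

# Print one row per severity level with a row total, mirroring Tables 7-11.
for severity in ("High", "Medium", "Low"):
    row = {cat: counts.get((severity, cat), 0) for cat in categories}
    print(f"{severity:>6}: " +
          "  ".join(f"{cat}={n}" for cat, n in row.items()) +
          f"  Total={sum(row.values())}")
```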
The data show that high severity vulnerabilities account for about half of all vulnerabilities, making them the most common severity class; a significant number of the high severity vulnerabilities are buffer overflows, followed by design errors (in Windows 2000) and boundary condition errors. Boundary condition errors (IVE-BCE) are of high severity only in the Windows systems (see Tables 7 and 8), while this is not the case for Fedora, IIS and Apache (see Tables 9-11); this possibly indicates a common flaw in Windows regarding boundary testing.
Tables 7 to 11 show a common observation: Exceptional Condition Handling Error (ECHE) vulnerabilities are associated with low severity, and this holds in all of the examined datasets. The tables also show that Race Condition, Environmental Error and Configuration Error vulnerabilities are very few, which makes it hard to link them to a particular severity.
Table 10 below shows that only 35 Apache vulnerabilities are of high severity; about half of them are input validation errors, including some buffer overflows and general input validation errors. Design errors account for 23 vulnerabilities in total, with only 5 of high severity.
Table 9 shows the Red Hat Fedora vulnerabilities: about 41.3% of the vulnerabilities are of high severity, and the majority of these are buffer overflows. Here we can observe a similarity with both Windows systems in the dominance of buffer overflow and other input validation errors; this may be due to the use of the C programming language, which lacks automatic memory management and bounds checking. Furthermore, a significant number of high severity vulnerabilities (15) belong to the design error category. Exceptional condition handling errors are mostly low severity vulnerabilities, similar to the Windows systems.
Table 7. Vulnerability type vs. severity for Windows 2000

Severity   AVE   DE   ECHE   IVE-BOF   IVE-BCE   IVE-other   RC   EE   CE   Other   Total
High       10    29   12     52        18        7           1    2    4    1       135
Medium     6     15   2      5         0         2           0    0    2    1       33
Low        6     26   32     9         3         16          1    2    3    1       100
Total      22    70   46     66        21        25          2    4    9    3       268
Table 8. Vulnerability type vs. severity for Windows XP

Severity   AVE   DE   ECHE   IVE-BOF   IVE-BCE   IVE-other   RC   EE   CE   Other   Total
High       8     11   7      47        16        5           1    1    0    2       98
Medium     0     4    2      5         0         3           0    0    0    2       16
Low        3     15   19     4         1         12          1    1    2    0       58
Total      11    30   28     56        17        20          2    2    2    4       172
Table 9. Vulnerability type vs. severity for Red Hat Fedora

Severity   AVE   DE   ECHE   IVE-BOF   IVE-BCE   IVE-other   RC   EE   CE   Other   Total
High       2     15   4      33        6         9           3    0    1    1       74
Medium     1     4    0      3         2         2           0    0    0    1       13
Low        3     30   19     9         9         17          0    2    1    2       92
Total      6     49   23     45        17        28          3    2    2    4       179
Table 11 below shows that 85 of the IIS vulnerabilities are of low severity while 41 are of high severity. Most high severity vulnerabilities belong to the input validation error category, and most of those are buffer overflows. Design error vulnerabilities account for 26 vulnerabilities, only ten of them highly severe, pointing to some problems with the design of IIS. However, IIS has matured over the years and has not shown new vulnerabilities in some time.
Table 10. Vulnerability type vs. severity for Apache

Severity   AVE   DE   ECHE   IVE-BOF   IVE-BCE   IVE-other   RC   EE   CE   Other   Total
High       3     5    3      7         1         10          0    3    2    1       35
Medium     0     1    1      3         0         0           1    0    0    0       6
Low        3     17   12     3         1         14          1    1    12   4       68
Total      6     23   16     13        2         24          2    4    14   5       109
Table 11. Vulnerability type vs. severity for IIS

Severity   AVE   DE   ECHE   IVE-BOF   IVE-BCE   IVE-other   RC   EE   CE   Other   Total
High       3     10   2      13        1         8           0    0    3    1       41
Medium     0     1    0      1         0         0           0    0    1    0       3
Low        14    15   13     4         8         22          1    3    2    3       85
Total      17    26   15     18        9         30          1    3    6    4       129
From Table 10 and Table 11 the distributions of the
severities of the Apache and IIS vulnerabilities show
similarity. About 60% of total vulnerabilities have low
severity, followed by about 30% with high severity, with
very few vulnerabilities of medium severity.
Table 10 and Table 11 illustrate how severity level correlates with error classification. It is noticeable that exceptional condition handling errors constituted the majority among low severity vulnerabilities for both Apache and IIS. In IIS, buffer overflow vulnerabilities are associated with high severity, while a relatively smaller fraction of exceptional condition errors are of high severity; the exceptional condition errors tend to be among the vulnerabilities with low severity. For IIS, most configuration errors are of medium severity. Tables 7-11 show that input validation errors other than buffer overflow and boundary condition errors are likely to be of low severity.
7. Conclusions
In the past, security vulnerabilities have often been studied either as individual vulnerabilities or as an aggregate number. This paper examines categories of vulnerabilities. The results show that the individual vulnerability categories and severity levels comply with the AML vulnerability discovery model. This work can be extended by examining the prediction capabilities of the model for individual categories and severity levels. Such a prediction capability can be included in risk assessment applications, similar to the application suggested by Sahinoglu in [26], where many factors, including estimates of future vulnerabilities, are used. For some categories and severity levels, the number of vulnerabilities is statistically insignificant and thus not suitable for statistical modeling.
The category-severity analysis was able to link buffer overflows to high severity vulnerabilities, while exceptional condition handling errors are linked to low severity vulnerabilities. Input validation errors other than buffer overflow and boundary condition errors are also found to be linked to low severity vulnerabilities.
Design error vulnerabilities were found to be a significant category, of which a significant proportion is usually of high severity, indicating a need to take the past mistakes behind those vulnerabilities into consideration when designing new software systems; integrating security design into the lifecycle of a software system has become more important, as suggested by Seacord in [27].
Tracking a vulnerability profile can be a good way to learn from past mistakes and highlight the weaknesses of the software, pointing to mistakes and practices that were responsible for the introduction of some vulnerabilities. Further research can add more vulnerability attributes to identify possible patterns.
References
[1] Schultz, E. E., Brown, D. S., and Longstaff, T. A. Responding to computer security incidents. Lawrence Livermore National Laboratory (July 1990).
[2] National Vulnerability Database, http://nvd.nist.gov/.
[3] MITRE Corp, Common Vulnerabilities and
Exposures, http://www.cve.mitre.org/.
[4] SecurityFocus, http://www.securityfocus.com/.
[5] Anderson, R. Security in open versus closed systems: the dance of Boltzmann, Coase and Moore. In Conf. on Open Source Software: Economics, Law and Policy (2002), 1–15.
[6] Brocklehurst, S., Littlewood, B., Olovsson, T., and Jonsson, E. On measurement of operational security. Proc. 9th Annual IEEE Conference on Computer Assurance (1994), 257–266.
[7] Hallberg, J., Hanstad, A., and Peterson, M. A
framework for system security assessment. Proc. 2001
IEEE Symposium on Security and Privacy (May 2001),
214–229.
[8] Browne, H. K., Arbaugh, W. A., McHugh, J., and
Fithen, W. L. A trend analysis of exploitations. In IEEE
Symposium on Security and Privacy (2001), 214–229.
[9] Madan, B. B., Goseva-Popstojanova, K.,
Vaidyanathan, K., and Trivedi, K. S. A method for
modeling and quantifying the security attributes of
intrusion tolerant systems. Perform. Eval. 56, 1-4 (2004),
167–186.
[10] Rescorla, E. Is finding security holes a good idea?
IEEE Security and Privacy 03, 1 (2005), 14–19.
[11] Alhazmi, O. H., and Malaiya, Y. K. Quantitative
vulnerability assessment of system software. Proc.
Annual Reliability & Maintainability Symp. (Jan. 2005),
615–620.
[12] Alhazmi, O. H., Malaiya, Y. K., and Ray, I. Security
vulnerabilities in software systems: A quantitative
perspective. Proc. Ann. IFIP WG11.3 Working
Conference on Data and Information Security (Aug.
2005), 281–294.
[13] Alhazmi, O. H., and Malaiya, Y. K. Modeling the vulnerability discovery process. Proc. 16th Intl. Symp. on Software Reliability Engineering (Nov. 2005), 129–138.
[14] Qualys, The Laws of vulnerabilities,
http://www.qualys.com/docs/laws_of_vulnerabilities.pdf.
[15] Moore, D., Shannon, C., and Claffy, K. C. Code-red:
a case study on the spread and victims of an internet
worm. In Internet Measurement Workshop (2002),
pp. 273–284.
[16] Aslam, T., and Spafford E. H. A taxonomy of security
faults. Technical report, Carnegie Mellon, 1996.
[17] Seacord, C. R. and Householder, A. D. A structured
approach to classifying vulnerabilities. Technical Report
CMU/SEI-2005-TN-003, Carnegie Mellon, 2005.
[18] Musa, J. Software Reliability Engineering. McGraw-
Hill, 1999.
[19] Lyu, M. R. Handbook of Software Reliability.
McGraw-Hill, 1995.
[20] Alhazmi, O. H., and Malaiya, Y. K. Prediction
capability of vulnerability discovery process. Proc.
Reliability and Maintainability Symposium (Jan. 2006).
[21] Woo, S.-W., Alhazmi, O. H., and Malaiya, Y. K. Assessing vulnerabilities in Apache and IIS HTTP servers. Proc. 2nd IEEE Intl. Symp. on Dependable, Autonomic and Secure Computing (DASC'06), Indianapolis, USA (Sept. 29-Oct. 1, 2006).
[22] Bishop, M. Vulnerability analysis: An extended
abstract. Proc. Second International Symp. on Recent
Advances in Intrusion Detection (Sept. 1999), 125-136.
[23] Landwehr, C. E., Bull, A. R., McDermott, J. P., and
Choi, W. S. A taxonomy of computer program security
flaws. ACM Comput. Surv. 26, 3 (1994), 211–254.
[24] Common Vulnerabilities Scoring System,
http://www.first.org/cvss.
[25] Netcraft,. http://news.netcraft.com/.
[26] Sahinoglu, M. Quantitative risk assessment for
dependent vulnerabilities. Proc. Reliability and
Maintainability Symposium (Jan. 2006), 82-85.
[27] Seacord, R. Secure Coding in C and C++. Addison-Wesley, 2005.