Test Concept Development for the Ballistic Missile Defense System:
Test and Evaluation of Weapon System Effectiveness in a Limited-Data Environment, Part II
2011 Joint Statistical Meetings
Jasmina Marsh, Carl Gaither III, Dawn Loper
Michael Luhman, Project Leader
Institute for Defense Analyses
4850 Mark Center Drive
Alexandria, VA 22311
www.ida.org
BMDS Testing Options

• Individually test all battlespace parameters (closing velocity, etc.)
  - Results are the easiest to interpret
  - Not feasible due to cost, schedule, and logistical constraints
• Test multiple battlespace parameters simultaneously
  - Not all combinations of parameters are operationally realistic
  - The number of tests can still be too large
• Operationally realistic scenario-based testing
  - Test the most likely scenarios first
  - Supplement these tests with additional test data if available

BMDS testing is complex, expensive, and geographically challenging.
Motivation and Outline

"The scale, complexity, cost and safety associated with testing the missile defense system constitute a unique challenge for MDA (Missile Defense Agency), test agencies and other oversight organizations. This challenge is heightened by the fact that missile defense assets are developed, produced, and fielded concurrently."
- Statement of Paul Francis, Director, Acquisition and Sourcing Management, Government Accountability Office, 25 February 2009

1) Test concepts based on operationally realistic scenarios
2) Statistical summaries of scenarios for developing test program features
3) Sets of tests: end-to-end vs. partial?
Operational Realism Criteria
1) Test concepts based on operationally realistic scenarios

A list of high-level criteria that can be applied to any test plan, with the goal of injecting operational realism into test scenarios:

1. Operational System Hardware and Software: modified to support mandatory flight safety and data collection requirements
2. Threat-Representative Target: consistent with best intelligence estimates
3. Realistic Warfighter Participation: consistent with real-world scenarios
4. Realistic Engagement: complete and continuous engagement sequence
5. Realistic Mission Lay-Down: consistent with best intelligence estimates
6. Realistic Physical Environment: consistent with the physical environment within which the system will be required to operate
7. Battlespace Representative: consistent with the operational battlespace
Operational Realism Criteria: Expanded Look
1) Test concepts based on operationally realistic scenarios

1. Operational System Hardware and Software
2. Threat-Representative Target
3. Realistic Warfighter Participation
4. Realistic Engagement
5. Realistic Mission Lay-Down
6. Realistic Physical Environment
7. Battlespace Representative

Battlespace parameters:
• Interceptor dynamic pressure/heating
• Interceptor lateral acceleration
• Kill vehicle/kinetic warhead divert
• Cross range
• Interceptor (burnout) velocity
• Acquisition range
• Engagement timeline
• Crossing angle
• Interceptor flyout range
• Intercept altitude
• Closing velocity
BMDS Test Range
1) Test concepts based on operationally realistic scenarios

Additional complication: a test scenario must be executable using the current BMDS test range assets.

BMDS Pacific test range assets not shown here:
• Reagan Test Site, Kwajalein Atoll
• Pacific Missile Range Facility
• Vandenberg AFB interceptor launch site
• Beale UEWR radar
• Other mobile and overhead sensors
Building Operationally Realistic Scenarios
2) Statistical summaries of scenarios for developing test program features

To build realistic scenarios, several pieces of information are required:
• Launch points
• Threat information (type, number, trajectory)
• Sensor location
• Weapon location
• Interceptor launch points
• Impact points

Sample trajectories from North Korea to the U.S. were generated using the commercially available Satellite Toolkit software, followed by a qualitative assessment of the trajectories to build a list of features of realistic engagements. One way to organize these inputs is sketched below.
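These inputs can be collected into a simple scenario record. The following is a minimal sketch in Python; the class and field names (and all coordinate values) are illustrative assumptions, not the authors' actual data schema:

```python
from dataclasses import dataclass

# Hypothetical scenario record; names and values are illustrative only.
@dataclass
class Threat:
    threat_type: str      # e.g., "ICBM-class"
    count: int            # number of missiles in the raid
    trajectory_id: str    # key into a trajectory database (e.g., STK output)

@dataclass
class Scenario:
    launch_points: list             # threat launch points (lat, lon)
    threats: list                   # Threat records (type, number, trajectory)
    sensor_locations: list          # radar/sensor sites (lat, lon)
    weapon_locations: list          # interceptor battery sites (lat, lon)
    interceptor_launch_points: list
    impact_points: list             # defended-area aim points (lat, lon)

# Example (all coordinates notional):
scenario = Scenario(
    launch_points=[(40.0, 128.0)],
    threats=[Threat("ICBM-class", count=2, trajectory_id="traj_0042")],
    sensor_locations=[(42.0, 143.0)],
    weapon_locations=[(64.0, -147.0)],
    interceptor_launch_points=[(64.0, -147.0)],
    impact_points=[(61.2, -149.9)],
)
```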
Sample Features
2) Statistical summaries of scenarios for developing test program features

• Multiple threat trajectories fly perpendicular to the ground track of the sensor boresight, sometimes at its upper detection ranges.
• Sensors must perform effective split-track processing for separation/deployment events in a multi-threat environment.
• Multiple sensor face crossings can be expected.
• Threat trajectories will be observed by multiple land- and/or sea-based sensors, and by more than one face of a sensor.
• Large, properly timed raids will require sensors to track many widely spaced objects.
Sampling the Entire Battlespace
2) Statistical summaries of scenarios for developing test program features

Some features can be quantified by surveying the entire population of engagements. Engagement parameters are explored using all available parameter values for that population:
• A grid of impact points
• Selected launch points
• One or more threats launched to all combinations of impact points
• Different weighting schemes

A minimal enumeration sketch follows.
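This sketch enumerates every launch-point/impact-point combination and attaches a weight. A uniform weighting scheme and notional grids are assumed, since the actual grids and weights are not specified in the source:

```python
import itertools

# Notional launch points and a coarse grid of impact points (lat, lon).
launch_points = [(40.0, 128.0), (41.5, 129.5)]
impact_grid = [(lat, lon) for lat in range(20, 65, 5)
                          for lon in range(-160, -65, 10)]

# Survey the full population: one engagement per combination,
# with a uniform weight (one of several possible weighting schemes).
engagements = []
for lp, ip in itertools.product(launch_points, impact_grid):
    engagements.append({
        "launch": lp,
        "impact": ip,
        "weight": 1.0 / (len(launch_points) * len(impact_grid)),
    })

print(f"{len(engagements)} engagements in the surveyed population")
```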
Radar Field of View
2) Statistical summaries of scenarios for developing test program features

Example engagement parameters: the span of azimuth, elevation, and range values at initial detection for ballistic missiles in a raid. A sketch of this span computation follows.
Features Based on Statistical Summaries
2) Statistical summaries of scenarios for developing test program features

Statistical examination of selected parameters can help determine the most likely scenario and a list of features, e.g., a comparison of three initial detection parameters for a forward-based radar vs. a midcourse tracking radar. Individual scenarios can offer insight into additional features not accessible through statistical summaries.
[Figure: side-by-side histograms of the normalized azimuth, elevation, and range spans at initial detection (scaled 0 to 1), for a forward-based radar vs. a midcourse tracking radar.]
Test Plan Considerations
3) Sets of tests: end-to-end vs. partial?

• How should a test plan incorporate features?
• Does a test plan with only end-to-end tests result in a better estimate of the Probability of Engagement Success (PES), and a smaller confidence interval, than a test plan that combines fewer end-to-end tests with some partial tests?
• A test plan with more tests (a combination of end-to-end and partial) can potentially explore more features.

A Monte Carlo study can help answer these questions.
Monte Carlo Studies
3) Sets of tests: end-to-end vs. partial?

• Build a simulated test matrix
  - Simulated test outcomes generated from an assumed probability of success
  - Varying ratio of partial tests
  - Comparison of N end-to-end tests + partials (up to 20 total tests) against N+1 end-to-end tests + no partials (N = 2, 3, 4)
• Apply the PES methodology
  - Compute PES and jackknife confidence intervals
  - Compare jackknife and binomial confidence interval widths
• Repeat 10,000 times
  - Histogram the results
  - Determine bias as a function of partial-test fraction and number of end-to-end tests

A minimal sketch of this loop appears below.
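The following is a minimal sketch of the study in Python. The actual PES methodology (from Part I of this talk) is not reproduced here; as a stand-in, PES is estimated as the product of per-phase success rates, with each phase pooled across every test that exercised it. The phase-coverage pattern for partial tests, the per-phase success probability, and the confidence level are all assumptions:

```python
import math
import random

N_PHASES = 11                            # engagement segments D0..D10, as in the test matrix
TRUE_PES = 0.7                           # assumed true end-to-end success probability
P_PHASE = TRUE_PES ** (1.0 / N_PHASES)   # identical, independent phases (an assumption)

def simulate_matrix(n_e2e, n_partial, rng):
    """End-to-end tests cover all phases; each partial test covers a
    random contiguous subset of phases (an assumption for this sketch)."""
    tests = [{p: rng.random() < P_PHASE for p in range(N_PHASES)}
             for _ in range(n_e2e)]
    for _ in range(n_partial):
        start = rng.randrange(N_PHASES)
        stop = rng.randrange(start, N_PHASES) + 1
        tests.append({p: rng.random() < P_PHASE for p in range(start, stop)})
    return tests

def pes_estimate(tests):
    """Stand-in PES: product of per-phase success rates, pooling every
    test (end-to-end or partial) that exercised the phase."""
    est = 1.0
    for p in range(N_PHASES):
        outcomes = [t[p] for t in tests if p in t]
        if outcomes:
            est *= sum(outcomes) / len(outcomes)
    return est

def jackknife_ci(tests, z=1.645):
    """Leave-one-test-out jackknife standard error and ~90% interval."""
    n = len(tests)
    reps = [pes_estimate(tests[:i] + tests[i + 1:]) for i in range(n)]
    mean = sum(reps) / n
    se = math.sqrt((n - 1) / n * sum((r - mean) ** 2 for r in reps))
    est = pes_estimate(tests)
    return est - z * se, est + z * se

rng = random.Random(1)
for n_e2e in (2, 3, 4):
    widths = []
    for _ in range(10_000):   # 10,000 replicates; reduce for a quick run
        tests = simulate_matrix(n_e2e, 20 - n_e2e, rng)  # up to 20 total tests
        lo, hi = jackknife_ci(tests)
        widths.append(hi - lo)
    print(f"N={n_e2e}: mean jackknife CI width {sum(widths)/len(widths):.3f}")
```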
Increasing Number of Partial Tests
3) Sets of tests: end-to-end vs. partial?

Example simulated test matrices for 3 end-to-end tests with 1, 5, 7, or 17 partial tests; the 1-partial and 17-partial cases are reproduced below. Each entry di,j represents the data from test i for segment Dj of the engagement sequence.

3 end-to-end + 1 partial test:

Test    | D0    D1    D2    D3    D4    D5    D6    D7    D8    D9    D10
Test 1  | d1,0  d1,1  d1,2  d1,3  d1,4  d1,5  d1,6  d1,7  d1,8  d1,9  d1,10
Test 2  | d2,0  d2,1  d2,2  d2,3  d2,4  d2,5  d2,6  d2,7  d2,8  d2,9  d2,10
Test 3  | d3,0  d3,1  d3,2  d3,3  d3,4  d3,5  d3,6  d3,7  d3,8  d3,9  d3,10
Test 4  |                               d4,5  d4,6

3 end-to-end + 17 partial tests:

Test    | D0    D1    D2    D3    D4    D5    D6    D7    D8    D9    D10
Test 1  | d1,0  d1,1  d1,2  d1,3  d1,4  d1,5  d1,6  d1,7  d1,8  d1,9  d1,10
Test 2  | d2,0  d2,1  d2,2  d2,3  d2,4  d2,5  d2,6  d2,7  d2,8  d2,9  d2,10
Test 3  | d3,0  d3,1  d3,2  d3,3  d3,4  d3,5  d3,6  d3,7  d3,8  d3,9  d3,10
Test 4  |                               d4,5  d4,6
Test 5  | d5,0  d5,1  d5,2  d5,3  d5,4  d5,5
  :     |  :     :     :     :     :     :     :     :     :     :     :
Test 20 | d20,0 d20,1 d20,2 d20,3
End-to-End vs. Partial Tests: Results
3) Sets of tests: end-to-end vs. partial?

[Figure: Monte Carlo results for a true PES = 0.7, with panels for 2 end-to-end (N = 2) + partials, 3 end-to-end (N = 3) + partials, and 4 end-to-end (N = 4) + partials.]
Preliminary Observations

• Operationally realistic testing should incorporate test features that reflect real-world scenarios.
• Features can be quantitative (based on statistical summaries of the entire population of scenarios) or qualitative (based on significant details from a small number of engagements).
• A Monte Carlo study varied the fraction of partial tests:
  - Examined test programs with a small number of end-to-end tests
  - Compared test programs with only end-to-end tests to test programs with a significant fraction of partial tests
  - For 4+ end-to-end tests, the PES estimate and confidence interval width are comparable when one end-to-end test is replaced by partial tests
  - Replacement allows more features to be explored
• Numerical studies are still ongoing.