Title:
Operations Research Analysis in Quality Test and Evaluation.
Author:
Giadrosich, Donald L.
ISBN:
9781600860935
Physical Description:
1 online resource (385 pages)
Contents:
Cover -- Title -- Copyright -- Foreword -- Preface -- Table of Contents -- Chapter 1. Introduction -- 1.1 History of Test and Evaluation -- 1.2 General -- 1.2.1 Test -- 1.2.2 Evaluation -- 1.3 Quality Measurement and Improvement -- 1.3.1 The Customer -- 1.3.2 The Process -- 1.3.3 The Product -- 1.4 Establishing the Need for New or Improved Systems -- 1.5 Acquiring Systems -- 1.6 Assessing and Validating Requirements -- 1.7 Program Flow and Decision Points -- 1.8 Reducing Risks Through Test and Evaluation -- 1.8.1 Development Test and Evaluation -- 1.8.2 Operational Test and Evaluation -- 1.8.3 Production Acceptance Test and Evaluation -- 1.8.4 Joint Service Test and Evaluation -- 1.9 Scientific Approach -- 1.9.1 What To Do -- 1.9.2 How To Do It -- 1.10 Ideals of Test and Evaluation -- References -- Chapter 2. Cost and Operational Effectiveness Analysis -- 2.1 General -- 2.2 Approach -- 2.3 Acquisition Issues -- 2.3.1 Threat -- 2.3.2 Need -- 2.3.3 Environment -- 2.3.4 Constraints -- 2.3.5 Operational Concept -- 2.4 Alternatives -- 2.4.1 Performance Objectives -- 2.4.2 Description of Alternatives -- 2.5 Analysis of Alternatives -- 2.5.1 Models -- 2.5.2 Measures of Effectiveness -- 2.5.3 Costs -- 2.5.4 Trade-Off Analyses -- 2.5.5 Decision Criteria -- 2.6 Scope by Milestone -- 2.7 Cost and Effectiveness Comparisons -- 2.7.1 System Concept Level Comparisons -- 2.7.2 System Design Level Comparisons -- 2.7.3 System Comparison Level -- 2.7.4 Force Level Comparisons -- 2.8 Addressing Risks -- 2.8.1 Uncertainty in Estimates -- 2.8.2 Risks in Terms of Probability -- 2.9 Other Cost and Effectiveness Considerations -- 2.9.1 Costs and Effectiveness Over Time -- 2.10 Optimization and Partial Analysis -- 2.10.1 Lagrangian Function (L) -- 2.10.2 Partial Analysis -- 2.11 Affordability Assessments -- References -- Chapter 3. Basic Principles -- 3.1 General.

3.1.1 Effectiveness -- 3.1.2 Suitability -- 3.1.3 Tactics, Procedures, and Training -- 3.2 Defining the Test and Evaluation Problem -- 3.3 Establishing an Overall Test and Evaluation Model -- 3.4 Measurement -- 3.4.1 Physical Measurements -- 3.4.2 Measurement Process -- 3.4.3 Validity of Measurements -- 3.5 Statistical Nature of Test and Evaluation Data -- 3.6 Typical Range System Measurement Capabilities -- 3.6.1 Time-Space-Position Information -- 3.6.2 Electromagnetic Environment Testing -- 3.6.3 Engineering Sequential Photography -- 3.6.4 Range-Timing Systems -- 3.6.5 Aerospace Environmental Support -- 3.6.6 Calibration and Alignment -- References -- Chapter 4. Modeling and Simulation Approach -- 4.1 General -- 4.2 Model Concept -- 4.2.1 Decomposition of a Model -- 4.2.2 Applications -- 4.3 Verification, Validation, and Accreditation -- 4.3.1 Verification -- 4.3.2 Validation -- 4.3.3 Sources of Information -- 4.3.4 Sensitivity Analysis -- 4.3.5 Tasks for Model Verification, Validation, and Accreditation -- 4.3.6 Stakeholders in Model Verification, Validation, and Accreditation -- 4.3.7 Model Verification, Validation, and Accreditation Plan -- 4.3.8 Documentation of the Model Verification, Validation, and Accreditation Efforts -- 4.3.9 Special Considerations When Verifying, Validating, and Accrediting Models -- 4.3.10 Summary -- 4.4 Establishing an Overall Test and Evaluation Model -- 4.4.1 System Test and Evaluation Environments -- 4.4.2 Other Modeling and Simulation -- 4.5 Future Challenges -- References -- Chapter 5. Test and Evaluation Concept -- 5.1 General -- 5.2 Focusing on the Test and Evaluation Issues -- 5.3 Supporting Data and System Documentation -- 5.3.1 Design Concept -- 5.3.2 Operations Concept -- 5.3.3 Maintenance and Support Concept -- 5.4 Critical Issues and Test Objectives.

5.4.1 Relating Critical Issues and Objectives to the Military Mission -- 5.4.2 Researching the Critical Issues and Objectives -- 5.5 Measures of Effectiveness, Measures of Performance, and Criteria -- 5.6 Data Analysis and Evaluation -- 5.7 Scenarios -- 5.8 Requirements for Test and Evaluation Facilities -- 5.9 Scope and Overall Test and Evaluation Approach -- 5.10 Non-Real Time Kill Removal Versus Real Time Kill Removal -- 5.10.1 Non-Real Time Kill Removal Testing -- 5.10.2 Real Time Kill Removal Testing -- 5.11 Use of Surrogates -- 5.12 Managing Change -- References -- Chapter 6. Test and Evaluation Design -- 6.1 General -- 6.2 Test and Evaluation Design Process -- 6.3 Procedural Test and Evaluation Design -- 6.3.1 Critical Issues and Test Objectives -- 6.3.2 Test and Evaluation Constraints -- 6.3.3 Test and Evaluation Method -- 6.3.4 Measures of Effectiveness and Measures of Performance -- 6.3.5 Measurements and Instrumentation -- 6.3.6 Selecting the Size of the Sample -- 6.3.7 Data Management and Computer-Assisted Analysis -- 6.3.8 Decision Criteria -- 6.3.9 Limitations and Assumptions -- 6.4 Experimental Test and Evaluation Design -- 6.4.1 Test Variables -- 6.4.2 Sensitivity of Variables -- 6.4.3 Operations Tasks -- 6.4.4 Analysis -- 6.4.5 Factor Categories -- 6.4.6 Design Considerations -- 6.4.7 Example of Categorizing Factors for a Multiple-Factor Test -- 6.4.8 Analysis of Variance -- 6.5 Sequential Methods -- 6.6 Nonparametric Example -- References -- Chapter 7. Test and Evaluation Planning -- 7.1 General -- 7.2 Advanced Planning -- 7.3 Developing the Test and Evaluation Plan -- 7.3.1 Introduction -- 7.3.2 Test and Evaluation Concept -- 7.3.3 Methodology -- 7.3.4 Administration -- 7.3.5 Reporting -- 7.3.6 Test and Evaluation Plan Supplements -- References -- Chapter 8. Test and Evaluation Conduct, Analysis, and Reporting -- 8.1 General.

8.2 Test Conduct -- 8.2.1 Professional Leadership and Test Team Performance -- 8.2.2 Systematic Approach -- 8.3 Analysis -- 8.4 Reporting -- 8.4.1 General -- 8.4.2 Planning -- 8.4.3 Gathering Data -- 8.4.4 Organizing -- 8.4.5 Outlining -- 8.4.6 Writing -- 8.4.7 Editing -- References -- Chapter 9. Software Test and Evaluation -- 9.1 General -- 9.2 Important Terms and Concepts -- 9.3 Software Maintenance -- 9.4 Modifying Software -- 9.4.1 Reducing Debugging Problems -- 9.4.2 Software Errors -- 9.4.3 Top-Down Programming -- 9.4.4 Top-Down Testing -- 9.4.5 Bottom-Up Testing -- 9.5 Assessment of Software -- 9.5.1 Design Concepts and Attributes -- 9.5.2 Performance Measurement -- 9.5.3 Suitability Measurement -- 9.6 Test Limitations -- References -- Chapter 10. Human Factors Evaluations -- 10.1 General -- 10.2 System Analysis and Evaluation -- 10.2.1 System Analysis -- 10.2.2 System Evaluation -- 10.2.3 Questionnaires and Subjective Assessments -- References -- Chapter 11. Reliability, Maintainability, Logistics Supportability, and Availability -- 11.1 General -- 11.2 System Cycles -- 11.2.1 System Daily Time Cycle -- 11.2.2 System Life Cycle -- 11.3 System Degradation -- 11.4 Reliability -- 11.4.1 Combining Components -- 11.4.2 Example Complex System Reliability Problem -- 11.5 Maintainability -- 11.5.1 Maintenance Concept -- 11.5.2 Repairability and Serviceability -- 11.5.3 Maintenance Personnel and Training -- 11.6 Logistics Supportability -- 11.7 Analysis and Predictions -- 11.7.1 Mean Time Between Failure -- 11.7.3 Mean Downtime -- 11.7.4 Availability -- 11.8 Fault-Tolerant Systems -- References -- Chapter 12. Test and Evaluation of Integrated Weapons Systems -- 12.1 General -- 12.2 Integration Trend -- 12.2.1 Segregated Systems -- 12.2.2 Federated Systems -- 12.2.3 Integrated Systems -- 12.3 System Test and Evaluation.

12.3.1 Engineering Level Test and Evaluation -- 12.3.2 Operational Level Test and Evaluation -- 12.3.3 Testability in Systems -- 12.3.4 Emphasizing Designed-In Capabilities -- 12.3.5 Real-World Threats and Surrogates -- 12.3.6 Global Positioning System as a Precision Reference -- 12.3.7 Shared Data Bases -- 12.3.8 Summary -- 12.4 Automatic Diagnostic Systems -- 12.4.1 Purpose of Automatic Diagnostic Systems -- 12.4.2 Probabilistic Description of Automatic Diagnostic Systems -- 12.4.3 Estimating Automatic Diagnostic System Probabilities -- 12.4.4 Automatic Diagnostic System Measures of Effectiveness -- 12.4.5 Summary -- References -- Chapter 13. Measures of Effectiveness and Measures of Performance -- 13.1 General -- 13.2 Systems Engineering Approach -- 13.3 Operational Tasks/Mission Approach -- 13.4 Example Measures of Performance -- 13.5 Example Measures of Effectiveness -- 13.6 Example Measures of Effectiveness for Electronic Combat Systems -- 13.7 Example Measures of Effectiveness for Operational Applications, Concepts, and Tactics Development -- References -- Chapter 14. Measurement of Training -- 14.1 General -- 14.2 Learning by Humans -- 14.3 Transfer of Training -- 14.4 Example Complex Training System -- 14.4.1 Tracking Instrumentation Subsystem -- 14.4.2 Aircraft Instrumentation Subsystem -- 14.4.3 Control and Computational Subsystem -- 14.4.4 Display and Debriefing Subsystem -- 14.4.5 ACMI System Training Attributes -- 14.5 Example Measures for Training Activities -- References -- Chapter 15. Joint Test and Evaluation -- 15.1 General -- 15.2 Joint Test and Evaluation Program -- 15.2.1 Joint Test and Evaluation Nomination and Joint Feasibility Study Approval -- 15.2.2 Joint Feasibility Study Review and Chartering of the Joint Test Force -- 15.3 Activating and Managing the Joint Test Force -- 15.3.1 Command Relationship.

15.3.2 Example Joint Test Force Organization.
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.