Title:
Safety of Computer Architectures.
Author:
Boulanger, Jean-Louis.
ISBN:
9781118600801
Edition:
1st ed.
Physical Description:
1 online resource (406 pages)
Series:
ISTE
Contents:
Cover -- Safety of Computer Architectures -- Title Page -- Copyright Page -- Table of Contents -- Introduction
Chapter 1. Principles -- 1.1. Introduction -- 1.2. Presentation of the basic concepts: faults, errors and failures -- 1.2.1. Obstruction to functional safety -- 1.2.2. Safety demonstration studies -- 1.2.3. Assessment -- 1.3. Safe and/or available architecture -- 1.4. Resetting a processing unit -- 1.5. Overview of safety techniques -- 1.5.1. Error detection -- 1.5.2. Diversity -- 1.5.3. Redundancy -- 1.5.4. Error recovery and retrieval -- 1.5.5. Partitioning -- 1.6. Conclusion -- 1.7. Bibliography
Chapter 2. Railway Safety Architecture -- 2.1. Introduction -- 2.2. Coded secure processor -- 2.2.1. Basic principle -- 2.2.2. Encoding -- 2.2.3. Hardware architecture -- 2.2.4. Assessment -- 2.3. Other applications -- 2.3.1. TVM 430 -- 2.3.2. SAET-METEOR -- 2.4. Regulatory and normative context -- 2.4.1. Introduction -- 2.4.2. CENELEC and IEC history -- 2.4.3. Commissioning evaluation, certification, and authorization -- 2.5. Conclusion -- 2.6. Bibliography
Chapter 3. From the Coded Uniprocessor to 2oo3 -- 3.1. Introduction -- 3.2. From the uniprocessor to the dual processor with voter -- 3.2.1. North LGV requirements and the Channel Tunnel -- 3.2.2. The principles of the dual processor with voter by coded uniprocessor -- 3.2.3. Architecture characteristics -- 3.2.4. Requirements for the Mediterranean LGV -- 3.3. CSD: available safety computer -- 3.3.1. Background -- 3.3.2. Functional architecture -- 3.3.3. Software architecture -- 3.3.4. Synchronization signals -- 3.3.5. The CSD mail system -- 3.4. DIVA evolutions -- 3.4.1. ERTMS equipment requirements -- 3.4.2. Functional evolution -- 3.4.3. Technological evolution -- 3.5. New needs and possible solutions -- 3.5.1. Management of the partitions -- 3.5.2. Multicycle services -- 3.6. Conclusion -- 3.7. Assessment of installations -- 3.8. Bibliography
Chapter 4. Designing a Computerized Interlocking Module: a Key Component of Computer-Based Signal Boxes Designed by the SNCF -- 4.1. Introduction -- 4.2. Issues -- 4.2.1. Persistent bias -- 4.2.2. Challenges for tomorrow -- 4.2.3. Probability and computer safety -- 4.2.4. Maintainability and modifiability -- 4.2.5. Specific problems of critical systems -- 4.2.6. Towards a targeted architecture for safety automatons -- 4.3. Railway safety: fundamental notions -- 4.3.1. Safety and availability -- 4.3.2. Intrinsic safety and closed railway world -- 4.3.3. Processing safety -- 4.3.4. Provability of the safety of computerized equipment -- 4.3.5. The signal box -- 4.4. Development of the computerized interlocking module -- 4.4.1. Development methodology of safety systems -- 4.4.2. Technical architecture of the system -- 4.4.3. MEI safety -- 4.4.4. Modeling the PETRI network type -- 4.5. Conclusion -- 4.6. Bibliography
Chapter 5. Command Control of Railway Signaling Safety: Safety at Lower Cost -- 5.1. Introduction -- 5.2. A safety coffee machine -- 5.3. History of the PIPC -- 5.4. The concept basis -- 5.5. Postulates for safety requirements -- 5.6. Description of the PIPC architecture -- 5.6.1. MCCS architecture -- 5.6.2. Input and output cards -- 5.6.3. Watchdog card internal to the processing unit -- 5.6.4. Head of bus input/output card -- 5.6.5. Field watchdog -- 5.7. Description of availability principles -- 5.7.1. Redundancy -- 5.7.2. Automatic reset -- 5.8. Software architecture -- 5.8.1. Constitution of the kernel -- 5.8.2. The language and the compilers -- 5.8.3. The operating system (OS) -- 5.8.4. The integrity of execution and of data -- 5.8.5. Segregation of resources of different safety level processes -- 5.8.6. Execution cycle and vote and synchronization mechanism -- 5.9. Protection against causes of common failure -- 5.9.1. Technological dissimilarities of computers -- 5.9.2. Time lag during process execution -- 5.9.3. Diversification of the compilers and the executables -- 5.9.4. Antivalent acquisitions and outputs -- 5.9.5. Galvanic isolation -- 5.10. Probabilistic modeling -- 5.10.1. Objective and hypothesis -- 5.10.2. Global model -- 5.10.3. "Simplistic" quantitative evaluation -- 5.11. Summary of safety concepts -- 5.11.1. Concept 1: 2oo2 architecture -- 5.11.2. Concept 2: protection against common modes -- 5.11.3. Concept 3: self-tests -- 5.11.4. Concept 4: watchdog -- 5.11.5. Concept 5: protection of safety-related data -- 5.12. Conclusion -- 5.13. Bibliography
Chapter 6. Dependable Avionics Architectures: Example of a Fly-by-Wire System -- 6.1. Introduction -- 6.1.1. Background and statutory obligation -- 6.1.2. History -- 6.1.3. Fly-by-wire principles -- 6.1.4. Failures and dependability -- 6.2. System breakdowns due to physical failures -- 6.2.1. Command and monitoring computers -- 6.2.2. Component redundancy -- 6.2.3. Alternatives -- 6.3. Manufacturing and design errors -- 6.3.1. Error prevention -- 6.3.2. Error tolerance -- 6.4. Specific risks -- 6.4.1. Segregation -- 6.4.2. Ultimate back-up -- 6.4.3. Alternatives -- 6.5. Human factors in the development of flight controls -- 6.5.1. Human factors in design -- 6.5.2. Human factors in certification -- 6.5.3. Challenges and trends -- 6.5.4. Alternatives -- 6.6. Conclusion -- 6.7. Bibliography
Chapter 7. Space Applications -- 7.1. Introduction -- 7.2. Space system -- 7.2.1. Ground segment -- 7.2.2. Space segment -- 7.3. Context and statutory obligation -- 7.3.1. Structure and purpose of the regulatory framework -- 7.3.2. Protection of space -- 7.3.3. Protection of people, assets, and the environment -- 7.3.4. Protection of the space system and the mission -- 7.3.5. Summary of the regulatory context -- 7.4. Specific needs -- 7.4.1. Reliability -- 7.4.2. Availability -- 7.4.3. Maintainability -- 7.4.4. Safety -- 7.4.5. Summary -- 7.5. Launchers: the Ariane 5 example -- 7.5.1. Introduction -- 7.5.2. Constraints -- 7.5.3. Object of the avionics launcher -- 7.5.4. Choice of onboard architecture -- 7.5.5. General description of avionics architecture -- 7.5.6. Flight program -- 7.5.7. Conclusion -- 7.6. Satellite architecture -- 7.6.1. Overview -- 7.6.2. Payload -- 7.6.3. Platform -- 7.6.4. Implementation -- 7.6.5. Exploration probes -- 7.7. Orbital transport: ATV example -- 7.7.1. General information -- 7.7.2. Dependability requirements -- 7.7.3. ATV avionic architecture -- 7.7.4. Management of ATV dependability -- 7.8. Summary and conclusions -- 7.8.1. Reliability, availability, and continuity of service -- 7.8.2. Safety -- 7.9. Bibliography
Chapter 8. Methods and Calculations Relative to "Safety Instrumented Systems" at TOTAL -- 8.1. Introduction -- 8.2. Specific problems to be taken into account -- 8.2.1. Link between classic parameters and standards' parameters -- 8.2.2. Problems linked to sawtooth waves -- 8.2.3. Definition -- 8.2.4. Reliability data -- 8.2.5. Common mode failure and systemic incidences -- 8.2.6. Other parameters of interest -- 8.2.7. Analysis of tests and maintenance -- 8.2.8. General approach -- 8.3. Example 1: system in 2/3 modeled by fault trees -- 8.3.1. Modeling without CCF -- 8.3.2. Introduction of the CCF by factor β -- 8.3.3. Influence of test staggering -- 8.3.4. Elements for the calculation of PFH -- 8.4. Example 2: 2/3 system modeled by the stochastic Petri net -- 8.5. Other considerations regarding HIPS -- 8.5.1. SIL objectives -- 8.5.2. HIPS on topside facilities -- 8.5.3. Subsea HIPS -- 8.6. Conclusion -- 8.7. Bibliography
Chapter 9. Securing Automobile Architectures -- 9.1. Context -- 9.2. More environmentally-friendly vehicles involving more embedded electronics -- 9.3. Mastering the complexity of electronic systems -- 9.4. Security concepts in the automotive field -- 9.4.1. Ensure minimum security without redundancy -- 9.4.2. Hardware redundancy to increase coverage of dangerous failures -- 9.4.3. Hardware and functional redundancy -- 9.5. Which security concepts for which security levels of the ISO 26262 standard? -- 9.5.1. The constraints of the ISO 26262 standard -- 9.5.2. The security concepts adapted to the constraints of the ISO 26262 standard -- 9.6. Conclusion -- 9.7. Bibliography
Chapter 10. SIS in Industry -- 10.1. Introduction -- 10.2. Safety loop structure -- 10.2.1. "Sensor" sub-system -- 10.2.2. "Actuator" sub-system -- 10.2.3. Information processing -- 10.2.4. Field networks -- 10.3. Constraints and requirements of the application -- 10.3.1. Programming -- 10.3.2. Program structure -- 10.3.3. The distributed safety library -- 10.3.4. Communication between the standard user program and the safety program -- 10.3.5. Generation of the safety program -- 10.4. Analysis of a safety loop -- 10.4.1. Calculation elements -- 10.4.2. PFH calculation for the "detecting" sub-system -- 10.4.3. PFH calculation by "actuator" sub-system -- 10.4.4. PFH calculation by "logic processing" sub-system -- 10.4.5. PFH calculation for a complete loop -- 10.5. Conclusion -- 10.6. Bibliography
Chapter 11. A High-Availability Safety Computer -- 11.1. Introduction -- 11.2. Safety computer -- 11.2.1. Architecture -- 11.2.2. Properties.
Abstract:
"The text is clearly written, well-illustrated, and includes a helpful glossary." (Booknews, 1 February 2011).
Local Note:
Electronic reproduction. Ann Arbor, Michigan : ProQuest Ebook Central, 2017. Available via World Wide Web. Access may be limited to ProQuest Ebook Central affiliated libraries.