Speaker: Ernesto Sanchez, Politecnico di Torino, Italy
Today, safety- and mission-critical applications demand increased system dependability throughout the operational lifetime. New standards introduced in recent years try to define the minimum requirements for guaranteeing the reliability of such devices. Indeed, microprocessor-based safety-critical applications now incorporate a series of audit processes applied across the whole product lifetime to target reliability. Some of these processes are common in industrial design and manufacturing flows, including risk analysis, design verification, and validation, performed from the early phases of product development; very often, however, additional test processes must be performed periodically during the product's mission life to meet reliability standards. This talk provides a brief guideline for effectively increasing system dependability by exploiting functional approaches. The most important constraints to be considered during both the generation phase and the execution time are described. Additionally, a comparison of three different strategies on a particular module of an industrial pipelined processor core is provided.
Ernesto Sanchez received his degree in Electronic Engineering from Universidad Javeriana, Bogota, Colombia, in 2000. In 2006 he received his Ph.D. degree in Computer Engineering from Politecnico di Torino, where he is currently an Associate Professor in the Dipartimento di Automatica e Informatica. His main research interests include evolutionary computation and functional microprocessor verification, validation, and testing.
Speaker: Milan Habrcetl, Cisco CyberSecurity Specialist, Praha, Czech Rep.
Cybersecurity has become a defining phenomenon of the present era. Without protecting data and infrastructure, it is hard to run a prosperous business. Let's look at how cybersecurity can be made more efficient and automated, and at what challenges await us in the near future.
Since 1998, he has been deeply involved in the cybersecurity industry, mostly in sales manager or business development manager positions. He has been working at Cisco since February 2016 within the Global Security Sales organization, supporting sales of the entire Cisco security portfolio in the Czech Republic and Slovakia.
Speaker: Alberto Bosio, LIRMM Montpellier, France
The cross-layer approach is becoming the preferred solution when reliability is a concern in the design of a microprocessor-based system. Nevertheless, deciding how to distribute error management across the different layers of the system is a very complex task that requires the support of dedicated frameworks for cross-layer reliability analysis. In other words, the designer has to know which components of the system are "critical" in order to introduce error management mechanisms properly. Unfortunately, system-level reliability estimation is a complex task that usually requires huge simulation campaigns. This presentation proposes a cross-layer, system-level reliability analysis framework for soft errors in microprocessor-based systems. The framework exploits a multi-level hybrid Bayesian model to describe the target system and takes advantage of Bayesian inference to estimate different reliability metrics.
Experimental results, carried out on different microprocessor architectures (Intel x86, ARM Cortex-A15, and ARM Cortex-A9), show that the simulation time is significantly lower than that of state-of-the-art fault-injection experiments, with accuracy high enough to take effective design decisions.
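The talk's multi-level hybrid Bayesian framework is, of course, far richer than any toy; purely as an illustration of the underlying idea of replacing an exhaustive fault-injection campaign with Bayesian inference, the sketch below uses a conjugate Beta-Binomial update to estimate each component's fault-masking probability from a small sample and combines the estimates into a system-level figure. All component names and sample counts are hypothetical, and the independence assumption is a simplification.

```python
# Illustrative sketch only (not the speaker's framework): Bayesian
# estimation of per-component fault-masking probabilities from small
# fault-injection samples, via a conjugate Beta-Binomial update.

def beta_posterior(masked, injected, alpha=1.0, beta=1.0):
    """Posterior (alpha, beta) for the masking probability after observing
    `masked` benign outcomes out of `injected` injected faults,
    starting from a uniform Beta(1, 1) prior."""
    return alpha + masked, beta + (injected - masked)

def posterior_mean(alpha, beta):
    """Mean of a Beta(alpha, beta) distribution."""
    return alpha / (alpha + beta)

# Hypothetical per-component samples: (faults masked, faults injected).
components = {"ALU": (180, 200), "regfile": (150, 200), "decoder": (190, 200)}

masking = {}
for name, (masked, injected) in components.items():
    a, b = beta_posterior(masked, injected)
    masking[name] = posterior_mean(a, b)

# Assuming independent components, the probability that at least one
# fault propagates to a system failure.
p_mask_all = 1.0
for p in masking.values():
    p_mask_all *= p
p_fail = 1.0 - p_mask_all
```

The appeal of the conjugate form is that each new batch of injection results refines the estimate with a constant-time update, instead of requiring the whole campaign to be rerun.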
Alberto Bosio received his Ph.D. in Computer Engineering from Politecnico di Torino, Italy, in 2006 and the HDR (Habilitation à Diriger des Recherches) from the University of Montpellier, France, in 2015. He is currently an associate professor at the Laboratory of Informatics, Robotics and Microelectronics of Montpellier (LIRMM), University of Montpellier, France. He has published articles spanning diverse disciplines, including memory testing, fault tolerance, diagnosis, and functional verification. He is an IEEE member and the chair of the European Test Technology Technical Council (ETTTC).
Speaker: Alex Orailoglu, University of California, San Diego, USA
Higher levels of integration and process scaling impose failure behaviors that are challenging to interpret, necessitating the continuous augmentation of fault models and test vectors in the hope of taming the defect escape rate. The resulting inflation in the number of test vectors, coupled with the constant increase in the size of each test vector, continuously drives up test cost. The economics of the competitive consumer marketplace in particular, however, require constant vigilance over test cost while ensuring satisfactory test quality.
While the inclusion of new fault models helps boost test quality, the non-uniform distribution of defect types and the defect coverage overlaps between fault models imply variable effectiveness of fault models and test vectors, resulting in the inclusion of a large number of ineffective vectors in the test flow. A static derivation of test effectiveness, however, remains problematic in practice, as defect characteristics are well known to drift throughout the product lifecycle. Furthermore, increasing process variation and the integration of hundreds of domains within a chip result in increasingly distinct domains and individualized chip instances with diverse test resource requirements. The conventional method of statically applying an identical test set to all chips consequently struggles to satisfy demanding test cost and quality constraints in the face of evolving defect behaviors and the increasing diversification of test resource requirements.
This talk addresses the simultaneous need for satisfactory test quality and low test cost through an adaptive test cost and quality optimization framework. The proposed methodologies not only adaptively assess the effectiveness of fault models and test vectors but also evaluate the variable test resource requirements of chips and domains based on their distinct characteristics, enabling effective yet efficient test through the selection of the most effective vectors and a carefully crafted allocation of test resources. The methodologies are tailored to a broad set of application scenarios by considering different defect classes and defect-characteristic drift types, while incorporating test data gathering and delivery constraints and overcoming the associated algorithmic challenges.
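The talk's adaptive methodologies go well beyond any single heuristic; as a minimal sketch of the core intuition, the example below greedily ranks test vectors by the number of not-yet-covered defects each has been observed to detect, dropping vectors that add no new coverage. The function name and the defect-detection data are hypothetical, and real adaptive test flows additionally re-estimate effectiveness as defect characteristics drift.

```python
# Illustrative sketch only (not the talk's methodology): greedy,
# set-cover-style selection of the most effective test vectors from
# observed defect detections, under a fixed vector budget.

def select_effective_vectors(detections, budget):
    """detections: dict mapping a vector id to the set of defect ids it
    has been observed to detect. Returns (chosen vectors, covered defects),
    picking at most `budget` vectors and stopping early when no remaining
    vector detects anything new."""
    covered, chosen = set(), []
    for _ in range(budget):
        # Pick the vector detecting the most not-yet-covered defects.
        best = max(detections, key=lambda v: len(detections[v] - covered))
        if not detections[best] - covered:
            break  # every remaining vector is ineffective
        chosen.append(best)
        covered |= detections[best]
    return chosen, covered

# Hypothetical observations: v1 detects defects 1-3, v2 adds 4, v3 adds 5.
chosen, covered = select_effective_vectors(
    {"v1": {1, 2, 3}, "v2": {3, 4}, "v3": {5}}, budget=2)
```

In a periodic re-optimization loop, the `detections` data would be refreshed from production test results, so the ranking tracks drifting defect behavior rather than a one-time static estimate.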
Alex Orailoglu received his S.B. Degree cum laude in applied mathematics from Harvard College, Cambridge, MA, and the M.S. and Ph.D. degrees in computer science from the University of Illinois at Urbana-Champaign, Urbana.
He is currently a Professor with the Department of Computer Science and Engineering, University of California, San Diego, where he directs the Architecture, Reliability and Test (ART) Laboratory, focusing on VLSI test, computer architectures, reliability, embedded processors and systems, and nanoarchitectures. He has published more than 250 papers in these areas.
Dr. Orailoglu has served as the General Chair and the Program Chair for the IEEE/ACM/IFIP International Symposium on Hardware/Software Codesign and System Synthesis, the IEEE VLSI Test Symposium, the IEEE Symposium on Application-Specific Processors (SASP), the Symposium on Integrated Circuits and Systems Design (SBCCI), the IEEE/ACM International Symposium on Nanoscale Architectures (NanoArch), the HiPEAC Workshop on Design for Reliability, and the IEEE International High Level Design Validation and Test Workshop (HLDVT). He most recently served as the Program Co-Chair of the IFIP/IEEE International Conference on Very Large Scale Integration (VLSI-SoC) 2013. He has co-founded the IEEE SASP, the IEEE/ACM NanoArch, the IEEE HLDVT, and the HiPEAC Workshop on Design for Reliability.
Professor Orailoglu has served as a member of the IEEE Test Technology Technical Council (TTTC) Executive Committee, as the Vice Chair of TTTC, as the Chair of the Test Technology Education Program group, as the Technical Activities Committee Chair and the Planning Co-Chair of TTTC, and as the Communities Chair of the IEEE Computer Society Technical Activities Board. He is the founding chair of the IEEE Computer Society Task Force on Hardware/Software Codesign and the founding vice-chair of the IEEE Computer Society Technical Committee on NanoArchitectures.
Dr. Orailoglu has served as an IEEE Computer Society Distinguished Lecturer. He is a Golden Core Member of the IEEE Computer Society.