Testing for small delay defects (SDDs) is gaining importance for product quality in modern nanometer technologies. Existing commercial timing-aware automatic test pattern generation (ATPG) tools, as well as selection from timing-unaware test patterns, suffer from long run times and large memory consumption. This paper proposes graphics processing unit (GPU) algorithms for SDD fault simulation and test pattern selection. The proposed technique applies static and dynamic analysis to the upper and lower bounds of fault propagation path delays, quickly estimating path lengths and thereby significantly reducing run time. A novel GPU algorithm for parallel calculation of exact path delays is also presented. The technique is scalable because only a simple fault dictionary (SFD), instead of a complete fault dictionary, is needed for test pattern selection. Experimental results on large benchmark circuits show that the selected test sets are only one fifth the size of those generated by a commercial timing-aware ATPG tool, with less than half the run time. Compared with a CPU-based test selection technique, the proposed GPU-based version is twice as fast.
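To make the parallel path delay calculation above concrete, the following is a minimal CUDA sketch under stated assumptions, not the implementation evaluated in this paper. It assumes a levelized netlist of two-input gates stored in topological order (the arrays fanin0, fanin1, and gate_delay, the kernel name path_delay_kernel, and the toy circuit are all hypothetical), and it assigns one GPU thread per test pattern, each sweeping the gates once to accumulate arrival times. Pattern-dependent logic simulation and path sensitization, which a real SDD fault simulator would need, are deliberately omitted.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Illustrative sketch only: one thread per test pattern.  Each thread
// sweeps the gates in topological order and computes the arrival time
// at every node as max(arrival of fanins) + gate delay.  Nodes
// 0..num_inputs-1 are primary inputs launching at t = 0.  Logic values
// and path sensitization are omitted, so in this toy all patterns
// produce the same arrival times.
__global__ void path_delay_kernel(const int *fanin0, const int *fanin1,
                                  const float *gate_delay,
                                  int num_inputs, int num_gates,
                                  int num_patterns, float *arrival) {
    int p = blockIdx.x * blockDim.x + threadIdx.x;
    if (p >= num_patterns) return;

    // Each pattern owns a private slice of the arrival-time array.
    float *a = arrival + (size_t)p * (num_inputs + num_gates);
    for (int i = 0; i < num_inputs; ++i) a[i] = 0.0f;
    for (int g = 0; g < num_gates; ++g) {
        float t0 = a[fanin0[g]];
        float t1 = a[fanin1[g]];
        a[num_inputs + g] = fmaxf(t0, t1) + gate_delay[g];
    }
}

int main() {
    // Hypothetical toy circuit: 3 inputs (nodes 0-2), 2 gates.
    // Gate 0 (node 3) drives from inputs 0 and 1, delay 1.0.
    // Gate 1 (node 4) drives from node 3 and input 2, delay 2.0.
    const int num_inputs = 3, num_gates = 2, num_patterns = 4;
    const int num_nodes = num_inputs + num_gates;
    int h_fanin0[] = {0, 3}, h_fanin1[] = {1, 2};
    float h_delay[] = {1.0f, 2.0f};

    int *d_f0, *d_f1; float *d_delay, *d_arrival;
    cudaMalloc(&d_f0, sizeof(h_fanin0));
    cudaMalloc(&d_f1, sizeof(h_fanin1));
    cudaMalloc(&d_delay, sizeof(h_delay));
    cudaMalloc(&d_arrival, sizeof(float) * num_patterns * num_nodes);
    cudaMemcpy(d_f0, h_fanin0, sizeof(h_fanin0), cudaMemcpyHostToDevice);
    cudaMemcpy(d_f1, h_fanin1, sizeof(h_fanin1), cudaMemcpyHostToDevice);
    cudaMemcpy(d_delay, h_delay, sizeof(h_delay), cudaMemcpyHostToDevice);

    path_delay_kernel<<<1, 64>>>(d_f0, d_f1, d_delay, num_inputs,
                                 num_gates, num_patterns, d_arrival);
    cudaDeviceSynchronize();

    float h_arrival[num_patterns * num_nodes];
    cudaMemcpy(h_arrival, d_arrival, sizeof(h_arrival),
               cudaMemcpyDeviceToHost);
    printf("pattern 0 output arrival: %.1f\n", h_arrival[num_nodes - 1]);

    cudaFree(d_f0); cudaFree(d_f1); cudaFree(d_delay); cudaFree(d_arrival);
    return 0;
}
```

In this pattern-parallel organization, all threads read the same netlist arrays while each keeps a private arrival-time slice, which is one common way such a simulation can be mapped onto a GPU; the bound analysis and SFD-based pattern selection described in the abstract would sit on top of a kernel of this kind.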