Yield enhancement is a core issue for semiconductor manufacturers seeking to stay competitive. In the ramp-up stage, identifying the root causes of poor yield early through data analysis is the key to bringing a product to market on time. However, the semiconductor process is complex and rework flows are frequent, so the collected data exhibit high collinearity. Moreover, because of the special manufacturing characteristics of advanced processes, the process stages interact significantly with one another: main effects are often absent or weak, most problems stem from interactions between stages, and the low-yield phenomenon must usually be explained by several variables jointly, so yield cannot be improved by adjusting a single variable. In addition, in the ramp-up stage the number of samples available for analysis is very small relative to the number of potential influencing factors (p >> n). All of these make yield analysis highly challenging. This study focuses on identifying yield problems in the ramp-up stage, and its purpose is to construct a manufacturing-intelligence data mining framework for fault detection. The three main steps are: (1) key-variable screening: combine the Kruskal–Wallis test with Random Forest to reduce the number of candidate factors; appropriate dimension reduction improves both the efficiency and the validity of the analysis; (2) interaction-factor detection: use weighted least squares regression to detect candidate interacting stage factors with strong explanatory power for the response variable; (3) model construction: build a model that describes the relationship between the factors and the response. The information extracted by the proposed framework provides clues to likely yield problems and suggests a priority order for handling them. Data simulated from data collected on the shop floor of a semiconductor company in Taiwan are analyzed to validate the proposed data mining framework.
Yield enhancement is critical for maintaining competitiveness in semiconductor manufacturing. Early identification of yield-loss causes through data analysis in the ramp-up stage is the key to shortening time to market. However, the high collinearity introduced by the complicated rework flows of the manufacturing process, together with the complicated interactions between factors that characterize advanced processes, makes the analysis difficult. In addition, the number of candidate factors in the ramp-up stage far exceeds the sample size (p >> n), so yield analysis is a great challenge. This study focuses on troubleshooting in the ramp-up stage and aims to construct a manufacturing-intelligence data mining framework for failure detection. The framework comprises three main steps: (1) key-factor screening: narrow down the candidate factors by integrating the Kruskal–Wallis test with Random Forest; a suitable dimension reduction ensures the efficiency and validity of the analysis; (2) interaction-factor detection: detect candidate combined factors with high explanatory power for the responses using weighted least squares regression; (3) model construction: build a model describing the relationship between the factors and the responses. From the extracted information, the framework provides hints about the root causes and a suggested priority order for troubleshooting. Finally, data simulated from real data collected from a semiconductor foundry in Taiwan are used to validate the proposed data mining framework.
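As an illustration of step (1), the sketch below combines a per-factor Kruskal–Wallis test with Random Forest importances to shrink the candidate factor set. It assumes a lot-level table X whose columns record the tool or chamber used at each stage and a yield response y; the function names, the union rule for merging the two screens, and the thresholds alpha and top_k are illustrative assumptions, not the study's exact settings.

```python
# Minimal sketch of step (1): screen candidate stage factors by combining a
# per-factor Kruskal-Wallis test with Random Forest importance.
# Column layout, thresholds, and the union rule are illustrative assumptions.
import pandas as pd
from scipy.stats import kruskal
from sklearn.ensemble import RandomForestRegressor

def kruskal_pvalue(factor: pd.Series, yield_: pd.Series) -> float:
    """p-value of the Kruskal-Wallis test: does yield differ across the
    tools/chambers (groups) used at this stage?"""
    groups = [yield_[factor == level].values for level in factor.dropna().unique()]
    groups = [g for g in groups if len(g) >= 2]
    if len(groups) < 2:
        return 1.0
    return kruskal(*groups).pvalue

def screen_factors(X: pd.DataFrame, y: pd.Series,
                   alpha: float = 0.05, top_k: int = 30) -> list[str]:
    """Keep factors that are significant in the KW test or rank among the
    top-k Random Forest importances, shrinking p >> n toward a workable set."""
    kw_keep = {c for c in X.columns if kruskal_pvalue(X[c], y) < alpha}

    # Random Forest needs numeric inputs; encode each categorical stage factor.
    X_enc = X.apply(lambda col: col.astype("category").cat.codes)
    rf = RandomForestRegressor(n_estimators=500, random_state=0)
    rf.fit(X_enc, y)
    importance = pd.Series(rf.feature_importances_, index=X.columns)
    rf_keep = set(importance.sort_values(ascending=False).head(top_k).index)

    return sorted(kw_keep | rf_keep)
```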
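Step (2) can be sketched as follows: for every pair of screened stage factors, a weighted least squares model of yield on the two factors and their joint tool combination is fitted, and pairs are ranked by explained variance. Using lot size as the WLS weight and R² as the ranking criterion are assumptions made for illustration only.

```python
# Minimal sketch of step (2): rank candidate stage-pair interactions by how much
# of the yield variation a weighted least squares fit explains.
# The weight choice (lot size) and the R^2 ranking are illustrative assumptions.
from itertools import combinations
import pandas as pd
import statsmodels.api as sm

def rank_interactions(X: pd.DataFrame, y: pd.Series, weights: pd.Series,
                      top_n: int = 10) -> pd.DataFrame:
    """Fit yield on each factor pair plus their joint tool combination by WLS
    and rank the pairs by R^2."""
    rows = []
    for a, b in combinations(X.columns, 2):
        # Dummy-code the two main effects.
        design = pd.get_dummies(X[[a, b]].astype(str), drop_first=True, dtype=float)
        # Encode the interaction as the joint tool combination at the two stages.
        combo = X[a].astype(str) + "|" + X[b].astype(str)
        design = design.join(pd.get_dummies(combo, prefix=f"{a}x{b}",
                                            drop_first=True, dtype=float))
        model = sm.WLS(y, sm.add_constant(design), weights=weights).fit()
        rows.append({"factor_a": a, "factor_b": b, "r_squared": model.rsquared})
    return (pd.DataFrame(rows)
            .sort_values("r_squared", ascending=False)
            .head(top_n)
            .reset_index(drop=True))
```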
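For step (3) and the suggested troubleshooting priority, a minimal sketch is given below; ordering terms by the absolute size of their estimated effects is an illustrative rule, not necessarily the ordering used in the study.

```python
# Minimal sketch of step (3): fit a final WLS model on the screened main effects
# plus the top-ranked interaction combinations, then order terms by estimated
# yield impact as a suggested troubleshooting priority (illustrative rule).
import pandas as pd
import statsmodels.api as sm

def build_final_model(design: pd.DataFrame, y: pd.Series, weights: pd.Series):
    """design: dummy-coded main effects and selected interaction combinations."""
    results = sm.WLS(y, sm.add_constant(design), weights=weights).fit()
    # Larger absolute coefficients suggest larger yield impact, hence higher priority.
    priority = (results.params.drop("const")
                .abs()
                .sort_values(ascending=False)
                .rename("estimated_impact"))
    return results, priority
```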