Orthogonal Frequency Division Multiplexing (OFDM) is a wideband transmission technique that many countries have adopted for digital broadcasting systems. However, because OFDM exhibits a high peak-to-average power ratio (PAR), it is highly susceptible to the nonlinear distortion of the high power amplifier (HPA) at the transmitter of a wireless communication system, which severely degrades transmission quality. To counteract the HPA's nonlinearity, this study employs an adaptive digital signal predistorter operating at baseband as compensation. Predistorters for the HPA commonly take one of two forms: polynomial-type Volterra series models and look-up tables (LUT). The LUT approach is computationally simple and easy to implement, but past research shows that it requires far more training time to converge than the Volterra series approach. This study examines the LUT method and proposes a fast-training Fast Look Up Table (FLUT) predistorter to compensate for the HPA's nonlinearity. Computer simulations compare FLUT with the parallel Volterra series method of Park and Powers, noted for its good performance and fast convergence, and with Jeon's LUT method. FLUT trains far faster than Jeon's LUT predistorter, even surpassing Park's Volterra series predistorter, and also performs better, approaching the ideal predistorter. These results remove the slow-training bottleneck of LUT methods and offer a new way to implement LUT-based compensation of amplifier nonlinearity.
Orthogonal Frequency Division Multiplexing (OFDM) is a wideband transmission technique that has been proposed for digital broadcasting systems in many countries. However, OFDM is significantly more sensitive to the nonlinear distortion caused by the high power amplifier (HPA) due to its large peak-to-average power ratio (PAR). To compensate for the distortion introduced by a nonlinear HPA, one possibility is to use an adaptive digital signal predistorter operating at baseband. The most commonly studied predistorters use a polynomial-type Volterra series or a look-up table (LUT) to implement the nonlinear inverse function of the HPA. Past research shows that LUT predistorters require little computation and are easy to implement, but that they need significantly more training data to reach convergence than Volterra series predistorters. In this research, we propose a Fast-training Look Up Table (FLUT) predistorter. Simulations with OFDM signals compare FLUT with Park and Powers's parallel Volterra series predistorter and with Jeon's LUT predistorter. FLUT is shown to train much faster than Jeon's LUT predistorter, and even faster than the parallel Volterra series predistorter. The performance of FLUT, measured by minimum total degradation, approaches that of the ideal predistorter. Our results eliminate the slow-training bottleneck of LUT predistorters and provide a new way to implement them.
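To make the PAR issue concrete, the following sketch (with illustrative parameters only, not values taken from this work) measures the peak-to-average power ratio of a single QPSK-modulated OFDM symbol:

```python
import numpy as np

# Illustrative setup (assumed, not from the thesis): 64 QPSK subcarriers.
rng = np.random.default_rng(0)
N = 64

# Random QPSK symbols, one per subcarrier, all with unit magnitude
phases = np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, N)
X = np.exp(1j * phases)

# The IFFT superimposes the subcarriers into the time-domain symbol;
# occasional constructive alignment of many carriers produces large peaks.
x = np.fft.ifft(X)

power = np.abs(x) ** 2
par_db = 10 * np.log10(power.max() / power.mean())
print(f"PAR = {par_db:.2f} dB")
```

Because the peaks far exceed the mean power, the HPA must either operate with a large back-off or clip them nonlinearly, which is what motivates predistortion.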
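The table-based idea can be sketched as follows. This is a generic indexed-gain predistorter, not the FLUT algorithm itself; the memoryless amplifier model, table size, and step size are all assumptions made for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def hpa(z):
    # Illustrative memoryless AM/AM compression (assumed amplifier model)
    return z * 2.0 / (1.0 + np.abs(z) ** 2)

TABLE_SIZE = 128
MAX_AMP = 1.0
gains = np.ones(TABLE_SIZE, dtype=complex)  # identity table before training
mu = 0.1                                    # assumed adaptation step size

def predistort(x):
    # Index the table by input magnitude, apply the stored complex gain
    idx = min(int(np.abs(x) / MAX_AMP * TABLE_SIZE), TABLE_SIZE - 1)
    return gains[idx] * x, idx

# Training: adapt each bin's gain so the amplifier output tracks the input
for _ in range(5000):
    x = (rng.standard_normal() + 1j * rng.standard_normal()) * 0.2
    z, idx = predistort(x)
    err = x - hpa(z)
    gains[idx] += mu * err / x              # per-bin gain update

# Evaluation: residual distortion with and without the predistorter
test = (rng.standard_normal(1000) + 1j * rng.standard_normal(1000)) * 0.2
err_raw = np.mean([abs(hpa(x) - x) for x in test])
err_pd = np.mean([abs(hpa(predistort(x)[0]) - x) for x in test])
```

Each table entry only learns from samples that fall into its magnitude bin, which is why plain LUT predistorters converge slowly: sparsely visited bins receive few updates. Speeding up exactly this training process is the goal of the proposed FLUT method.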