This thesis adopts the recurrent reinforcement learning (RRL) approach proposed by Moody and Wu to build trading strategies for Taiwan Stock Index Futures. The RRL system is trained by maximizing the Sharpe ratio over the training period and is evaluated in terms of cumulative profit. We design four combinations of training windows and trading strategies, drawn from two sets of historical stock data covering different periods. We also examine how the results differ when both a maximum acceptable loss and a minimum acceptable profit are imposed. To validate our RRL algorithm, we backtest on real historical stock data and examine the performance of the resulting trading strategies.
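The core idea summarized above can be illustrated with a minimal sketch, in the spirit of Moody and Wu's RRL: a tanh trader whose position depends on recent returns and on its own previous position, with weights tuned to increase the Sharpe ratio of its trading returns over a training window. The synthetic data, window size, cost parameter, and finite-difference update below are illustrative assumptions, not values or methods taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)
r = rng.normal(0.0005, 0.01, size=300)  # synthetic daily returns (assumption)

M = 8          # number of lagged returns fed to the trader (assumption)
delta = 0.001  # transaction cost per unit change of position (assumption)

def positions(w, r):
    """Recurrent positions F_t = tanh(w . [1, r_{t-M}..r_{t-1}, F_{t-1}])."""
    F = np.zeros(len(r))
    for t in range(M, len(r)):
        x = np.concatenate(([1.0], r[t - M:t], [F[t - 1]]))
        F[t] = np.tanh(w @ x)
    return F

def sharpe(w, r):
    """Sharpe ratio of trading returns R_t = F_{t-1} r_t - delta |F_t - F_{t-1}|."""
    F = positions(w, r)
    R = F[:-1] * r[1:] - delta * np.abs(np.diff(F))
    return R.mean() / (R.std() + 1e-9)

# Gradient ascent on the Sharpe ratio via central finite differences
# (Moody and Wu derive the exact recurrent gradient; this sketch avoids it).
w = np.zeros(M + 2)
eps, lr = 1e-5, 0.1
for _ in range(30):
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w)
        e[i] = eps
        g[i] = (sharpe(w + e, r) - sharpe(w - e, r)) / (2 * eps)
    w += lr * g / (np.linalg.norm(g) + 1e-12)  # normalized ascent step
```

In the thesis's setting the weights would instead be updated with the exact recurrent gradient of the Sharpe-ratio objective, and the trained signal would be backtested on held-out futures data rather than synthetic returns.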