With the exponential growth of the Internet, network congestion has become increasingly severe: a popular Web page may be requested by tens of thousands of users, paralyzing an entire local network. The most common remedy for this problem is the proxy cache server, which lets most users retrieve data from a nearby proxy, reducing the time users wait for data delivery; it also cuts the volume of traffic on the network, saves substantial bandwidth, and lightens the load on Web servers. In the design of a proxy cache server, the replacement policy is the core technique, and a well-chosen replacement policy makes the best use of network resources. Building on prior replacement policies, this thesis proposes a new partial caching replacement strategy consisting of two parts, a partial caching policy and a partial replacement policy, to improve the performance and quality of service of the Web cache. Trace-driven simulation results show that, while keeping the lowest time complexity, the partial caching replacement strategy can simultaneously improve all three performance and quality-of-service metrics of the Web cache and can effectively avoid the adverse effects of improper replacement, thereby improving the efficiency of information access on the Web. Compared with LRU, the partial caching policy achieves best-case improvements of 26% in hit rate, 32% in byte hit rate, and 50% in reduced latency rate; the partial replacement policy achieves best-case improvements of 12%, 18%, and 19%, respectively. The partial caching replacement strategy also remedies the weakness of LRU-THOLD, which performs well only on hit rate while performing poorly on byte hit rate and reduced latency rate: compared with LRU-THOLD, the partial caching policy achieves best-case improvements of 17% in hit rate, 114% in byte hit rate, and 48% in reduced latency rate, and the partial replacement policy achieves best-case improvements of 12%, 30%, and 20%, respectively. In summary, the partial caching replacement strategy, while keeping the lowest time complexity, indeed improves performance and quality of service effectively; if implemented in the current network environment, it would improve the efficiency of information access on the Web.
The performance of accessing information is crucial for the success of the Web, and the Web proxy server plays an important role in improving performance and quality of service. Access latency is reduced when users fetch objects from a nearby proxy server; in addition, the load on Web servers and the overall network traffic are reduced. From our studies, we find that the cache replacement algorithm is a key issue in Web proxy server design: it decides which object to evict from the cache to make enough room for a new object, and its design influences the reusability of the cached information. In this thesis, two novel cache replacement policies are proposed. The first is Partial Caching LRU (PC-LRU), and the second is Partial Replacement LRU-THOLD (PR-LRU-THOLD). Trace-driven simulations are also performed. The experimental results show that our schemes improve the cache hit rate, the byte hit rate, and the access latency, while the time complexity of the schemes remains near O(1) on average. Compared with LRU, PC-LRU improves the hit rate by 26%, the byte hit rate by 32%, and the reduced latency rate by 50% in the respective best cases; PR-LRU-THOLD improves the hit rate by 12%, the byte hit rate by 18%, and the reduced latency rate by 19% in the best case. Compared with LRU-THOLD, PC-LRU improves the hit rate by 17%, the byte hit rate by 114%, and the reduced latency rate by 48% in the best case; PR-LRU-THOLD improves the hit rate by 12%, the byte hit rate by 30%, and the reduced latency rate by 20% in the best case. We conclude that the partial caching replacement policies indeed improve Web proxy performance. Furthermore, the concept behind our schemes can potentially be applied to other categories of replacement algorithms.