Title: Prefetching: A Markov Decision Process Model
Authors: Keshava, Kausthub.
Srinivasan, V.R.
Keywords: Prefetching
Markov Decision Process Model
Issue Date: 28-Jul-2021
Publisher: IISERM
Abstract: Prefetching is a technique used to boost computer execution performance by fetching instructions or data before they are actually needed. Constraints on network bandwidth and memory force a choice about which specific data to prefetch. Markov Decision Processes (MDPs) are a valuable tool for modelling stochastic decision making. One can view prefetching as a random process in which a surfer moves on a graph while a controller tries to ensure that the surfer lands on a prefetched vertex; this is analogous to the well-known pursuit-evasion game in graph theory. Our aim is to find the optimal policy for the controller and to explore the characteristics of such a policy. Throughout this thesis, we analyse and study the properties of the prefetching process modelled as an MDP on a tree (a rooted acyclic graph). Using the Bellman optimality equations, we solve for the optimal policy of the prefetching MDP under different criteria. In the finite-horizon criterion, we obtained a granular greedy optimal policy. We implemented policy iteration for the average-cost infinite-horizon criterion and converged to the optimal policy of the finite-horizon MDP. The optimal policy depends on the specific shape and size of the tree. We structured the state space in a manner that eased the search for the optimal value function and the corresponding optimal policy for a given state.
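The finite-horizon Bellman recursion the abstract refers to can be sketched on a toy instance. This is a minimal illustration, not the thesis's exact formulation: the small tree, the unit cost per cache miss, the uniform surfer transitions, and the restriction to prefetching a single child are all illustrative assumptions.

```python
# Hedged sketch: finite-horizon Bellman optimality recursion for a toy
# prefetching MDP on a rooted tree. State = surfer's current vertex;
# action = which child to prefetch; the surfer then moves to a uniformly
# random child, and a miss (cost 1) occurs if it lands on a child that
# was not prefetched. All modelling choices here are assumptions.

# children[v] lists the children of vertex v in a small rooted tree.
children = {0: [1, 2], 1: [3, 4], 2: [5], 3: [], 4: [], 5: []}

def bellman_finite_horizon(horizon):
    """Return (V, policy): V[t][v] is the minimal expected number of misses
    over the remaining horizon from vertex v at time t; policy[t][v] is the
    child to prefetch that attains it."""
    nodes = list(children)
    # Terminal condition: no future cost once the horizon is exhausted.
    V = [{v: 0.0 for v in nodes} for _ in range(horizon + 1)]
    policy = [{} for _ in range(horizon)]
    for t in range(horizon - 1, -1, -1):       # backward induction
        for v in nodes:
            kids = children[v]
            if not kids:                        # leaf: process stops, no cost
                V[t][v] = 0.0
                continue
            best_cost, best_action = float("inf"), None
            for a in kids:                      # candidate vertex to prefetch
                # Expected immediate miss cost plus cost-to-go, averaged
                # over the surfer's uniformly random move.
                exp_cost = sum(
                    (0.0 if c == a else 1.0) + V[t + 1][c] for c in kids
                ) / len(kids)
                if exp_cost < best_cost:
                    best_cost, best_action = exp_cost, a
            V[t][v] = best_cost
            policy[t][v] = best_action
    return V, policy

V, policy = bellman_finite_horizon(horizon=3)
print(V[0][0], policy[0][0])
```

On this instance the recursion yields a greedy-looking rule at the root: both children are equally good to prefetch, since each subtree contributes the same expected miss count, which echoes the abstract's observation that the optimal policy depends on the shape and size of the tree.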
URI: http://hdl.handle.net/123456789/1482
Appears in Collections:MS-16

Files in This Item:
File: MS16010.docx
Size: 12.26 kB
Format: Microsoft Word XML


Items in DSpace are protected by copyright, with all rights reserved, unless otherwise indicated.