Below is a predictive page replacement algorithm that can improve memory utilization rates. Page replacement is the process of moving process pages back and forth between main memory and secondary storage in order to make the best use of main memory, let a system use secondary storage as additional memory space, and avoid having to allocate memory contiguously for each process in the system. Page replacement algorithms aim to predict which pages processes will need so that those pages can be kept in main memory for quick retrieval while the rest are swapped out to secondary storage. I have created a page replacement algorithm that predicts which pages processes will need by observing process behavior and incorporating those observations into its paging decisions. Check out the full code on my GitHub:

https://github.com/raveenak96/pageReplacement

1. Simulator Environment

In order to test and measure the performance of the algorithm, I used a simulator that runs a random set of programs using a limited number of shared physical pages. Each process being used by the simulator has a set number of virtual pages that it might try to access. For the purposes of measuring performance and working in a resource-constrained environment, the following specifications will be used:

  • 20 virtual pages per process
  • 100 total physical pages
  • 20 processes competing for pages simultaneously
  • Page size of 128 memory units
  • 100-tick delay required to swap a page in or out

The simulator exports three main functions that are used to build this algorithm. The first function, pageit, is where most of the paging algorithm is implemented. It is passed an array of pentry structs, one per process. Each struct contains a copy of the memory information the simulator maintains for that process:

  • long active: A flag indicating whether the process has completed.
  • long pc: The value of the program counter for the process. The current page can be determined by dividing pc by the size of each page.
  • long npages: The number of pages in the process's memory space.
  • long pages[]: A bitmap array representing the page map for the process. If pages[X] is 0, page X is swapped out, swapping out, or swapping in; if pages[X] is 1, page X is currently swapped in.

The other two functions are pagein and pageout, which are used to swap pages in and out.
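Putting the pieces above together, the interface the simulator exposes might look roughly like the sketch below. The constant names (MAXPROCESSES, MAXPROCPAGES, PAGESIZE, PAGEWAIT) are placeholders I am assuming for the values listed earlier, and I am also assuming that pagein and pageout report through their return value whether a request was accepted; check the repository headers for the exact definitions.

    /* Sketch of the simulator interface, reconstructed from the description
     * above. The constant names are placeholders for the listed values and
     * may not match the names used in the actual simulator headers. */

    #define MAXPROCESSES   20   /* processes competing for pages */
    #define MAXPROCPAGES   20   /* virtual pages per process */
    #define PHYSICALPAGES 100   /* total physical pages */
    #define PAGESIZE      128   /* memory units per page */
    #define PAGEWAIT      100   /* ticks needed to swap a page in or out */

    typedef struct pentry {
        long active;              /* nonzero while the process is still running */
        long pc;                  /* program counter; current page = pc / PAGESIZE */
        long npages;              /* pages in the process's memory space */
        long pages[MAXPROCPAGES]; /* 1 = swapped in; 0 = out or in transit */
    } pentry;

    /* Implemented by the paging algorithm; called by the simulator each tick. */
    void pageit(pentry q[MAXPROCESSES]);

    /* Provided by the simulator: request a swap-in or swap-out of (proc, page).
     * Assumed here to return nonzero if the request was accepted. */
    int pagein(int proc, int page);
    int pageout(int proc, int page);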

To compile and run the simulator and both algorithms:

    make
    ./test-lru
    ./test-predict

2. Least Recently Used Algorithm

In order to compare the performance of my predictive paging algorithm against a baseline, I also implemented a common paging algorithm called the Least Recently Used (LRU) algorithm. LRU chooses which page to evict from main memory to secondary storage by simply evicting the page that has gone unused for the longest time, on the assumption that such a page is the least likely to be used again soon. Although this simple implementation provides a basic predictive strategy, performance can be greatly improved with a more powerful predictive approach.
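As an illustration, a minimal per-process version of this strategy inside pageit might look like the sketch below. It timestamps each page when its process touches it and, when a swap-in is refused for lack of free frames, evicts that process's least recently used resident page. The timestamp table and tick counter are bookkeeping I am assuming on top of the interface sketched earlier; the full implementation in the repository may differ.

    /* Sketch of an LRU pageit, assuming the interface sketched earlier.
     * The timestamps are local bookkeeping, not part of the simulator state. */
    void pageit(pentry q[MAXPROCESSES]) {
        static long tick = 0;                                /* our own clock */
        static long timestamps[MAXPROCESSES][MAXPROCPAGES];  /* last-use times */
        tick++;

        for (int proc = 0; proc < MAXPROCESSES; proc++) {
            if (!q[proc].active)
                continue;

            int page = q[proc].pc / PAGESIZE;  /* page the process needs now */
            timestamps[proc][page] = tick;     /* record the use */

            if (q[proc].pages[page])
                continue;                      /* already resident */
            if (pagein(proc, page))
                continue;                      /* swap-in accepted; nothing to evict */

            /* No free frame: evict this process's least recently used page. */
            int victim = -1;
            long oldest = tick + 1;
            for (int p = 0; p < q[proc].npages; p++) {
                if (q[proc].pages[p] && timestamps[proc][p] < oldest) {
                    oldest = timestamps[proc][p];
                    victim = p;
                }
            }
            if (victim >= 0)
                pageout(proc, victim);
        }
    }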

3. Predictive Algorithm

The predictive paging algorithm aims to predict which pages will be needed soon by recording which page each process is using every time pageit is called. From these observations, a pattern of page accesses is built for each process. That pattern feeds three data structures that track the pages needed right now, the pages that will be needed soon, and the pages we know can be removed from main memory. On each call to pageit, we first page in as many of the pages needed right now as we can; if we run out of main memory, we page out pages we know can be removed. We then repeat the process for the pages we expect to need soon, based on each process's page pattern. This implementation significantly outperforms the Least Recently Used algorithm above: it achieves much higher memory utilization rates and, by swapping pages out to secondary storage efficiently, effectively extends the available main memory.
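A stripped-down skeleton of that flow might look like the following. Here the per-process pattern is reduced to a simple successor table (next_page) that remembers which page was last observed to follow each page; the real implementation tracks richer patterns, so treat this only as an illustration of the needed-now / needed-soon / removable bookkeeping.

    /* Sketch of a predictive pageit, assuming the interface sketched earlier.
     * The successor table is a simplified stand-in for the per-process
     * page-access pattern described above. */
    void pageit(pentry q[MAXPROCESSES]) {
        static int last_page[MAXPROCESSES];                /* page seen on the previous call */
        static int next_page[MAXPROCESSES][MAXPROCPAGES];  /* learned successor of each page */
        static int initialized = 0;

        if (!initialized) {
            for (int i = 0; i < MAXPROCESSES; i++) {
                last_page[i] = -1;
                for (int p = 0; p < MAXPROCPAGES; p++)
                    next_page[i][p] = -1;
            }
            initialized = 1;
        }

        for (int proc = 0; proc < MAXPROCESSES; proc++) {
            if (!q[proc].active)
                continue;

            int current = q[proc].pc / PAGESIZE;

            /* Update the pattern: remember that current followed the previous page. */
            if (last_page[proc] >= 0 && last_page[proc] != current)
                next_page[proc][last_page[proc]] = current;
            last_page[proc] = current;

            int soon = next_page[proc][current];  /* predicted next page, -1 if unknown */

            /* Page in what is needed now; if memory is full, evict removable pages. */
            if (!q[proc].pages[current] && !pagein(proc, current)) {
                for (int p = 0; p < q[proc].npages; p++)
                    if (q[proc].pages[p] && p != current && p != soon)
                        pageout(proc, p);
            }

            /* Then try to prefetch the page predicted to be needed soon. */
            if (soon >= 0 && soon != current && !q[proc].pages[soon])
                pagein(proc, soon);
        }
    }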