Cache Memory

Cache memory is the fastest component in the memory hierarchy, and its speed approaches that of the CPU. A cache is placed between the CPU and the main memory to speed up the operation of the computer system.

Figure: Cache Organization in a system

Some examples of the kinds of data or program code stored in a cache include:

  • Table look-ups
  • Code of functions
  • Code of loops
  • Arrays of data

Advantage of using Cache:

The access time of cache memory is less than the access time of the main memory by a factor of 5 to 10, so keeping frequently used words in the cache reduces the total memory access time.

Locality of Reference:

Cache is used to store the most frequently used instructions or data. Execution of a main program proceeds largely in sequential order and contains a number of subroutines and loops. Memory references for these routines and loops tend to be confined to a small area of memory. Since references are localized to a few areas of memory, this phenomenon is called the property of locality of reference.

Operation of the Cache:

Whenever the CPU accesses memory, the cache is examined first. If the word is found in the cache, it is read from the cache memory. If the word is not found in the cache, the main memory is accessed, and the word, or the block of words containing it, is transferred to the cache memory; the CPU then reads it from the cache. In this manner a block of data is brought into the cache so that subsequent references to memory find their words in the cache itself.
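
This read operation can be sketched in a few lines of Python. The dictionary-based cache and the block size of four words below are illustrative assumptions, not a description of any particular hardware:

    # Minimal model of a CPU read going through a cache.
    BLOCK_SIZE = 4  # assumed block size in words (illustrative)

    cache = {}      # block address -> block of words

    def read(address, main_memory):
        block_addr = address // BLOCK_SIZE
        if block_addr in cache:                        # hit: word is served from the cache
            block = cache[block_addr]
        else:                                          # miss: access main memory
            start = block_addr * BLOCK_SIZE
            block = main_memory[start:start + BLOCK_SIZE]
            cache[block_addr] = block                  # keep the block for later references
        return block[address % BLOCK_SIZE]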

Performance of the cache memory:

When a word is found in the cache, it is said to produce a hit; when a word is not found in the cache, it counts as a miss. The ratio of the number of hits to the total number of memory accesses (hits plus misses) is the hit ratio. The hit ratio is the measure of the performance of the cache memory, and a high hit ratio verifies the locality of reference.
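
As a quick numerical illustration (the timings here are assumed for the example, not taken from any particular machine): if the cache access time is 100 ns, the main memory access time is 1,000 ns, and the hit ratio is 0.9, then the average access time is 0.9 × 100 + 0.1 × 1,000 = 190 ns, far closer to the cache speed than to the main memory speed.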

Mapping Techniques:

The transformation of data from the main memory to the cache memory is referred to as the mapping process.

Types of memory mapping procedures

There are three types of mapping techniques in the cache organization. These are:

  1. Associative mapping
  2. Direct mapping
  3. Set-associative mapping

Associative Mapping:

We know that associative memory is the fastest memory. The fastest and most flexible cache organization uses an associative memory. This type of memory stores both the address and the content of a memory word, permitting the cache to hold any location of the main memory together with its contents.

Figure: Associative mapping cache

Working:

Both the address and the contents of a word are stored in the cache. When the CPU generates an address for data, the address is compared simultaneously with all the addresses stored in the address part of the associative cache. If the address is found in the cache, the corresponding content is read out. If the address is not found, the address-data pair is transferred to the cache from the main memory.

If the cache is full, an existing address-data pair must be displaced to make room for the new pair. A simple replacement policy such as first-in, first-out (FIFO, also called round-robin) can be used.
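
Below is a minimal Python sketch of this behaviour, using an ordered dictionary as a stand-in for the associative memory; the capacity of four address-data pairs is an arbitrary assumption:

    from collections import OrderedDict

    CACHE_SIZE = 4          # assumed number of address-data pairs
    cache = OrderedDict()   # insertion order gives FIFO replacement for free

    def assoc_read(address, main_memory):
        if address in cache:                 # every stored address is compared
            return cache[address]            # hit: content read out of the cache
        data = main_memory[address]          # miss: go to main memory
        if len(cache) >= CACHE_SIZE:
            cache.popitem(last=False)        # cache full: displace the oldest pair (FIFO)
        cache[address] = data                # store the new address-data pair
        return data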

Direct Mapping:

Since associative memory is expensive, a mapping technique that uses an ordinary RAM for the cache is explained here. Let us assume that the main memory has 2^n words and the cache has 2^k words, where n and k are the numbers of address bits of the main memory and the cache, respectively.

Figure: Direct mapping cache

The n-bit main memory address is divided into two fields: the index field (the k low-order bits) and the tag field (the remaining n - k high-order bits). The cache stores the tag field and the data word. The operation of direct mapping, assuming a block size of one word, is as follows:

  • When the CPU makes a memory request, it generates a 5-bit memory address (e.g. 00000).
  • The index part of the address, the k = 3 low-order bits (000), is used to access the cache.
  • A tag and a data word are read from the cache: tag = 00, data = 1250.
  • The tag field of the CPU address (5 - 3 = 2 bits) is compared with the tag part of the word read.
  • If the tags match (yes in this case, as both tags are 00), there is a hit.
  • If the tags do not match, there is a miss. From the figure, if the CPU generates the address 01101 (index 101, tag 01), the word at cache index 101 is read and its tag is compared with the tag of the CPU address. The two do not match (the tag stored is 10, meaning the cache currently holds the word 2340 from main memory address 10101), so the word at main memory address 01101 is read and stored in the cache together with its tag 01, replacing the previous entry. (A code sketch of this lookup follows the list.)
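
The address split used in this example can be expressed as a small Python sketch; the 3-bit index, 2-bit tag, and single-word blocks follow the example above, while the cache model itself is illustrative:

    INDEX_BITS = 3                      # k = 3, so the cache has 2^3 = 8 words
    # tag width is n - k = 5 - 3 = 2 bits

    # cache[index] = (tag, data); None means the slot is empty
    cache = [None] * (1 << INDEX_BITS)

    def direct_read(address, main_memory):
        index = address & ((1 << INDEX_BITS) - 1)    # low-order k bits
        tag = address >> INDEX_BITS                  # high-order n - k bits
        entry = cache[index]
        if entry is not None and entry[0] == tag:    # tags match: hit
            return entry[1]
        data = main_memory[address]                  # miss: read main memory
        cache[index] = (tag, data)                   # store data together with its tag
        return data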

Disadvantage of direct mapping:

The hit ratio drops considerably if two or more words whose addresses have the same index but different tags are accessed repeatedly.

Set-Associative Mapping:

This is an improvement over the direct mapping procedure. The main features of this type of mapping are:

Figure: Set-Associative Mapping Cache

Two or more words of memory having the same index can be stored in the cache at the same time. Each cache word stores the data words together with their tags, forming a set; hence the name set-associative.

The word length at each index is N × (tag width + data width), where N is the number of tag-data items stored at each index (the set size). Thus a cache with set size k can hold k words of main memory at each index, i.e. k times as many words as a direct-mapped cache with the same number of indices.

When the CPU makes a memory request, the set at the cache index is read, and the tag of the CPU address is compared with all the tags stored at that index. If a tag matches, the corresponding data is read out and a hit is said to occur.

If no tag matches, a miss occurs. If the set is full, one of the tag-data pairs in the set is replaced with the desired word from the main memory, using a replacement algorithm (see below).

The figure shows the set-associative mapping cache. Two words from memory addresses 01000 and 02000 are stored at index 000 of the cache. Similarly, two words from memory addresses 02777 and 00777 are stored at cache index 777.
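
Here is a hedged Python sketch of the same lookup with a set size of two; FIFO replacement within a set is assumed for illustration (any of the algorithms listed below could be used instead):

    INDEX_BITS = 3
    SET_SIZE = 2                                # two tag-data pairs per index (assumed)

    # cache[index] is a list of up to SET_SIZE (tag, data) pairs
    cache = [[] for _ in range(1 << INDEX_BITS)]

    def set_assoc_read(address, main_memory):
        index = address & ((1 << INDEX_BITS) - 1)
        tag = address >> INDEX_BITS
        for t, d in cache[index]:               # compare the tag with all pairs in the set
            if t == tag:
                return d                        # hit
        data = main_memory[address]             # miss: fetch from main memory
        if len(cache[index]) >= SET_SIZE:
            cache[index].pop(0)                 # set full: replace the oldest pair (FIFO)
        cache[index].append((tag, data))
        return data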

Advantage:

By storing several tag-data pairs at each index, the hit ratio increases considerably.

Disadvantage:

Increasing the set size increases the number of bits per cache word, and hence the cost of the cache, and requires more complex tag-comparison logic.

Replacement Algorithms:

  1. Random: replaces a randomly chosen item in the set.
  2. FIFO: replaces the item that has been in the set the longest.
  3. Least recently used (LRU): replaces the item in the set that has been least recently accessed.
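
As an illustration of the LRU policy within a single set, the following Python sketch keeps the tags of one set in an OrderedDict; the set size of two and the fetch callback are assumptions made for the example:

    from collections import OrderedDict

    SET_SIZE = 2                                # assumed set size

    class LRUSet:
        """One cache set that evicts its least recently used tag."""
        def __init__(self):
            self.pairs = OrderedDict()          # tag -> data, oldest first

        def read(self, tag, fetch):
            if tag in self.pairs:
                self.pairs.move_to_end(tag)     # a hit makes this tag the most recent
                return self.pairs[tag]
            if len(self.pairs) >= SET_SIZE:
                self.pairs.popitem(last=False)  # evict the least recently used tag
            data = fetch()                      # miss: fetch the word from main memory
            self.pairs[tag] = data
            return data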

Cache Updating Techniques:

There are two techniques for cache updating. These are:

  1. Write-through
  2. Write-back

Write-through technique:

In this technique, whenever a word is written to the main memory, the cache is updated in parallel if it holds the word at the specified address. The advantage of this technique is that the cache always contains the same data as the main memory.

Write-back technique:

In this method only the cache is updated during a write operation; the word is written back to main memory only when it is removed from the cache. The reason is that while a word resides in the cache it may be updated several times, so it does not matter that the main memory copy is out of date: the word is written to memory once, when it is removed from the cache.
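
The two updating techniques can be contrasted in a short Python sketch; the single-word cache model and the dirty flag below are illustrative assumptions:

    cache = {}   # address -> [data, dirty_flag]

    def write(address, data, main_memory, write_through=True):
        if write_through:
            main_memory[address] = data          # main memory updated in parallel
            if address in cache:
                cache[address] = [data, False]   # cache never disagrees with memory
        else:                                    # write-back
            cache[address] = [data, True]        # only the cache is updated; mark it dirty
            # main memory is written later, when the word is removed from the cache

    def evict(address, main_memory):
        data, dirty = cache.pop(address)
        if dirty:                                # write-back: flush the final value once
            main_memory[address] = data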

Summary:

A cache memory is the fastest memory, placed between the CPU and the main memory. It improves the average memory access time seen by the CPU.
