INTERLEAVED MEMORY

Combining smaller memory modules into a single, larger memory is known as Interleaved Memory. In computer systems, memory is typically not implemented as one huge module (chip).

Instead, it is built by interleaving a number of smaller modules together. Higher order and lower order interleaving are the two types of interleaving.

Figure: Lower order interleaved memory architecture
Figure: Higher order interleaved memory architecture
HIGHER ORDER INTERLEAVING vs LOWER ORDER INTERLEAVING

Higher Order Interleaving:
- The memory module is identified using the higher-order bits of the memory address.
- It is used to expand the total memory.
- It has no impact on how many bits can be transferred in a single cycle.
- It does not boost the processor's speed.
- It is optional, and is carried out only when we need to expand the total amount of storage.
- The number of modules used is flexible and can be altered as needed.
- One module failing has no impact on the others, so reliability is good.
- Consecutive memory locations lie in the same module.
- Only one module can be selected at a time.

Lower Order Interleaving:
- The memory module is identified using the lower-order bits of the memory address.
- It has no impact on the total size of memory.
- It increases the number of bits that can be transferred in a single cycle.
- It raises the effective speed, since the number of bits transferred per cycle rises.
- It is required in CPUs that need more bits per cycle, e.g. the 16-bit 8086 uses two memory banks (modules), the 32-bit 80386 uses four banks, and so on.
- A predetermined number of modules is employed, depending on how many bits are required in a single cycle.
- Data is "striped" across the modules, so the failure of one module affects all the others, and reliability is comparatively lower.
- Consecutive memory locations lie in different modules.
- One or more modules can be selected simultaneously, depending on how much data has to be accessed and how much time is available.
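As a quick illustration, the minimal Python sketch below shows how an address selects a module under each scheme. The module count (4) and address width (16 bits) are assumptions chosen only for the example.

NUM_MODULES = 4                         # assumed number of modules (a power of two)
ADDR_BITS = 16                          # assumed address width
MODULE_BITS = 2                         # log2(NUM_MODULES)

def higher_order(addr):
    # Higher order interleaving: the top bits pick the module,
    # so consecutive addresses stay inside one module.
    module = addr >> (ADDR_BITS - MODULE_BITS)
    offset = addr & ((1 << (ADDR_BITS - MODULE_BITS)) - 1)
    return module, offset

def lower_order(addr):
    # Lower order interleaving: the bottom bits pick the module,
    # so consecutive addresses are spread across the modules.
    module = addr & (NUM_MODULES - 1)
    offset = addr >> MODULE_BITS
    return module, offset

for a in range(4):
    print(a, higher_order(a), lower_order(a))
# Addresses 0..3 all fall in module 0 under higher order interleaving,
# but in modules 0, 1, 2, 3 under lower order interleaving.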

CACHE MAPPING TECHNIQUES

Blocks are loaded from Main Memory into Cache Memory. Cache Mapping determines which block of Main Memory goes into which block of Cache Memory. The various mapping strategies attempt to strike a balance between Hit Ratio, Search Time, and Tag Size. Each cache block has a Tag that identifies the Main Memory block to which it is mapped. The collection of these Tags, called the cache directory, is quite similar to a page table and is a component of the cache itself. Because cache memory is highly expensive, we need the cache directory to be as small as feasible; therefore the Tag should be of the minimum possible size. The Main Memory address issued by the processor contains the desired block number.

This is compared with a cache block's Tag, which gives the block number actually present there. If they match, it is a Hit. If not, the search may need to be repeated for a number of additional blocks. Clearly, there should be as few searches as feasible. At the same time, to maintain a high hit ratio, the mapping technique must make the best possible use of the cache capacity. There are three common methods for mapping caches:

1) Fully associative mapping, or associative mapping 

2) One-Way Set Associative Mapping, also known as Direct Mapping 

3) Two-Way Set Associative Mapping, or Set Associative Mapping 

1) ASSOCIATIVE MAPPING (FULLY ASSOCIATIVE MAPPING):

During memory operations, blocks are loaded into Cache Memory from Main Memory. The fully associative mapping technique states that any block of Main Memory can be mapped to any free block of Cache Memory. The mapping is completely unrestricted by any rules. The term "Fully Associative" therefore refers to the full cache being available for mapping.

Pentium Processor Cache is an example. 

Figure: Pentium processor cache

Tag Size: Any one of the 2^27 blocks of Main Memory may be present in a given block of Cache Memory. As a result, the Tag of each cache block needs to be 27 bits long.

Searches: A block of Main Memory may be mapped into any one of the 256 blocks of Cache Memory. As a result, up to 256 searches in Cache Memory are required.
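These figures can be checked with a short worked calculation in Python. The 32-bit address and the 256 cache blocks come from the text; the 32-byte block size and 8 KB cache size are assumptions consistent with them.

ADDRESS_BITS = 32
BLOCK_SIZE   = 32          # bytes per block (assumed) -> 5-bit word field
CACHE_SIZE   = 8 * 1024    # cache size in bytes (assumed)

word_bits     = (BLOCK_SIZE - 1).bit_length()   # 5
mm_block_bits = ADDRESS_BITS - word_bits        # 27-bit Main Memory block number
cache_blocks  = CACHE_SIZE // BLOCK_SIZE        # 256 cache blocks

print(mm_block_bits)   # 27 -> Tag size for fully associative mapping
print(cache_blocks)    # 256 -> worst-case number of searches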

Method of Searching:

The Processor issues a 32-bit Main Memory address. It can be divided into a 27-bit Block Number and a 5-bit Word (the location within the block):

This 27-bit Block Number is the block number that needs to be searched. The Tag of each cache block also contains a 27-bit block number.

That Tag gives the block number actually present in that particular cache block.

We compare these two block numbers. If they are equal, it is a Hit.

If not, the Tag of the next cache block is used for the search.

Since Cache Memory has 256 blocks, this is done at most 256 times in total.

It is a Miss if none of them match the block number we are looking for.
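The search procedure above can be sketched in a few lines of Python. This is only an illustrative model of the cache directory, not a real implementation; the 5-bit Word field is the assumption noted earlier.

NUM_CACHE_BLOCKS = 256
WORD_BITS = 5                                    # assumed 32-byte blocks

# The cache directory: one 27-bit Tag per cache block (None = empty block).
directory = [None] * NUM_CACHE_BLOCKS

def fully_associative_lookup(address):
    block_number = address >> WORD_BITS            # upper 27 bits of the address
    for cache_block, tag in enumerate(directory):  # up to 256 comparisons
        if tag == block_number:
            return ("HIT", cache_block)
    return ("MISS", None)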

Advantage

 Since the entire cache is available for mapping, the Cache Memory is used to its fullest extent, resulting in the Best Hit Ratio. 

Drawback

Tag size is too large: 27 bits.

There are too many searches: 256. 

2) DIRECT MAPPING (ONE WAY SET ASSOCIATIVE MAPPING)

The Direct Mapping technique states that a block of Main Memory can be mapped into only ONE particular block of Cache Memory. Since there is only one way of mapping, this technique is also called One Way Set Associative Mapping. The entire Cache is treated as one set. Main Memory is divided into Sets, which are further subdivided into Blocks. A Block of Main Memory (of any Set) can only be mapped into the same Block No. of Cache Memory. For example, Block 0 of Main Memory (of any Set) can only be mapped into Block 0 of Cache Memory. In other words, Block 0 of Cache Memory can contain only Block 0 of Main Memory, though of any Set.

Pentium Processor Cache is an example.

Searches: If we need Block 0 of Main Memory, we only need to search Block 0 of Cache Memory. So, to determine whether it is a Hit or a Miss, we need just one search in the Cache Memory.

To determine where to search, we first look at the Block No. we require. We then take the Set No. we require and compare it with the Tag of the corresponding Block No. in the Cache Memory.

Assume that the Main Memory address is 5:0:6 (Set : Block : Word). Location 6 of Block 0 of Set 5 is therefore required. We go to Block 0 of Cache Memory. Its Tag indicates which Set No. of Main Memory has its Block 0 currently in Cache Memory. We compare these two Set Numbers. A HIT occurs if they are equal; otherwise, a MISS occurs.
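A hedged Python sketch of this one-search lookup is given below. The field widths (19-bit Set, 8-bit Block, 5-bit Word) follow the Pentium-style example and are assumptions consistent with the 19-bit Tag quoted under Advantage below.

SET_BITS, BLOCK_BITS, WORD_BITS = 19, 8, 5      # 19 + 8 + 5 = 32-bit address

directory = [None] * (1 << BLOCK_BITS)          # one Tag (a Set No.) per cache block

def direct_mapped_lookup(address):
    word   = address & ((1 << WORD_BITS) - 1)
    block  = (address >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)
    set_no = address >> (WORD_BITS + BLOCK_BITS)
    # Only ONE place to look: cache block `block`.
    if directory[block] == set_no:
        return ("HIT", block, word)
    return ("MISS", block, None)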

Advantage

 In 1 Search we know if it is a Hit or a Miss. Tag Size = 19 bits. 

Drawback

 Since the method is very rigid, the Hit Ratio drops tremendously. 

3) SET ASSOCIATIVE MAPPING (TWO WAY SET ASSOCIATIVE MAPPING)

The Two Way Set Associative Mapping technique states that a block of Main Memory can only be mapped into the same corresponding Block No. of Cache Memory, in either of the two cache sets. Since there are two ways of mapping, it is called Two Way Set Associative Mapping. The entire Cache is treated as two sets. Main Memory is divided into Sets, which are subdivided into Blocks. A Block of Main Memory (of any Set) can only be mapped into the same Block No. in Cache Memory, again in either of its two sets. This means Block 0 of Main Memory (of any Set) can only be mapped into Block 0 of Cache Memory, in one of its two sets. In other words, Block 0 of Cache Memory can contain only Block 0 of Main Memory, though of any Set.

Consider Pentium Processor Cache:

Tag Size: Because Block 0 of Cache Memory can hold only Block 0 of Main Memory, but of any Set, the Tag only needs to state the Set No. of Main Memory from which the block comes. Since Main Memory has 2^20 sets, the Tag Size is 20 bits.

Searches: If we need Block 0 of Main Memory, we only need to search Block 0 of Cache Memory, but in each of the two sets. So, to determine whether it is a Hit or a Miss, we need to run two searches in Cache Memory.

Assume the Main Memory address is again 5:0:6, i.e. we need location 6 of Block 0 of Set 5. We go to Block 0 of Cache Memory in both sets; each carries a Tag that gives the Set No. of Main Memory whose Block 0 is present in that set of Cache Memory. If either Tag equals 5 it is a HIT; otherwise it is a MISS.
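The two-search lookup can be sketched as follows; the 20-bit Set, 7-bit Block and 5-bit Word field widths are assumptions consistent with the 2^20 Main Memory sets of the example and 128 blocks per cache set.

SET_BITS, BLOCK_BITS, WORD_BITS = 20, 7, 5      # 20 + 7 + 5 = 32-bit address

# Two cache sets, each with its own directory of Tags (Main Memory Set Numbers).
directories = [[None] * (1 << BLOCK_BITS) for _ in range(2)]

def two_way_lookup(address):
    block  = (address >> WORD_BITS) & ((1 << BLOCK_BITS) - 1)
    set_no = address >> (WORD_BITS + BLOCK_BITS)
    for way in range(2):                        # search 1 and search 2
        if directories[way][block] == set_no:
            return ("HIT", way, block)
    return ("MISS", None, block)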

Advantage

In just 2 searches we know whether it is a Hit or a Miss. Tag Size = 20 bits.

Drawback 

Since a block of Main Memory can still be placed in only two possible cache blocks, the Hit Ratio is lower than with Fully Associative Mapping.

Expanding the logic of set associative mapping, we can derive the following conclusion: Direct Mapping is simply One Way Set Associative Mapping, and Fully Associative Mapping is N Way Set Associative Mapping, where N equals the number of blocks in the cache. Increasing the number of ways improves the Hit Ratio at the cost of more searches and a larger Tag. A small sketch of this generalisation follows.
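This is only an illustrative Python model, with block counts chosen to match the Pentium-style example; it is not taken from the original text.

def make_cache(total_blocks, ways):
    # Split the cache into `ways` sets, each holding total_blocks // ways blocks.
    blocks_per_way = total_blocks // ways
    return [[None] * blocks_per_way for _ in range(ways)]

def n_way_lookup(cache, block_number):
    blocks_per_way = len(cache[0])
    index = block_number % blocks_per_way       # which block position to check
    tag   = block_number // blocks_per_way      # the rest is stored as the Tag
    for way in cache:                           # N searches in the worst case
        if way[index] == tag:
            return "HIT"
    return "MISS"

direct      = make_cache(256, 1)      # 1 way    -> 1 search, smallest Tag
two_way     = make_cache(256, 2)      # 2 ways   -> 2 searches
fully_assoc = make_cache(256, 256)    # 256 ways -> 256 searches, largest Tag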
