COA-UNIT-4 Computer memory

 UNIT-4

Computer memory

Computer memory is any physical device used to store data, information or instructions temporarily or permanently. It is a collection of storage units that store binary information in the form of bits. Memory is divided into a large number of small parts called cells, and each cell has a unique address, ranging from zero to memory size minus one, at which data is stored.

For example, if the size of computer memory is 64K words, the memory unit has 64 * 1024 = 65536 locations or cells, and the addresses of these cells range from 0 to 65535.
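A minimal Python sketch of this calculation (the 64K figure is the one from the example above; the script is illustrative only):

import math

# A 64K-word memory: 64 * 1024 = 65536 addressable cells.
words = 64 * 1024
print("Locations:", words)                              # 65536
print("Address range: 0 ..", words - 1)                 # 0 .. 65535
print("Address lines needed:", int(math.log2(words)))   # 16, since 2^16 = 65536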

Features of Memory

The memory system has the following features:

1.      Location: It represents the internal or external location of memory in a computer. Internal memory is built into the computer and is also known as primary memory; examples of primary memory are registers, cache and main memory. External memory is a storage device separate from the computer, such as disk, tape or a USB pen drive.

2.      Capacity: It is the most important feature of computer memory. Storage capacity differs between external and internal memory. The capacity of external devices is measured in bytes, whereas internal memory is measured in bytes or words. The word length can vary, for example 8, 16 or 32 bits.

3.      Access Methods: Memory can be accessed through the following four methods.

o   DMA: Direct Memory Access (DMA) is a method that allows input/output (I/O) devices to transfer data directly to or from the main memory, without involving the CPU in each individual transfer.

o   Sequential Access Method: The sequential access method reads stored data from a storage device in sequence, one location after another. In contrast, data in random access memory (RAM) can be accessed in any order.

o   Random Access Method: It is a method used to access any location in memory directly, and it is the opposite of the sequential access method (SAM). For example, to go from A to Z with random access we can jump directly to any specified location, whereas with the sequential method we must pass through all intervening locations from A to Z to reach a particular memory location.

o   Associative Access Method: It is a special type of access in which stored information is located by its content rather than by its memory address, which optimizes search performance.

4.      Unit of transfer: The unit of transfer is the number of bits that can be read from or written to the memory device at one time. It differs between external and internal memory.

o   Internal memory: The unit of transfer is usually equal to the word size.

o   External memory: The unit of transfer is not equal to the word length; it is usually much larger and is referred to as a block.

5.      Performance: The performance of memory is mainly described by three parameters.

o   Access Time: For random access memory, it is the time taken to perform a read or write operation, measured from the instant an address is presented to the memory.

o   Memory Cycle Time: The total time required to access a memory block plus any additional time required before a second access can begin.

o   Transfer rate: The rate at which data can be transferred into or out of a memory unit. It can differ between external and internal devices.

6.      Physical types: It defines the physical type of memory used in a computer such as magnetic, semiconductor, magneto-optical and optical.

7.      Organization: It defines the physical structure of the bits used in memory.

8.      Physical characteristics: It specifies the physical behaviour of the memory, such as volatile, non-volatile or non-erasable. Volatile memory, such as RAM, requires power to retain stored information; if power is lost, the stored data is lost. Non-volatile memory is permanent storage that retains stored information even when the power is off. Non-erasable memory, such as ROM, cannot be erased after manufacture, because a ROM is programmed at the time of manufacture.

Classification of Memory

The following figure represents the classification of memory:

[Figure: Classification of Memory]

Primary or Main Memory

Primary memory, also known as the computer system's main memory, communicates directly with the CPU, the auxiliary memory and the cache memory. Main memory is used to keep programs or data while the processor is actively using them. When a program is executed, the processor first loads its instructions and data from secondary memory into main memory and then starts execution. Accessing data in primary memory is fast because it is located close to the CPU and is supported by even faster cache and register memory. Primary memory is volatile, which means the data in it is lost on a power failure if it has not been saved. It is costlier than secondary memory, and its capacity is limited compared to secondary memory.

The primary memory is further divided into two parts:

1.      RAM (Random Access Memory)

2.      ROM (Read Only Memory)

Random Access Memory (RAM)

Random Access Memory (RAM) is one of the faster types of main memory and is accessed directly by the CPU. It is the hardware in a computer that temporarily stores data, programs and program results. Data can be read from and written to RAM while the machine is running. It is volatile, which means that if a power failure occurs or the computer is turned off, the information stored in RAM is lost. Any location in RAM can be read or accessed randomly at any time.


There are two types of RAM:

  • SRAM
  • DRAM

DRAM: DRAM (Dynamic Random-Access Memory) is a type of RAM used for the dynamic storage of data. In DRAM, each cell stores one bit of information and is made up of two parts: a capacitor and a transistor. The capacitor and transistor are so small that millions of them fit on a single chip; hence, a DRAM chip can hold more data than an SRAM chip of the same size. However, the capacitor needs to be refreshed continuously to retain its information. DRAM is volatile: if the power is switched off, the data stored in memory is lost.


Characteristics of DRAM

1.      It requires continuous refreshing to retain the data.

2.      It is slower than SRAM

3.      It holds a large amount of data

4.      Each cell is a combination of a capacitor and a transistor.

5.      It is less expensive as compared to SRAM

6.      Less power consumption

SRAM: SRAM (Static Random-Access Memory) is a type of RAM used to store static data in memory. Data stored in SRAM remains intact as long as the computer system has a power supply; however, the data is lost when a power failure occurs.

Characteristics of Static Ram

1.      It does not require refreshing.

2.      It is faster than DRAM

3.      It is expensive.

4.      High power consumption

5.      Longer life

6.      Large size

7.      It is used as cache memory.

SRAM Vs. DRAM

SRAM | DRAM
It is a Static Random-Access Memory. | It is a Dynamic Random-Access Memory.
Its access time is low (it is faster). | Its access time is high (it is slower).
It uses flip-flops to store each bit of information. | It uses a capacitor to store each bit of information.
It does not require periodic refreshing to preserve the information. | It requires periodic refreshing to preserve the information.
It is used in cache memory. | It is used in the main memory.
The cost of SRAM is higher. | The cost of DRAM is lower.
It has a complex structure. | Its structure is simple.
It has low power consumption. | It has higher power consumption.

Advantages of RAM

  • It is one of the fastest types of memory in a computer.
  • It requires less power to operate.
  • Programs load much faster from RAM.
  • More RAM increases system performance and allows better multitasking.
  • It supports both read and write operations.
  • The processor can read data from RAM much faster than from a hard disk, floppy disk, USB drive, etc.

Disadvantages of RAM

  • Less RAM reduces the speed and performance of a computer.
  • Because it is volatile, it requires power to preserve the data.
  • It is more expensive than ROM.
  • It is less reliable as compared to ROM.
  • The Size of RAM is limited.

Read-Only Memory (ROM)

ROM is a memory device or storage medium that permanently stores information inside a chip. It is a read-only memory: stored information, data or programs can only be read, not written or modified. A ROM contains the important instructions or program data required to start or boot a computer. It is non-volatile, meaning the stored information is not lost even when the power is turned off or the system is shut down.


Types of ROM

There are five types of Read Only Memory:

1.      MROM (Masked Read Only Memory):
MROM is the oldest type of read-only memory, whose program or data is pre-configured by the integrated circuit manufacturer at the time of manufacturing. Therefore, a program or instruction stored within an MROM chip cannot be changed by the user.

2.      PROM (Programmable Read Only Memory):
It is a type of digital read-only memory in which the user can write information or a program only once. The chip is supplied blank, and the user writes the desired content into it once using a special PROM programmer or PROM burner device; after that, the data or instructions cannot be changed or erased.

3.      EPROM (Erasable and Programmable Read Only Memory):
It is a type of read-only memory in which the stored data can be erased and re-programmed. It is a non-volatile memory chip that holds data when there is no power supply and can retain data for at least 10 to 20 years. To erase the stored data and re-program an EPROM, the chip is exposed to ultraviolet light for about 40 minutes; after that, new data can be written into it.

4.      EEPROM (Electrically Erasable and Programmable Read Only Memory):
EEPROM is an electrically erasable and programmable read-only memory in which stored data is erased using a high-voltage electrical charge and then re-programmed. It is also a non-volatile memory, so its data is not lost even when the power is turned off. An EEPROM can be erased and reprogrammed up to about 10,000 times, and data is erased one byte at a time.

5.      Flash ROM:
Flash memory is a non-volatile storage chip that can be written or programmed in small units called blocks or sectors. Flash memory is a form of EEPROM, and its contents are not lost when the power source is turned off. It is also used to transfer data between a computer and digital devices.

Advantages of ROM

1.      It is a non-volatile memory, so the stored information is not lost even when the power is turned off.

2.      It is static, so it does not require refreshing the content every time.

3.      Data can be stored permanently.

4.      It is easy to test and store large data as compared to RAM.

5.      Its contents cannot be changed accidentally.

6.      It is cheaper than RAM.

7.      It is simple and reliable as compared to RAM.

8.      It helps to start the computer and load the OS.

Disadvantages of ROM


1.      Stored data cannot be updated or modified; it can only be read.

2.      It is a slower memory than RAM to access the stored data.

3.      For erasable ROMs such as EPROM, it takes around 40 minutes of exposure to ultraviolet light to erase the existing data.

RAM Vs. ROM

RAM | ROM
It is a Random-Access Memory. | It is a Read-Only Memory.
Both read and write operations can be performed. | Only read operations can be performed.
It is volatile: data is lost when the power supply is turned off. | It is non-volatile: data is not lost when the power supply is turned off.
It is a faster and more expensive memory. | It is a slower and less expensive memory.
Stored data needs to be refreshed in RAM (in the case of DRAM). | Stored data does not need to be refreshed in ROM.
The RAM chip is bigger than a ROM chip storing the same amount of data. | The ROM chip is smaller than a RAM chip storing the same amount of data.
Types of RAM: DRAM and SRAM. | Types of ROM: MROM, PROM, EPROM, EEPROM.

Secondary Memory

Secondary memory is a permanent storage space that holds a large amount of data. It is also known as external memory and represents the various storage media (hard drives, USB drives, CDs, flash drives and DVDs) on which computer data and programs can be saved on a long-term basis. It is cheaper and slower than main memory. Unlike primary memory, secondary memory cannot be accessed directly by the CPU; instead, data from secondary memory is first loaded into RAM (Random Access Memory) and then passed to the processor for reading and updating. Secondary memory devices include magnetic disks such as hard disks and floppy disks, optical disks such as CDs and CD-ROMs, and magnetic tapes.

Features of Secondary Memory

  • Its speed is slower than that of the primary/main memory.
  • Stored data is not lost, because secondary memory is non-volatile.
  • It can store large collections of different data types, such as audio, video, pictures, text, software, etc.
  • Data stored in secondary memory is retained even when the power is turned off, because it is a permanent storage area.
  • It uses various optical and magnetic media to store data.

Types of Secondary Memory

The following are the types of secondary memory devices:


Hard Disk

A hard disk is a computer's permanent storage device. It is a non-volatile disk that permanently stores data, programs and files, and it does not lose the stored data when the computer's power is switched off. It is typically installed inside the computer and connected to the motherboard, and it stores and retrieves data using one or more rigid, fast-rotating platters inside an air-sealed casing. It is a large storage device found in every computer or laptop, used to permanently store installed software, music, documents, videos, the operating system and other data until the user deletes them.


Floppy Disk

A floppy disk is a secondary storage device consisting of a thin, flexible disk with a magnetic coating for holding electronic data such as computer files. Also known as a floppy diskette, it came in three sizes: 8 inch, 5.25 inch and 3.5 inch. The stored data of a floppy disk is accessed through a floppy disk drive. For a long time it was the main way to install a new program on a computer or to back up information. However, it is the oldest type of portable storage device and can store only up to 1.44 MB of data; since most programs were larger, multiple diskettes were required. Therefore, it is no longer used, due to its very low storage capacity.


CD (Compact Disc)

CD stands for Compact Disc and is an optical disk storage device. It is used to store various data types such as audio, video, files, operating systems, backup files and any other information useful to a computer. A CD is 1.2 mm thick and 12 cm in diameter and can store approximately 700-783 MB of data. Laser light is used to read data from and write data to the disc.


Types of CDs

1.      CD-ROM (Compact Disc Read Only Memory): It is mainly used for mass-produced content such as audio CDs, software and computer games, which are written onto the disc at the time of manufacture. Users can only read the data, text, music or videos from the disc; they cannot modify it or burn new data onto it.

2.      CD-R (Compact Disc Recordable): A type of compact disc that can be written once by the user; after that, the data cannot be modified or erased.

3.      CD-RW (Compact Disc Rewritable): A rewritable compact disc on which stored data can be written and erased repeatedly.

DVD Drive/Disc

DVD stands for Digital Video Disc or Digital Versatile Disc and is an optical disc storage device. It has the same size as a CD but can store a much larger amount of data. It was developed in 1995 by four electronics companies: Sony, Panasonic, Toshiba and Philips. DVDs come in three types: DVD-ROM (Read Only Memory), DVD-R (Recordable) and DVD-RW (Rewritable or Erasable). A DVD can store multiple data formats such as audio, video, images, software and operating systems. Its storage capacity ranges from 4.7 GB to 17 GB.


Blu Ray Disc (BD)

Blu-ray is an optical disc storage device used to store large amounts of data or high-definition video and other media files. It uses a blue-violet laser to read the stored data and can store data at a much greater density than a CD or DVD. For example, a compact disc stores about 700 MB and a DVD up to about 8 GB, while a single-layer Blu-ray Disc provides 25 GB of storage space.

Pen Drive

A pen drive, also known as a USB flash drive, is a portable device used to store data permanently. It is commonly used to store and transfer data and is connected to a computer through a USB port. It has no moving parts; instead, it uses an integrated circuit chip to store the data. It allows users to store and transfer data such as audio, video and images between computers. The storage capacity of pen drives ranges from 64 MB to 128 GB or more.


Memory Hierarchy Design

Memory hierarchy optimizes data access and storage. Design choices depend on the specific computer architecture, intended use, and the trade-off between speed, capacity, and cost.

Memory hierarchy design creates levels of memory based on different types and their characteristics. The memory hierarchy design looks like this:

  • Level 0: Registers.
  • Level 1: Cache.
  • Level 2: Main memory.
  • Level 3: Secondary memory, magnetic disks, or solid-state memory.
  • Level 4: Tertiary memory.

[Figure: Memory hierarchy pyramid]

Memory Hierarchy Characteristics

The hierarchy is based on characteristics which optimally balance performance, capacity, and cost:

  • Performance. Increases when users need to access lower memory hierarchy levels less frequently. Without the memory hierarchy, a speed gap exists between the main memory and CPU registers. 
  • Capacity. Represents a volume of information the memory is able to store. 
  • Access Time. The interval between the read/write request and the data availability.
  • Cost Per Bit. This metric is the overall memory cost divided by the total number of bits the memory can store.

Each characteristic either increases or decreases, going from CPU registers (level 0) to the fourth level, as represented in the table below:

Characteristic | From Level 0 to Level 4
Performance | Decreases
Capacity | Increases
Access Time | Increases
Cost per Bit | Decreases

2D and 2.5D Memory organization

The internal structure of a memory, whether RAM or ROM, is made up of memory cells, each of which stores one bit; a group of 8 bits makes a byte. The memory is organized as a multidimensional array of rows and columns, in which each cell stores a bit and a complete row contains a word. The size of such a memory can be expressed as follows.
 

2^n = N

where n is the number of address lines and N is the total number of addressable memory locations; that is, there will be 2^n words.

 

 

2D Memory organization – 
In a 2D organization, memory is arranged in the form of rows and columns (a matrix). Each row contains a word. In this memory organization there is a decoder, a combinational circuit with n input lines and 2^n output lines. One of the output lines selects the row addressed by the contents of the MAR, and the word represented by that row is then read or written through the data lines.

 

[Figure: 2D memory organization - https://media.geeksforgeeks.org/wp-content/uploads/222-10.png]

2.5D Memory organization – 
In a 2.5D organization the scenario is the same, but there are two different decoders: a column decoder and a row decoder. The column decoder is used to select the column and the row decoder is used to select the row. The address from the MAR goes to the decoders as input; the decoders select the respective cell through the bit lines, and the data at that location is then read out, or data is written into that memory location, over the same bit lines. A small sketch of this address split follows the figure below.

 

[Figure: 2.5D memory organization - https://media.geeksforgeeks.org/wp-content/uploads/33-12.png]
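As a rough Python sketch of how the address in the MAR is split between the row and column decoders (the 16-bit address and the even 8/8 split are assumptions chosen for illustration, not fixed by the organization):

# Sketch: splitting a memory address into row and column fields for a
# 2.5D organization. Assumes a 16-bit address split evenly, giving a
# cell array of 2^8 rows x 2^8 columns; real designs may split differently.
ADDRESS_BITS = 16
ROW_BITS = 8
COL_BITS = ADDRESS_BITS - ROW_BITS

def decode(address: int) -> tuple[int, int]:
    """Return (row, column) selected by the row and column decoders."""
    row = address >> COL_BITS                # upper bits drive the row decoder
    col = address & ((1 << COL_BITS) - 1)    # lower bits drive the column decoder
    return row, col

addr = 0xA53C
row, col = decode(addr)
print(f"address {addr:#06x} -> row {row}, column {col}")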

Read and Write Operations – 

1.      If the select line is in read mode, the word or bit addressed by the MAR is placed on the data lines and read.

2.      If the select line is in write mode then the data from the memory data register (MDR) will be sent to the respective cell which is addressed by the memory address register (MAR).

3.      With the help of the select line, we can select the desired data and we can perform read and write operations on it. 
 

Comparison between 2D & 2.5D Organizations – 

1.      In 2D organization hardware is fixed but in 2.5D hardware changes.

2.      2D Organization requires more gates while 2.5D requires less.

3.      2D is more complex in comparison to the 2.5D organization.

4.      Error correction is not possible in the 2D organization, but in 2.5D it can be done easily.

5.      2D is more difficult to fabricate in comparison to the 2.5D organization. 

 
 

2D Memory Organization:

Advantages:

Simplicity: 2D memory organization is a simple and straightforward approach, with memory chips arranged in a two-dimensional grid.

Cost-Effective: 2D memory organization is cost-effective, making it a popular choice for many low-power and low-cost devices.

Low Power: 2D memory organization has low power consumption, making it ideal for use in mobile devices and other portable electronics.

Disadvantages:

Limited Bandwidth: 2D memory organization has limited bandwidth due to the sequential access pattern of memory chips, which can lead to slower data transfer rates.

Limited Capacity: 2D memory organization has limited capacity since it requires memory chips to be arranged in a two-dimensional grid, limiting the number of memory chips that can be used.

Limited Scalability: 2D memory organization is not scalable, making it difficult to increase memory capacity or performance without adding more memory chips.

2.5D Memory Organization:

Advantages:

Higher Bandwidth: 2.5D memory organization has higher bandwidth since it uses a high-speed interconnect between memory chips, enabling faster data transfer rates.

Higher Capacity: 2.5D memory organization has higher capacity since it can stack multiple memory chips on top of each other, enabling more memory to be packed into a smaller space.

Scalability: 2.5D memory organization is highly scalable, making it easier to increase memory capacity or performance without adding more memory chips.

Disadvantages:

Complexity: 2.5D memory organization is more complex than 2D memory organization since it requires additional interconnects and packaging technologies.

Higher Cost: 2.5D memory organization is generally more expensive than 2D memory organization due to the additional interconnects and packaging technologies required.

Higher Power Consumption: 2.5D memory organization has higher power consumption due to the additional interconnects and packaging technologies, making it less ideal for use in mobile devices and other low-power electronics.

 

 

Cache Memory:

Cache memory is a high-speed memory which is small in size but faster than the main memory (RAM). The CPU can access it more quickly than primary memory, so it is used to keep pace with the high-speed CPU and to improve its performance.


Cache memory can be accessed only by the CPU. It can be a reserved part of the main memory or a separate storage device placed between the CPU and the main memory. It holds the data and programs that are frequently used by the CPU, making sure that this data is instantly available whenever the CPU needs it. In other words, if the CPU finds the required data or instructions in the cache memory, it does not need to access the primary memory (RAM). Thus, by acting as a buffer between the RAM and the CPU, the cache speeds up system performance.

 Types of Cache Memory:

1.      L1 Cache: The L1 cache is also known as the onboard, internal, or primary cache. It is built into the CPU chip itself. Its speed is very high, and the size of the L1 cache typically varies from 8 KB to 128 KB.

2.      L2 Cache: It is also known as the external or secondary cache, and it provides fast access to temporarily stored data. It is traditionally built on a separate chip on the motherboard rather than inside the CPU like the L1 cache. The size of the L2 cache may be 128 KB to 1 MB.

3.      L3 Cache: The L3 cache is generally used in computers needing high performance and capacity. It is built into the motherboard. It is slower than the L1 and L2 caches, and its size may be up to 8 MB.

Advantages of Cache Memory

1.      Cache memory is faster than the main memory.

2.      It stores the data and instructions that the CPU uses repeatedly, which improves the performance of the computer.

3.      Its data access time is less than that of the main memory.

Disadvantage of Cache Memory

1.      It is very costly as compared to the Main memory and the Secondary memory.

2.      It has limited storage capacity.

Register Memory

The register memory is a temporary storage area for storing and transferring the data and instructions currently in use by the computer. It is the smallest and fastest memory of a computer and is located inside the CPU in the form of registers. Registers are typically 16, 32 or 64 bits in size. They temporarily store data, instructions and the addresses of memory locations that are used repeatedly, in order to provide a faster response to the CPU.

Primary Vs. Secondary Memory

Primary Memory | Secondary Memory
It is also known as temporary memory. | It is also known as permanent memory.
Data can be accessed directly by the processor or CPU. | Data cannot be accessed directly by the CPU.
It can be volatile or non-volatile. | It is always non-volatile.
It is more costly than secondary memory. | It is less costly than primary memory.
It is a faster memory. | It is a slower memory.
It has limited storage capacity. | It has a large storage capacity.
It requires power to retain the data. | It does not require power to retain the data.
Examples: RAM, ROM, registers, EPROM, PROM and cache memory. | Examples: CD, DVD, HDD, magnetic tapes, flash disks, pen drives, etc.

Cache Performance

When the processor needs to read or write a location in the main memory, it first checks for a corresponding entry in the cache.

If the processor finds that the memory location is in the cache, a Cache Hit has occurred and data is read from the cache.

If the processor does not find the memory location in the cache, a cache miss has occurred. For a cache miss, the cache allocates a new entry and copies in data from the main memory, then the request is fulfilled from the contents of the cache.

The performance of cache memory is frequently measured in terms of a quantity called Hit ratio.

Hit Ratio (H) = hits / (hits + misses) = number of hits / total accesses
Miss Ratio = misses / (hits + misses) = number of misses / total accesses = 1 - Hit Ratio (H)
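A minimal Python sketch of these ratios (the hit and miss counts are made-up example values):

def cache_ratios(hits: int, misses: int) -> tuple[float, float]:
    """Return (hit_ratio, miss_ratio) for a given number of hits and misses."""
    total = hits + misses
    hit_ratio = hits / total
    miss_ratio = misses / total      # equivalently: 1 - hit_ratio
    return hit_ratio, miss_ratio

# Example: 950 hits and 50 misses out of 1000 accesses.
h, m = cache_ratios(hits=950, misses=50)
print(f"Hit ratio = {h:.2f}, Miss ratio = {m:.2f}")   # 0.95 and 0.05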

We can improve cache performance by using a larger cache block size and higher associativity, and by reducing the miss rate, the miss penalty, and the time to hit in the cache.

Cache Mapping

There are three different types of mapping used for cache memory, as follows:

1.      Direct Mapping

2.      Associative Mapping

3.      Set-Associative Mapping

1. Direct Mapping

The simplest technique, known as direct mapping, maps each block of main memory into only one possible cache line. In direct mapping, each memory block is assigned to a specific line in the cache; if that line is already occupied when a new block needs to be loaded, the old block is discarded. The memory address is split into two parts, an index field and a tag field; the cache stores the tag field together with the data, and the index field selects the cache line. Direct mapping's performance is directly proportional to the hit ratio.

i = j modulo m
where
i = cache line number
j = main memory block number
m = number of lines in the cache

[Figure: Direct Mapping]

For purposes of cache access, each main memory address can be viewed as consisting of three fields. The least significant w bits identify a unique word or byte within a block of main memory; in most contemporary machines, the address is at the byte level. The remaining s bits specify one of the 2^s blocks of main memory. The cache logic interprets these s bits as a tag of s - r bits (the most significant portion) and a line field of r bits, which identifies one of the m = 2^r lines of the cache. The line field provides the index bits in direct mapping; a small sketch of this split follows the structure figure below.

[Figure: Direct Mapping - Structure]
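A rough Python sketch of this tag/line/word breakdown for a direct-mapped cache (the 16-byte block and 256-line cache are assumed example parameters, not values from the text):

# Sketch: splitting a byte address into tag, line and word fields for a
# direct-mapped cache. Assumed parameters: 16-byte blocks (w = 4 bits) and
# a 256-line cache (r = 8 bits); the remaining upper bits form the tag.
BLOCK_SIZE = 16                      # bytes per block -> w = 4 offset bits
NUM_LINES = 256                      # lines in the cache -> r = 8 index bits
W = BLOCK_SIZE.bit_length() - 1
R = NUM_LINES.bit_length() - 1

def split_address(addr: int) -> tuple[int, int, int]:
    """Return the (tag, line, word) fields of a memory address."""
    word = addr & (BLOCK_SIZE - 1)
    line = (addr >> W) & (NUM_LINES - 1)   # i = j mod m falls out of these bits
    tag = addr >> (W + R)
    return tag, line, word

tag, line, word = split_address(0x12345)
print(f"tag={tag:#x}, line={line}, word offset={word}")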

2. Associative Mapping

In this type of mapping, an associative memory is used to store both the content and the address of each memory word. Any block can go into any line of the cache: the word ID bits identify which word within the block is needed, and all of the remaining address bits form the tag. This enables the placement of any block at any place in the cache memory and makes it the fastest and most flexible mapping form. In associative mapping, the number of index bits is zero.

[Figure: Associative Mapping - Structure]

3. Set-Associative Mapping

This form of mapping is an enhanced form of direct mapping in which the drawbacks of direct mapping are removed. Set-associative mapping addresses the problem of possible thrashing in the direct mapping method. Instead of having exactly one line that a block can map to in the cache, a few lines are grouped together to create a set, and a block in memory can then map to any one of the lines of a specific set. This allows two or more blocks that share the same index to be present in the cache at the same time. Set-associative cache mapping combines the best of the direct and associative mapping techniques. In set-associative mapping the index bits are given by the set offset bits; the cache consists of a number of sets, each of which consists of a number of lines.

[Figure: Set-Associative Mapping]

Relationships in the Set-Associative Mapping can be defined as:

m = v * k
i = j mod v

where
i = cache set number
j = main memory block number
v = number of sets
m = number of lines in the cache
k = number of lines in each set (the associativity)

[Figure: Set-Associative Mapping - Structure]
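A small Python sketch of the set computation i = j mod v described above (the 2-way, 128-set configuration is an assumed example):

# Sketch: mapping a main-memory block to a set in a set-associative cache.
# Assumed example configuration: k = 2 lines per set (2-way), v = 128 sets,
# so m = v * k = 256 lines in total.
K = 2            # lines per set (associativity)
V = 128          # number of sets
M = V * K        # total lines in the cache

def set_number(block: int) -> int:
    """Set that main-memory block number `block` maps to: i = j mod v."""
    return block % V

for j in (0, 5, 128, 133):
    print(f"block {j} -> set {set_number(j)}")
# blocks 0 and 128 share set 0; blocks 5 and 133 share set 5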


Application of Cache Memory

Here are some of the applications of Cache Memory.

Primary Cache: A primary cache is always located on the processor chip. This cache is small and its access time is comparable to that of processor registers.

Secondary Cache: Secondary cache is placed between the primary cache and the rest of the memory. It is referred to as the level 2 (L2) cache. Often, the Level 2 cache is also housed on the processor chip.

Spatial Locality of Reference: Spatial locality of reference means that when a memory location is accessed, locations close to it are likely to be accessed soon; therefore the whole block surrounding the referenced word is worth bringing into the cache.

Temporal Locality of Reference: Temporal locality of reference means that a recently used word is likely to be used again soon, which is why replacement policies such as Least Recently Used (LRU) work well. When a miss or page fault occurs, the complete block or page containing the word is loaded rather than the word alone, because spatial locality suggests that neighbouring words will be referenced next.
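As an informal illustration of spatial locality (the array size is arbitrary; this only contrasts access patterns, it does not measure real cache behaviour):

# Sketch: two ways of traversing the same 2D array. Row-major traversal
# touches neighbouring elements one after another (good spatial locality);
# column-major traversal jumps by a full row each step (poor spatial
# locality for a row-major layout).
ROWS, COLS = 500, 500
matrix = [[r * COLS + c for c in range(COLS)] for r in range(ROWS)]

def sum_row_major(m):
    return sum(m[r][c] for r in range(ROWS) for c in range(COLS))

def sum_column_major(m):
    return sum(m[r][c] for c in range(COLS) for r in range(ROWS))

# Both return the same total; on real hardware the row-major version is
# typically faster because consecutive accesses hit the same cache block.
assert sum_row_major(matrix) == sum_column_major(matrix)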

Advantages

Cache Memory is faster in comparison to main memory and secondary memory.

Programs whose instructions and data are held in cache memory can be executed in less time.

The data access time of Cache Memory is less than that of the main memory.

Cache memory stores the data and instructions that are regularly used by the CPU, and therefore it increases the performance of the CPU.

Disadvantages

Cache memory is costlier than primary memory and secondary memory.

Data is stored on a temporary basis in Cache Memory.

Whenever the system is turned off, data and instructions stored in cache memory get destroyed.

The high cost of cache memory increases the price of the Computer System.

Page Replacement Algorithms

Page replacement algorithms are techniques used by operating systems to manage memory efficiently when physical memory is full. When a new page needs to be loaded into physical memory and there is no free frame, these algorithms determine which existing page to replace.

If no page frame is free, the virtual memory manager performs a page replacement operation to replace one of the pages in memory with the page whose reference caused the page fault. It proceeds as follows: the virtual memory manager uses a page replacement algorithm to select one of the pages currently in memory for replacement, marks the page table entry of the selected page as "not present" in memory, and initiates a page-out operation for it if the modified (dirty) bit of its page table entry indicates that the page has been changed.

Common Page Replacement Techniques

1.      First In First Out (FIFO)

2.      Optimal Page replacement

3.      Least Recently Used (LRU)

4.      Most Recently Used (MRU)

1.      First In First Out (FIFO)

This is the simplest page replacement algorithm. The operating system keeps track of all pages in memory in a queue, with the oldest page at the front. When a page needs to be replaced, the page at the front of the queue is selected for removal.

Example 1: Consider the page reference string 1, 3, 0, 3, 5, 6, 3 with 3 page frames. Find the number of page faults using the FIFO page replacement algorithm.

[Figure: FIFO - Page Replacement]

Initially, all slots are empty, so when 1, 3, 0 come they are allocated to the empty slots —> 3 page faults.
When 3 comes, it is already in memory —> 0 page faults. Then 5 comes; it is not in memory, so it replaces the oldest page, 1 —> 1 page fault. 6 comes; it is also not in memory, so it replaces the oldest page, 3 —> 1 page fault. Finally, when 3 comes it is not in memory, so it replaces the oldest page, 0 —> 1 page fault, giving a total of 6 page faults. A short simulation of this trace is sketched below.
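A minimal Python sketch of FIFO page replacement, applied to the reference string of Example 1 (the function name is ours, chosen for illustration):

from collections import deque

def fifo_page_faults(reference_string, num_frames):
    """Count page faults for FIFO replacement with the given number of frames."""
    frames = set()          # pages currently resident
    queue = deque()         # arrival order, oldest page at the left
    faults = 0
    for page in reference_string:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:        # no free frame: evict the oldest page
                frames.discard(queue.popleft())
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_page_faults([1, 3, 0, 3, 5, 6, 3], 3))   # -> 6 page faults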

2.      Optimal Page Replacement

In this algorithm, the page that will not be used for the longest duration of time in the future is replaced.

Example 2: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the optimal page replacement algorithm.

[Figure: Optimal Page Replacement]

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is not used for the longest duration of time in the future —> 1 page fault. 0 is already there —> 0 page faults. 4 takes the place of 1 —> 1 page fault.

For the rest of the page reference string —> 0 page faults, because those pages are already available in memory, giving a total of 6 page faults. Optimal page replacement is perfect but not possible in practice, since the operating system cannot know future requests. It is used as a benchmark against which other replacement algorithms can be analyzed. A short sketch of this policy follows.
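A minimal Python sketch of the optimal (farthest-future-use) policy, applied to the reference string of Example 2 (purely illustrative, since a real OS cannot see future references):

def optimal_page_faults(reference_string, num_frames):
    """Count page faults when the victim is the page used farthest in the future."""

    def next_use(page, position):
        # Index of the next reference to `page` after `position`, or infinity if never used again.
        try:
            return reference_string.index(page, position + 1)
        except ValueError:
            return float("inf")

    frames = []
    faults = 0
    for i, page in enumerate(reference_string):
        if page in frames:
            continue
        faults += 1
        if len(frames) < num_frames:
            frames.append(page)
        else:
            victim = max(frames, key=lambda p: next_use(p, i))
            frames[frames.index(victim)] = page
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(optimal_page_faults(refs, 4))   # -> 6 page faults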

 

 

3.     Least Recently Used

In this algorithm, the page that was least recently used is replaced.

Example 3: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the LRU page replacement algorithm.

[Figure: Least Recently Used - Page Replacement]

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults.
0 is already there —> 0 page faults. When 3 comes, it takes the place of 7 because 7 is the least recently used page —> 1 page fault.
0 is already in memory —> 0 page faults.
4 takes the place of 1 —> 1 page fault.
For the rest of the page reference string —> 0 page faults, because those pages are already available in memory; the total is 6 page faults. A short LRU simulation is sketched below.
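A minimal Python sketch of LRU page replacement, applied to the reference string of Example 3 (an OrderedDict is just one convenient way to track recency):

from collections import OrderedDict

def lru_page_faults(reference_string, num_frames):
    """Count page faults for LRU replacement with the given number of frames."""
    recency = OrderedDict()     # keys are resident pages, least recently used first
    faults = 0
    for page in reference_string:
        if page in recency:
            recency.move_to_end(page)            # page just used: now most recent
        else:
            faults += 1
            if len(recency) == num_frames:
                recency.popitem(last=False)      # evict the least recently used page
            recency[page] = True
    return faults

refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3]
print(lru_page_faults(refs, 4))   # -> 6 page faults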

4.      Most Recently Used (MRU)

In this algorithm, the page that was most recently used is replaced. Belady's anomaly can occur in this algorithm.

Example 4: Consider the page reference string 7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 3 with 4 page frames. Find the number of page faults using the MRU page replacement algorithm.

[Figure: Most Recently Used - Page Replacement]

Initially, all slots are empty, so when 7, 0, 1, 2 come they are allocated to the empty slots —> 4 page faults

0 is already there —> 0 page faults

when 3 comes, it takes the place of 0 because 0 is the most recently used page —> 1 page fault

when 0 comes, it takes the place of 3 —> 1 page fault

when 4 comes, it takes the place of 0 —> 1 page fault

2 is already in memory —> 0 page faults

when 3 comes, it takes the place of 2 —> 1 page fault

when 0 comes, it takes the place of 3 —> 1 page fault

when 3 comes, it takes the place of 0 —> 1 page fault

when 2 comes, it takes the place of 3 —> 1 page fault

when 3 comes, it takes the place of 2 —> 1 page fault, giving a total of 12 page faults

 

 

 

 

 

Virtual Memory

Virtual memory in computer architecture plays a crucial role by expanding available memory beyond physical limits. This innovative concept allows efficient multitasking, enhancing system performance and enabling smoother operations.

Virtual memory optimizes resource allocation by combining hardware and software, ensuring seamless execution of diverse applications simultaneously.

Concept & Purpose Of Virtual Memory

To begin with, let us understand the concept and purpose of virtual memory:

Technique In Operating Systems

Virtual memory is a technique used in computer architecture to create the illusion of more memory than is physically available, allowing programs to access a larger address space than the installed physical memory.

Virtual memory provides an essential abstraction layer that enables programs to execute as if they have continuous access to a large memory block, even with fragmented or limited physical memory. This abstraction simplifies programming and enhances system performance.

Efficient Resource Utilization

By utilizing virtual memory, multiple processes can run concurrently without needing dedicated physical memory for each process. This efficient utilization of resources ensures that the system can handle various tasks simultaneously, improving overall system performance.

Advantages & Disadvantages Of Virtual Memory

Let us study the advantages and disadvantages of virtual memory in computer architecture:

Advantages Of Virtual Memory

Increased multitasking: Virtual memory allows multiple programs to run simultaneously, enhancing productivity.

Efficient use of physical memory: Virtual memory enables the system to utilize physical memory more effectively by swapping data in and out as needed.

Larger memory space: With virtual memory, applications can access a larger memory space than the physical RAM available.

Disadvantages Of Virtual Memory

Performance impact: Constant data swapping between physical memory and the hard drive can slow system performance.
