Cache (computing)
This article is about the computing optimization concept.
A cache hit occurs when the requested data can be found in a cache, while a cache miss occurs when it cannot. To be cost-effective and to enable efficient use of data, caches must be relatively small. This is mitigated by reading in large chunks, in the hope that subsequent reads will be from nearby locations. The use of a cache also allows for higher throughput from the underlying resource, by assembling multiple fine-grained transfers into larger, more efficient requests. In the case of DRAM, this might be served by a wider bus. Hardware implements a cache as a block of memory for temporary storage of data likely to be used again. A cache is made up of a pool of entries.
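Fetching in large chunks can be sketched as follows. This is an illustrative model, not a hardware description: the block size and class names are invented, and a Python list stands in for slow memory.

```python
# A sketch of block-granularity fetching: on a miss, a whole block is read
# from the backing store, in the hope that nearby addresses are accessed next.
BLOCK_SIZE = 4  # assumed block size, chosen only for illustration

class BlockCache:
    def __init__(self, backing):
        self.backing = backing       # list standing in for slow memory
        self.blocks = {}             # block number -> list of cached values

    def read(self, address):
        block = address // BLOCK_SIZE
        if block not in self.blocks:     # miss: fetch the whole block
            start = block * BLOCK_SIZE
            self.blocks[block] = self.backing[start:start + BLOCK_SIZE]
        return self.blocks[block][address % BLOCK_SIZE]

mem = list(range(100))
cache = BlockCache(mem)
cache.read(10)    # miss: loads the block covering addresses 8..11
cache.read(11)    # hit: served from the already-fetched block
```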
Each entry has associated data, which is a copy of the same data in some backing store. Each entry also has a tag, which specifies the identity of the data in the backing store of which the entry is a copy. If an entry can be found with a tag matching that of the desired data, the data in the entry is used instead; this situation is known as a cache hit. The alternative situation, when the cache is consulted and found not to contain data with the desired tag, is known as a cache miss. The previously uncached data fetched from the backing store during miss handling is usually copied into the cache, ready for the next access. During a cache miss, the cache usually evicts some other entry in order to make room for the newly fetched data.
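The hit/miss lookup described above can be sketched in a few lines. This is a minimal illustration with invented names; a dict stands in for both the cache's entry pool (keyed by tag) and the backing store.

```python
# A minimal cache: entries are keyed by tag and hold copies of backing-store
# data. A lookup that finds a matching tag is a hit; otherwise it is a miss,
# and the fetched data is copied into the cache for the next access.
class SimpleCache:
    def __init__(self, backing_store):
        self.backing = backing_store   # dict acting as the backing store
        self.entries = {}              # tag -> cached copy of the data
        self.hits = 0
        self.misses = 0

    def read(self, tag):
        if tag in self.entries:        # cache hit: serve from the cache
            self.hits += 1
            return self.entries[tag]
        self.misses += 1               # cache miss: fetch from backing store
        value = self.backing[tag]
        self.entries[tag] = value      # copy into the cache, ready for reuse
        return value

store = {"a": 1, "b": 2}
cache = SimpleCache(store)
cache.read("a")   # miss: fetched from the backing store
cache.read("a")   # hit: served from the cache
```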
The heuristic used to select the entry to evict is known as the replacement policy. When a system writes data to cache, it must at some point write that data to the backing store as well. The timing of this write is controlled by what is known as the write policy. There are two basic approaches. Write-through: the write is done synchronously both to the cache and to the backing store. Write-back: initially, writing is done only to the cache; the write to the backing store is postponed until the modified content is about to be replaced by another cache block.
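One widely used replacement policy is least-recently-used (LRU). The article does not prescribe a specific policy, so the following is just one common choice, sketched with Python's standard-library OrderedDict.

```python
from collections import OrderedDict

# An LRU replacement policy: when the cache is full, evict the entry that
# has gone longest without being accessed.
class LRUCache:
    def __init__(self, capacity):
        self.capacity = capacity
        self.data = OrderedDict()          # insertion order = recency order

    def get(self, key):
        if key not in self.data:
            return None                    # cache miss
        self.data.move_to_end(key)         # mark as most recently used
        return self.data[key]

    def put(self, key, value):
        if key in self.data:
            self.data.move_to_end(key)
        self.data[key] = value
        if len(self.data) > self.capacity:
            self.data.popitem(last=False)  # evict least recently used entry

c = LRUCache(2)
c.put("a", 1)
c.put("b", 2)
c.get("a")       # touching "a" makes "b" the least recently used entry
c.put("c", 3)    # cache is full, so "b" is evicted
```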
A write-back cache is more complex to implement, since it needs to track which of its locations have been written over, and mark them as dirty for later writing to the backing store. The data in these locations are written back to the backing store only when they are evicted from the cache, an effect referred to as a lazy write. Other policies may also trigger data write-back: for example, the client may make many changes to data in the cache, and then explicitly notify the cache to write back the data. Since no data is returned to the requester on write operations, a decision needs to be made on write misses: whether or not the data should be loaded into the cache. Write allocate: data at the missed-write location is loaded into the cache, followed by a write-hit operation. In this approach, write misses are similar to read misses.
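Dirty tracking and lazy writes can be sketched as below. This is an illustrative single-level model with invented names; real caches track dirtiness per line, not per key.

```python
# A write-back cache: writes go only to the cache and the entry is marked
# dirty. The backing store is updated lazily, when a dirty entry is evicted
# or when the client explicitly requests a flush.
class WriteBackCache:
    def __init__(self, backing_store):
        self.backing = backing_store
        self.entries = {}   # key -> cached value
        self.dirty = set()  # keys written since last write-back

    def write(self, key, value):
        self.entries[key] = value
        self.dirty.add(key)              # backing store is now stale

    def evict(self, key):
        if key in self.dirty:            # lazy write on eviction
            self.backing[key] = self.entries[key]
            self.dirty.discard(key)
        del self.entries[key]

    def flush(self):                     # explicit write-back by the client
        for key in self.dirty:
            self.backing[key] = self.entries[key]
        self.dirty.clear()

store = {}
wb = WriteBackCache(store)
wb.write("x", 42)
# store is still empty here: the write has not reached the backing store
wb.flush()
# only now does store hold {"x": 42}
```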
No-write allocate: data at the missed-write location is not loaded into the cache, and is written directly to the backing store. In this approach, blocks are loaded into the cache on read misses only. A write-through cache typically uses no-write allocate: subsequent writes would gain no advantage, since they still need to be written directly to the backing store. Entities other than the cache may change the data in the backing store, in which case the copy in the cache may become out-of-date or stale. Alternatively, when the client updates the data in the cache, copies of those data in other caches will become stale. Communication protocols between the cache managers which keep the data consistent are known as coherency protocols. Small memories on or close to the CPU can operate faster than the much larger main memory.
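The pairing of write-through with no-write allocate can be sketched as follows. The class and its behaviour follow the description above; this is a model for illustration, not a hardware design.

```python
# Write-through with no-write allocate: every write goes synchronously to
# the backing store; a write miss does NOT bring the data into the cache,
# so only read misses allocate entries.
class WriteThroughCache:
    def __init__(self, backing_store):
        self.backing = backing_store
        self.entries = {}

    def write(self, key, value):
        self.backing[key] = value     # synchronous write to the backing store
        if key in self.entries:       # write hit: keep the cached copy fresh
            self.entries[key] = value
        # write miss: no-write allocate, so the cache is left untouched

    def read(self, key):
        if key not in self.entries:   # only read misses populate the cache
            self.entries[key] = self.backing[key]
        return self.entries[key]

store = {}
wt = WriteThroughCache(store)
wt.write("k", 1)   # write miss: reaches the store but is not cached
wt.read("k")       # read miss: now the entry is allocated
```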
Digital signal processors have similarly generalised over the years. While CPU caches are generally managed entirely by hardware, a variety of software manages other caches. The page cache in main memory, which is an example of disk cache, is managed by the operating system kernel. While the disk buffer, which is an integrated part of the hard disk drive, is sometimes misleadingly referred to as “disk cache”, its main functions are write sequencing and read prefetching. Repeated cache hits are relatively rare, due to the small size of the buffer in comparison to the drive’s capacity.
Another form of cache is P2P caching, where the files most sought after by peer-to-peer applications are stored in an ISP cache to accelerate P2P transfers. Similarly, decentralised equivalents exist, which allow communities to perform the same task for P2P traffic, for example, Corelli. A cache can store data that is computed on demand rather than retrieved from a backing store. The BIND DNS daemon caches a mapping of domain names to IP addresses, as does a resolver library. Search engines also frequently make web pages they have indexed available from their cache. For example, Google provides a “Cached” link next to each search result. Another type of caching is storing computed results that will likely be needed again, or memoization.
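Memoization is directly available in Python's standard library as the functools.lru_cache decorator, which caches computed results keyed by the function's arguments:

```python
from functools import lru_cache

# Memoization: cache computed results that will likely be needed again.
# lru_cache stores return values keyed by the call's arguments.
@lru_cache(maxsize=None)
def fib(n):
    """Naive recursive Fibonacci; memoization makes repeated subproblems
    hit the cache, turning exponential time into linear."""
    if n < 2:
        return n
    return fib(n - 1) + fib(n - 2)

fib(30)                   # each subproblem is computed only once
info = fib.cache_info()   # lru_cache exposes hit/miss counters
```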
For example, ccache is a program that caches the output of compilation, in order to speed up later compilation runs. Database caching can substantially improve the throughput of database applications, for example in the processing of indexes, data dictionaries, and frequently used subsets of data. A distributed cache uses networked hosts to provide scalability, reliability and performance to the application. The hosts can be co-located or spread over different geographical regions.
A buffer is a temporary memory location that is traditionally used because CPU instructions cannot directly address data stored in peripheral devices. The portion of a caching protocol where individual writes are deferred to a batch of writes is a form of buffering. In multiprocessor systems, cache coherence protocols keep the copies held in the different caches consistent: with coherent caches, the data in all the caches’ copies is the same. Another definition is: “a multiprocessor is cache consistent if all writes to the same memory location are performed in some sequential order”.