The first two paragraphs are only for readers who have little or no knowledge of caching.
The cache is considered to be the most expensive and most important resource that a computer possesses. Cache is the memory the CPU can access in the least amount of time, so cache accesses are much faster than accesses to other memories. Whenever the CPU needs some data, it checks the cache first, and only if the cache doesn't have the data does it begin searching the other memories. The size of the cache is limited due to architectural constraints as well as the fact that cache is built from expensive SRAM. The catch is that a cache access takes the least amount of time, yet the cache can hold only a small part of the data a computer holds and needs. So operating systems use replacement policies to replace existing data in the cache with other data from the rest of the memory hierarchy (RAM and secondary storage). It is this very replacement policy that largely decides how the system performs. If the replacement policy is designed so that on most occasions the data the CPU needs is already in the cache (a cache hit), the overall system will be fast and performance will go up. On the other hand, if the CPU frequently cannot find the needed data in the cache (a cache miss), performance will go down.
Data enters the cache when it is accessed for the first time. Suppose the CPU needs some data that resides in secondary storage. First the data goes from secondary storage to RAM, and then from RAM to the cache. The next time the CPU needs that data, it checks whether the cache still has it. If the cache doesn't, a miss occurs and the RAM and secondary storage are searched, in that order. Once the data is found, it is again brought into the cache. The replacement policy comes into action whenever the cache is full and new data has to be brought in. The replacement policy tries to predict the future: it removes the data that is least likely to be accessed again and keeps, or brings in, the data that is most likely to be accessed. It is this future-prediction capability that makes or breaks a replacement policy. The most popular replacement policy is LRU (Least Recently Used), where the data that was accessed least recently is evicted and the data that was accessed most recently is retained.
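To make the LRU idea concrete, here is a minimal sketch in Python; the class name LRUCache and the fixed capacity are purely illustrative, not a real OS or hardware implementation:

    from collections import OrderedDict

    class LRUCache:
        """Fixed-size cache that evicts the least recently used entry."""

        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = OrderedDict()   # key -> data, ordered by recency

        def get(self, key):
            if key not in self.entries:
                return None                # miss: caller fetches from RAM/disk
            self.entries.move_to_end(key)  # mark as most recently used
            return self.entries[key]

        def put(self, key, data):
            if key in self.entries:
                self.entries.move_to_end(key)
            elif len(self.entries) >= self.capacity:
                self.entries.popitem(last=False)  # evict least recently used
            self.entries[key] = data

Real CPU caches do this in hardware, usually with approximations of LRU, but the eviction rule is the same: the entry touched longest ago goes first.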
LRU is driven by a heuristic (the history of data usage, where time of access is the primary driver), and it is obviously not perfect. Given how much impact the replacement policy can have on performance, one should strive to improve its ability to predict future accesses. This is where researchers believe AI can be put to good use. AI has been a good predictor in several domains, and this is likely to be another domain where it can succeed. To replace existing data in the cache with data that is more likely to be accessed, one needs the heuristic (which LRU already puts to good use) plus some additional predictor.
All of the data in a computer system is organized as a file system, and modern file systems use tree structures to organize files. To go beyond the simple heuristic, the system needs to explore the most likely patterns in which data will be accessed. It is not just about the times at which data is accessed, but also about the pattern in which data is accessed across the file system tree. So the proposed replacement policy would strive to find the most probable patterns in which data might be accessed in the future. It can predict these patterns by using metadata from the file system, by storing observed data access patterns, and by constructing new patterns from them according to some logic.
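As one possible illustration of the metadata step, the sketch below walks a directory tree and orders files by their recorded last-access time to form an initial candidate pattern. The function name seed_patterns and the choice of last-access time as the only signal are assumptions made for illustration:

    import os

    def seed_patterns(root):
        """Build a rough candidate access pattern from file-system metadata:
        files ordered by their last access time (os.stat's st_atime)."""
        records = []
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                try:
                    st = os.stat(path)
                except OSError:
                    continue                       # file vanished or unreadable
                records.append((st.st_atime, path))
        records.sort()                             # oldest access first
        return [path for _atime, path in records]  # one candidate "pattern"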
Basic patterns can be generated from file access times and frequencies (something most modern file systems already record). These candidate patterns can be compared with the real file access patterns observed in recent history (stored in some small, dedicated memory), and the comparison can be used to eliminate unlikely candidates. What remains is a set of probable file access patterns. During the policy's operation, this set of patterns is combined with the same recency heuristic that LRU uses, and replacement decisions are made on the basis of the most probable access pattern chosen by the policy. The hits and misses observed while using the pattern set can then be used both to modify the set itself and to switch to the next most probable pattern. This amounts to a feedback loop, which is the core of learning in AI systems.
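The following sketch shows one way such a policy could combine the pattern prediction with an LRU-style recency heuristic. The class PatternAwareCache, its scoring of patterns, and the fixed prediction window are all assumptions made for illustration, not a definitive design:

    import time

    class PatternAwareCache:
        """Sketch: eviction prefers entries that the most probable access
        pattern does not expect to be needed soon; recency breaks ties."""

        def __init__(self, capacity, patterns):
            self.capacity = capacity
            self.patterns = patterns        # list of (score, [file, file, ...])
            self.entries = {}               # file -> (data, last_access_time)

        def _predicted_next(self, window=8):
            # Files that the highest-scoring pattern expects to be used next.
            # Taking the first `window` entries is a simplification.
            if not self.patterns:
                return set()
            _score, best = max(self.patterns, key=lambda p: p[0])
            return set(best[:window])

        def _choose_victim(self):
            predicted = self._predicted_next()
            # Prefer entries outside the predicted set, then least recently used.
            return min(self.entries,
                       key=lambda f: (f in predicted, self.entries[f][1]))

        def put(self, file, data):
            if file not in self.entries and len(self.entries) >= self.capacity:
                del self.entries[self._choose_victim()]
            self.entries[file] = (data, time.time())

        def get(self, file):
            if file not in self.entries:
                return None                 # miss: a feedback hook could go here
            data, _ = self.entries[file]
            self.entries[file] = (data, time.time())
            return data

Here the entries dictionary stands in for the cache itself, and the patterns list is assumed to come from the pattern-generation and elimination steps described above.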
It is pretty clear that such a scheme needs some extra memory and logic to compute these patterns and then store and update them. But the patterns would take up very little space, and the processing can be done during otherwise idle CPU cycles. Overall, the approach should pay off, because the cache hit ratio is likely to increase. Such an approach is useful for general computer systems where both files and file data are accessed in structured patterns; for example, certain applications always read some files before others. It can also be put to good use in servers (both database and web), since users are likely to view certain data or webpages before others. However, the approach may break down when usage is highly unpredictable. The file systems would not need any drastic changes, only additional logic for predicting and updating the patterns.
Like other AI systems, this one will get better with operation. Initially, the system generates file patterns from the metadata it gets from the file system. As it runs, it refines these initial patterns against the patterns that are actually observed. Finally, the patterns are refined further once they are put into operation and feedback is incorporated into them. One may also think of other prediction schemes for file usage, but the core concept remains the same: the system has to predict which files or data will be used the most at a given point in time. And any such approach would still be using AI to serve its purpose. That's the power of AI!
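One possible form for the feedback step is a simple score update; the reward and penalty values below, and the rule itself, are assumptions for illustration rather than anything prescribed in the scheme above:

    def update_pattern_scores(patterns, accessed_file, was_hit,
                              reward=1.0, penalty=0.5):
        """Patterns that anticipated the accessed file gain score, and on a
        miss the patterns that failed to anticipate it lose a little, so the
        ranking of 'most probable' patterns drifts toward observed behaviour."""
        updated = []
        for score, files in patterns:
            if accessed_file in files:
                score += reward
            elif not was_hit:
                score -= penalty
            updated.append((max(score, 0.0), files))
        return updated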