Buffer Cache Fundamentals


Oracle keeps copies of database blocks in an area of the SGA known as the buffer cache. The cache may hold more than one copy of a block, from different points in time, and may contain dirty blocks, that is, blocks that have been updated but not yet written back to disk. The database writer (DBWR/DBWn) processes are responsible for writing dirty blocks to disk, but any user session (server process) can read blocks into the cache. All blocks in the buffer cache are on an LRU (least recently used) list; when a process needs a buffer it scans from the LRU end of the list for a non-dirty buffer that it can use. The cache buffers lru chain latches serialize operations on the LRU lists.

Oracle Database uses the buffer cache to store data blocks read from disk, although it bypasses the buffer cache for particular operations such as sorting and parallel reads. To use the database buffer cache efficiently, tune the application's SQL statements to avoid unnecessary resource consumption; verify that frequently executed SQL statements, and SQL statements that perform many buffer gets, are well tuned. When using parallel query, consider configuring the database to use the database buffer cache instead of direct reads into the PGA. This configuration may be appropriate for systems that have a large amount of memory.

When configuring a new database instance, it is impossible to know the correct size of the buffer cache in advance. Typically the DBA makes a first estimate for the cache size, runs a representative workload on the instance, and examines the relevant statistics to see whether the cache is under-configured or over-configured.

The following are the main latches related to the buffer cache:

latch: cache buffers chains

This latch is acquired when searching for data blocks in the SGA. Since the buffer cache is organized as chains of buffers, each chain is protected by a child of this latch when it needs to be scanned. Contention on this latch can be caused by very heavy access to a single block, which would require the application to be reviewed. Starting from 8i there are many hash buckets to each latch, so there will be many buffers under each latch.

latch: cache buffers lru chain

Processes need to acquire this latch when they move buffers according to the LRU block replacement policy in the buffer cache. Contention on this latch can be reduced by implementing multiple buffer pools or by increasing the number of LRU latches with the parameter _DB_BLOCK_LRU_LATCHES. SQL tuning can also help, by reducing the number of data blocks visited per execution.

Multiple Buffer Caches:

Oracle uses a buffer cache to manage data blocks in memory. The buffer cache holds copies of the data blocks read from the data files comprising the database; it is located in the SGA and is shared by all processes connected to an instance. The advantage of using the buffer cache is that it eliminates physical I/O on frequently accessed blocks, since memory access is far faster than disk access.

Oracle users have observed that segments (tables, indexes, and so on) have such different usage patterns that they should be treated differently when it comes to how long Oracle keeps their blocks in the buffer cache. For example, it is more important to find blocks in memory for a table that is frequently accessed than for a table that is accessed only at application initialization time. In the first case we would like the blocks to stay in memory as long as possible, to eliminate the need to read them from disk; in the second case we don't care if the blocks get overwritten by newly read blocks. Oracle originally addressed this problem by allowing users to specify the CACHE clause during segment creation: blocks of segments created with this option were loaded and kept in memory until the space was needed to load other blocks. However, if a large table, say twice the size of the buffer pool, is accessed frequently in a random fashion, it is very likely that the blocks of the CACHED table will be pushed out of memory. If Oracle could segregate the I/O on the cached table and the large table into different buffer pools, it could increase the probability of the CACHED table staying in memory for a longer period. With the multiple buffer pools feature, we can use the keep buffer pool to retain objects in the buffer cache, and the recycle pool to prevent a block from taking up unnecessary space in the cache. When an object is assigned to a cache, all blocks from that object are placed in that cache.

Oracle maintains a default cache for objects that have not been assigned to one of the other buffer pools. Each buffer pool comprises a number of working sets, and a different number of sets can be allocated for each buffer pool. All sets use the same LRU replacement policy. A strict LRU aging policy provides very good hit ratios in most cases, but we can sometimes improve the hit rate by providing some hints.

 

The main problem with the LRU list occurs when a very large segment is accessed frequently in a random fashion. Here, "very large" means large compared to the size of the buffer cache; any single segment that accounts for a substantial portion of physical reads is probably one of these segments. Random reads against such a large segment can cause buffers that contain data for other segments to be aged out of the cache. The large segment ends up consuming a large percentage of the cache, but does not benefit from it. Very frequently accessed segments are not affected by large-segment reads, because their buffers are warmed frequently enough that they do not age out; the main trouble occurs with warm segments that are not accessed frequently enough to survive the buffer flushing caused by large-segment reads. We have two options for solving this problem. One is to move the large segments into a separate recycle cache so that they do not disturb the other segments; the recycle cache should be smaller than the default cache and should reuse buffers more quickly. The other is to move the small warm segments into a separate keep cache that is not used at all for large segments; the keep cache can be sized to minimize misses. We can also make the response times of specific queries more predictable by putting the segments they access in the keep cache, ensuring that they are never aged out.

 

When we examine system I/O performance, we should analyze the schema and determine whether multiple buffer pools would be advantageous. Consider a keep cache if there are small, frequently accessed tables that require a quick response time; very large tables with random I/O are good candidates for the recycle cache. When we partition the cache into multiple buffer pools, each buffer pool can be used for blocks from objects that are accessed in different ways. If the blocks of a particular object are likely to be reused, we should keep the object in the buffer cache so that the next use of a block does not require another disk I/O operation. Conversely, if a block probably will not be reused within a reasonable period of time, there is no reason to keep it in the cache; the block should be discarded to make room for a more popular block. By allocating objects to appropriate buffer pools, we can:

-        Reduce or eliminate IO

-        Isolate the objects in the cache.

-        Restrict or limit an object to a part of the cache.

 

We can create multiple buffer pools for each instance. The same set of buffer pools need not be defined for every instance of the database; between instances a buffer pool may be a different size or not defined at all, and each instance should be tuned separately. The parameters are as follows:

NAME                     TYPE        VALUE
------------------------ ----------- ------
db_16k_cache_size        big integer 0
db_2k_cache_size         big integer 0
db_32k_cache_size        big integer 0
db_4k_cache_size         big integer 0
db_8k_cache_size         big integer 0
db_cache_size            big integer 0
db_keep_cache_size       big integer 0
db_recycle_cache_size    big integer 0

The syntax for the buffer pool clause is as follows:

BUFFER_POOL { KEEP | RECYCLE | DEFAULT }

For example:

create table students (roll number, name varchar2(20)) storage (buffer_pool keep);

 

Keep Buffer Pool:

The goal of the keep buffer pool is to retain objects in memory, thus avoiding I/O operations. The size of the keep buffer pool therefore depends on the objects we wish to keep in the buffer cache; we can compute an approximate size by adding together the sizes of all objects dedicated to this pool. The keep buffer pool will have a 100% hit ratio after the buffers have been loaded into it, so do not compute hit ratios until the system has been running for a while and has reached a steady state. Keep in mind that a 100% buffer pool hit ratio may not be necessary: often we can decrease the size of the keep buffer pool considerably and still maintain a sufficiently high hit ratio, and the freed buffers can be allocated to other pools. Remember that each object kept in memory is a trade-off: it is beneficial to keep frequently accessed blocks in the cache, but retaining infrequently used blocks leaves less space for other, more active blocks.
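The "add up the segment sizes" estimate above can be sketched in a few lines. This is only an illustration: the segment names and block counts are invented, and a real sizing exercise would query the data dictionary instead.

```python
# Rough keep-pool sizing sketch: sum the sizes of the segments we intend
# to dedicate to the keep pool.  Names and sizes below are hypothetical.
BLOCK_SIZE = 8192                  # assuming an 8KB block size

segments_for_keep = {              # segment -> size in blocks (invented)
    "LOOKUP_CODES": 1280,
    "CURRENCY_RATES": 640,
    "APP_CONFIG": 80,
}

keep_pool_blocks = sum(segments_for_keep.values())
keep_pool_bytes = keep_pool_blocks * BLOCK_SIZE
print(keep_pool_blocks, keep_pool_bytes // (1024 * 1024))   # 2000 15
```

In practice we would round this estimate up, since the pool also has to absorb read-consistent clones of the kept blocks.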

 

Recycle Buffer Pool:

The goal of the recycle buffer pool is to eliminate blocks from memory as soon as they are no longer needed. If the application accesses the blocks of a very large object in a random fashion, there is little chance of reusing a block stored in the buffer pool before it is aged out, regardless of the size of the pool. Because of this, the object's blocks should not be cached; those buffers can be allocated to other objects. Be careful, however, not to discard blocks from memory too quickly. If the buffer pool is too small, it is possible for a block to age out of the cache before the transaction or SQL statement has completed execution. For example, an application may select a value from a table, use the value to process some data, and then update the table; if the block is removed from the cache after the select statement, it must be read from disk again to perform the update. The block needs to be retained for the duration of the user transaction.

By executing statements with a SQL tuning tool such as TKPROF or SQL trace, we can get a listing of the total number of data blocks physically read from disk. The number of disk reads for a particular SQL statement should not exceed the number of disk reads for the same statement with all objects allocated from the default buffer pool. Two other statistics can tell us whether the recycle buffer pool is too small: if the free buffer waits statistic becomes very high, the pool is probably too small; likewise, the number of log file sync wait events will increase. One way to size the recycle buffer pool is to run the system with the recycle buffer pool disabled: at steady state, take the number of buffers in the default pool being consumed by segments that would normally go in the recycle pool, and divide it by four. That number can be used to size the recycle cache.
A good candidate for the recycle buffer pool is a segment that is at least twice the size of the default buffer pool and has incurred at least a few percent of the total I/Os on the system. A good candidate for the keep pool is a segment that is smaller than 10% of the size of the default buffer pool and has incurred at least 1% of the total I/Os on the system.
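The rules of thumb above can be written as a tiny decision function. This is a hedged sketch: the "few percent" threshold for the recycle pool is assumed to be 2% here, purely for illustration.

```python
# Sketch of the candidate rules above:
#   RECYCLE: segment >= 2x default pool AND at least "a few" (assumed 2) % of I/O
#   KEEP:    segment < 10% of default pool AND at least 1% of I/O
def suggest_pool(seg_blocks, seg_io_pct, default_pool_blocks):
    if seg_blocks >= 2 * default_pool_blocks and seg_io_pct >= 2.0:
        return "RECYCLE"
    if seg_blocks < 0.10 * default_pool_blocks and seg_io_pct >= 1.0:
        return "KEEP"
    return "DEFAULT"

print(suggest_pool(200_000, 5.0, 50_000))   # RECYCLE
print(suggest_pool(3_000, 2.5, 50_000))     # KEEP
print(suggest_pool(60_000, 0.5, 50_000))    # DEFAULT
```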

 

Oracle Buffer Cache LRU algorithm:

The buffer cache replacement algorithm is based on touch counts. Each buffer header has a touch count field that keeps track of the number of touches the buffer has received while its block (identified by its data block address, or DBA) has been in the cache. Hits that occur within _DB_AGING_TOUCH_TIME of each other, which defaults to 3 seconds and is version dependent, are counted as one hit. We can see the current number of touches per buffer either by dumping the buffer header or by querying X$BH.TCH. Assuming that _db_aging_touch_time seconds have passed since the touch count was last incremented, the touch count is increased by 1; the buffer is not moved from its current position in the list. Incrementing the touch count is done without any latching activity, so an increment may occasionally be missed as a result. In short, we increment the touch count based on the time elapsed since we last incremented it.
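The touch-count rule can be sketched as follows. This is a toy model of the behaviour described above, not Oracle code; the 3-second window stands in for _DB_AGING_TOUCH_TIME.

```python
# Toy sketch of the touch-count rule: hits within the aging window are
# counted as one hit, and the buffer is never moved on a touch.
AGING_TOUCH_TIME = 3               # assumed _DB_AGING_TOUCH_TIME (seconds)

class BufferHeader:
    def __init__(self, now=0.0):
        self.tch = 0               # touch count (cf. X$BH.TCH)
        self.tim = now             # time of last counted touch

    def touch(self, now):
        # Increment only if the aging window has elapsed; note that no
        # latch is taken, so a real system can occasionally lose one.
        if now - self.tim >= AGING_TOUCH_TIME:
            self.tch += 1
            self.tim = now
        # The buffer stays where it is in the replacement list.

buf = BufferHeader(now=0.0)
for t in (0.5, 1.0, 2.9):          # three hits inside the 3-second window
    buf.touch(t)
buf.touch(3.5)                     # first hit after the window
print(buf.tch)                     # 1
```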

 

When we look at the granules allocated to the buffer cache, the largest part is the array of buffers used for holding copies of data blocks; next is the array of buffer headers (made visible through the structure X$BH); and finally there is a small memory overhead. Buffer headers are tied very closely to the buffers. Don't be fooled into thinking, though, that buffer headers are the same thing as block headers; they're not. Buffer headers hold some data about the block, some data about the state of the buffer, and lots of pointers to other buffer headers.

 

Consider a single 8MB granule with an 8KB block size: we might initially assume we should get 1,024 buffers per granule, but once we remember that each buffer needs some extra bytes for its buffer header, we realize that an 8MB granule is more likely to hold closer to 1,000 buffers.
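The arithmetic is easy to check. The per-buffer header overhead below (about 200 bytes) is an assumed figure chosen to illustrate the point, not an Oracle constant.

```python
# Granule arithmetic: naive count vs. count once header overhead is included.
GRANULE = 8 * 1024 * 1024          # 8MB granule
BLOCK = 8 * 1024                   # 8KB block size
HEADER = 200                       # assumed per-buffer header overhead

naive = GRANULE // BLOCK
with_headers = GRANULE // (BLOCK + HEADER)
print(naive, with_headers)         # 1024 999
```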

Oracle allows for a total of eight different caches: five to allow for different block sizes and three to allow for variations in behaviour. Most databases have a default block size of 8KB and no nonstandard block sizes; a few take advantage of the keep and/or recycle caches, and a few use nonstandard 4KB or 16KB block sizes.

 

Each granule is allocated to a specific buffer pool. This means that any one granule will contain buffers of just one size, which makes the positioning of buffers in the granule nice and tidy. As a minor detail, it’s also worth noting that, as with so many memory structures in Oracle, granules show us an example of mixing arrays and linked lists: there is an array of granules, but elements of the array point forward and backward to other elements in the array to link together the granules that make up each of the different components of the SGA.

 

Granules break a large volume of memory into smaller chunks for basic administrative reasons: by handling memory in equal-sized chunks, it is easier to reallocate a chunk from one function to another. Each of the different buffer pools can be split into several slices, simply to reduce the number of buffers that have to be handled as a unit. These slices cut across the granules, so if a buffer pool consists of four working data sets, a single slice holds one-quarter of the buffers from each of the granules. Each buffer pool has the same number of working data sets, and since there are eight buffer pools, the number of working data sets in an instance is always a multiple of 8. The structure describing the working data sets is X$KCBWDS. One of the first columns of X$KCBWDS is called DBWR_NUM: each working data set is associated with a single database writer, and each database writer may be responsible for multiple working data sets. The pattern is then repeated for each of the buffer pools owned by the instance, the only variation being the number of granules in each buffer pool.
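The relationship between pools, working data sets, and database writers can be sketched as below. The set count per pool and the writer count are invented values; the round-robin assignment mimics the X$KCBWDS.DBWR_NUM association described above.

```python
# Sketch: 8 buffer pools, each with the same number of working data sets,
# so the instance total is always a multiple of 8; sets are shared out
# round-robin among the database writers.
POOLS = 8
sets_per_pool = 4                  # assumed value for illustration
db_writers = 3                     # assumed number of DBWn processes

total_sets = POOLS * sets_per_pool
dbwr_num = {s: s % db_writers for s in range(total_sets)}   # cf. DBWR_NUM
print(total_sets, total_sets % 8, dbwr_num[5])              # 32 0 2
```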

The intention behind working data sets is twofold. First, they allow the write I/O workload to be shared more evenly between multiple database writers; second, they help to minimize competition between processes trying to read data blocks into the buffer pool, by allowing concurrent reads to pick buffers from different working data sets. The fact that we don't really need many writer processes explains why Oracle allows each database writer to handle multiple working data sets from each buffer pool.

 

CNUM_SET is the number of buffers in the working data set, and SET_LATCH is the address of the cache buffers lru chain latch covering the set. For each working set there is only one LRU latch; if we have to increase the number of LRU latches we can set the hidden parameter _DB_BLOCK_LRU_LATCHES (the value has to be a multiple of 8, one LRU latch per buffer cache; the different types of buffer cache are visible in x$kcbwbpd) or increase the number of database writer processes. Either way, the idea is to increase the number of working data sets. The columns prefixed REPL are often referred to as the LRU chain: the REPL columns describe the main replacement list, and the REPL_AUX columns the auxiliary replacement list. We know that a working data set slices across several granules, and each granule holds an array of buffer headers; the REPL columns in X$KCBWDS give us the end points of the list, and if we check X$BH we will find another pair of columns, NXT_REPL and PRV_REPL, that tell us how the linked list works.

 

The LRU/TCH algorithm appeared in 8i as a mechanism to reduce the latch activity that typified the earlier pure LRU algorithm. Typical LRU algorithms move an object to the top of the list every time the object is used, but for something like the buffer cache, which sees a huge number of visits, the cost of moving things all the time would be high and would result in a lot of contention. To circumvent this, Oracle added a counter to the buffer header: every time someone visits the buffer they increment the touch count and update the timestamp, provided that at least 3 seconds have passed since the last update, but they don't move the buffer header. There is a commonly held theory that we can identify which block is causing contention on a given cache buffers chains latch by checking for very high touch counts on all the buffers covered by the latch. Unfortunately this is not a very good method: a buffer visited an average of once per second for half an hour will have a touch count of around 600, while a buffer visited 1,000 million times in the last 5 minutes will have a touch count of only around 100. The touch count can give us a clue, but it is not the final answer.
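The two figures quoted above fall straight out of the 3-second rule: however many visits occur, the count can grow by at most one per window.

```python
# Upper bound on the touch count under a 3-second aging window:
# at most one increment per window, regardless of visit volume.
def max_touch_count(elapsed_seconds, window=3):
    return elapsed_seconds // window

print(max_touch_count(30 * 60))    # half an hour of visits -> 600
print(max_touch_count(5 * 60))     # 5 minutes of visits    -> 100
```

So the heavily hammered buffer (5 minutes, a billion visits) ends up with the lower count, which is why touch counts alone cannot pinpoint latch contention.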

 

X$KCBWDS has a column COLD_HD which gives the midpoint. When we are loading a new block into the working data set, we have to find a free buffer to copy it into, so we start by searching at the LRU end of the replacement list. Suppose we find a buffer at the end of the list with a touch count of 1 (it's not a very popular block and the buffer hasn't seen much activity). We have to check whether that buffer is currently pinned and, if it isn't, whether it has to be written back to disk. Assuming all the checks pass, we pin the buffer exclusively to make sure no one else can do anything with it, read the block into the buffer, update the buffer header in various ways, unlink the buffer header from the end of the list, relink it at the midpoint, and then unpin the buffer.

 

Relinking a buffer:

Since we're reading a different block into the buffer, we have to detach it from the cache buffers chain it was connected to and connect it to the new cache buffers chain (which means acquiring two cache buffers chains latches at some point), and then update the two object queues, detaching the buffer from one and attaching it to the other. The following is a summary, not necessarily in exactly the right order, of all the changes that Oracle makes:

-        Change X$KCBWDS.PREV_LINK to point to the next buffer header along the list.

-        Change the next buffer header along the list to point back to X$KCBWDS.

-        Modify the buffer headers currently at the midpoint to point to our buffer header rather than to each other.

-        Change X$KCBWDS.COLD_HD to point to the buffer header.

-        Detach the buffer header from its old cache buffers chain (hitting one of the cache buffers chains latches at some point) and attach it to a new one.

-        Detach the buffer from its old object queue and attach it to a new one.
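The pointer surgery in the list above is ordinary doubly-linked-list manipulation. Here is a toy sketch (plain Python objects, not Oracle structures) of detaching a node from the LRU end and reinserting it at a "midpoint" node:

```python
# Toy doubly linked list: detach the tail (LRU-end buffer) and relink it
# immediately before the midpoint, which becomes the new cold head.
class Node:
    def __init__(self, name):
        self.name, self.prev, self.next = name, None, None

def link(*names):
    nodes = [Node(n) for n in names]
    for a, b in zip(nodes, nodes[1:]):
        a.next, b.prev = b, a
    return nodes

def relink_at(node, midpoint):
    # unlink from the current position
    if node.prev: node.prev.next = node.next
    if node.next: node.next.prev = node.prev
    # insert immediately before the midpoint
    node.prev, node.next = midpoint.prev, midpoint
    if midpoint.prev: midpoint.prev.next = node
    midpoint.prev = node

a, b, c, d, e = link("A", "B", "C", "D", "E")
relink_at(e, c)                    # E was the buffer at the LRU end
order, n = [], a
while n:
    order.append(n.name)
    n = n.next
print(order)                       # ['A', 'B', 'E', 'C', 'D']
```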

 

There are several things we might find when we examine the last buffer on the LRU list: it might be pinned, which means we have to skip over it and examine the next buffer along, or it might be dirty, which means we have to link it to the write list before we skip over it. The problem with the LRU end of the LRU list is that we really want to pick a buffer from it very quickly, yet there are various reasons why we might have to do a lot of irrelevant work before we get to a buffer we can use. Ideally, the first buffer we look at would be the perfect candidate for reuse every time we need a buffer, and that's where the REPL_AUX list comes into play.

 

When our session needs a buffer to read in a new block, Oracle doesn't start by looking at the LRU end of the REPL_MAIN list; it starts looking at the end of the REPL_AUX list, which exists as a source of buffers that are almost certain to be immediately reusable. Because of this guarantee, the session gets a little performance benefit when searching for a buffer: it won't have to waste resources dealing with dirty buffers, pinned buffers, and other complications. When the instance is started, all the buffers are linked to REPL_AUX and none to REPL_MAIN (we can re-create this situation by issuing alter system flush buffer_cache). When a session needs a buffer to load a block from disk, it scans REPL_AUX from the LRU end to find a candidate buffer, detaches the selected buffer from REPL_AUX, and attaches it to REPL_MAIN. So buffers migrate onto REPL_MAIN as time passes.
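The migration from the auxiliary list to the main list can be sketched with two deques. This is only a model of the flow described above; the buffer names and counts are invented.

```python
# Sketch of buffer selection: at startup all buffers sit on REPL_AUX;
# loading a block moves a buffer from REPL_AUX to REPL_MAIN.
from collections import deque

repl_aux = deque(f"buf{i}" for i in range(5))   # all buffers start here
repl_main = deque()

def get_buffer_for_new_block():
    buf = repl_aux.pop()           # take from the LRU end of REPL_AUX
    repl_main.appendleft(buf)      # attach to REPL_MAIN
    return buf

for _ in range(3):                 # load three blocks
    get_buffer_for_new_block()
print(len(repl_aux), len(repl_main))   # 2 3
```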

Oracle uses a hash table with a large number of buckets, attaches a very short linked list of buffer headers to each bucket, groups the buckets into small groups, and protects each small group with a separate latch: the infamous cache buffers chains latch.

 

When Oracle needs to locate a SQL statement in the library cache, it applies a hash function to generate a meaningless number that identifies a library cache hash bucket; this gives a quick, repeatable way of connecting an object with the location where it should be found. The same strategy applies to the data cache: Oracle applies a hash function to the absolute file number and block number to produce a number that identifies the hash bucket the block should be placed in. Note that we are talking about blocks at this point, not buffers. When we copy a block into a buffer, it might end up in any of the available buffers, but the link between the buffer and the hash bucket is determined by the block, not by any attribute of the buffer.
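A small model of the bucket-and-latch arrangement looks like this. The bucket count, buckets-per-latch ratio, and hash function are all invented for the sketch; only the shape of the scheme (block identity determines bucket, buckets share latches) follows the text.

```python
# Sketch: hash (file#, block#) to a bucket; one latch covers a group of
# buckets.  Constants and hash function are illustrative, not Oracle's.
N_BUCKETS = 1024
BUCKETS_PER_LATCH = 32             # "32 or more" buckets per latch

def bucket_for(file_no, block_no):
    return (file_no * 1000003 + block_no) % N_BUCKETS

def latch_for(bucket):
    return bucket // BUCKETS_PER_LATCH

b = bucket_for(4, 1337)
print(b, latch_for(b))
```

The key property is that the same block always hashes to the same bucket and hence the same latch, however many times it is read into different buffers.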

 

Once we can associate a block with a hash bucket, there are just a few operations we need to worry about: putting a buffer into the right bucket as we read the block from disk; removing a buffer from a bucket when we want to use it for a different block; visiting the current version of a buffered block; modifying a buffered block; and cloning a block that is already buffered so that we can build a read-consistent version from the clone. All these operations require us to read the linked list that is the cache buffers chain, possibly breaking the chain to insert or delete a link. This means we can expect to go through something like the following sequence of operations:

-        Work out the correct hash bucket from the block address.

-        Get the relevant cache buffer chains latch.

-        Start walking the pointers from the hash bucket to find the buffer.

-        Do something about the buffer content.

-        Release the cache buffer chains latch.

 

This overview highlights an important threat: although we may be able to walk a single linked list very quickly, we will be holding a latch while we do it, so whatever we do, we need to do it very quickly, because accessing buffered data is a popular pastime in an Oracle system and lots of sessions could be trying to get the same latch at the same time. We may wonder why Oracle uses a single latch to cover 32 or more hash buckets. The answer comes from two different directions: we want a lot of hash buckets so that we can keep the hash chains very short; on the other hand, a latch is a fairly large structure with a long code path, so we'd like to keep the number of latches to a minimum.

 

Oracle often adopts a two-step strategy to latching so that it doesn't have to hold a latch while working on a buffer. There are some operations that can be completed while holding the latch, but Oracle often uses the following strategy:

-        Get the latch.

-        Find and pin the buffer.

-        Drop the latch.

-        Do something with the buffer content.

-        Get the latch

-        Unpin the buffer

-        Drop the latch.
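The seven steps above can be sketched with a lock standing in for the latch and a counter standing in for the pin. This is a single-threaded illustration of the pattern, not Oracle code: the point is that the latch is held only around the pin manipulation, never while working on the buffer content.

```python
# Two-step strategy sketch: latch held only to attach/detach the pin.
import threading

latch = threading.Lock()           # stands in for the cache buffers chains latch
buffer = {"pins": 0, "content": "v1"}
events = []

def visit_buffer(new_content):
    with latch:                    # get the latch
        buffer["pins"] += 1        # find and pin the buffer
        events.append("pinned")
    buffer["content"] = new_content    # work WITHOUT holding the latch
    events.append("worked")
    with latch:                    # get the latch again
        buffer["pins"] -= 1        # unpin the buffer
        events.append("unpinned")

visit_buffer("v2")
print(buffer["content"], buffer["pins"], events)
```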

 

 

We can also see that each buffer header has a waiters list and a users list; when we look at the X$BH structure, we see a now-familiar pattern in the columns (nxt and prv) warning us that we are looking at yet more linked lists, in this case of users and waiters. If the buffer is not currently pinned, or is pinned in a compatible mode, we can attach our pin to the users list. If the buffer is already pinned in an incompatible mode, we have to attach our pin to the waiters list and then go into a wait state until the buffer becomes available. The wait is the infamous buffer busy waits event, although there is a special variant introduced in 10g: if the session we're waiting for is currently reading the block from disk into the buffer, the event is read by other session.

The structure of a pin includes the address of the session holding, or waiting on, the pin, so when the blocking holder removes their pin from the buffer header they can post the waiter at the head of the waiters list. There is a timeout on the buffer busy wait of 1 second (configurable with the hidden parameter _buffer_busy_wait_timeout), so if a waiter isn't posted inside this time limit, the session wakes up, assumes that a deadlock has occurred between holder and waiter, reports a buffer deadlock wait event (though it doesn't actually wait), and tries to resolve the problem by releasing any pins it is holding and reacquiring them. One second seems a rather long time to wait for an individual buffer, but unless we see sizeable amounts of time lost on buffer busy waits, we shouldn't play with the hidden parameter.

 

The following are things we can do while holding a cache buffers chains latch: add or remove a buffer header from the chain; add or remove a pin from the pin list; update the pin mode. Something we can't, or don't usually, do while holding the latch is modify the content of the buffer: we can do that only while holding an exclusive pin on the buffer, and at that point we don't need to be holding the latch (the commit cleanout is an exception to this rule).

 

The final thing to note about buffer pins is that once we have attached a pin to the buffer header, we don't have to unpin it immediately. If the query engine thinks we will revisit the buffer in the near future, it will keep the buffer pinned, right up until the end of the database call if necessary. This is where the buffer is pinned count statistic comes from: Oracle has pinned the buffer early on in the query and revisited it without unpinning it between visits. It is the cheapest buffer visit that Oracle can do.

 

Logical IO:

All physical reads start with a logical read, so we will leave physical I/O to the last as we work through the ways in which Oracle handles blocks. We do the arithmetic that tells us which cache buffers hash chain the block ought to be linked to. This lets us work out which latch we need to get, so we get it, walk the hash chain, and find the block. At this point we might be able to examine the block and drop the latch immediately. This action is reported in the instance statistics as "consistent gets - examination"; it's cheaper than a normal buffer get, hitting the latch only once. The candidates for this type of half-price, single-get visit are index root and branch blocks, index leaf blocks of unique indexes, table blocks accessed by a unique scan of a unique index, and undo blocks.

 

If the block is in memory but isn't a candidate for examination, then we see the more traditional two-latch get: having got the latch, we attach a shared pin to the buffer header, drop the latch, view the contents of the block, get the latch again, remove the pin, and drop the latch. It is possible in this case, though, for Oracle to assume that it's going to revisit the buffer in the next few microseconds, so it might decide to keep the buffer pinned for a while, perhaps until the end of the current database call. Buffer pins are also referred to as buffer handles. The statistic buffer is pinned count tells us how many times a session has been able to take advantage of pinning to revisit a buffer.

 

Even if the block is in memory, though, someone else may have modified it since we started our query, so we may have to go through the process of creating a read-consistent copy. We won't look at that process yet, because it requires us to move a buffer from REPL_AUX onto the hash chain, and we also have to do that when we do a physical read.

Configuring Database Caching Mode:

Starting with 12.1.0.2, there are two database caching modes: the default database caching mode used in previous versions of Oracle Database, and force full database caching mode, which is new in this release. In the default caching mode, Oracle Database does not always cache the underlying data when a user queries a large table, because doing so might force more useful data out of the buffer cache. In force full database caching mode, Oracle Database assumes that the buffer cache is large enough to cache the full database and tries to cache all the blocks accessed by queries. By default, Oracle Database uses the default caching mode when performing full table scans. If the instance determines that there is enough space to cache the full database in the buffer cache and that it would be beneficial to do so, then it automatically caches the full database in the buffer cache. If the instance determines that there is not enough space to cache the full database in the buffer cache, then:

-        Smaller tables are loaded into memory only when the table size is less than 2% of the buffer cache size.

-        For medium tables, Oracle Database analyzes the interval between the last table scan and the aging timestamp of the buffer cache. If the size of the table reused in the last table scan is greater than the remaining buffer cache size, then the table is cached.

-        Large tables typically are not loaded into memory, unless we explicitly assign the table to the KEEP buffer pool.

In default caching mode, the database instance does not cache NOCACHE LOBs in the buffer cache.
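The three default-mode rules above can be condensed into a decision sketch. This is a hedged simplification: the "medium table" interval test is reduced to a boolean, since the text describes it only qualitatively.

```python
# Hedged sketch of the default-mode full-scan caching rules:
#   small table (< 2% of cache)    -> cached
#   medium table passing the test  -> cached
#   large table                    -> not cached, unless assigned to KEEP
def cache_on_full_scan(table_blocks, cache_blocks,
                       fits_interval_test=False, keep_pool=False):
    if keep_pool:
        return True                # explicitly assigned to the KEEP pool
    if table_blocks < 0.02 * cache_blocks:
        return True                # small table
    if fits_interval_test:
        return True                # medium table passing the interval test
    return False                   # large table: not cached

print(cache_on_full_scan(100, 100_000))        # small -> True
print(cache_on_full_scan(50_000, 100_000))     # large -> False
```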

 

Force Full Database Caching Mode:

As more data is added to a database, buffer cache sizes may continually grow, and in some cases the buffer cache may become so large that the entire database can fit into memory. The ability to cache an entire database in memory can drastically improve performance when performing full table scans or accessing LOBs. In force full database caching mode, Oracle Database caches the entire database in memory when the size of the database is smaller than the database buffer cache size. All data files, including NOCACHE LOBs and LOBs that use SecureFiles, are loaded into the buffer cache as they are accessed.

 

To improve database performance for table scans and LOB data access, especially for workloads that are limited by I/O throughput or response time, consider using force full caching mode whenever the size of the buffer cache is greater than the size of the database. Consider using force full caching mode in the following situations:

-        The logical database size is smaller than the individual buffer cache of each database instance in a RAC environment. This is applicable to non-RAC databases as well.

-        The logical database size is smaller than 80% of the combined buffer cache sizes of all database instances for well-partitioned workloads in the RAC environment.

-        The database uses SGA_TARGET or MEMORY_TARGET.

-        The NOCACHE LOBs need to be cached. The NOCACHE LOBs are never cached unless force full database caching is used.

 

For the first three situations, we should monitor system performance periodically to verify that the figures meet our expectations. When one RAC instance uses force full database caching mode, all other database instances in the RAC environment also use it. In a multitenant environment, force full database caching mode applies to the entire CDB, including all of its PDBs. To enable force full database caching mode, issue the following SQL statement:

ALTER DATABASE FORCE FULL DATABASE CACHING;

When we flush the buffer cache, all the buffer states become 0, indicating that they are free, but X$BH still retains remnants of the previous details: the file number, block number, object number, and so on.

