Presentation delivered by Richard Niemiec, President of TUSC and Author of Oracle Database 10g: Performance Tuning
EDITOR’S NOTE: Last August, DOUG hosted a presentation by the world-renowned Oracle expert, Richard Niemiec. Because of the large amount of material covered during his lecture, this has been split into two articles. The first deals with grid control and tuning with STATSPACK or AWR reporting. The second article will come out in the Winter 2008 Newsletter and will concentrate on RAC.
How many of you are DBAs? Now, how many are under 30 years old? [Only two raise their hands]. How many are between 30 and 40? [Seven people raise their hands] How many are between 40 and 50? [Half the group raises their hands] How many are over 50? Look at the age of current DBAs.
There is so much to keep up with and learn, and you are expected to be the technical expert. I have had DBAs come up to me and say that they just can't keep up anymore. Now, DBAs specialize in parts of the database: there are people who only do recoveries, and people who only do database tuning. You have to team up with other people.
How many would bring in a DBA fresh out of college?
[Nowadays], nobody will hire newcomers to be the DBA. When [a technology is brand] new, nobody knows it, so a company can hire anybody to come in and do it. However, those people become great at it, and that makes it harder for the people coming afterwards.
The barrier to entry for a DBA is almost impossibly high, and it's certainly too big a barrier for newcomers. I think the thing that's needed most is getting over that IT gap, where it takes so long to learn something that nobody can break in. [Learning so much is] too hard, and it takes too long. As a company, you have to make things more efficient. It's about doing more with less by being more productive. The answer is Grid Control.
You can use Grid Control to monitor all of your targets, from database instances to individual OC4J components. Grid Control instantly lets you know if any of the targets are down. It's a tool that points me in the right direction, and you, [as a DBA], can use it to do the same for your company.
[From the breadcrumbs at the top], I can see Targets and then go into my Hosts. From here, I can set thresholds on certain metrics and have Grid Control page me if a threshold is reached.
Finding things can be an issue when you have so many systems, and the more systems you manage, the harder it gets. [With the Enterprise Manager console], you can combine targets into a single dashboard or group pieces together in custom dashboards: go to Groups and combine all of your production and development databases into groups.
From the front page, I can see any alerts. Under my Diagnostic Summary, I can see any policy violations; in a real environment, everyone has policy violations, even though ideally you would not run a database with any. Under the Maintenance tab, I can check deployments and patches.
Here, I can use the Statspack or Automatic Workload Repository (AWR) reports to troubleshoot issues. Check these regularly; I've found that doing so can solve 98 percent of the performance issues. Statspack and Enterprise Manager are best for simple tuning and ongoing maintenance, while the V$/X$ views are best for drilling deep into problems.
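As a reminder of the mechanics (a hedged sketch; the script names are the standard ones shipped in $ORACLE_HOME/rdbms/admin, so verify them on your release), both reports are generated from SQL*Plus:

-- Statspack: connect as PERFSTAT, take two snapshots some time apart,
-- then run the report and supply the begin/end snapshot IDs when prompted
exec statspack.snap;
@?/rdbms/admin/spreport.sql

-- AWR (10g): snapshots are taken automatically (hourly by default);
-- just run the report and pick the begin/end snapshots when prompted
@?/rdbms/admin/awrrpt.sql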
There are ten things that you should check regularly and we’ll spend some time talking about most of these.
1. Top 5 Wait Events
2. Load Profile
3. Instance Efficiency
4. Wait Events
5. Latch Waits
6. Top SQL
7. Instance Activity
8. File I/O
9. Memory Allocation
10. Undo
Top 5 Timed Events Report
CPU time should be at the top of this report. If anything other than CPU time is at the top, we need to investigate it. In a RAC environment, the global cache cr request event will appear near the top; it means that someone on a different node wants to read a consistent version of the data.
If you are watching your system regularly, you will know, to within tens of seconds, which processes take what amount of time. All interconnect statistics should show times under 15 ms. High values could indicate network or memory problems; they could also be caused by application locking issues.
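Outside of the report itself, a quick way to see the same picture is to query the cumulative wait statistics directly. A minimal sketch, assuming Oracle 10g (where V$SYSTEM_EVENT carries a WAIT_CLASS column; times are in centiseconds):

-- top 5 non-idle wait events since instance startup
select *
  from (select event, total_waits, time_waited
          from v$system_event
         where wait_class <> 'Idle'
         order by time_waited desc)
 where rownum <= 5;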
Load Profile
The load profile gives a glimpse of the database workload activity that occurred during the snapshot. For example, we can see below that an average transaction generates about 36KB of redo, and the database produces about 1.4MB of redo per second.
                               Per Second      Per Transaction
        Redo size:           1,409,245.79            36,596.21
        Logical reads:         157,472.47             4,089.35
        Block changes:           4,061.85               105.48
        Physical reads:          5,965.05               154.90
        Physical writes:           587.76                15.26
        User calls:              5,922.08               153.79
        Parses:                     92.11                 2.39
        Hard parses:                 0.17                 0.00
        Sorts:                      93.88                 2.44
        Logons:                      0.25                 0.01
        Executes:                5,686.76               147.68
        Transactions:               38.51

    % Blocks changed per Read:   2.58    Recursive Call %:   2.38
    Rollback per transaction %:  1.22    Rows per Sort:     114.10
Instance Efficiency Percentages
This report contains ratios and other information about different memory structures and operations. The top portion concentrates on hit ratios and whether things are appropriately sized.
It also tells us if there is something at the lowest level that Oracle is waiting on. The second portion, shared pool operations, describes how SQL statements, once parsed, are maintained in the SGA. [Below is some explanation of the ratios.]
Buffer NoWait % – Ratio of hits on a request for a specific buffer where the buffer was immediately available in memory. If the ratio is low, there may be a hot block (or blocks) being contended for; it should show up in the Buffer Wait section. Recommend that this be greater than 99 percent.
Buffer Hit % – Ratio of hits on a request for a specific buffer where the buffer was in memory instead of needing a physical I/O. If you have unselective indexes that are frequently accessed, they will drive your hit ratio higher, which can be a misleading indication of good performance. Recommend that this be greater than 95 percent.
Library Hit % – A lower library hit ratio usually indicates that SQL is being pushed out of the shared pool early (possibly because the shared pool is too small). Recommend that this be greater than 95 percent.
A lower ratio could also indicate that bind variables are not being used, or that some other issue is causing SQL not to be reused (in which case a smaller shared pool may only be a band-aid that will potentially relieve the library latch problem that can result). You must fix the root problem (use bind variables or CURSOR_SHARING) and then size the shared pool appropriately.
In-Memory Sort % – In an OLTP system, you really don't want to do disk sorts. Setting the PGA_AGGREGATE_TARGET (or SORT_AREA_SIZE) initialization parameter effectively will eliminate this problem. This should be greater than 95 percent in OLTP.
Latch Hit % – If it is less than 99 percent, there is usually a big problem. Finding the specific latch will lead you to solving the issue.
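As a cross-check outside the report, the Buffer Hit % above can be computed directly from V$SYSSTAT using the classic formula 1 - physical reads / (consistent gets + db block gets). A minimal sketch (statistic names as they appear in 10g):

select round((1 - phys.value / (cons.value + blk.value)) * 100, 2) buffer_hit_pct
  from v$sysstat phys, v$sysstat cons, v$sysstat blk
 where phys.name = 'physical reads'
   and cons.name = 'consistent gets'
   and blk.name  = 'db block gets';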
Top Wait Events
Whenever an Oracle process waits for something, it records the wait using one of a set of predefined wait events. Non-idle events indicate nonproductive time spent waiting for a resource or action to complete. The ultimate goal with all of these is achieving faster I/O.
A high db file sequential read wait event indicates many index reads; to correct it, you would need to tune the code, especially the joins. A high db file scattered read wait event indicates many full table scans; for these, you could add indexes, tune the code, or cache small tables.
Buffer waits are common and are among the most confounding problems to resolve. Increasing DB_CACHE_SIZE and checkpointing more frequently can fix free buffer wait problems. You can also use multiple database writers (DBWR) and tune the code so it dirties fewer blocks.
According to Donald Burleson, “Buffer busy waits occur when an Oracle session needs to access a block in the buffer cache, but cannot because the buffer copy of the data block is locked. This occurs because either the block is being read into the buffer by another session or another session has the buffer block locked in a mode that is incompatible with the waiting session’s request. With Buffer Busy Waits, you can query the specific type of buffer causing wait issues from the V$WAITSTAT view.” Once you know the type, you can follow specific recommendations on how you can resolve the issue:
- Segment Header – Add freelists (if the waits are on inserts) or freelist groups (especially in RAC), or use ASSM.
- Data Block – Separate 'hot' data; potentially use reverse key indexes; fix queries to reduce the block's popularity; use smaller blocks or faster I/O; increase INITRANS and/or MAXTRANS (this one's debatable); reduce the number of records per block.
- Undo Header – Add rollback segments or increase the size of the undo segment area (automatic undo).
- Undo Block – Commit more often (but not too much); use larger rollback segments or a larger undo area; try to fix the SQL.
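To see which of these block classes is actually accumulating the waits, the V$WAITSTAT query mentioned above can be as simple as this sketch:

-- one row per block class (data block, segment header, undo header, undo block, ...)
select class, count, time
  from v$waitstat
 order by count desc;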
Locally managed tablespaces (LMTs) manage space locally using bitmaps. With LMTs, there are no tablespace fragmentation issues, and large segments are handled with better performance. Since there is lower contention for central resources, there is no ST enqueue contention and there are fewer recursive calls. To implement an LMT, specify the EXTENT MANAGEMENT LOCAL clause during tablespace creation.
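For example (a sketch only; the datafile path, name, and sizes are illustrative):

create tablespace data_lmt
  datafile '/u01/oradata/orcl/data_lmt01.dbf' size 500m
  extent management local uniform size 1m;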
Using both LMTs and ASSM can eliminate many of your enqueue wait problems. The fastest systems that I know have the smallest block sizes, no contention, and never have to worry about blocks going back and forth.
How does Automatic Segment Space Management (ASSM) actually work?
ASSM eliminates manual segment free-space management and simplifies administration. That means no more FREELISTS, FREELIST GROUPS, PCTUSED, and so on. ASSM improves space utilization and concurrency, and it enhances performance. In the same spirit, automatic undo management lets you set an undo retention time; if it is set to cover the longest-running query, you won't encounter any more "Snapshot too old" errors! To enable ASSM, specify the SEGMENT SPACE MANAGEMENT AUTO clause during tablespace creation.
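A sketch of both settings (names, path, and values are illustrative; ASSM requires a locally managed tablespace):

create tablespace data_assm
  datafile '/u01/oradata/orcl/data_assm01.dbf' size 500m
  extent management local
  segment space management auto;

-- automatic undo management: retention is in seconds, so size it to at
-- least the longest-running query (3600 is just an example value)
alter system set undo_retention = 3600;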
The Automatic Segment Space Management feature works well for most applications. However, using ASSM in some high-DML environments can result in poorer performance. For those segments, you will need to manually set the values for INITRANS and FREELISTS (as well as FREELIST GROUPS, PCTFREE, and PCTUSED). You can also reduce contention on the blocks by reducing the number of rows per block, moving to a smaller block size, and increasing the INITRANS and FREELISTS values.
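A hedged sketch of setting those values manually at creation time (table and tablespace names are made up; FREELISTS and FREELIST GROUPS only apply in a manual segment space management tablespace):

create table hot_orders (
  order_id   number,
  order_data varchar2(400)
)
pctfree  20                               -- leave more free space per block
pctused  40
initrans 4                                -- more transaction slots per block
storage (freelists 4 freelist groups 2)   -- multiple insert freelists
tablespace data_mssm;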
For Log Buffer Space problems, you should increase the log buffer or use faster disks for the redo logs. If your archive destination is slow or full, you could be experiencing Log File Switch problems; these can be corrected by fixing the archive destination and by adding more or larger redo logs. And with Log File Sync wait problems, you can benefit from committing more records at a time or using raw devices.
Latch Waits
Latches are low-level queuing mechanisms used to protect shared memory structures in the System Global Area (SGA). They are like locks on memory that are very quickly obtained and released. Latches are used to prevent concurrent access to a shared memory structure. If the latch is not available, a latch free miss is recorded. When latch miss ratios are greater than 0.5 percent, you should investigate the issue.
Most latch problems are related to:
- The failure to use bind variables (library cache latch)
- Redo generation issues (redo allocation latch)
- Buffer cache contention issues (cache buffers lru chain)
- Hot blocks in the buffer cache (cache buffers chains).
Latches can be divided into those that are "willing to wait" and those that are "not willing to wait."
A "willing to wait" request tries to acquire the latch. If the latch is not available, the process spins and then requests the latch again, continuing up to the _SPIN_COUNT initialization parameter (note that spinning costs CPU). If it still can't get the latch after spinning up to _SPIN_COUNT, it goes to sleep and wakes up after one centisecond (one hundredth of a second). It does this twice, then starts the process again, spinning up to _SPIN_COUNT and then sleeping for twice as long (two centiseconds). After doing this again, the sleep time doubles again, so the pattern is 1, 1, 2, 2, 4, 4, and so on, until it gets the latch. Every time the process sleeps, it creates a latch sleep wait. An example of a "willing to wait" latch is a library cache latch.
Some latches are "not willing to wait." This type of latch does not wait for the latch to become available; it immediately times out and retries to obtain the latch. A redo copy latch is an example of a "not willing to wait" latch. These latches generate information for the immediate_gets and immediate_misses columns of the V$LATCH view, and also in the Statspack report. The hit ratio for these latches should also approach 99 percent, and the misses should never exceed 1 percent.
Use bind variables to avoid latch issues. If you don't use bind variables in your application code, you usually end up with contention on the shared pool and library cache latches. This latch wait time can often be reduced by changing just the top few executed statements that were using literal SQL instead of bind variables. Another fix for latch issues is to adjust the SHARED_POOL_SIZE parameter.
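To find the specific latch, a straightforward sketch is to query V$LATCH for the latches with the most misses and sleeps (the same numbers Statspack summarizes):

select name, gets, misses,
       round(misses / nullif(gets, 0) * 100, 2) miss_pct,
       sleeps, immediate_gets, immediate_misses
  from v$latch
 where misses > 0
 order by sleeps desc;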
Top SQL
Tuning the top 25 buffer get and top 25 physical get queries has yielded system performance gains of anywhere from 5 percent to 5000 percent. The SQL section of the statspack report tells you which queries to potentially tune first. The top 10 of your SQL statements should not be more than 10 percent of your buffer gets or disk reads.
select sum(pct_bufgets) percent
  from (select rank() over (order by buffer_gets desc) as rank_bufgets,
               to_char(100 * ratio_to_report(buffer_gets) over (), '999.99') pct_bufgets
          from v$sqlarea)
 where rank_bufgets < 11;
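A companion sketch for the physical-read side of the same check, mirroring the buffer gets query above but using the DISK_READS column of V$SQLAREA:

select sum(pct_dskreads) percent
  from (select rank() over (order by disk_reads desc) as rank_dskreads,
               100 * ratio_to_report(disk_reads) over () pct_dskreads
          from v$sqlarea)
 where rank_dskreads < 11;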
To practice data partitioning, the database designer segregates the data for each database functional area to separate tablespaces and/or partitions.
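As an illustration of that idea (all names are made up), a range-partitioned table with each partition placed in its own tablespace might look like this:

create table sales_hist (
  sale_date date,
  amount    number
)
partition by range (sale_date) (
  partition p_2006 values less than (to_date('01-JAN-2007','DD-MON-YYYY'))
    tablespace sales_2006,
  partition p_2007 values less than (maxvalue)
    tablespace sales_2007
);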
Tune the SQL so that less data goes through the interconnect. A bad piece of SQL moves 10,000 records instead of just the 10 [that are needed]. Add an index so it will not pull all of them across.
Reading Instance Activity
This section contains statistics on how a database instance operated within a certain period.
Session Logical Reads – All reads satisfied from the cache in memory; includes both consistent gets and db block gets.
Consistent Gets – Reads of a block that is already in the cache, in consistent-read mode. Not to be confused with the consistent read (CR) version of a block in the buffer cache; usually the current version is read.
Db block gets – Blocks obtained in CURRENT mode because they are about to be changed; this must be the current block, not a CR block.
Db block changes – The db block gets (above) that were actually changed.
Physical Reads – Blocks not read from the cache, whether they came from disk, a disk cache, or the O/S cache. There are also physical reads direct, which bypass the buffer cache when using Parallel Query (these are not counted in the hit ratios).
Database Cache
When a tablespace is created with the blocksize clause, Oracle checks the cache availability of that particular block size at the instance where the tablespace is being created.
Let’s take an example. Assume that a tablespace is created in Node 2, where the non-standard db_2k_cache_size is set. Since a cache size of 2K is available, the create statement succeeds:
SQL> create tablespace TBS_2K blocksize 2K;
Tablespace created.
Now, let's assume that after creating the above tablespace with a 2K block size, the instance is restarted without the 2K cache. The instance comes up normally; however, the first time Oracle attempts to access an object in that tablespace or tries to place its blocks into the cache, the following error is raised:
ORA-00379: no free buffers available in buffer pool DEFAULT for block size x
It is recommended that all instances be started with the same set of non-standard block size buffers (sub-caches) using the db_nk_cache_size parameter. Note that Oracle does not enforce the same set of sub-caches when an instance joins the cluster. It is better to create sub-caches for all possible block sizes (2K, 4K, 8K, 16K, 32K, etc.) to avoid runtime errors.
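Configuring a sub-cache is a one-line parameter change per block size; a hedged sketch (the 16M value is purely illustrative):

-- scope=both assumes an spfile; repeat for each non-standard block size
-- in use (db_4k_cache_size, db_8k_cache_size, and so on)
alter system set db_2k_cache_size = 16m scope=both;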
Richard J. Niemiec is the President of TUSC, an Oracle Expert-Level consulting firm. He is respected around the world as a master in database administration. Members of the media often consult with Rich on technological trends and the impact they’ll have on the industry at large. In 2001, Rich was named by Oracle Corp. as an Oracle Certified Master — one of the first six so recognized around the world. In 2007, Rich authored “Oracle10g Performance Tuning Tips & Techniques,” a follow-up to “Oracle9i Performance Tuning Tips & Techniques” that he wrote four years earlier. He is the former president of the International Oracle Users Group and the president of the Midwest Oracle Users Group.