How Incorrect Stats Can Ruin Your Queries

Posted: March 4, 2014 in Database Administration

Welcome back, Oracle Maniacs!!

For the last three days, our Oracle Development contractor and I have been troubleshooting a process that used to complete in about thirty minutes. Over the weekend, the job got stuck after about twenty minutes. OEM showed it stopped (due to error). Thanks, OEM! Very helpful. The alert log showed us nothing, but the process kept running in the background until I killed it. On a whim, we decided to drop the unique constraint because it appeared the index was hindering the process. That did not work at all. The job died again in about twenty minutes.

Stats were run by the auto gather stale stats procedure over the weekend, but here's the weird part: on the details tab for the partitioned table, OEM showed 300,000 rows, but a count(*) against the table showed 21 million rows. This morning, after I had recreated the unique constraint, I compared the indexes against my test environment. In the test environment the index was a local index but, in production, it was a normal (global) index. So I dropped the index, recreated it as a local, unique index, and gathered stats. It now showed the full 21 million rows.
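A quick way to spot this kind of mismatch yourself, without trusting the OEM details tab, is to compare what the optimizer believes against a live count, and to check whether an index is global or local. This is a sketch with hypothetical names (APP.ORDERS_PART standing in for the partitioned table):

```sql
-- What the optimizer believes about the table (stale stats showed ~300K here):
SELECT num_rows, last_analyzed
  FROM dba_tab_statistics
 WHERE owner = 'APP' AND table_name = 'ORDERS_PART';

-- What is actually in the table (this showed 21 million):
SELECT COUNT(*) FROM app.orders_part;

-- Is the index partitioned (LOCAL) or not (global)?
SELECT index_name, partitioned, uniqueness
  FROM dba_indexes
 WHERE owner = 'APP' AND table_name = 'ORDERS_PART';
```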

We kicked off the package again and it finished in thirty minutes. So what was the deal? Bad table stats. It wasn’t enough just to run stats again. The index itself was inappropriate for a partitioned table.
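For anyone facing the same situation, the fix described above looks roughly like this. Again, all object names are hypothetical; note that for a unique local index, Oracle requires the partition key to be part of the index key:

```sql
-- Drop the global unique index and recreate it as a LOCAL unique index.
DROP INDEX app.orders_part_uk;

CREATE UNIQUE INDEX app.orders_part_uk
  ON app.orders_part (order_id, part_key)  -- partition key must be in the key of a unique local index
  LOCAL;

-- Then gather fresh stats at all levels (global and per-partition):
BEGIN
  dbms_stats.gather_table_stats(
    ownname     => 'APP',
    tabname     => 'ORDERS_PART',
    granularity => 'ALL',
    cascade     => TRUE);
END;
/
```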

More to come on this subject. Stats are critical to your database performance and, in this case, to your database health.


Jason Brown

follow me on Twitter!! @dba_jay

I received some feedback from Tim Gorman, our very own RMOUG President who also happens to be consulting for us…

The reason stats were incorrect for this particular process had more to do with when the stats were gathered for this table. The package was written in such a way that the table stats are updated right after the package is executed.

Quoting Tim Gorman…

Information about modifications to tables is stored in in-memory data structures and flushed down automatically to tables every 15 minutes.  If the timing is bad, the GATHER procedures in the DBMS_STATS package may not have the information that indicates the table is "stale" yet, so it may not gather statistics for certain partitions or at the "global" level.

To ensure that all of the in-memory information about table modifications is available on disk for the DBMS_STATS package to use, I recommend "flushing" it just before running any GATHER_*_STATS procedures, so I propose modifying the ANALYZE_TABLE procedure in the **** package as follows, adding a call to dbms_stats.flush_database_monitoring_info…

/******************** ANALYZE_TABLE ********************/

PROCEDURE analyze_table (
  i_module     IN VARCHAR2,
  i_table_type IN VARCHAR2
) IS

  CURSOR c_tables IS
    SELECT substr(record_source, 1, instr(record_source, '.') - 1) AS owner,
           substr(record_source, instr(record_source, '.') + 1)    AS table_name
      FROM record_source
     WHERE (module = i_module OR i_module IS NULL)
       AND (table_type = i_table_type OR i_table_type IS NULL);

BEGIN

  FOR i IN c_tables
  LOOP

    IF c_tables%ROWCOUNT = 1 THEN /* only run this once if any tables will have stats gathered */
      dbms_stats.flush_database_monitoring_info;
    END IF;

    dbms_stats.gather_table_stats(i.owner, i.table_name);

  END LOOP;

END analyze_table;

When I dropped the global index and re-created it as a local index, it essentially did the same thing, but the next time this package was executed it would have ended up with bad stats again. By incorporating dbms_stats.flush_database_monitoring_info, we should get updated stats every time!!
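You can see the effect of the flush for yourself by looking at the pending modification counts. A sketch, using the same hypothetical names as above:

```sql
-- Before the flush, recent DML may not appear here at all:
SELECT table_name, partition_name, inserts, updates, deletes
  FROM dba_tab_modifications
 WHERE table_owner = 'APP' AND table_name = 'ORDERS_PART';

EXEC dbms_stats.flush_database_monitoring_info;

-- Re-run the query above: the pending counts are now visible on disk,
-- so DBMS_STATS can correctly flag the affected partitions as stale.
```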

Thanks, Tim!!

