SQL timings for ASH II

  1. http://dboptimizer.com/2011/05/04/sql-execution-times-from-ash/
  2. http://dboptimizer.com/2011/05/06/sql-timings-for-ash-ii/
  3. http://dboptimizer.com/2011/05/06/sql-ash-timings-iii/

Here are the max, min, and average execution times for SQL executions in ASH data, along with a count of executions and a histogram of their execution times.

(tomorrow I’ll look at showing the execution date and time of the longest elapsed times)

col 1 for 99999
col 2 for 99999
col 3 for 9999
col 4 for 9999
col 5 for 9999
col av for 99999.9
with pivot_data as (
select
      sql_id,
      ct,
      mxdelta mx,
      mndelta mn,
      avdelta av,
      width_bucket(delta,0,mxdelta+.1,5) as bucket
from (
select
   sql_id,
   delta,
   count(*) OVER (PARTITION BY sql_id) ct,
   max(delta) OVER (PARTITION BY sql_id) mxdelta,
   min(delta) OVER (PARTITION BY sql_id) mndelta,
   avg(delta) OVER (PARTITION BY sql_id) avdelta
from (
   select
        sql_id,
        sql_exec_id,
        max(delta) delta
     from ( select
                 sql_id,
                 sql_exec_id,
              ((cast(sample_time    as date)) -
               (cast(sql_exec_start as date))) * (3600*24) delta
           from
              dba_hist_active_sess_history
           where sql_exec_id is not null
        )
     group by sql_id,sql_exec_id
)
)
)
select * from pivot_data
PIVOT ( count(*) FOR bucket IN (1,2,3,4,5))
order by mx,av
/
SQL_ID                CT         MX         MN       AV      1      2     3     4     5
------------- ---------- ---------- ---------- -------- ------ ------ ----- ----- -----
5k7vccwjr5ahd       2653       1963          0     33.4   2623     15     8     4     3
ds8cz0fb8w147        161       2531         13    273.8    136     18     5     1     1
bzyny95313u12        114       2599          0     46.5    113      0     0     0     1
0hbv80w9ypy0n        161       4089          0   1183.9     27    116     9     6     3
71fwb4n6a92fv         49       4481         30    676.9     38      6     2     2     1
0bujgc94rg3fj        604       4929          0     24.7    601      1     1     0     1
64dqhdkkw63fd       1083       7147          0      7.2   1082      0     0     0     1
990m08w8xav7s        591       7681          0     51.8    587      0     0     2     2
2spgk3k0f7quz        251      29607          0    546.1    247      2     0     0     2
497wh6n7hu14f         49      69438          0   5498.2     44      1     0     1     3
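
The delta computed in the inner query relies on Oracle DATE subtraction returning a difference in days, so multiplying by (3600*24) converts it to seconds. A minimal standalone check of that arithmetic (the timestamp literals here are just illustrative):

select (cast(timestamp '2011-05-06 10:00:30' as date) -
        cast(timestamp '2011-05-06 10:00:00' as date)) * (3600*24) delta
from dual;

This returns 30, i.e. the 30-second gap expressed in seconds.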


Comments

  2. Kyle Hailey
    May 19th, 2011 at 01:15 | #1

    >> Thanks for the great posts on this topic.
    >> I didn’t realize that sql_exec_id had been added to
    >> dba_hist_active_sess_history. We have some queries with
    >> skewed execution times and this will give us some insight into the variation
    >> in execution times for those queries. We had a question about
    >> the values that you are passing to width_bucket(). You are
    >> passing 0 and mxdelta+.1 for the min_value and max_value parameters,
    >> respectively. That results in very skewed histograms in your
    >> examples. (There’s 1 row where there aren’t any values in
    >> the 1st histogram.) Is there any reason why you are passing
    >> those values instead of mndelta and mxdelta?

    Good questions. One is easier to answer than the other.

    1) Why did I use 0 for the min bucket? It was the quickest, easiest thing to put for the minimum value. A better value would have been the minimum value ever seen for that query as the starting bucket. I thought about this but never got around to it.

    2) Why did I pass in mxdelta+.1 for the max bucket? Well, this is a bit of a hack that I didn’t look into closely; I meant to document it but forgot.
    With width_bucket, if I ask for 5 buckets I get 2 extra, for a total of 7.
    The bottom bucket holds anything below my minimum, which in this case will always have 0 values since no query should have a negative response time, so I can ignore it. The top bucket holds anything from my largest value on up, so if I add a tiny bit to my max value, that bucket will never have any values and will always be 0; I can ignore it too, and I’m back to my 5 buckets.
    What if I just asked for 4 buckets instead? Wouldn’t this work? This would give me 6 buckets and I could ignore the first, giving me 5 buckets, right?
    If I asked for 4 buckets, then the fifth bucket would only hold the single max value, i.e. no range. For the most part this bucket would just have the value 1, unless the exact maximum elapsed time happened more than once. I’d prefer the 5th bucket to show an actual range rather than a single value.
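
    To illustrate with a standalone sketch (using a hypothetical max delta of 10): width_bucket returns 0 for values below the minimum and n+1 for values at or above the maximum, and adding .1 to the max keeps the largest observed delta inside bucket 5:

    select width_bucket( 0, 0, 10 + .1, 5) lo,   -- smallest delta lands in bucket 1
           width_bucket(10, 0, 10 + .1, 5) hi,   -- max delta stays in bucket 5
           width_bucket(11, 0, 10 + .1, 5) ovr   -- anything >= 10.1 overflows to 6
    from dual;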

  3. kunwar
    August 11th, 2013 at 07:03 | #2

    Hi Kyle, One question:
    Are the columns MX, MN, AV showing values in seconds or milliseconds?
    I am getting the below stats for one of my sql ids:

    SQL_ID                CT         MX         MN       AV      1      2     3     4     5
    ------------- ---------- ---------- ---------- -------- ------ ------ ----- ----- -----
    7xzf09srjz3gu       1756          2          0      1.1    190      0  1263     0   303

    Which means that the maximum time for any execution is 2 seconds??
    From v$sql I see the below stats. As far as I know, any execution of this sql won’t take more than a second.

    Buffer Disk Gets CPU Eleapsed IO App
    SQL Buffer Disk Rows Cache Gets Reads Per Time Time Wait Wait
    Id PHV CHNO Exec gets Reads Proc Ratio per Exec per Exec Row Per Exec(ms) Per Exec(ms) Per Exec(ms) Per Exec(ms)
    ————— ———- ———- ———- ————- ————- ———– ——- —————- —————- ———– ————- ————- ———— ————
    Avg
    Slaves Direc
    Per Exec Write
    --------- ----------
    7xzf09srjz3gu 2560245071 0 1 15 0 0 100.00 15.00 .00 15 10.00 58.59 0 0
    0 0

    7xzf09srjz3gu 1463930066 1 7576456 90270307 11546542 3531964 87.21 11.91 1.52 26 .64 16.26 16 0
    0 0

    The same trend is seen for any sql id. So are you saying that for some execution the execution time was indeed approximately 2 seconds??

  4. August 12th, 2013 at 17:25 | #3

    Hi Kunwar, yes the ASH output says your query ran more than 1 second at some point. It looks to me like your query ran
    under 1 second 190 times
    over 1 second 1263 times
    over 2 seconds 303 times
    for an average of 1.06 seconds, based on statistical sampling.
    As for the output from v$sql, I’m having trouble reading it because of the formatting in the comment.

  5. kunwar
    August 16th, 2013 at 04:23 | #4

    @Kyle Hailey
    Kyle,
    Below is the query I am using to get the stats from v$sql. All I wanted to show was that I was getting Elapsed/exec of 58.59 ms and 16.26 ms for children 0 and 1.
    But the data from ASH shows very high figures.

    set lines 200
    set pages 50000
    set trimspool on

    col module head "Module" for a8 trunc
    col sql_id head "SQL|Id" for a15
    col executions head "Exec" for 999999999
    col buffer_gets head "Buffer|gets" for 999999999999
    col gpere head "Buffer|Gets|per Exec" for 999999999999.99
    col dread head "Disk|Reads" for 999999999999
    col ppere head "Disk|Reads|per Exec" for 999999999999.99
    col rproc head "Rows|Proc" for 9999999999
    col gperw head "Gets|Per|Row" for 9999999999
    col loads head "Tot|Load" for 9999
    col invalidations head "Tot|Inva" for 9999
    col cputs head "CPU|Time|Per Exec(ms)" for 999999999.99
    col elat head "Elapsed|Time|Per Exec(ms)" for 999999999.99
    col iwait head "IO|Wait|Per Exec(ms)" for 99999999
    col await head "App|Wait|Per Exec(ms)" for 99999999
    col ccwait head "Conc|Wait(ms)" for 9999999
    col slaves head "Avg|Slaves|Per Exec" for 99999999
    col dwait head "Direc|Write" for 999999999
    col snpdt head "Snap|Date" for a8
    col cratio head "Cache|Ratio" for 999.99

    SELECT s_hist.*
    FROM
    (
    SELECT
    dst.sql_id sql_id,
    dst.plan_hash_value phv,
    dst.child_number chno,
    dst.executions executions,
    dst.buffer_gets buffer_gets,
    dst.disk_reads dread,
    dst.rows_processed rproc,
    (1-dst.disk_reads/decode(dst.buffer_gets,0,1,dst.buffer_gets))*100 cratio,
    round((dst.buffer_gets/decode(dst.executions,0,1,dst.executions)),2) gpere,
    round((dst.disk_reads/decode(dst.executions,0,1,dst.executions)),2) ppere,
    round((dst.buffer_gets/decode(dst.rows_processed,0,1,dst.rows_processed)),2) gperw,
    ((dst.cpu_time)/1000)/decode(dst.executions,0,1,dst.executions) cputs,
    ((dst.elapsed_time)/1000)/decode(dst.executions,0,1,dst.executions) elat,
    ((dst.user_io_wait_time)/1000)/decode(dst.executions,0,1,dst.executions) iwait,
    ((dst.application_wait_time)/1000)/decode(dst.executions,0,1,dst.executions) await,
    ((dst.px_servers_executions))/decode(dst.executions,0,1,dst.executions) slaves,
    dst.direct_writes dwait
    FROM
    v$sql dst
    WHERE dst.sql_id = '&1'
    ) s_hist
    /

  6. August 16th, 2013 at 16:42 | #5

    @Kunwar: the above query shows just the averages. It doesn’t tell you the max and min execution times, nor when the max and min happened. The query at http://dboptimizer.com/2011/05/06/sql-ash-timings-iii/ will show you when the maximum execution happened, for example. Unfortunately it doesn’t give the bind variables at the time of the slowest execution, but for queries that take more than a second it can provide a better picture than simple averages.

  7. August 16th, 2013 at 16:52 | #6

    @Kunwar: also, using the sql_exec_id from the longest execution you can go back into ASH and see where the query was spending its time.
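
    For example, a sketch of that drill-down (the sql_id is taken from the output above; the sql_exec_id value here is purely hypothetical):

    select nvl(event, 'ON CPU') event, count(*) samples
    from   dba_hist_active_sess_history
    where  sql_id      = '497wh6n7hu14f'
    and    sql_exec_id = 16777216   -- hypothetical: the exec id of the longest run
    group by event
    order by samples desc;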

