ASH Masters

January 4th, 2011

Welcome to ASH Masters and to the world of ASH, also known as “Active Session History”. Oracle 10g introduced ASH, and Oracle 11g builds upon that foundation. ASH is a radically new way to gather performance data. “Radical?” you might ask. Yes, “radical,” because ASH takes samples of data only once a second, and what happens between samples is lost. The lost data bothers many people at first, but the unease quickly passes once the power of ASH is understood.

ASH provides the information necessary to solve the toughest performance problems. Before ASH, this information was often too difficult and expensive to get. Previous performance-gathering techniques such as STATSPACK lacked the information needed to solve many performance bottlenecks. Session tracing provides much of the same information as ASH but is far more costly and has to be set up before a problem arises, making it impractical in many situations. Starting in Oracle 10g, ASH is always running, sampling every second, with the samples kept on disk for a week (configurable), thus providing the data needed to identify and solve a problem that lasted only seconds and happened days ago.

ASH is a way of sampling the state of sessions connected to an Oracle database in order to monitor database load as well as drill down into any performance issues that arise. The technique could be applied to any system with connected users, whether another database such as SQL Server, an operating system, or even real-life systems, but Oracle is the first system I know of that uses ASH to collect performance data. By default, ASH on an Oracle database samples once every second and logs some 30 to 60 pieces of information (depending on the version) about any session that is active at the time of sampling.

One hurdle to accepting ASH as a model has been the question: how fast do we have to sample to get worthwhile data? There has been a tendency (I admit to having felt it) to want to sample very fast, say 10-100 times a second (easily possible with a C program), but experience has shown that most monitoring and problem resolution can be accomplished with one-second sampling. Higher sampling rates serve only rare cases that are better left to tracing. For 99.9% of cases, sampling once a second is sufficient both to show the load on the system clearly and to provide detailed information on system activity that would otherwise be too difficult or prohibitively expensive to collect (due to the load caused by the data collection itself).
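As a small taste of what this once-a-second sampling makes possible, here are two sketch queries against v$active_session_history, the in-memory ASH view (its on-disk, week-long counterpart is dba_hist_active_sess_history). The view and column names are standard Oracle; the ten-minute window and the per-minute bucketing are just illustrative choices of mine, not anything ASH prescribes.

    -- Load: since ASH samples active sessions once a second, the number of
    -- samples per minute divided by 60 approximates average active sessions.
    select to_char(sample_time, 'HH24:MI') as minute,
           round(count(*) / 60, 1)         as avg_active_sessions
    from   v$active_session_history
    where  sample_time > sysdate - 10/1440   -- last 10 minutes
    group  by to_char(sample_time, 'HH24:MI')
    order  by minute;

    -- Drill-down: each sampled row represents roughly one second of session
    -- activity, so counting rows per wait event approximates the seconds of
    -- database time spent on each event.
    select event,
           count(*) as seconds_of_db_time
    from   v$active_session_history
    where  sample_time > sysdate - 10/1440
    and    session_state = 'WAITING'
    group  by event
    order  by seconds_of_db_time desc;

Counting samples as seconds of time is the core ASH trick: the numbers are approximate, but over any interval longer than a few seconds the approximation is more than good enough for tuning work.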

To find out more, go to:

http://ashmasters.com/

