Author Archive

Best Oracle Performance Tools?

January 12th, 2011

What tools do you use? What are your favorites and why? As someone who has been working on Oracle performance tools for the last 15 years, I’m always interested in what the best tools are. Here is my list of the tools I’m aware of (what other tools are out there?)

price db perf multi-db sql tune notes
orasrp free yes sql trace analyzer
snapper free yes x command line performance  analysis
hotsos tools ?? sql trace analyzer, other stuff
ASHMon free yes reads ASH data (requires license) unless S-ASH is installed and pointed to
Mumbai free yes x integrates snapper and orasrp
Richmon free x
Lab128 $500/seat yes x x
DB Optimizer $1500/seat yes x visual sql tuning diagrams
Quest Spotlight $1000/cpu x x x
Quest SQL Optimizer $1690/seat x
Quest Performance Analyzer $3000/cpu yes x x
Oracle Diag Pack $5000/cpu yes x x
Oracle SQL Tuning Pack $5000/cpu x x
Confio ignite $1500/core? x x
Precise Indepth I3 $2000/cpu? x x
other stuff
fourth elephant

(Some of the prices I got off of Programmer’s Paradise; others are word of mouth, so any clarifications on prices would be appreciated.)

The tools marked “yes” are ones that I use or would use. I would use Quest’s Performance Analyzer given the chance: I’ve never used it, but from the demos of the tool I like what I see. I have never used Mumbai either, but plan to, and again I like what I see in the demos.

All the other tools marked “yes” I use.

All the tools except “other stuff”, I consider reasonable tools for one job or another. Some are better than others in different ways. The “other stuff” tools I don’t see much point in.

I generally consider OEM with the Diagnostics Pack too expensive and slow, but hey, if customers have it, I’ll use it! (I’d better like it, as I designed the performance page and top activity page.) Plus the Diag Pack comes with v$active_session_history and all of AWR, which I can query directly as well as through OEM.

I tried to design “DB Optimizer” to be simple to install on my laptop and to just point at databases and immediately start monitoring and tuning. The biggest advantage of “DB Optimizer” over other tools, for me, is the Visual SQL Tuning diagrams. Another nice feature is the ability to load-test code with multiple concurrent sessions. With concurrent sessions running code examples, I can easily demonstrate issues that arise in the database with the profiling page.

But for absolute ease of install and speed, I have to hand it to Lab128. Lab128 runs fast, runs its own ASH collection (which doesn’t require the Diag Pack) and has the option to read v$active_session_history as well.

Though Lab128 is fast, collects ASH info and can monitor multiple databases simultaneously for days on end, it still doesn’t give me an easily accessible, centralized database of performance data. That’s why I created “S-ASH”, which can collect ASH data from multiple databases into a central repository where I can run analytic queries or visualize it with ASHMon.

ASHMon and S-ASH are still rough pieces of code, as I spent the last 3 years working solely on DB Optimizer.

Script libraries

Karl Arao
Tim Gorman

I’m not so much addressing alerting systems or general NOC monitoring systems, but those are of interest as well.
Big Brother
Big Sister


Other stuff

10053 Viewer (replace .doc with .zip and unzip)

xplan – extended explain plan info from Alberto Dell’Era

latchprof - Tanel’s latch collection query

waitprof - Tanel’s wait collection query

tvdxstat - Trivadis Extended Tracefile Analysis Tool

spviewer this looks interesting for statspack/AWR viewing and comparison

enteros does some statspack/AWR trending and other monitoring

I wonder how many smaller companies are out there with Oracle monitoring tools.

Question – how do you graph data from Grid Control and DB Control? Some nice links from Karl Arao: Performance Formulas.

Kyle Hailey Uncategorized

New Year – New Look on WordPress

January 11th, 2011

Ringing in the new year with a new look on WordPress. WordPress offers me better aesthetic possibilities and customization options than Blogspot. Blogspot was great for getting started fast, and fast, believe it or not, has more appeal to me than aesthetics. Aesthetics, graphics and design are super important to me, so once I have something working and have the time, I want to spend some time improving its look and feel, but the technology, functionality and ease of creation come first and foremost.

Thanks to Tanel Poder and Greg Rahn for the template idea and for tweaks to the WordPress theme. The theme was tweaked by Greg and then repurposed by Tanel, who helped me get a copy and get started.

For plugins, I used

  • Blogger Importer – import blogspot posts and comments
  • SBS Blogroll – for  blogspot type blogroll (I tweaked the code to be able to change the title of people’s blogs)
  • amr shortcode any widget – to put the blogroll on its own page
  • SyntaxHighlighter Evolved – for marking up SQL code (as well as other code)
  • W3 Total Cache – improve load times
  • Akismet – anti spam
  • Google Analytics for WordPress

Kyle Hailey Uncategorized

Exponential Learning and Mumbai

January 11th, 2011

Many of my endeavors have been accelerated tremendously by the networks of friends and acquaintances I have. One group of acquaintances and friends is the members of the Oaktable, who have from time to time accelerated my projects with their answers, advice and feedback. For example, back when I wrote a program to directly read Oracle’s SGA memory, many people added bits and pieces that kept me on track. I originally was just dumping memory and looking at the contents when someone pointed me to James Morle’s work in the area, and James pointed me to Jonathan Lewis’s discovery of an offset table, etc.

Such accelerated learning is outlined in the book “The Power of Pull” by John Hagel III, John Seely Brown and Lang Davison.

An individual, or even a fixed team, will in general have linear output when things are going well, but when different groups and far-flung individuals work together in tight feedback loops, output, learning and creation can accelerate.

This acceleration is happening more and more with the communication and collaboration that is enabled and facilitated by the web.

This collaborative tight feedback loop is what I want to see happen in the database performance tuning and graphical monitoring arena, and what I think I see the beginnings of in Marcus Mönnig’s Mumbai tool. Marcus created his own Oracle database tuning tool but also pulls in the work of Tanel Poder’s Snapper package and Egor Starostin’s OraSRP.

Kyle Hailey Uncategorized

Why did Google succeed? Yahoo shuts altavista down

January 7th, 2011

Yahoo is shutting Altavista down

Dan Rosensweig former COO at Yahoo responding to the question: Why did Google succeed in search over Yahoo?

We were many things to many people.
They were one great thing to everybody
Sometimes it's just what you believe in.
It's how the question gets phrased.
When I came to Yahoo people were not interested
in investing in search because there was no way
to make money.
Google didn't care about making money
They cared about improving peoples ability to search
and they were originally going to be enterprise.
Yahoo was actually using their search engine.
Then something happened.
Somebody created a business model around search.
To get around patents Google improved on Overture's business model.
The logic [Yahoo] had was you bid the most you come up first.
It was sort of self policing.
Why would you bid the most to come up first
if you weren't the most relevant?
Google figured something out.
It's not who bids the most,
it's how you generate the most amount of value
off each click.
It's a combination of location and price and click-through.
That enabled them to make more money on every click
than we did which allowed them to  invest a lot more money in increased
monetization engines and then buy distribution.
They could buy distribution at half the cost that we could
or a third the cost that we could.
Here we were as a public company trying to balance multiple
businesses and they were focused on one
and they were making more money on every click
and they were able to run the table successfully in search.

Why do companies succeed? Big question with lots of answers.
Google and the mass media seem to propagate the story that Google’s search is better and that’s why Google won. Clean and simple. Black and white. People like black and white, but the story is more complicated than that. The search space was competitive when Google came on the scene. I personally used Altavista before Google came on the scene and well after. I compared many of the search results and I was always impressed with Altavista. Google finally won my heart when they came out with maps. The simple map interface was so logical and so simple that it put MapQuest to shame, for good reason.

But the story of search is much more complicated and interesting. For more reading check out

Kyle Hailey Uncategorized

Graphing packages: which are the best and why?

January 5th, 2011

What web enabled graphing packages are the best and why?

Here is a good list of free charting packages.
I’m wondering what the pros and cons are.
For the Google charts API, for example, you have to be connected to the internet to access the libraries hosted at Google, which is a big drawback IMO.

Kyle Hailey Uncategorized

Diff’ing AWR reports

January 4th, 2011

I don’t know if you are ever asked to compare two AWR periods. AWR period comparison is pretty easy if you have access to the two periods in the same AWR repository. Two periods in the same repository can be compared with DBMS_WORKLOAD_REPOSITORY.AWR_DIFF_REPORT_TEXT:

         select output from table(dbms_workload_repository.awr_diff_report_text(
         [db_id],
         [instance id],
         120, -- start snapshot id
         121, -- end snapshot id
         [db_id of target],
         [instance id],
         122, -- start snapshot id
         123  -- end snapshot id));

and it can be run for single instance as

              select output from table(dbms_workload_repository.awr_diff_report_text(
              (select dbid from v$database),
              1,   -- instance number
              120, -- start snapshot id
              121, -- end snapshot id
              (select dbid from v$database),
              1,   -- instance number
              122, -- start snapshot id
              123  -- end snapshot id));

This puts out a bit of a messy but useful report.
A similar but cleaner, simpler report that I partially designed can be run from OEM

But what if someone sends you two AWR reports? How can they be compared? These days I’m receiving at least a couple a week to compare, so I put together a compare script.
usage: [type] file1 file2
where type

  • sevt = system events, i.e. wait events
  • stats = system statistics
  • load = load profile section
  • init = init.ora

for example sevt awr1.txt awr2.txt
... Statistics requested is load
... 1st report.txt
... 2nd report.txt

============================= load_psec ==============================
Name                               Ratio 1/2   Value1     Value2     Delta
Physical_reads:                   :    0.29:    266.20:    905.33:    639.13
Physical_writes:                  :    0.70:    585.32:    836.75:    251.43
Logons:                           :    0.86:      1.27:      1.48:      0.21
Logical_reads:                    :    1.04: 747342.68: 718259.28:  -29083.4
Redo_size:                        :    1.17:3516126.09:2995591.47:   -520535
Sorts:                            :    1.31:   3981.16:   3027.78:   -953.38
User_calls:                       :    1.38:  16476.53:  11948.71:  -4527.82
Parses:                           :    1.39:   4541.51:   3279.06:  -1262.45
Executes:                         :    1.44:  10619.75:   7350.55:   -3269.2
Hard_parses:                      :    1.89:      0.17:      0.09:     -0.08
Block_changes:                    :    2.38:  18936.62:   7942.27:  -10994.3

============================= load_ptrx ==============================
Name                               Ratio 1/2   Value1     Value2     Delta
Logons:                           :    0.00:      0.00:      0.01:      0.01
Physical_reads:                   :    0.11:      0.43:      3.94:      3.51
Physical_writes:                  :    0.26:      0.95:      3.64:      2.69
Logical_reads:                    :    0.39:   1218.11:   3123.70:   1905.59
Redo_size:                        :    0.44:   5730.99:  13027.80:   7296.81
Sorts:                            :    0.49:      6.49:     13.17:      6.68
User_calls:                       :    0.52:     26.86:     51.96:      25.1
Parses:                           :    0.52:      7.40:     14.26:      6.86
Executes:                         :    0.54:     17.31:     31.97:     14.66
Block_changes:                    :    0.89:     30.87:     34.54:      3.67

Of course, if your AWR report is an HTML file, then the current script won’t work. One workaround is to run the HTML through a text converter like

Again the script is available here:

This script was originally written back before Statspack and was based on utlstat. If you look closely you will even see that the code was modified by Connie Dialeris, the author of Statspack. Before Connie put together Statspack, she was looking at the usability of my scripts. I had written a couple of scripts that continuously looped, collecting database statistics to flat files; flat files were used to avoid the extra overhead of inserting data into the database. The data could then be formatted into a utlstat-like report. Instead of writing a diff report on the raw data, I wrote a diff report that could be run against two utlstat reports from customers as well as against the raw data. That strategy turned out to be lucky, because it made it easy to update the diff script for Statspack and AWR.
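The heart of such a diff script is simple enough to sketch in shell. Here awk joins two files of name/value pairs and prints the ratio and delta, as in the report output above; the two-column input format and file names are invented for illustration and stand in for the real report parsing:

```shell
# Sketch: diff two reports reduced to "statistic_name value" lines,
# printing ratio (value1/value2) and delta (value2 - value1).
cat > rep1.txt <<'EOF'
Physical_reads 266.20
Logical_reads 747342.68
EOF
cat > rep2.txt <<'EOF'
Physical_reads 905.33
Logical_reads 718259.28
EOF

awk '
  NR==FNR { v1[$1]=$2; next }      # first file: remember each value
  ($1 in v1) {                     # second file: compute ratio and delta
    printf "%-18s %8.2f %12.2f %12.2f %12.2f\n",
           $1, v1[$1]/$2, v1[$1], $2, $2 - v1[$1]
  }' rep1.txt rep2.txt
```

The same join-and-divide pattern works whether the name/value pairs were parsed out of utlstat, Statspack or AWR text reports.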

Kyle Hailey Uncategorized

Excel connect to Oracle – 64bit and 32bit issues

December 29th, 2010

Wow, thanks to

Process Monitor

I was able to track down why I couldn’t connect to Oracle from Excel.

I had wanted to try some of the examples Charles Hooper has posted on connecting to and monitoring Oracle, for example

I kept getting the error “Provider not found”
Now what kind of trace info is there for an error like this in Excel? None AFAIK. Time to start guessing.
I’m on Windows 7 64-bit. I have the 64-bit 11gR2 Oracle installed. Excel shows up in Task Manager as “EXCEL.EXE *32”. My first guess was, “oh, Excel must want the 32-bit libraries,” so I got the 32-bit Instant Client from Oracle, unzipped it into a directory and put that directory first in the path. Still no dice.
Did a lot of google searches and turned up that I needed


but this wasn’t in any of the instant client zips that I downloaded from

Turns out it’s in a download halfway down the page:

*Instant Client Package – ODAC: Includes ODP.NET, Oracle Services for MTS, Oracle Providers for ASP.NET, Oracle Provider for OLE DB, and OO4O with Oracle Instant Client

Downloaded this, put all the dlls in the first directory in my path. Still no dice.

I tried “strace”, but it gave no output. Rrrr.

Then I tried process monitor – bingo.

With Process Monitor, I filtered on processes containing “excel”, ran the connect, and got tons of output, but I knew it was a problem with libraries. I found “oraoledb10.dll” among the output. It was listed in an old Oracle 10 install directory. Oracle 10 32-bit had initially been installed; the install gave no warnings but bombed out late in the game, so I removed the 10g install and installed Oracle 11gR2 64-bit. (Oracle 11gR2 32-bit won’t even begin the install.)
So now, I scoured the registry with regedit and found the old oraoledb10.dll here


I changed this to the new location of oraoledb11.dll
and now it works.

Kyle Hailey Uncategorized

New Oracle Monitor – what language would you use?

December 19th, 2010

I don’t consider myself a programmer and my exposure to some of these languages is quite superficial, so please feel encouraged to jump into this discussion with your experiences, corrections and comments.

What language and UI would you use to create an Oracle monitor? Why?
What are the pros and cons? I want a tool that
  • is easy to install
  • is quick to program
  • has interactive graphics
  • runs fast
  • is web enabled
These goals are hard, if not impossible, to achieve all in the same product. A product’s choice of programming language will be limited if interactive graphics are involved. I think graphics are the only way to make a tool easy to use and information easy to grasp, but graphics definitely limit the coding options. A really nice option, and practically a commercial requirement, is running in a web browser, and running in a browser even further limits the coding options and the possibility of achieving all the goals.
What programming languages could be used? Some choices are
  • C
  • SQL*Plus
  • PL/SQL
  • perl
  • python
  • tcl/tk
  • ruby(on rails)
  • Java fat client
  • Java web app (with Flex)
  • Delphi
  • Visual Basic
  • Apex
And tangential languages
  • ksh(bash/sh/csh) with AWK
  • HTML5
Here is a quick (5-minute, off-the-cuff) impression of the languages:
Do they require an install, or can they be run immediately as a simple executable?
Are interactive graphics possible (within reason)?
Are they easy or difficult to program in?
Do they have cross-platform support (*nix, Mac, Windows)?
Are they web enabled?
Are they fast, or are they memory (and CPU) hogs?
Do you have to pay for the development platform?


To have an absolutely brain-dead install would require the tool to be written in C, Delphi or VB, as these compile into single executables. All the other languages, AFAIK, require the language interpreter or engine to be installed, and generally require Oracle libraries to be present. Actually, C, Delphi and VB might require Oracle libraries as well, but at least for C it seems like there would be a way to link a static executable. Oracle libraries are not required for most Java applications, as they generally ship with a JDBC driver for Oracle, though that’s just another file to scatter about on your file system as part of a complicated install. On the other hand, Embarcadero has a neat option where everything, even in a Java app, is packaged into one “instant on” executable, much more akin to running a self-contained executable on UNIX, via some Windows magic in the sauce.
The installs required by other products vary. I’d say most are pretty easy, though how easy depends largely on how the language environment is packaged along with the tool. If the tool does no prepackaging and lets the user set up the required packages, then installation can be a nightmare. A tool like ASHMon, written in TCL/TK, for example requires a graphing package, a thread package, a math extension and an Oracle connection package. I’ve packaged all these up into a single download, which still requires the Oracle client libraries to be pre-installed. All in all it takes a good bit of upkeep to maintain the binaries of all the packages on the platforms people might use, and even upon installation there isn’t a single executable but an execution shell that has to launch and find the scripts (which depend upon directory locations) and has to find the Oracle install and tnsnames.ora. It would be so awesome if I could just make one executable with everything. Installation for Perl, Python and Ruby would be similar, AFAIK. A Java app is a bit better – one executable, though it depends on Java already being installed. And for APEX, I believe all sorts of stuff has to be installed and configured on an Oracle database, and either installing a full Oracle database for the tool or requiring the target database to have APEX installed is a lot to ask.
The software that is most likely to be installed already is SQL*Plus, but even SQL*Plus may not always be available on a customer’s laptop or desktop. My laptop did not have SQL*Plus until yesterday, almost 2 months after I started using the laptop – I was just using SSH and other tools. Yesterday I finally installed SQL*Plus to start working on a new performance monitoring tool. Why a new tool? I want a tool that will give me performance data from customers’ Oracle databases that I can review at a later point in time, a tool that the customer can run as quickly and easily as possible and that requires the least amount of user knowledge and interaction. Beyond the goal of collecting performance data, I would also want the tool to give some immediate feedback on performance. Of course SQL*Plus should be the easiest thing to use: maybe the laptop or desktop does not have SQL*Plus installed, but in the realm of Oracle performance, nothing could be more reasonable than expecting the customer to have SQL*Plus installed on the desktop/laptop.


AFAIK, the only reasonable interactive graphics options are Java applications, Java on the web using Flex (SVG was an option but has been shelved), TCL/TK, Ruby with RMagick, Visual Basic and Delphi. No such luck for SQL, PL/SQL, Perl or Python.
APEX may have some, but I’m not sure how robust they are.
Java has gobs of graphing packages, but how good they are I’m not so sure, and how do you find the right one? Many are pay-for, and I’d want to be sure the package I’m buying is the best before committing to Java. Ideally I’d rather not have to pay at all.
TCL’s BLT package has a quite good graphing library.
For graphs, there are lots of small but important options – auto resize, filled area stacking, log scales, mixing bars and lines, easily adding and deleting items from the graph, actions on graph contents triggered by the mouse passing over or clicking, etc. Sure, graphing can be done in almost anything, but a package that takes care of all the minutiae is the only kind really worth my time. TCL’s BLT graphs also use “vectors”, very efficient lists of numbers. I can act on these lists – divide two lists, add two lists, multiply a list by a value, or modify, add and delete points – and have the graph updated immediately.
Delphi and Visual Basic have graphics packages, though, like Java, there are lots of options and many are pay-for.
Ruby seems to have Rmagick. Not sure if there are other options.
Java web applications have Flex which requires Adobe’s IDE.

Data Manipulation

SQL and PL/SQL of course are pretty simple – at least they provide the ability to collect and manipulate data in a far more powerful and easy way than any other language. The con with SQL of course is that it’s not procedural. PL/SQL is procedural, but it often leads to creating things in the database like global temporary tables, packages and procedures, and creating things in the target database is a “no no” for me: I want to affect the database I’m monitoring as little as possible. The best approach is to collect the data via SQL (or even PL/SQL) with as little manipulation as possible and then manipulate the data on a local database, not the target. I’ve done this in the past with AWK, but it’s tedious and brittle. Similarly I’ve manipulated the data in TCL and again, though clearer than AWK, it’s still quite tedious. I’ve tried it in Java, and Java just gives me the heebie-jeebies – it might not be as tedious, but I get the feeling that the data structures are inefficient, non-scalable, slow and memory intensive. Another, maybe better, alternative is using a local database, but this requires installing a database, which is a bit of overkill, at least if the database is Oracle. The solution is to use an embeddable database like HSQL, H2, SQLite or Firebird (is Derby embeddable?). In other languages we could also manipulate the data in local structures or objects, but tackling aggregations, groupings, sums, averages, counts, etc. is just so tedious in code and so easy and effortless in a database. I assume it’s pretty easy to embed a database in almost all the languages except maybe C, because then I’d assume we’d be losing the advantage of having C be an easy-to-install executable, though maybe it’s easy.
I started to play with SQLite and TCL several years ago, and though it may be simple, it was certainly taking me more time than just interfacing TCL with a running Oracle database.
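The embedded-database idea is easy to sketch from the shell. Here SQLite stands in for the local repository; the sqlite3 command-line shell is assumed to be installed, and the table layout and sample rows are invented for illustration:

```shell
# Load raw collected statistics into an embedded SQLite database and let
# SQL do the grouping and aggregation instead of hand-rolled awk/TCL code.
rm -f stats.db
sqlite3 stats.db <<'EOF'
create table stats (snap int, name text, value real);
insert into stats values (1,'user calls',100),(1,'executes',50),
                         (2,'user calls',180),(2,'executes',90);
-- per-statistic average and spread across snapshots
select name, avg(value), max(value) - min(value) from stats group by name;
EOF
```

Aggregations, ratios and group-bys that would be tedious in AWK or TCL are one-liners once the data is in a local database, and nothing is created in the target database.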
Beyond whether to use a database or not and how, there is the question of how productive is a coder in a language. The lower level the language the more tedious. For example C, though elegant and fast, is quite tedious. At the other end is something like Delphi which is a piece of cake. Another aspect is just how heavy the language is. I find Java heavy. The overhead to set up simple functionality feels simply crippling to me.

UI layout

Delphi is easy to layout UI.
TCL/TK is difficult.
I think Java is somewhere in between
For SQL the only UI is just hacking with ASCII – generally not pretty.
Ruby – I don’t know.
Flex – not sure either, though I’d guess it’s good, but nowhere near as easy as Delphi.
VB – I assume it’s like Delphi.
Perl and Python – not many options unless they are married to something like TK.
APEX – I don’t know; I’ve heard good things, but it is Oracle, and from what I’ve seen of APEX it’s not a cakewalk, at least to get up and running. Maybe when all the pieces are in place it’s straightforward.


Java IMO seems like crap – it’s like “new money”, you know, those LA mansions that are gaudy, fake, expensive and needlessly huge.
Perl is not pretty, but at least it’s efficient. You don’t get any LA mansions in Perl. It’s more like a souped-up jalopy: sort of junky, but it runs fast.
Python seems good to me.
Ruby seems like a cross between TCL and Python – seems nice – sort of zen, though I’ve only written a few lines.
TCL – a little strange, but OK – might have been nice but it’s been long forgotten.
Delphi uses a modified Pascal and what’s easier than Pascal?
Java is a pig.
Flex is a pig.
OK, OK, don’t tell me it’s not the language, it’s the programmer. I’ve never seen a C program take up 600MB to collect a few KB of data, and I’ve never seen a Java program run under 50MB (more like 100MB+).
Delphi is like 10MB and runs fast. What’s not to like? (A pay-for IDE, and it’s washed up – the UI looks 20 years old.)
C of course is lightning fast.
TCL seems fine, maybe a bit slow, but the structures I care about, like vectors and graphs, have been well optimized.
Python seems fast.
Perl – I assume it’s pretty fast. I tried to write a direct memory access program in Perl and it was slower than executing SQL, but that’s because the API attached to and detached from the Oracle SGA for every bit of data I collected. I theoretically could have gone in and modified the Perl API for shared memory.


Who works on the web for a GUI? APEX, Java with Flex, Ruby on Rails with RMagick. RMagick may or may not provide GUI interaction – I have yet to try it. I spent a couple of hours last weekend trying to get it to work under Cygwin, eventually gave up and just installed it straight under Windows, but I have yet to play with it.
Java with Flex is probably the premier solution, but it’s fat and the IDE is pay-for.
What about HTML5?
Is Flex an option? Maybe for an enterprise application, but not for a desktop web-enabled application, as far as I can tell.

My Story

I’ve been thinking about this for years. I started programming TCL/TK graphics back in 1994. The only other options before 2000 or so were Visual Basic and Delphi. I didn’t hear of Delphi till about 2007 (!), and Visual Basic never seemed like an option since it was limited to Windows. The Windows limitation seems much less important nowadays, but back then I only worked on UNIX and didn’t have a laptop, so TCL/TK was the only option until around 2000, when Java became an option. In 2002 I started working on OEM 10gR1. When I arrived at Oracle in 2002 to work on OEM, I had no idea it was web based and pure HTML. HTML?! I had thought that I would be working on the 9i Java fat client. The 9i Java client was relatively good, at least compared to OEM 7 and 8, and I thought that with a little facelift the 9i client could be good. I could hardly believe the 10g OEM was HTML. Pure HTML? No interactive graphics?! And this was the monitor that was supposed to troubleshoot, monitor and tune some of the most important production databases in the world? Are you kidding me? The “solution” pre-10gR1 was to use GIFs for any graphics. Any change in the GIFs caused not a partial page refresh but a full page refresh, and these GIFs were only updated occasionally. The refresh rate on the Oracle home page could be as much as an hour out of date. Wow.
Well, we changed things – got SVG in, got the refresh rate to be much faster – but OEM is still slow, and the data collection has hardly improved in the last 6 years. So much more could be done with some small changes: for example, adding the average wait times for different kinds of I/O, harnessing the wait histograms better, highlighting some key statistics, trending, forecasting, showing the differences between selected periods clearly, adding more drill-down and aggregation options on the ASH data, and externalizing in a clear graphical way the new extended row source data in ASH. I had wanted to get Adobe Flash/Flex into OEM, but Oracle was reluctant; the compromise was SVG. OEM does use Flex, but I’ve heard that Flex is going to be ripped out in lieu of ADF, and what’s even more twisted is that ADF builds its graphic components upon Flex. Strange.
In all cases OEM is slow, huge and slow to change. Java, whether it’s in the middle tier or a Java application, is a memory pig and generally slow.


  • I want the tool to run fast which means using as little CPU and memory as possible.
  • I want a tool that’s fast to install.
  • I want a tool that has interactive graphics.
  • I want a tool that is easy to create and program.
  • I want the tool to be web enabled.
For collection, the best option sounds like SQL*Plus with anonymous PL/SQL – this could possibly be wrapped in C for a single collection executable.
For aggregation, anything works as long as it has a database.
For the UI, Delphi sounds like the easiest to program *and* run (possibly VB – anyone know the pros and cons of VB versus Delphi?).
For the web, it’s either Java with Flex – a memory pig with a pay-for IDE – or is Ruby on Rails a possibility? Ruby has Oracle, SQLite and UI packages.
What about HTML5? Too new? Does it have Oracle access? Does it have an embeddable database? What graphics options does it have?

Kyle Hailey Uncategorized

Solaris AWK still broken after all these years

December 8th, 2010

It’s amazing how time flies. I’ve got the above book sitting on my desk at work – the only book on my desk at work. It’s probably my favorite computer book for its terse, clear style and simple power. I’ve kept it for about 20 years, and 20 years later I’d expect any bugs in basic code examples to be worked out. For the most part, on most platforms, this is true – but not on Solaris.
AWK is an awesome utility, always available on UNIX, and makes a perfect tool for scripts that will run anywhere with no dependencies and no ifs, ands or buts. For that reason it’s all the more disappointing that the default awk on Solaris is broken for syntax that was valid over 20 years ago. In the default awk on Solaris the following are broken:
  1. awk -vVAR=value
  2. if ( $0 ~ “string” )
  3. gsub , sub

Probably the most useful thing to know is that for point 1, passing in a shell variable, you can do it by setting a variable in the shell, such as VAR=10, and splicing it into the awk script by closing and reopening the single quotes: VAR="'"$VAR"'".
For point 2, the if construct can be worked around, a bit less elegantly, with

 /string/   { ...

Point 3, that gsub and sub are broken, is just plain annoying; working around it means writing your own routines with index and substr.
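To make the workarounds concrete, here is a small sketch (the sample input line is invented) that avoids all three broken constructs: the shell variable is spliced in by closing and reopening the single quotes instead of using -v, a /pattern/ block stands in for if ( $0 ~ "string" ), and index/substr replace gsub:

```shell
VAR=10
echo "GROUP 3 member," | awk '
  BEGIN { var="'"$VAR"'" }        # shell variable spliced in, no -v needed
  /GROUP/ {                       # pattern block instead of: if ( $0 ~ "GROUP" )
    str = $0
    i = index(str, ",")           # hand-rolled comma strip instead of gsub(/,/,"")
    if ( i > 0 ) str = substr(str, 1, i-1)
    print str, var
  }'
```

This prints "GROUP 3 member 10" and uses only constructs the old Solaris awk understands.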
That was too bad for me. I wanted to write a script to trim database creation scripts down to 2 redo logs for “scratch” databases. Imagine a create script like:
STARTUP NOMOUNT pfile='/mnt/provision/redo2a/datafile/initredo2a.ora.provision'
CREATE CONTROLFILE ...
LOGFILE
  GROUP 1 (
    ...
  ) SIZE 52428800,
  GROUP 2 (
    ...
  ) SIZE 52428800,
  GROUP 3 (
    ...
  ) SIZE 52428800
DATAFILE
  ...

I just wanted 2 redo logs and one member per group, so I wanted to pull out all the extra groups and members. Simple with AWK:
cat ${MNTDIR}/ | \awk -vNREDO="$NREDO" '
   BEGIN { output="on" ; members="off" ; nlogs=0 ; last="no" }
   {
      # members
      if ( $0 ~ "GROUP"    ) { members="on"  ; nlogs=-1 }        # start counting group members
      # output
      if ( $0 ~ NREDO      ) { last="yes" }                      # reached the last group to keep
      if ( $0 ~ "DATAFILE" ) { output="on"  }                    # turn output back on
      if ( $0 ~ "SIZE"     ) { members="off" ; nlogs=0 }         # end of group members
      if ( output == "on" ) {
         nlogs++
         str = $0
         if ( members == "on" && nlogs > 1 ) next                # keep one member per group
         if ( members == "on" && nlogs == 1 ) gsub(/,$/,"",str)  # single member: drop trailing comma
         if ( $0 ~ "SIZE" && last == "yes" ) {
            gsub(/,$/,"",str)                                    # last group kept: drop trailing comma
            last = "no" ; output = "off"                         # turn output off till we see DATAFILE
         }
         print str
      }
   }' > /tmp/redo2.$$
The script works on AIX, HP-UX and Red Hat, but not on Solaris. For Solaris I had a few choices. I could try to use nawk or gawk, which would require some sort of “ifdef”, and even then using nawk would require some changes. AFAIK gawk would have to be installed on the machines, thus ruling it out. I wanted the safest option, and the safest option for me was just to rewrite the code in a way that worked around the bugs in the Solaris default awk.
cat ${MNTDIR}/ | \awk '
   BEGIN { output  = "on"
           members = "off"
           nlogs   = 0
           last    = "no"
           size    = "no"
           NREDO   = "'"$NREDO"'"                    # shell variable spliced in, since -v is broken
         }
   /DATAFILE/ { output="on"  }                         # turn output back on
   /SIZE/     { members="off" ; nlogs=0 ; size="yes" } # end of group members
   /GROUP/    { members="on"  ; nlogs=-1 }             # start counting group members
   { if ( index($0,NREDO) ) last="yes" }               # reached the maximum # of redo logs to recreate
   { if ( output == "on" ) {
        str = $0
        if ( members == "on" ) {                       # we are in the log member section
           nlogs++                                     # count members
           i = index(str,",")                          # get rid of comma at end of line
           if ( i > 0 ) { str = substr(str,1,i-1) }
        }
        if ( nlogs < 2 ) {                             # at most one member per group
           if ( size == "yes" && last == "yes" ) {
              str = $0
              i = index(str,",")
              if ( i > 0 ) { str = substr(str,1,i-1) }
              last   = "no"
              output = "off"                           # max # of redo logs reached, output off till DATAFILE
           }
           print str
        }
     }
   }
   { size = "no" }' > /tmp/redo2.$$


Job Openings for Oracle experts

December 2nd, 2010
You know it’s a good company when the support desk looks like this:

Delphix, the company I’ve been at for the past month, is hiring! We are interested primarily in Oracle expertise centered around backup and recovery, RMAN, Data Guard and ASM.
A big plus for solid knowledge of NFS, storage systems, ZFS and networking, with an eye for performance tuning.
We are looking for sales consultants, support, engineers and QA.
I’m super happy with this new position – small company where everyone makes an impact and the people are super smart, motivated and experienced.
We’ve got an amazing team, from the creators of DTrace and ZFS, to principal players in the development of Oracle RAC, cache fusion and Data Guard, to the founder of Avamar and the marketing VP who took VMware public. Come join us!
If interested, contact me at kyle.hailey @ delphix dot com.

Delphix turns physical database infrastructure into software that can run in a fraction of the space, automating some of the most challenging and time consuming operations in the datacenter: refreshing and provisioning of databases. With our database virtualization software, enterprises can reduce complex provisioning tasks from 10 days to 10 seconds and shrink 10 TB into 1 TB.
Enterprises typically create multiple copies of production databases for development, testing, QA, staging, UAT, pilots, training, operational reporting, etc. By linking with production databases via standard APIs, Delphix enables the benefits of consolidation and virtualization for all supporting database infrastructure—with no need to make changes to production systems. As a result, an investment in Delphix can generate a high-impact ROI with very low risk.
The team at Delphix has been responsible for developing products that have generated billions in annual revenue, including Oracle RAC, used in more than 20% of Oracle deployments, and EMC Avamar, the solution that pioneered data de-duplication, one of the fastest growing technologies in datacenters today. Our board of directors includes founding board members of Oracle, Business Objects, Informatica, Riverbed, Brocade, and Avamar. One of our lead investors was the co-founding chairman of Oracle, founding chairman of Business Objects and founding chairman of Informatica. Our advisory board has a number of recognized industry icons, including the original worldwide VPs of field operations from Oracle and VMware. We are headquartered in Palo Alto. Delphix will be hard to beat in terms of working on very interesting technology, a great business, and the ability to grow both your career and technical ability.
