Best practices for accessing Oracle from scala using JDBC

I’ve been looking for an excuse to muck about with scala for a while now.  So I thought I’d do a post similar to those I’ve done in the past for .NET, python, perl and R.  Best practices for Java were included in my book Oracle Performance Survival Guide (but I’d be more than happy to post them if anyone asks).

One of the great things about scala is that it runs in the JVM, so we can use the Oracle JDBC drivers to access Oracle.  These drivers are very mature and support all the best programming practices. 

Best practices for programming Oracle in any language require at least the following:

  1. Use bind variables appropriately.
  2. Ensure you are performing array fetch when retrieving more than one row.
  3. Ensure that you use array insert when doing bulk inserts.

You can get the scala program which contains the code snippets below here.

Connecting

If you’ve ever used Oracle with JDBC, you’ll find things very familiar.   Here’s a snippet that connects to an Oracle database with username, password, host and service specified on the command line (it assumes the default 1521 port, but of course this could be parameterized as well):

import java.sql.Connection
import java.sql.ResultSet

import oracle.jdbc.pool.OracleDataSource

object guyscala2 {
  def main(args: Array[String]) {
    if (args.length != 4) {
      println("Arguments username password hostname serviceName")
      System.exit(1)
    }

    val ods = new OracleDataSource()
    ods.setUser(args(0))
    ods.setPassword(args(1))
    ods.setURL("jdbc:oracle:thin:@" + args(2) + ":1521/" + args(3))
    val con = ods.getConnection()
    println("Connected")

 

Using Bind variables

As in most languages, it's all too easy to omit bind variables in scala.  Here's an example where the variable value is simply concatenated into the SQL string:

   1:     for (cust_id <- 1 to rows) {
   2:        val s1 = con.createStatement()
   3:        s1.execute("UPDATE customers SET cust_valid = 'Y'"
   4:          + " WHERE cust_id = " + cust_id)
   5:   
   6:        s1.close()
   7:      }

 

On line 3 we build up a SQL statement by concatenating the value we want into the string and immediately execute it.  Each execution is a unique SQL statement which requires parsing, optimization and caching in the shared pool. 

Here’s an example using bind variables and a PreparedStatement:

   1:    val s2 = con.prepareStatement(
   2:        "UPDATE customers SET cust_valid = 'Y'"
   3:          + " WHERE cust_id = :custId")
   4:   
   5:      for (cust_id <- 1 to rows) {
   6:        s2.setInt(1, cust_id)
   7:        s2.execute()
   8:      }
   9:   
  10:      s2.close()

 

Slightly more complex:  we prepare a statement on line 1, associate the bind variable on line 6, then execute on line 7.   It might get tedious if there are a lot of bind variables, but it’s still definitely worthwhile.   Below we see the difference in execution time when using bind variables compared with concatenating the variables into a string.  Bind variables definitely reduce execution time.

image

 

As well as the reduction in execution time for the individual application, using bind variables reduces the chance of latch and/or mutex contention for SQL statements in the shared pool – where Oracle caches SQL statements to avoid re-parsing. If many sessions are concurrently trying to add new SQL statements to the shared pool, then some may have to wait on the library cache mutex.   Historically, this sort of contention has been one of the most common causes of poor application scalability – applications which did not use bind variables risked strangling on the library cache latch or mutex as the SQL execution rate increased. 
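
For completeness, here is a rough sketch of how such a timing comparison can be put together. This is an illustrative harness rather than the exact benchmark code; it assumes the same con connection and rows count used in the snippets above.

def timeIt(label: String)(work: => Unit): Unit = {
  val start = System.currentTimeMillis
  work                                                   // run the workload
  println(label + ": " + (System.currentTimeMillis - start) + " ms")
}

timeIt("literal SQL") {
  for (cust_id <- 1 to rows) {
    val s = con.createStatement()
    s.execute("UPDATE customers SET cust_valid = 'Y' WHERE cust_id = " + cust_id)
    s.close()
  }
}

timeIt("bind variables") {
  val ps = con.prepareStatement(
    "UPDATE customers SET cust_valid = 'Y' WHERE cust_id = :custId")
  for (cust_id <- 1 to rows) {
    ps.setInt(1, cust_id)
    ps.execute()
  }
  ps.close()
}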

Exploiting the array interface

Oracle can retrieve rows either from the database one at a time, or can retrieve rows in “batches” sometimes called “arrays”. Array fetch refers to the mechanism by which Oracle can retrieve multiple rows in a single fetch operation. Fetching rows in batches reduces the number of calls issued to the database server, and can also reduce network traffic and logical IO overhead.   Fetching rows one at a time is like moving thousands of people from one side of a river to another in a boat with all but one of the seats empty -  it’s incredibly inefficient.

Fetching rows using the array interface is as simple as can be, and is in fact enabled by default – though with a small default batch size of 10.  The setFetchSize method of the connection and statement objects sets the number of rows to be batched.  Unfortunately, the default setting of 10 is often far too small – especially since there is typically no degradation even when the fetch size is set very large; you get diminishing, but never negative, returns as you increase the fetch size beyond the point at which every SQL*Net packet is full.

Here’s a bit of code that sets the fetch size to 1000 before executing the SQL:

val s1 = con.createStatement()

s1.setFetchSize(1000)
val rs = s1.executeQuery("Select /*fetchsize=" + s1.getFetchSize() + " */ * " +
        "from customers where rownum<= " + rows)
while (rs.next()) {
        val c1 = rs.getString(1)
        val c2 = rs.getString(2)
}
rs.close()

 

Here’s the elapsed times for various fetchsizes for the above query:

image

While the default setting of 10 is clearly better than any lower value, it’s still more than 6 times worse than a setting of 100 or 200. 

Inserting data is another situation in which we normally want to consider the array interface.  In this case we need to change our code structure a bit more noticeably.   Here’s the code we probably would write if we didn’t know about array processing:

   1:    val insSQL = "INSERT into arrayinsertTest" +
   2:        " (cust_id,cust_first_name,cust_last_name,cust_street_address) " +
   3:        " VALUES(:1,:2,:3,:4)"
   4:      val insStmt = con.prepareStatement(insSQL)
   5:      val startMs = System.currentTimeMillis
   6:      var rowCount = 0
   7:      while (rs.next()) {
   8:        insStmt.setInt(1, rs.getInt(1))
   9:        insStmt.setString(2, rs.getString(2))
  10:        insStmt.setString(3, rs.getString(3))
  11:        insStmt.setString(4, rs.getString(4))
  12:        rowCount += insStmt.executeUpdate()
  13:      }
  14:      val elapsedMs = System.currentTimeMillis - startMs
  15:      println(rowCount + " rows inserted - " + elapsedMs + " ms")
  16:      con.commit()

 

We prepare the statement on line 4, bind the values to be inserted on lines 8-11, then execute the insert on line 12. 

With a few minor changes, this code can perform array inserts:

 

   1:      val insSQL = "INSERT into arrayinsertTest" +
   2:        " (cust_id,cust_first_name,cust_last_name,cust_street_address) " +
   3:        " VALUES(:1,:2,:3,:4)"
   4:      val insStmt = con.prepareStatement(insSQL)
   5:      val startMs = System.currentTimeMillis
   6:      var rowCount = 0
   7:      while (rs.next()) {
   8:        insStmt.setInt(1, rs.getInt(1))
   9:        insStmt.setString(2, rs.getString(2))
  10:        insStmt.setString(3, rs.getString(3))
  11:        insStmt.setString(4, rs.getString(4))
  12:        insStmt.addBatch()
  13:        rowCount += 1
  14:        if (rowCount % batchSize == 0) {
  15:          insStmt.executeBatch()
  16:        }
  17:      }
  18:   
  19:      val elapsedMs = System.currentTimeMillis - startMs
  20:      println(rowCount + " rows inserted - " + elapsedMs + " ms")
  21:      con.commit()

 

On line 12, we now call the addBatch() method instead of executeUpdate().  Once we’ve added enough rows to our batch (defined by the batchSize constant in the above code) we can call executeBatch() to insert the whole batch. 
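
One caveat worth noting, which isn’t shown in the snippet above: if the total row count isn’t an exact multiple of batchSize, the rows added since the last executeBatch() still need to be flushed. A minimal sketch, using the same insStmt, rowCount and batchSize as above:

// Flush any leftover rows that didn't fill a complete batch
if (rowCount % batchSize != 0) {
  insStmt.executeBatch()
}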

Array insert gives about the same performance improvements as array fetch.  For the above example I got the performance improvement below:

image

To be fair, the examples above are a best case scenario for array processing – the Oracle database was running in an Amazon EC2 instance in the US, while I was running the scala code from my home in Australia!   So the round trip time was as bad as you are ever likely to see.  Nevertheless, you see pretty impressive performance enhancements from simply increasing array size in the real world all the time.

Conclusion

If you use JDBC to get data from the Oracle RDBMS within a scala project, then the principles for optimization are the same as for Java JDBC – use prepared statements, bind variables and array processing.  Of course there’s a lot more involved in optimizing database queries (SQL tuning, indexing, etc), but these are the three techniques that vary significantly from language to language.  The performance delta from these simple techniques is very significant, and they should represent the default pattern for a professional database programmer. 

Using SSD for a temp tablespace on Exadata

I seem to be getting a lot of surprising performance results lately on our X-2 quarter rack Exadata system, which is good – the result you don’t expect is the one that teaches you something new.

This time, I was looking at using a temporary tablespace based on flash disks rather than spinning disks.  In the past – using Fusion IO PCI cards – I found that using flash for temp tablespace was very effective in reducing the overhead of multi-pass sorts:

image

See (http://guyharrison.squarespace.com/ssdguide/04-evaluating-the-options-for-exploiting-ssd.html)

However, when I repeated these tests for Exadata, I got very disappointing results.  SSD based temp tablespace actually led to marginally worse performance:

image

Looking in depth at a particular point (the 500K SORT_AREA_SIZE point), we can see that although the SSD based temp tablespace has marginally better read times, it involves a significantly higher write overhead:

image

I can understand the higher write overhead (at least partially) – it’s Yet Another time when sequential write operations to an SSD device have provided disappointing performance.  However, it’s strange to see such unimpressive read performance from the SSD.  How can a spinning disk serve blocks up at effectively the same latency as an SSD?

So I dumped all the direct path read waits from a 10046 trace and plotted them logarithmically:

image

We can see in this chart that the SSD based tablespace suffers from a small “spike” of high latencies between 600-1000 us (i.e. 0.6-1 ms).  These are extremely high latencies for an SSD!  What could be causing them?  Garbage collection triggered by the almost constant writes to the temp tablespace?  There was negligible concurrent activity on the system and the table concerned had flash cache disabled, so for now that is my #1 theory. 

For that matter, why are the HDD read times so low?  An average read latency of 500 us for a spinning disk is unreasonably low – is the storage cell somehow buffering temporary tablespace IO?  

As always I’m wondering if there’s someone with more expertise in Exadata internals who could shed some light on all of this!

Can the Exadata Smart Flash Cache slow smart scans?

I’ve been doing some work on the Exadata Smart Flash Cache recently and came across a situation in which setting CELL_FLASH_CACHE to KEEP will significantly slow down smart scans on a table.

If we create a table with default settings, then the Exadata Smart Flash Cache (ESFC) will not be involved in smart scans, since by default only small IOs get cached.  If we want the ESFC to be involved, we need to set the CELL_FLASH_CACHE to KEEP.  Of course, we don’t expect immediate improvements, since we expect that the next smart scan will need to populate the cache before subsequent scans can benefit. 
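
If you’re driving the test from JDBC (as in the Scala examples in the earlier posts), the storage clause change is just a DDL statement. A minimal sketch, assuming an existing connection con; the table name here is purely illustrative:

// Ask the storage cells to keep this table's blocks in the Exadata Smart Flash Cache
val ddl = con.createStatement()
ddl.execute("ALTER TABLE scan_test STORAGE (CELL_FLASH_CACHE KEEP)")
ddl.close()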

HOWEVER, what I’m seeing in practice is that the next smart scan following an ALTER TABLE … STORAGE(CELL_FLASH_CACHE KEEP) is significantly degraded, while subsequent scans get a performance boost.  Here’s an example of what I observe:

image

 

The big increase in CELL IO time comes from an increase in both the number and latency of cell smart table scan waits.  The wait stats for the first scan with a default setting looked like this:

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  gc cr disk read                                 1        0.00          0.00
  cell single block physical read                 2        0.01          0.01
  row cache lock                                  2        0.00          0.00
  gc cr grant 2-way                               1        0.00          0.00
  SQL*Net message to client                    1021        0.00          0.00
  reliable message                                1        0.00          0.00
  enq: KO - fast object checkpoint                2        0.00          0.00
  cell smart table scan                        9322        0.14          7.60
  SQL*Net message from client                  1021        0.00          0.02

For the first scan with KEEP cache it looked like this:

Elapsed times include waiting on following events:
  Event waited on                             Times   Max. Wait  Total Waited
  ----------------------------------------   Waited  ----------  ------------
  SQL*Net message to client                    1021        0.00          0.00
  reliable message                                1        0.00          0.00
  enq: KO - fast object checkpoint                2        0.00          0.00
  cell smart table scan                       14904        1.21         33.37
  SQL*Net message from client                  1021        0.00          0.02

Looking at the raw trace file didn’t help – it just shows a bunch of lines like this, with only a small number (3 in this case) of unique cellhash values… I couldn’t see a pattern:

WAIT #… : nam='cell smart table scan' ela= 678 cellhash#=398250101 p2=0 p3=0 obj#=139207 tim= …

I’m at a loss to understand why there would be such a high penalty for the initial smart scan with the CELL_FLASH_CACHE KEEP setting.  You expect some overhead from constructing and storing the result set blocks in the cache, but an IO penalty of 200-300% seems way too high.   Anybody seen anything like this or have a clear explanation?

Test script is here, and formatted tkprof here

Redo on SSD: effect of redo size (Exadata)

Of all the claims I make about SSD for Oracle databases, the one that generates the most debate is that placing redo logs on SSD is not likely to be effective.  I’ve published data to that effect before – in particular, see Using SSD for redo on Exadata - pt 2 and 04 Evaluating the options for exploiting SSD.

I get a lot of push back on these findings – often on theoretical grounds from Flash vendors (“our SSD use advanced caching and garbage collection that support high rates of sequential IO”) or from people who say that they’ve used flash for redo and it “worked fine”.

Unfortunately, every single test I do comparing the performance of redo on flash and HDD shows flash providing little or no advantage, and in some cases a clear disadvantage.    

One argument for flash SSD that I’ve heard is that while flash might not have the advantage for the small transactions I use for testing, for “big” redo writes – such as those associated with LOB updates – flash SSD would work better.  The idea is that the overhead of garbage collection and free page pool processing is less with big writes, since you don’t hit the same flash SSD pages in rapid succession as you would with smaller writes.    On the other hand a reader who knows more about flash than I do (flashdba.com) recently commented:  “in foreground garbage collection a larger write will require more pages to be erased, so actually will suffer from even more performance issues.”

It’s taken me a while to get around to testing this, but I tried on our Exadata X-2 recently with a test that generates a variable amount of redo and then commits.  The relationship between the size of the redo and redo log sync time is shown below

image
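
For context, here is a rough sketch of the kind of test loop used to produce this data. It is illustrative only: it assumes autocommit has been turned off, and the redo_test table and column names are made up for the example. The payload size determines (roughly) how much redo each commit has to sync.

// Generate approximately "redoBytes" of redo, then time the commit (the log file sync).
// Assumes con.setAutoCommit(false) has been called; table/column names are illustrative.
def timedCommit(con: java.sql.Connection, redoBytes: Int): Long = {
  val ps = con.prepareStatement("UPDATE redo_test SET payload = ? WHERE id = 1")
  ps.setString(1, "x" * redoBytes)
  ps.executeUpdate()
  ps.close()
  val start = System.nanoTime
  con.commit()                          // the redo log sync happens here
  (System.nanoTime - start) / 1000      // elapsed microseconds
}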

 

I’m now putting on my flame retardant underwear in anticipation of some dispute over this data…  but this suggests that while SSD and HDD (at least on Exadata) are at about parity for small writes, flash degrades much more steeply than HDD as the size of the redo entry increases.  Regardless of whether the redo is on flash or HDD, there’s a break at the 1MB point which corresponds to the log buffer flush threshold.  When a redo entry is only slightly bigger than 1MB, the chances are high that some of it will have been flushed already – see Redo log sync time vs redo size for a discussion of this phenomenon.

The SSD redo files were on an ASM disk group carved out of the Exadata flash disks – see Configuring Exadata flash as grid disk to see how I created these.  The redo logs were created with a 4K blocksize as outlined in Using SSD for redo on Exadata - pt 2.   The database was in NOARCHIVELOG mode.  Smart flash logging was disabled.  As far as I can determine, there was no other significant activity on the flash disks (the grid disks were supporting all the database tablespaces, so if anything the SSD had the advantage). 

Why are we seeing such a sharp dropoff in performance for the SSD as the redo write increases in size?   Well one explanation was given by  flashdba in this comment thread.  It has to do with understanding what happens when a write IO which modifies an existing block hits a flash SSD.   I tried to communicate my limited understanding of this process in Fundamentals of Flash SSD Technology.   Instead of erasing the existing page, the flash controller will pull a page off a “free list” of pages and mark the old page as invalid.   Later on, the garbage collection routines will reorganize the data and free up invalid pages.  In this case, it’s possible that no free blocks were available because garbage collection fell behind during the write intensive workload.  The more blocks written by LGWR, the more SSD pages had to be erased during these un-optimized writes and therefore the larger the redo log write the worse the performance of the SSD.

Any other theories and/or observations?  

I hope soon to have a Dell system with Dell Express Flash so that I can repeat these tests on a non-Exadata system.  The F20 cards used in my X-2 are not state of the art, so it’s possible that different results could be obtained with a more recent flash card, or with a less contrived workload.

However, yet again I’m gathering data that suggests that using flash for redo logs is not worthwhile.  I’d love to argue the point but even better than argument would be some hard data in either direction….

Exadata Write-back cache and free buffer waits

Prior to storage server software version 11.2.3.2.0 (associated with Exadata X3), Exadata Smart Flash Cache was a “write-through” cache, meaning that write operations are applied both to the cache and to the underlying disk devices, but are not signalled as complete until the IO to the disk has completed.

Starting with 11.2.3.2.0 of the Exadata storage software[1], Exadata Smart Flash Cache may act as a write-back cache. This means that a write operation is made to the cache initially and de-staged to grid disks at a later time. This can be effective in improving the performance of an Exadata system that is subject to IO write bottlenecks on the Oracle datafiles.

Writes to datafiles generally happen as a background task in Oracle, and most of the time we don’t actually “wait” on these IOs. That being the case, what advantage can we expect if these writes are optimized? To understand the possible advantages of the write-back cache let’s review the nature of datafile write IO in Oracle and the symptoms that occur when write IO becomes the bottleneck.

When a block in the buffer cache is modified, it is the responsibility of the database writer (DBWR) to write these “dirty” blocks to disk. The DBWR does this continuously and uses asynchronous IO processing, so generally sessions do not have to wait for the IO to occur – the only time sessions wait directly on write IO is when a redo log sync occurs following a COMMIT.

However, should all the buffers in the buffer cache become dirty then a session may wait when it wants to bring a block into the cache – resulting in a “free buffer wait”.

image

Free buffer waits can occur in update-intensive workloads when the IO bandwidth of the Oracle sessions reading into the cache exceeds the IO bandwidth of the database writer. Because the database writer uses asynchronous parallelized write IO, and because all processes concerned are accessing the same files, free buffer waits usually happen when the IO subsystem can service reads faster than it can service writes.
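
An easy way to check whether a system is hitting this condition is to look at the 'free buffer waits' event in V$SYSTEM_EVENT. A minimal JDBC sketch, assuming an existing connection con with access to the V$ views:

// Report cumulative free buffer waits since instance startup
val st = con.createStatement()
val rs = st.executeQuery(
  "SELECT total_waits, time_waited_micro " +
  "  FROM v$system_event WHERE event = 'free buffer waits'")
while (rs.next()) {
  // Cumulative wait count and total time waited (microseconds)
  println("free buffer waits: " + rs.getLong(1) + " waits, " + rs.getLong(2) + " us")
}
rs.close()
st.close()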

There exists just such an imbalance between read and write latency in Exadata X2 – the Exadata Smart Flash Cache accelerates reads by a factor of perhaps 4-10 times, while offering no comparable advantage for writes. As a result, a very busy Exadata X2 system could become bottlenecked on free buffer waits. The Exadata Smart Flash Cache write-back cache provides acceleration to datafile writes as well as reads, and therefore reduces the chance of free buffer wait bottlenecks.

The figure below illustrates the effectiveness of the write-back cache for workloads that encounter free buffer waits. The workload used to generate this data was heavily write-intensive with very little read IO overhead (all the necessary read data was in cache). As a result, it experienced a very high degree of free buffer waits and some associated buffer busy waits. Enabling the write-back cache completely eliminated the free buffer waits by effectively accelerating the write IO bandwidth of the database writer. As a result, throughput increased fourfold.

image

However, don’t be misled into thinking that the write-back cache will be a silver bullet for any workload.  Workloads that are experiencing free buffer waits are likely to see this sort of performance gain. Workloads where the dominant waits are for CPU, read IO, global cache co-ordination, log writes and so on will be unlikely to see any substantial benefit from the write-back cache.   


[1] 11.2.3.2.1 is recommended as the minimum version for this feature as it contains fixes to significant issues discovered in the initial release.

Exadata Smart Flash Logging–Outliers

In my last post, I looked at the effect of the Exadata smart flash logging.  Overall,  there seemed to be a slight negative effect on median redo log sync times.  This chart (slightly different from the last post because of different load and configuration of the system), shows how there’s a “hump” of redo log syncs that take slightly longer when the flash logging is enabled:

image

But of course, the flash logging feature was designed to improve performance not of the “average” redo log sync, but of the “outliers”. 

In my tests, I had 40 concurrent processes writing redo as fast as they could.  Occasionally this would result in some really long wait times.  For instance, in this trace you see an outlier of 291,780 microseconds (the biggest outlier in my tests BTW) within an otherwise unremarkable set of waits:

WAIT #47124064145648: nam='log file sync' ela= 1043 buffer#=101808 sync scn=1266588527 p3=0 obj#=-1 tim=1347583167588250
WAIT #47124064145648: nam='log file sync' ela= 2394 buffer#=130714 sync scn=1266588560 p3=0 obj#=-1 tim=1347583167590888
WAIT #47124064145648: nam='log file sync' ela= 932 buffer#=101989 sync scn=1266588598 p3=0 obj#=-1 tim=1347583167592057
WAIT #47124064145648: nam='log file sync' ela= 291780 buffer#=102074 sync scn=1266588637 p3=0 obj#=-1 tim=1347583167884090
WAIT #47124064145648: nam='log file sync' ela= 671 buffer#=102196 sync scn=1266588697 p3=0 obj#=-1 tim=1347583167885294
WAIT #47124064145648: nam='log file sync' ela= 957 buffer#=102294 sync scn=1266588730 p3=0 obj#=-1 tim=1347583167886575
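
As an aside, waits like these are easy enough to pull out of a 10046 trace programmatically. A hedged sketch follows: the trace file name is illustrative, and the regular expression assumes the ela= format shown above.

import scala.io.Source

val elaPattern = """ela= (\d+)""".r

// Collect the elapsed times (microseconds) of all 'log file sync' waits in the trace
val waits = Source.fromFile("spot_ora_12345.trc").getLines()
  .filter(_.contains("nam='log file sync'"))
  .flatMap(line => elaPattern.findFirstMatchIn(line).map(_.group(1).toLong))
  .toSeq

// Show the ten largest outliers
waits.sorted.takeRight(10).foreach(us => println(us + " us"))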

To see if the flash logging feature was successful in removing these outliers, I extracted the top 10,000 waits from each of the roughly 8,000,000 waits I recorded in each category.  Here’s a plot (non-logarithmic) of those waits:

image

So – the flash log feature was effective in eliminating, or at least reducing, very extreme outlying redo log sync times.    Most redo log sync operations will experience no improvement or maybe even a slight degradation. But for the small number of log syncs that would otherwise have experienced a really excessive delay, the feature works as advertised – it reduces the chance of really long log file sync waits. 

In my opinion, this effect doesn't imply that the flash can process a redo log write faster than the magnetic disks - in fact probably the opposite is true.  But given two destinations to choose from, we avoid the really long delays that occur when only one of the destinations is overloaded. 

Exadata smart flash logging

Exadata storage software 11.2.2.4 introduced the Smart flash logging feature.  The intent of this is to reduce overall redo log sync times - especially outliers - by allowing the exadata flash storage to serve as a secondary destination for redo log writes.  During a redo log sync, Oracle will write to the disk and flash simultaneously and allow the redo log sync operation to complete when the first device completes. 

Jason Arneil reports some initial observations here, and Luis Moreno Campos summarized it here.

I’ve reported in the past on using SSD for redo, including on Exadata, and generally I’ve found that SSD is a poor fit for redo log style sequential write IO.  But this architecture should at least do no harm, and on the assumption that the SSD will at least occasionally complete faster than a spinning disk, I tried it out. 

My approach involved the same workload I’ve used in similar tests.  I ran 20 concurrent processes each of which performed 200,000 updates and commits – a total of 4,000,000 redo log sync operations.  I captured every redo log sync wait from 10046 traces and loaded them in R for analysis.

I turned flash logging on or off by using an ALTER IORMPLAN command like this (my DB is called SPOT):

ALTER IORMPLAN dbplan=((name='SPOT', flashLog=$1),(name=other,flashlog=on))

And I ran “list metriccurrent where objectType='FLASHLOG'” before and after each run so I could be sure that flash logging was on or off.

When flash logging was on, I saw data like this:

Before:

     FL_DISK_FIRST                     FLASHLOG     32,669,310 IO requests
     FL_FLASH_FIRST                    FLASHLOG     7,318,741 IO requests
     FL_PREVENTED_OUTLIERS             FLASHLOG     774,146 IO requests

After:

      FL_DISK_FIRST                     FLASHLOG     33,201,462 IO requests
     FL_FLASH_FIRST                    FLASHLOG     7,337,931 IO requests
     FL_PREVENTED_OUTLIERS             FLASHLOG     774,146 IO requests

 

So for this particular cell the flash disk “won” only about 3.5% of the time ((7,337,931-7,318,741)*100/(7,337,931-7,318,741+33,201,462-32,669,310)) and prevented no additional “outliers”.  Outliers are defined as redo log syncs that would have taken longer than 500 ms to complete. 

Looking at my 4 million redo log sync times, I saw that the average and median times were statistically significantly higher when the smart flash logging was involved:

> summary(flashon.data$synctime_us) #Smart flash logging ON
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
    1.0   452.0   500.0   542.4   567.0  3999.0
> summary(flashoff.data$synctime_us) #Smart flash logging OFF
   Min. 1st Qu.  Median    Mean 3rd Qu.    Max.
   29.0   435.0   481.0   508.7   535.0  3998.0
> t.test(flashon.data$synctime_us,flashoff.data$synctime_us,paired=FALSE)

    Welch Two Sample t-test

data:  flashon.data$synctime_us and flashoff.data$synctime_us
t = 263.2139, df = 7977922, p-value < 2.2e-16
alternative hypothesis: true difference in means is not equal to 0
95 percent confidence interval:
33.43124 33.93285
sample estimates:
mean of x mean of y
542.3583  508.6763

Plotting the distribution of redo log sync times we can pretty easily see that there’s actually a small “hump” in times when flash logging is on (note logarithmic scale):

image

This is of course the exact opposite of what we expect, and I checked my data very carefully to make sure that I had not somehow switched samples.  And I repeated the test many times and always saw the same pattern.  

It may be that there is a slight overhead to running the race between disk and flash, and that that overhead makes redo log sync times slightly higher.  That overhead may become more negligible on a busy system.  But for now I personally can’t confirm that smart flash logging provides the intended optimization and in fact I observed a small but statistically significant and noticeable degradation in redo log sync times when it is enabled.

Using SSD for redo on Exadata - pt 2

In my previous post on this topic, I presented data showing that redo logs placed on an ASM diskgroup built on Exadata grid disks carved from flash performed far worse than redo logs placed on an ASM diskgroup built on spinning SAS disks.

Of course, theory predicts that flash will not outperform spinning magnetic disk for the sequential write IOs experienced by redo logs, but on Exadata, flash disk performed much worse than seemed reasonable and worse than experience on regular Oracle with FusionIO SSD would predict (see this post).

Greg Rahn and Kevin Closson were both kind enough to help explain this phenomenon.  In particular, they pointed out that the flash cards might be performing poorly because of the default 512 byte redo block size and that I should try a 4K blocksize.   Unfortunately, at least on my patch level (11.2.2.3.2), there appears to be a problem with setting a 4K blocksize

ALTER DATABASE add logfile thread 1 group 9 ('+DATA_SSD') size 4096M blocksize 4096
*
ERROR at line 1:
ORA-01378: The logical block size (4096) of file +DATA_SSD is not compatible with the disk sector size (media sector size is 512 and host sector size is 512)

According to Greg, the F20 SSD cards are incorrectly reporting their physical characteristics and this is fixed in the current patch level.   Luckily, you can override the check by setting

ALTER SYSTEM SET "_disk_sector_size_override"=TRUE SCOPE=BOTH;

Greg and Kevin really know their stuff:  setting a 4k redo log block size resulted in dramatic improvements to redo log throughput – elapsed time reduced by 70%:

image

As expected,  redo log performance for SSD still slightly lags that of SAS spinning disks.     It’s clear that you can’t expect a performance improvement by placing redo on SSD, but at least the 4K blocksize fix makes the response time comparable.  Of course, with the price of SSD being what it is, and the far higher benefits provided for other workloads – especially random reads – it’s hard to see an economic rationale for SSD-based redo.    But at least with a 4K blocksize it’s tolerable.

When our Exadata system is updated to the latest storage cell software, I’ll try comparing workloads with the Exadata smart flash logging feature.

Using flash disk for Redo on Exadata

In this Quest white paper and on my SSD blog here,  I report on how using a FusionIO flash SSD compares with SAS disk for various configurations – datafile, flash cache, temp tablespace and redo log.  Of all the options I found that using flash for redo was the least suitable, with virtually no performance benefit:

image

That being the case,  I was surprised to see that Oracle had decided to place Redo logs on flash disk within the database appliance, and also that the latest release of the exadata storage cell software used flash disk to cache redo log writes (Greg Rahn explains it here).   I asked around at OOW hoping someone could explain the thinking behind this, but generally I got very little insight.

I thought I better repeat my comparisons between spinning and solid state disk on our Exadata system here at Quest.  Maybe the “super capacitor” backed 64M DRAM on each flash chip would provide enough buffering to improve performance.  Or maybe I was just completely wrong in my previous tests (though I REALLY don’t think so :-/).

Our Exadata 1/4 rack has a 237GB disk group constructed on top of storage cell flash disk.  I described how that is created in this post.   I chose 96GB per storage cell in order to allow the software to isolate the grid disks created on flash to 4 x 24GB FMODs (each cell has 16 FMODs).    Our Exadata system has fast SAS spinning disks – 12 per storage cell for a total of 36 disks.  Both the SAS and SSD disk groups had normal redundancy.

I ran an identical redo-intensive workload on the system using the SAS or SSD diskgroups for the redo logs.  Redo logs were 3 groups of 4GB per instance.   I ran the workload on its own, and as 10 separate concurrent sessions.  

The results shocked me:

image

When running at a single level of concurrency, the SSD-based ASM redo seemed to be around 4 times slower than the default SAS-based ASM redo.  Things got substantially worse as I upped the level of concurrency, with SSD being almost 20 times slower.  Wow.

I had expected the SAS based redo to win – the SAS ASM disk group has 36 spindles to write to, while the SSD group is (probably) only writing to 12 FMODs.  And we know that we don’t expect flash to be as good as SAS disks for sequential writes.  But still, the performance delta is remarkable. 

Conclusion

 

I’m yet to see any evidence that putting redo logs on SSD is a good idea, and I keep observing data from my own tests indicating that it is neutral at best and A Very Bad Idea at worst.  Is anybody seeing anything similar?  Does anybody think there’s a valid scenario for flash-based redo?

Comparing Hadoop Oracle loaders

Oracle put a lot of effort into highlighting the upcoming Oracle Hadoop Loader (OHL) at OOW 2011 – it was even featured in Andy Mendelsohn's keynote.  It’s great to see Oracle recognizing Hadoop as a top tier technology!

However, there were a few comments made about the “other loaders” that I wanted to clarify.  At Quest, I lead the team that writes the Quest Data Connector for Oracle and Hadoop (let’s call it the “Quest Connector”) which is a plug-in to the Apache Hadoop SQOOP framework and which provides optimized bidirectional data loads between Oracle and Hadoop.  Below I’ve outlined some of the high level features of the Quest Connector in the context of the  Oracle-Hadoop loaders.  Of course, I got my information on the Oracle loader from technical sessions at OOW so I may have misunderstood and/or the facts may change between now and the eventual release of that loader.  But I wanted to go on the record with the following:

  • All parties (Quest, Cloudera, Oracle) agree that native SQOOP (i.e., without the Quest plug-in) will be sub-optimal: it will not exploit Oracle direct path reads or writes, will not use partitioning, nologging, etc.   Both Cloudera and Quest recommend that if you are doing transfers between Oracle and Hadoop you use SQOOP with the Quest connector.
  • The Quest connector is a free, open source plug in to SQOOP, which is itself a free, open source software product.  Both are licensed under the Apache 2.0 open source license.  Licensing for the Oracle Loader has not been announced, but Oracle has said it will be a commercial product and therefore presumably not free under all circumstances.   It’s definitely not open source.
  • The Quest loader is available now (version 1.4), the Oracle loader is in beta and will be released commercially in 2012.
  • The Oracle loader moves data from Hadoop to Oracle only.  The Quest loader can also move data from Oracle to Hadoop.   We import data into Hadoop from an Oracle database usually 5+ times faster than SQOOP alone.
  • Both Quest and the Oracle loader use direct path writes when loading from Hadoop to Oracle.  Oracle do say they use OCI calls which may be faster than the direct path SQL calls used by Quest in some circumstances.   But I’d suggest that the main optimization in each case is direct path.
  • Both Quest and the Oracle loader can do parallel direct path writes to a partitioned Oracle table.  In the case of the Quest loader, we create partitions based on the job and mapper ids.  Oracle can use logical keys and write into existing partitioned tables.  My understanding is that they will shuffle and sort the data in the mappers to direct the output to the appropriate partition in bulk.  They also do statistical sampling which may improve the load balancing when you are inserting into an existing table. 
  • The Quest loader can update existing tables, and can do merge operations that insert or update rows depending on the existence of a matching key value.  My understanding is that the Oracle loader will do inserts only - at least initially.
  • Both the Quest connector and the Oracle loader have some form of GUI.  The Oracle GUI I believe is in the commercial ODI product.  The Quest GUI is in the free Toad for Cloud Databases Eclipse plug-in.  I’ve put a screenshot of that at the end of the post.
  • The Quest connector uses the SQOOP framework which is a Apache Hadoop sub-project maintained by multiple companies most notably Cloudera.  This means that the Hadoop side of the product was written by people with a lot of experience in Hadoop.   Cloudera and Quest jointly support SQOOP when used with the Quest connector so you get the benefit of having very experienced Hadoop people involved as well as Quest people who know Oracle very well.   Obviously Oracle knows Oracle better than anyone, but people like me have been working with Oracle for decades and have credibility I think when it comes to Oracle performance optimization.

Again,  I’m happy to see Oracle embracing Hadoop;  I just wanted to set the record straight with regard to our technology, which exists today as a free tool for optimized bi-directional data transfer between Oracle and Hadoop. 

You can download the Quest Connector at http://bit.ly/questHadoopConnector.  The documentation is at  http://bit.ly/QuestHadoopDoc.

image

image

Configuring Exadata flash as grid disk

The default – or at least a very common – configuration for Exadata is to configure all the flash as Exadata Smart Flash Cache (ESFC).   This is a simple and generally performant configuration, but it won’t be the best choice for all cases.  In particular, if you have a table which is performance critical, and it could fit in the flash storage you have available, you might be better off configuring some of your flash as grid disk, creating an ASM disk group from that, and putting the table there.

Here’s the procedure:

1. Drop the flash cache, create a new flashcache of a smaller size, then create the griddisks from the unallocated space.  These CELLCLI commands do that:

CellCLI> drop flashcache
Flash cache exa1cel01_FLASHCACHE successfully dropped
CellCLI> create flashcache all size=288g
Flash cache exa1cel01_FLASHCACHE successfully created
CellCLI> create griddisk all flashdisk prefix=ssddisk

There’s 384G of flash on each storage cell, so the above commands create about 96G of SSD grid disk.   Run those commands on each cell node, perhaps by using the dcli command (see this post for an example).

2. The above procedure will create disks in the format o/cellIpAddress/ssddisk_FD_*_cellnode.  Log into an ASM instance, and issue the following command to create a diskgroup from those disks:

SQL> 
  1  create diskgroup DATA_SSD normal redundancy disk 'o/*/ssddisk*'
  2  attribute 'compatible.rdbms'='11.2.0.0.0',
  3  'compatible.asm'='11.2.0.0.0',
  4  'cell.smart_scan_capable'='TRUE',
  5* 'au_size'='4M'

Alternatively you can use the database control for the ASM instance to create the new diskgroup.  Your new flash disks should show up as candidate disks.

The relative performance of flash grid disks vs flash cache is similar in Exadata to what I’ve seen using the Database Flash Cache.  Placing an object directly on flash is faster than using the cache, although the cache is very effective.  Here are the results for 200,000 primary key lookups across 1,000,000 possible primary keys:

image
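
For reference, the lookup test itself is a simple loop along the lines of the sketch below. This is illustrative only: the table and column names are made up, and it assumes an existing connection con.

// 200,000 lookups by random primary key against a ~1,000,000 row table
val rand = new scala.util.Random
val ps = con.prepareStatement("SELECT cust_data FROM flash_test WHERE cust_id = ?")
val start = System.currentTimeMillis
for (i <- 1 to 200000) {
  ps.setInt(1, rand.nextInt(1000000) + 1)   // random key in 1..1,000,000
  val rs = ps.executeQuery()
  while (rs.next()) { rs.getString(1) }
  rs.close()
}
println("Elapsed: " + (System.currentTimeMillis - start) + " ms")
ps.close()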

Clearing the Exadata smart flash cache using dcli

I’ve been doing some performance benchmarks on our Exadata box, specifically focusing on the performance of the smart flash cache.  I found that even if I switched the CELL_FLASH_CACHE storage setting to NONE, the flash cache would still keep cached blocks in flash and would therefore give me artificially high values for the “cell flash cache read hits” statistic when I set CELL_FLASH_CACHE back to DEFAULT or KEEP.  What I needed was a way to flush the Exadata flash cache.

Unfortunately there doesn’t seem to be a good way to flush the flash cache – no obvious CELLCLI command.   Maybe I’ve missed something, but for now I’m dropping and recreating the flash cache before each run.

Luckily the dcli command lets me drop and recreate the flash cache on each cell directly from the database node, and even sets up passwordless connections.  Here’s how to do it.

Firstly, create a script that will drop and recreate the flash cache for a single cell:

$ cat flushcache.sh
cellcli <<!
drop flashcache;
create flashcache all;
!

Now, use dcli to execute that on each cell node (I have three, named exa1cel01, exa1cel02 and exa1cel03):

$ dcli -c exa1cel01,exa1cel02,exa1cel03 --serial -k -l userid -x flushcache.sh

The “-k” option copies the ssh key to the cell nodes, which means that after the first execution you’ll be able to do this without typing in the password for each cell node.   The “--serial” option makes each command happen one after another rather than all at once – you probably don’t need this…

Anyone know a better way to flush the Exadata flash cache?

A perl utility to improve exadata cellcli statistics

I’ve only recently had the opportunity to start experimenting with our relatively new Exadata 1/4 rack.  Like most Exadata newbs, I’ve found the most immediately useful new tool to be the cellcli utility, which lets us get at the storage cell metrics.  Kerry Osborne’s blog post on top 10 commands is required reading when getting started.

Amongst Cellcli annoyances are:

  1. You can’t join between METRICDEFINITION and METRICCURRENT or other metric tables.  So you can’t easily see the descriptions of the metric values you are viewing.
  2. CELLCLI only reports the values for the current storage cell, so you can’t get a global view across the cells.

I decided a good first project would be to write a utility that could display the metric definitions and the values together, and which would merge values from all of the storage cells.   Since my old friend perl is present on the storage cells, it wasn’t too hard to put together a tool to do this.   The script is here

Setup

 

You’ll want to have password-less SSH connectivity from the database node to the cells for this to work.   Many of us will be familiar with doing this for RAC, but if you’re not, a quick primer is available at  http://blogs.translucentcode.org/mick/archives/000230.html.   I wrote this script to partially automate the procedure:

# Helper script to setup passwordless SSH
if [ $# -eq 0 ]
then
    echo "Usage $0 remoteHost"
    exit 1
fi

ssh-keygen -t dsa
while [ $# -gt 0 ]; do
    export remoteHost=$1
    scp ~/.ssh/id_dsa.pub ${remoteHost}:
    ssh $remoteHost 'mkdir .ssh; cat ~/id_dsa.pub >>~/.ssh/authorized_keys;chmod 644 ~/.ssh/authorized_keys '
    shift
done

 

From a database node, run the script with the names of all the storage cells on the command line.  I have three cells so I issued the command like this:

[melbourne@exa1db02 ~]$ bash setup_ssh.sh exa1cel01 exa1cel02 exa1cel03

Then you will be prompted with various SSH messages about passphrases, and so on.  Just hit return or “y” if prompted, except when asked for passwords.  You’ll have to enter the password for each storage cell twice to get everything setup. After that you should be able to ssh to the storage cells without being prompted for a password.

Usage

 

Running the command with no arguments will generate the following usage message:

 

[melbourne@exa1db02 ~]$ perl cellcli.pl
Usage perl cellcli.pl [options]
Options:
        --hosts -H      comma seperated lists of cell nodes
        --mask -m       print metrics starting with this string
        --desc -d       print descriptions of metrics
        --cells -c      print individual cell metrics
        --sci -s        Use scientific notation
        --stats -a      Print min/max/avgs
        --help -h       print this

Example:
        perl cellcli -m FC_IO_RQ_R -h exa1cel01,exa1cel02,exa1cel03

NB: utility assumes passwordless SSH from this host to the cell nodes
see
http://guyharrison.squarespace.com/blog/2011/7/31/aggregating-exadata-cellcli-statistics.html

 

The only compulsory argument is “--hosts”, which requires a comma-separated list of the cell nodes.    Most of the time you will want to use the “--mask” argument as well, which applies a filter to the metric names.   With no other arguments you’ll then get a list of each matching statistic and the sum of values across all the cells:

[melbourne@exa1db02 ~]$ perl cellcli.pl --hosts exa1cel01,exa1cel02,exa1cel03 --mask FC_IO_RQ_R.*

Name                    Sum
----------------------- ------------
FC_IO_RQ_R               8.55164e+06
FC_IO_RQ_R_MISS               675223
FC_IO_RQ_R_MISS_SEC                0
FC_IO_RQ_R_SEC                   0.1
FC_IO_RQ_R_SKIP            1.742e+08
FC_IO_RQ_R_SKIP_SEC             33.6

Note that mask matches Unix regular expressions, not Oracle style “%” wildcards - '.*' matches anything. 

The --desc argument adds the description (somewhat truncated) for each metric:

image

The --cells argument prints the values from each individual cell:

image

 

The --stats argument causes min, max and average values to be printed.  The average is the average of sums across cells, not the average of all values.  Some statistics will have more than one value in each cell (GD_IO_RQ_W_SM for example), and in that case the average is the average SUM of values across cells.  

image

You can use all the arguments together, but your output might be very wide!

Restrictions and caveats

 

When a metric has multiple object values they are simply summed.  That probably works OK for some statistics but maybe not for others.  For instance the disk grid IO counters are broken down by the name of the grid disk within the cell:

image

My utility simply sums all of those values:

image

 

 

Hope you find this of use.  Please let me know if you’ve got any suggestions…..

 

A first look at Oracle on Amazon RDS

It's been possible for some time to run Oracle within the Amazon cloud.  I wrote an introduction to Oracle on EC2 back in 2008, and we use EC2 pretty extensively here at Quest for testing and deployment.   Up until now, Oracle has been made available within EC2 instances which appear for all purposes to be remote hosts.  Recently, Amazon made Oracle available in a more directly cloud-based fashion as one of the options in their Relational Database Service (RDS).  Using RDS, Amazon takes complete responsibility for database management, providing a more complete Database as a Service (DBaaS) offering, similar to what Microsoft offers with SQL Azure.   Amazon handles installation, upgrades, backups and so forth.  In theory, you can use the database without needing to do any administrative work.

Indeed Amazon takes on so much responsibility that long-time DBAs like me are unlikely to be really happy; the very notion of being unable to connect to the operating system, for instance, is disturbing, and it’s currently missing support for many of our favourite tools.

However I wanted to give it a test drive, so here's some screenshots and observations to give you a feel for it.

Creating the Oracle RDS database

 

To use any of the Amazon cloud facilities, you need an Amazon AWS account.  Once you have that, you create a new RDS database from the AWS console, where you can launch a new MySQL or Oracle database:

image

You then specify the parameters for the new instance.  At the time of writing, some of these screens clearly had not been updated from the older MySQL options.  Here we can specify:

  • Type of license.   If you have a site license you can use that (as I have below).  If not you can pay Amazon by the hour, and Amazon presumably kicks some of that back to Oracle.
  • Oracle version.  Note also that you can ask to have the version updated for you with patches automatically – quite cool. 
  • Instance Class.  This is the size of the virtual machine and determines memory and CPU availability.  These match the equivalent EC2 instance types (see http://aws.amazon.com/ec2/#instance) and vary from 1 core with 1.7GB to 8 (faster) cores with 64GB.   Charging varies accordingly of course.
  • DB instance identifier.  At first you might think that is the database service name, but NO, it’s an Amazon identifier.  You specify the service name on the next page.
  • Username and Password of the DBA account you’ll use.

image

On the next page we specify the database (e.g. service/SID) name and the port.  NOTE that at the moment the default port is 3306, which is the MySQL listener port – you should change it if you want a more familiar listener port like 1521.

image

 

Click OK a couple more times and your database will start up.  Takes 5 minutes or so to get running.

Connecting to the database

 

Before you can connect, however, you’ll need to create a security group which essentially defines the Amazon firewall for your instance.   Below I’m configuring it to let anything through (not ideal, but I travel a lot so I can’t predict my IP address range).  The CIDR/IP is a mask that defines what IP addresses will be allowed to connect to the database instance. 

image

Now we can connect.  We need to know the hostname, which is shown as the “Endpoint” in the DB Instance description page.    The port and service name are also here:

image

We could create a TNSNAMES entry using those details, or connect via SQL*Plus in the form username/password@host:port/Service.  Below, I specify these details directly into TOAD:

 

image
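
The same details work from JDBC. For example, reusing the OracleDataSource approach from the Scala post earlier, the Endpoint simply goes where the hostname normally would (all the values below are placeholders):

import oracle.jdbc.pool.OracleDataSource

val ods = new OracleDataSource()
ods.setUser("admin")                 // the master username chosen at instance creation
ods.setPassword("secret")
// Endpoint hostname, port and service name as shown on the DB Instance description page
ods.setURL("jdbc:oracle:thin:@myrdsdb.abc123xyz.us-east-1.rds.amazonaws.com:1521/ORCL")
val con = ods.getConnection()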

Restrictions

 

As easy as it is to get started with RDS, there are a lot of restrictions.  Most significantly:

  • You can’t connect to the OS. 
  • Datafiles must be Oracle Managed Files.  No filesystem specifications are allowed.  ASM is not supported.
  • Some commands are not supported, including ALTER SYSTEM.  However there are PLSQL packages that provide a lot of the common facilities such as switching logs, checkpointing, tracing and killing sessions.
  • You can only access the alert log and trace files via an external table interface
  • No OEM!!!  And for that matter, monitoring products such as Quest’s own Spotlight and Foglight cannot be installed (due to the above restrictions).   Even some parts of TOAD won’t function since they rely on forbidden commands 

The idea with RDS and these missing features  is that “you ain’t going to need it” -  the database is self managing and these DBA utilities simply aren’t required.   For more details check out this part of the RDS manual.

 

Changing the database CPU and memory

 

One of the great things about RDS is that you can resize the instance.  If  you want more memory or CPU, you can use the modify tab to change the specification:

image

This does require a reboot, which either happens during the maintenance window or – if you so specified – immediately.   Still, much easier than achieving the same result on EC2 – which is almost impossible.

Monitoring

 

Although there’s no OEM and no way to run Spotlight or other 3rd party monitoring tools, there is a very basic monitoring screen.

image

Parameters

Parameters are managed through parameter groups, which assign values dynamically.  For instance, in the default parameter group, memory target is set to 75% of total system memory.  If you reboot the system into a higher class, the memory will change.  You can change the values for a parameter group and you can create your own.

image

Conclusion

 

End to end, it probably takes about 10 minutes to get set up once you have your account details.  Not bad.  And Amazon takes care of backups and the like.

However, it’s probably not something I’ll ever do again, because I can’t really cope with the idea of not being able to go directly to the alert log, trace files, etc.  And I definitely don’t like not being able to use tools like Spotlight or even OEM.   Amazon provides workarounds for lots of this stuff – see this page for instance – but I’m too used to the command line.  Still, it’s worth knowing how this all works and – who knows – it might be handy one day when I want to quickly set up a 64GB SGA database.

RJDBC 2.0 performance improvements

But hopefully this will be the last RJDBC posting that I make for a while :-).

Simon Urbanek has made some fixes to RJDBC which resolve the performance issues I referred to in my last post.  As you can see below, these fixes have led to some pretty dramatic performance improvements:

image

Furthermore, RJDBC is substantially easier to install than Roracle, and is of course portable across different databases.

If you’re really concerned with RJDBC performance and have memory to burn, you can improve performance somewhat by selecting all rows in a single internal fetch, like this:

jdata2<-fetch(dbSendQuery(jcon,sqltext),n=rowsToFetch);

This improves performance, though Roracle still has the advantage:

image

You can get the new RJDBC here.

RORACLE vs RJDBC

In this post I looked at hooking up the R statistical system to Oracle, either for analysing database performance data or for analysing other data that happens to be stored in Oracle.   A reader asked me if I’d compared the performance of RJDBC – which I used in that post – with the ROracle package.  I hadn’t, but now I have, and I found some pretty significant performance differences.

RJDBC hooks up pretty much any JDBC datasource to R, while ROracle uses native libraries to mediate the connection.  RJDBC is probably easier to start with, but ROracle can be installed pretty easily, provided you create some client library entries.  So for me, after downloading the ROracle package, my install looked something like this (run it as root):

cd $ORACLE_HOME/bin
genclntsh
genclntst

R CMD INSTALL --configure-args='--enable-static' /root/Desktop/ROracle_0.5-9.tar.gz

It was pretty obvious right away that RJDBC was much slower than ROracle.  Here’s a plot of elapsed times for variously sized datasets:

image

The performance of RJDBC degrades fairly rapidly as the size of the data being retrieved from Oracle increases,  and would probably be untenable for very large data sets. 

The RJDBC committers do note that RJDBC performance will not be optimal:

The current implementation of RJDBC is done entirely in R, no Java code is used. This means that it may not be extremely efficient and could be potentially sped up by using Java native code. However, it was sufficient for most tasks we tested. If you have performance issues with RJDBC, please let us know and tell us more details about your test case.

The default array size used by RJDBC is only 10, while the default for ROracle is 500… could this be the explanation for the differences in performance?

You can’t change the default RJDBC fetch size (at least, I couldn’t work out how to), but you can change ROracle's.  Here’s a breakdown of elapsed time for RJDBC and Roracle using defaults, and for ROracle using a fetch size of 10:

image

As you can see, the issue does not seem to be the array size alone.   I suspect the overhead of building up the data frame from the result set in RJDBC is where the major inefficiency occurs.  Increasing the array fetch size might reduce the impact of this, but the array fetch size alone is not the cause of the slow performance.    

Conclusion

The current implementation of RJDBC is easy to install and fairly portable, but doesn’t provide good performance when loading large data sets.   For now, ROracle is the better choice.   

Scripts:

Create test table (SQL)

scalability testing (R script)

Comparison of elapsed times and array size (R script).

Statistical analysis of Oracle performance data using R

R is without doubt the Open Source tool of choice for statistical analysis; it contains a huge variety of statistical analysis techniques – rivalled only by hugely expensive commercial products such as SAS and SPSS.   I’ve been playing with R a bit lately, and – of course – working with data held in Oracle.  In particular, I’ve been playing with data held in the Oracle dynamic performance views.

 

This post is a brief overview of installing R, connecting R to Oracle, and using R to analyse Oracle performance data.

Installing R

 

R can be installed in Linux as a standard package:

yum install R

On Windows, you may wish to use the Revolution R binaries:  http://info.revolutionanalytics.com/download-revolution-r-community.html.  I had a bit of trouble installing the 32-bit binaries on my system as they conflicted with my 64-bit JDBC.  But if you are 32-bit you might be OK.

The easiest way to set up a connection to Oracle is to install the RJDBC package.

[oracle@GuysOEL ~]$ R

R version 2.12.1 (2010-12-16)
Copyright (C) 2010 The R Foundation for Statistical Computing
ISBN 3-900051-07-0
Platform: x86_64-redhat-linux-gnu (64-bit)

<snip>

Type 'demo()' for some demos, 'help()' for on-line help, or
'help.start()' for an HTML browser interface to help.
Type 'q()' to quit R.

> install.packages("RJDBC")

Using the StatET Eclipse plug-in

 

I use a free Eclipse plug-in called StatET.  It provides an editing environment and GUI console for the R system.  The configuration steps are a little laborious, but it has an online getting-started module that guides you through the steps.  Once you have it installed, I doubt you’ll go back to the command line.

 

 

You can get StatET at http://www.walware.de/goto/statet.  Using the eclipse environment is really handy if you’re going to use R with Oracle, since you can also use the free Toad for Eclipse extension to work on your SQLs.  Eclipse becomes a complete environment for both R and Oracle.

Getting data from Oracle into R

 

Once you’ve installed R, it’s pretty simple to get data out of Oracle and into R.   Here’s a very short snippet that grabs data from the V$SQL table:

   1: library(RJDBC)
   2:  
   3: drv <- JDBC("oracle.jdbc.driver.OracleDriver",
   4:                 "/ora11/home/jdbc/lib/ojdbc6.jar")
   5:  
   6: conn <- dbConnect(drv,"jdbc:oracle:thin:@hostname:1521:service","username","password")
   7: sqldata<-dbGetQuery(conn, "SELECT cpu_time cpu,elapsed_time ela,disk_reads phys,
   8:                                   buffer_gets bg,sorts sorts
   9:                              FROM V$SQL ")
  10: summary(sqldata)

 

Let’s look at that line by line:

 

Line 1: The library command loads the RJDBC package, which provides connectivity to Oracle.
Line 3: We create a driver object for the Oracle JDBC driver.  The second argument is the location of the Oracle JDBC jar file, almost always $ORACLE_HOME/jdbc/lib/ojdbc6.jar.
Line 6: Connect to the Oracle database using a standard JDBC connection string.
Line 7: Create an R dataset from the result set of a query.  In this case, we are loading the contents of the V$SQL table.
Line 10: The R “summary” function provides simple descriptive statistics for each variable in the provided dataset.

 

Basic R statistical functions

R has hundreds of statistical functions.  In the above example we used “summary”, which prints descriptive statistics.  The output – means, medians, percentiles and so on – is shown below:

image
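You can of course compute the same sort of statistics column by column; for instance, using the ELA (elapsed time) column from the query above:

# A few of the building blocks behind summary()
mean(sqldata$ELA)
median(sqldata$ELA)
quantile(sqldata$ELA, probs = c(0.5, 0.9, 0.99))   # median, 90th and 99th percentiles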

Correlation

Statistical correlation reveals the association between two numeric variables.  If two variables always increase or decrease together the correlation is 1;  if two variables are completely random with respect to each other then the correlation tends towards 0.

cor prints the correlation between every pair of variables in the data set:

image

cor.test calculates the correlation coefficient and prints out the statistical significance of the correlation, which allows you to determine if there is a significant relationship between the two variables.  So does the number of sorts affect response time?  Let’s find out:
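A hedged sketch of the calls involved – cor for the full correlation matrix shown above, and cor.test for the significance test (the column names follow the aliases used in the earlier query):

# Correlation matrix for every pair of columns
cor(sqldata)
# Significance test for sorts vs elapsed time
cor.test(sqldata$SORTS, sqldata$ELA)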

image

The p-value is 0.19, which indicates no significant relationship – p-values of no more than 0.05 (one chance in 20) are usually required before we assume statistical significance.

On the other hand,  there is a strong relationship between CPU time and Elapsed time:

image

Plotting

plot prints a scattergram chart.  Here’s the output from plot(sqldata$ELA,sqldata$CPU):

image

Here’s a slightly more sophisticated chart using “smoothScatter”, logarithmic axes and labels for the axes:
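The call behind a chart like that might look something like the following – the log10 transform and axis labels are illustrative, and rows with zero values are dropped first to avoid taking the log of zero:

# Density scatterplot on log scales
nz <- subset(sqldata, ELA > 0 & CPU > 0)
smoothScatter(log10(nz$ELA), log10(nz$CPU),
              xlab = "log10(elapsed time)", ylab = "log10(CPU time)")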

image

Regression

Regression is used to draw “lines of best fit” between variables.  

In the simplest case, we use the “lm” function to create a linear regression model between two variables (which we call “regdata” in the example).  The summary function prints a summary of the analysis:
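Assuming elapsed time as the dependent variable and physical reads as the predictor (as in the discussion that follows), the model might be built like this:

# Linear regression of elapsed time on physical reads
regdata <- lm(ELA ~ PHYS, data = sqldata)
summary(regdata)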

image

This might seem a little mysterious if your statistics are a bit rusty, but the data above tells us that there is a significant relationship between elapsed time (ELA) and physical reads (PHYS), and gives us the gradient and Y-axis intercept if we wanted to draw the relationship.   We can get R to draw a graph, and plot the original data, by using the plot and abline functions:
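Something like the following, reusing the regdata model from above, would produce that sort of chart:

# Scatterplot of the raw data with the fitted line overlaid
plot(sqldata$PHYS, sqldata$ELA, xlab = "Physical reads", ylab = "Elapsed time")
abline(regdata)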

image

Testing a hypothesis

 

One of the benefits of statistical analysis is that you can test hypotheses about your data.  For instance, let’s test the (until recently) widely held notion that the buffer cache hit rate is a good measure of performance.  If that were true, we might suppose that SQL statements with high buffer cache hit rates would show smaller elapsed times than those with low buffer cache hit rates.  To be sure, there are certain hidden assumptions underlying that hypothesis, but for the sake of illustration let’s use R to see if our data supports the hypothesis.

Simple correlation is a fair test for this: all we need to do is see if there is a statistically significant correlation between hit rate and elapsed time.  Here’s the analysis:
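One way to reconstruct that analysis is to compute a per-statement hit ratio from buffer gets and disk reads and then test its correlation with elapsed time.  The query and column list below are assumptions, but the shape of the test is the same:

# Per-statement buffer cache hit ratio: 1 - (physical reads / buffer gets)
hitdata  <- dbGetQuery(conn, "SELECT elapsed_time ela, buffer_gets bg, disk_reads phys
                                FROM v$sql WHERE buffer_gets > 0")
hitratio <- 1 - hitdata$PHYS / hitdata$BG
cor.test(hitratio, hitdata$ELA)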

image

The correlation is close to 0, and the p-value is way higher than the widely accepted .05 threshold for statistical significance.  Statements with high hit ratios do not show statistically significantly lower elapsed times than SQLs with low hit ratios.

Conclusion

 

There’s tons of data in our Oracle databases that could benefit from statistical analysis – not the least the performance data in the dynamic performance views, ASH and AWR.  We use statistical tests in Spotlight on Oracle to extrapolate performance into the future and to set some of the alarm thresholds.  Using R,  you have easy access to the most sophisticated statistical analysis techniques and as I hope I’ve shown, you can easily integrate R with Oracle data.

Correlation probability in Oracle

Being at the O’Reilly Strata conference has re-energized my inner statistics geek, so I thought I’d share a PL/SQL package I wrote some years ago to calculate the probability associated with correlation coefficients calculated by Oracle.

Correlation is a statistical measure of the association between two variables.  Oracle has supported correlation directly in the SQL language (the CORR function) since at least 10g (it may even have been 9i), so we can write SQL statements that calculate correlations.  For instance, this statement generates the correlation between the number of sorts and the elapsed times of SQL statements still cached in the SQL area:

The correlation coefficient “r” will be 1 if the two variables always increase or decrease together and –1 if one variable always increases as the other decreases.  A correlation coefficient of .33 probably represents some relationship, but anyone who has done statistics knows that you should calculate the statistical significance – the probability that the result arose purely by chance – before making assertions.

Oracle doesn’t report correlation probabilities, so I implemented the formula to calculate the probability in  a PLSQL package.  It’s here.

If you pass the correlation coefficient and the number of samples (rows input to the CORR function) into the CORR_PROG_PKG.PROB_R function then it will show you the probability associated with that correlation.  So below we see the probability that this correlation is due to random fluctuation is effectively 0 (down to 4 significant figures):

Generally, if the probability is less than .05 or .01 (one in 20 or one in 100 respectively), then we would describe the correlation as statistically significant and conclude that the two variables are related.  So we would be justified in claiming there was a relationship between sorts and elapsed time.

Just for kicks,   let’s use this technique to test the hypothesis that the Buffer Cache Hit ratio is a reliable indicator of performance.  Here’s the result:

The correlation is very low and the probability that the correlation is due to random fluctuation is very high – way above the .05 threshold.  In other words, in this database there is no statistically significant relationship between the elapsed time of an SQL and its buffer cache hit rate!

Working with Cassandra 0.7

In this post, I experimented with inserting data from Oracle into Cassandra column families using Hector.  Unfortunately, that code isn’t compatible with the latest Cassandra 0.7 release, so I had to rework it.  The new version uses the addInsertion method of the Mutator object, and while not totally intuitive, it didn’t take long to get working.  Here are the key changes:

   1: private static void insertSales(Connection oracleConn, Keyspace keyspace,
   2:         String cfName) throws SQLException {
   3:     int rows = 0;
   4:     ColumnPath cf = new ColumnPath(cfName);
   5:     Statement query = oracleConn.createStatement();
   6:  
   7:     String sqlText = "SELECT cust_id, cust_first_name,  cust_last_name, prod_name, "
   8:             + "           SUM (amount_sold) sum_amount_sold,sum(quantity_sold) sum_quantity_sold "
   9:             + "          FROM sh.sales    "
  10:             + "          JOIN sh.customers USING (cust_id) "
  11:             + "          JOIN sh.products  USING (prod_id)  "
  12:             + "         GROUP BY cust_id, cust_first_name,  cust_last_name,  prod_name "
  13:             + "         ORDER BY cust_id, prod_name ";
  14:     ResultSet results = query.executeQuery(sqlText);
  15:     int rowCount = 0;
  16:     int lastCustId = -1;
  17:     while (results.next()) { // For each customer
  18:         Integer custId = results.getInt("CUST_ID");
  19:         String keyValue = custId.toString();
  20:  
  21:         if (rowCount++ == 0 || custId != lastCustId) { // New Customer
  22:             String custFirstName = results.getString("CUST_FIRST_NAME");
  23:             String custLastName = results.getString("CUST_LAST_NAME");
  24:             System.out.printf("%s %s\n", custFirstName, custLastName);
  25:             // Create a supercolumn for customer details (first, lastname)
  26:             Mutator<String> mutator = HFactory.createMutator(keyspace,
  27:                     stringSerializer);
  28:             mutator.addInsertion(keyValue, cfName, HFactory
  29:                     .createSuperColumn("CustomerDetails", Arrays
  30:                             .asList(HFactory.createStringColumn(
  31:                                     "customerFirstName", custFirstName)),
  32:                             StringSerializer.get(), StringSerializer.get(),
  33:                             StringSerializer.get()));
  34:             mutator.addInsertion(keyValue, cfName, HFactory
  35:                     .createSuperColumn("CustomerDetails", Arrays
  36:                             .asList(HFactory.createStringColumn(
  37:                                     "customerLastName", custLastName)),
  38:                             StringSerializer.get(), StringSerializer.get(),
  39:                             StringSerializer.get()));
  40:  
  41:             mutator.execute();
  42:         }
  43:         // Insert product sales total for that customer
  44:         String prodName = results.getString("PROD_NAME");
  45:         Float SumAmountSold = results.getFloat("SUM_AMOUNT_SOLD");
  46:         Float SumQuantitySold = results.getFloat("SUM_QUANTITY_SOLD");
  47:         // Supercolumn name is the product name
  48:         Mutator<String> mutator = HFactory.createMutator(keyspace,
  49:                 stringSerializer);
  50:         mutator.addInsertion(keyValue, cfName, HFactory.createSuperColumn(
  51:                 prodName, Arrays.asList(HFactory.createStringColumn(
  52:                         "AmountSold", SumAmountSold.toString())),
  53:                 StringSerializer.get(), StringSerializer.get(),
  54:                 StringSerializer.get()));
  55:         mutator.addInsertion(keyValue, cfName, HFactory.createSuperColumn(
  56:                 prodName, Arrays.asList(HFactory.createStringColumn(
  57:                         "QuantitySold", SumQuantitySold.toString())),
  58:                 StringSerializer.get(), StringSerializer.get(),
  59:                 StringSerializer.get()));
  60:         mutator.execute(); 
  61:         lastCustId = custId;
  62:         rows++;
  63:     }
  64:     System.out.println(rows + " rows loaded into " + cf.getColumn_family());
  65: }

The reason why I wanted to do this was to play with Cassandra using our (relatively) new Toad for Cloud Databases Eclipse client.  Toad for Cloud Databases lets you work with non-relational datasources such as Cassandra, HBase, SimpleDB, etc, using SQL.   

Here’s how it works.  We select the column family we want to map from the Cassandra server:

[Screenshot: selecting the Cassandra column family to map in Toad for Cloud Databases]

That column family contains data loaded from both the Oracle CUSTOMER and SALES tables.   Toad recognizes that the data in that single column family is best represented by two normalized tables,  and gives us the opportunity to specify the names for the primary and foreign keys.  We can also rename the “tables” (more like views really) that Toad will create:

[Screenshot: specifying table names and key columns for the mapping]

The resulting tables look similar to the tables that we originally loaded from Oracle, and we can issue SQL queries against them just as we could have with Oracle.  The queries get translated from SQL to thrift calls against the underlying Cassandra Server:

[Screenshot: SQL query against the mapped Cassandra tables]

I definitely find it easier to issue SQL than write a 200-line Java program to do the same thing!  Of course, I'm not much of a Java programmer, but at a minimum, having Toad to query the Cassandra data is invaluable when checking that your program did what it was intended to do.

Accelerating Oracle database performance with SSD

I was recently asked to provide advice on how best to accelerate Oracle database performance with flash SSD.   In this post I’ll attempt to outline some thoughts on that.

There’s some thinking that SSD will soon replace spinning disk altogether - we just have to wait for the price to come down.  However, I’m not so sure that this complete replacement is likely to occur in the immediate future.   There are two competing forces at work:

  • The economies of “big data”, which drive us to reduce the cost per unit of storage ($$/TB):   magnetic disk costs only about 5% of the cost of a high end SSD per terabyte.
  • The economies of high IO throughput, in which we are motivated to reduce the cost of providing a given IO rate.  SSD (PCIe flash) can generate IO rates at 1/20th of the cost of magnetic disks.

These two competing economies are not likely to change in the immediate future:  magnetic disk can store large amounts of data much more cheaply than SSD, while SSD can deliver very high IO rates more cheaply than magnetic disk.  For most databases, it will not be cost effective to place the entire database on SSD – the best outcome will be obtained when we place some parts of the database on SSD and the rest on magnetic disk.

SSD Performance basics

I’ve posted in the past on how SSD works with the Oracle flash cache,  and I’ve presented on SSD performance at OOW.  You might like to review those items.  However,  here’s a quick review of the basics:

  • All flash drives offer pretty good read performance – say 25 microseconds for a single page read
  • When inserting data into an empty page, performance is slower, but not awful – around 250 microseconds
  • In order to update an existing page, a block erase is required – much, much slower – maybe 2000 microseconds

image

Enterprise SSD vendors - Fusion IO, Virident, etc - all have sophisticated algorithms to avoid the write penalty associated with SSD block erase operations.  Amongst other techniques, they will maintain a pool of unused blocks.  When a page needs to be updated it will be marked as invalid in the original location and moved to one of these blocks.  Later on, garbage collection routines will clear up the invalid entries.  The result is that updates don't always have to incur the high overhead of block erase operations.

Nevertheless, you want to avoid placing write intensive files on a flash disk, because as the disk fills up with modified blocks - and especially if the write rate exceeds the garbage collection capabilities - you might see the disk slow down dramatically.

To summarize:

Flash-based disks perform reads much faster than writes, and can suffer from performance degradation if subjected to intensive sustained write operations.

 

Options for SSD deployment on Oracle

 

Given the performance characteristics of SSD,  how best to use SSD to boost Oracle performance?  There are a few options:

  • Put the entire database on SSD
  • Place selected segments on SSD
  • Use the 11GR2 flash cache
  • Put temporary tablespace on SSD
  • Put the redo logs on SSD

Let’s look at each of these in turn:

Put the entire Database on SSD

This works great – if you can afford it.  For most databases the cost of putting it all on SSD is too high.  Placing data that is rarely accessed on SSD is not very cost effective, because you are paying a high cost per GB, but not getting any benefit from the relatively cheap IO rates. 

Place selected segments on SSD

This is probably the best option if you don’t have enough SSD for the entire database, but it does require a bit of configuration.   We identify the segments with the most IO,  perhaps using a query on V$SEGMENT_STATISTICS:

 

We are looking for objects that have a high read/write ratio, contribute a significant number of reads, and are small enough to fit on our SSD.   We create a tablespace on SSD, then move these objects to that tablespace.

If our most read intensive tables are massive, then they might not fit on the SSD.  In this case we could consider partitioning them and placing the hottest partitions on SSD.  

Use the 11GR2 flash cache

If you have 11GR2 and are on a compatible platform (Solaris or Oracle Enterprise Linux), then you can set up the Oracle DB flash cache.  This is absolutely the easiest option – all you need to do is set a couple of parameters (DB_FLASH_CACHE_FILE and DB_FLASH_CACHE_SIZE) and bounce the database.  The flash cache automatically caches the hottest blocks and usually provides an excellent performance improvement.

The improvement is not as significant as moving objects directly onto flash storage, since there still has to be magnetic disk IO to get blocks into and out of the flash cache.  Also, the architecture tends to create a lot of writes to the SSD, which can challenge garbage collection.  The chart below sums up a typical outcome – a huge boost from the flash cache, but not as big a boost as we could get by putting objects directly on SSD:

image

Put temporary tablespace on SSD

 

Temporary tablespace IO (from sorts and hash operations) can sometimes be the most significant form of database IO, and often the entire temporary tablespace will fit on the SSD.  So should we consider putting temporary tablespace on the SSD?

I’m a bit reluctant to do this:  by definition temporary tablespace IO is equal parts reads and writes, and since we are motivated to avoid write IO to the SSD, this seems like a dubious option.  However, if your performance is absolutely dominated by temp tablespace IO and your SSD is tolerant of high write rates (e.g. has excellent garbage collection), then it might be worth trying.  After all, the SSD will definitely speed up the read side of temporary tablespace IO, even if the write IO does not get a significant boost.

Put redo logs on SSD

Although redo logs are very IO intensive, it’s almost all write IO and therefore this is probably not the best option.   During sustained writes the Garbage Collection will probably break down and the disk might not end up performing that much better than a spinning disk.  I'd not recommend this. 

Conclusions

 

You can use SSD to boost database performance even if you don’t have enough for the entire database.  Best options are:

  1. Place segments with high read but low write rates directly on SSD
  2. Use the SSD with the 11GR2 database flash cache (if you are on OEL or Solaris)
  3. If temporary segment IO is absolutely your bottleneck – but only then – consider placing the temporary tablespace on SSD.

I don’t think using SSD for redo logs is a good idea.  High rates of sustained sequential write IO are not the ideal workload for SSD.