Tuesday 16 October 2007

It's CLOBbering time

Sometimes in your database life, VARCHAR2(4000) just won't cut the mustard. And as I discovered recently, that's as big as Oracle VARCHAR2 columns will go. It's time to break out the CLOBs...

CLOB stands for... well, Character LOB. I'm afraid LOB doesn't stand for anything (so says Jim Starkey, who invented the term), though some people expand it as Large OBject, and others as Locator OBject, since LOBs work like pointers. That pointer-based access mechanism also makes dropping a CLOB in for a VARCHAR2 less than straightforward. Like SAX and DOM in XML parsing, they are two different philosophies of large data access.
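To make the "locator" idea concrete: in plain JDBC you don't get the whole value back as one String; you ask for a java.sql.Clob and then drain the Reader from its getCharacterStream() method. A minimal sketch of that draining step (the StringReader here just stands in for a real CLOB stream, and the class and method names are my own):

```java
import java.io.IOException;
import java.io.Reader;
import java.io.StringReader;

public class ClobDrainer {

    // Drain a character stream in fixed-size chunks, as you would with
    // the Reader returned by java.sql.Clob#getCharacterStream().
    public static String readAll(Reader reader) throws IOException {
        StringBuilder out = new StringBuilder();
        char[] buffer = new char[4096];
        int read;
        while ((read = reader.read(buffer)) != -1) {
            out.append(buffer, 0, read);
        }
        return out.toString();
    }

    public static void main(String[] args) throws IOException {
        // A StringReader stands in for clob.getCharacterStream() here.
        System.out.println(readAll(new StringReader("a large character value")));
    }
}
```

The chunked loop is the whole point: the data stays behind the locator until you pull it through the stream, which is exactly why a straight VARCHAR2-style `getString` mindset doesn't carry over.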

But I am a modern-day Spring + Hibernate Java guy, so straight away I look to Hibernate. It offers support for mapping java.sql.Clob or java.sql.Blob objects into the database. But I'm not keen to add those into my domain model and tie it to JDBC. It might also wreak havoc with my unit tests, as you can't access LOB properties without a database connection.

So onward up the stack to Spring and its OracleLobHandler, an Oracle-specific implementation of the same LobHandler interface as the more generic DefaultLobHandler. A LobHandler is an:
Abstraction for handling large binary fields and large text fields in
specific databases, no matter if represented as simple types or Large OBjects (from the LobHandler javadoc).
Figuring out the best way to use this came from Mark Allen's weblog, which suggests simply adding the LobHandler to the Spring SessionFactory, then using a Spring support type for CLOBs, such as ClobStringType, in the Hibernate mapping file. Simple!

So first I add the following to my datasource Spring config file:


<bean id="sessionFactory" name="sessionFactory"
      class="org.springframework.orm.hibernate3.LocalSessionFactoryBean">
  ...
  <property name="lobHandler">
    <bean class="org.springframework.jdbc.support.lob.OracleLobHandler">
      <property name="nativeJdbcExtractor">
        <bean class="org.spring...CommonsDbcpNativeJdbcExtractor" />
      </property>
    </bean>
  </property>
</bean>


Then I tweak the property in my Hibernate mapping to use the Spring support class:


<property name="message" column="MESSAGE" lazy="true"
type="org.springframework.orm.hibernate3.support.ClobStringType" />


And that pretty much does the magic. I change the column from VARCHAR2(4000) to CLOB in the database, and the above configuration seamlessly turns the new CLOBs back into Java Strings as I pull them back, requiring no code changes at all. Hibernate can be a pain at times, but as always, adding Spring to the mix makes a change like this very smooth. Once you know how, of course ;)

Friday 3 August 2007

Spring Framework Integration Tests

Having been a huge fan of the Spring framework, and especially the JDBC support within it, I have had a shocking two days working on the simplest of tests. I wrote a simple Repository / DAO to pull a four-property JavaBean out of a four-column table, providing two finder methods to query for an object by two of the four properties.
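For the shape of the thing: a small bean and two finders. A JDBC-free sketch with an in-memory backing list (the class, field, and finder names here are illustrative, not the real ones, and the real thing sits on a JdbcTemplate rather than a List):

```java
import java.util.ArrayList;
import java.util.List;

public class ClientRepository {

    // The four-property bean (property names are made up for the example).
    public static class Client {
        final long id;
        final String description;
        final String region;
        final boolean active;

        public Client(long id, String description, String region, boolean active) {
            this.id = id;
            this.description = description;
            this.region = region;
            this.active = active;
        }
    }

    private final List<Client> table = new ArrayList<>();

    public void save(Client c) {
        table.add(c);
    }

    // Finder by one of the four columns.
    public Client findById(long id) {
        for (Client c : table) {
            if (c.id == id) return c;
        }
        return null;
    }

    // Finder by another of the four columns.
    public List<Client> findByRegion(String region) {
        List<Client> result = new ArrayList<>();
        for (Client c : table) {
            if (c.region.equals(region)) result.add(c);
        }
        return result;
    }
}
```

As the post says, this really is the ABC of database access; the interesting part is what happened when I tried to test it.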

This is the ABC of database access, and code that most of our team can write in our sleep. But the problem came not with the code, but with attempting to TEST the code.

The obvious choice for testing Spring JDBC Repository code is the famously long-named AbstractTransactionalDataSourceSpringContextTests class. This wraps each test in a transaction, so all changes are rolled back between tests, and provides a convenient JdbcTemplate configured from your Spring context to point at the database. All in all, this let me write a quick test to wipe clean the table I was interested in, insert one row, then use the Repository under test to pull that row back and confirm the object wasn't null.

And it failed... so I tweaked and fixed and refactored and guessed and shouted. But it still failed. I got down to just two lines of code in my test:

jdbcTemplate.update("INSERT INTO CLIENTS (ID, DESCRIPTION) VALUES (1, 'TEST')");
assertTrue(jdbcTemplate.queryForInt("SELECT count(0) FROM CLIENTS") > 0);

And... it failed!?! This led to others (convinced of my stupidity) chipping in with ideas like checking the return values of the deletes and inserts - they returned the numbers of rows affected, as expected. But EVERY SQL statement seemed to have no effect on the others - as if the transaction were rolling back between every single statement! I could insert the same row with the same primary key twice, and each time it told me it had added a row, and each time a select statement brought back 0 rows.

This was frustrating - the "my first SQL for dummies" step in Spring JDBC had me grinding my teeth and swearing under my breath, not to mention eating into a tight deadline (what other kind is there?!) and raising the stress levels. I needed to step back. Something wasn't right with my transaction manager, but what could it be...

First step - get in with the debugger and look at what I've set up. My test case has the JdbcTemplate configured with the same DataSource as the TransactionManager. Next step - step through the code. It seemed that all the JDBC statements were running with numberOfTransactions set to 0 and transactionStatus set to null - clearly no transaction was wrapping the code here. But equally, a transactionManager MUST have been set up, otherwise the test case would not even start (the Spring code throws an exception).

Some more investigation and delving into the Spring source code, and suddenly I see my error...

I had overridden onSetUp(). With the Spring transactional test cases, the setUp() method is final, so onSetUp() is provided instead. But this method is non-empty in AbstractTransactionalSpringContextTests, and it is here that all the transactional magic happens. And of course, in overriding it, I had not thought to call super.onSetUp(), so I had stopped all of the transactional set-up from happening. I can only assume the side effect of every statement not affecting the others was caused by autoCommit being set to false on the connection as well.
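The pitfall is the template-method pattern itself: the base class does its work inside an overridable hook, so an override that skips super silently disables it, with no compiler warning. A stripped-down, framework-free sketch of the same shape (the class names here are mine, not Spring's):

```java
public class SuperCallDemo {

    // Minimal model of the AbstractTransactionalSpringContextTests shape:
    // setUp() is final and delegates to an overridable onSetUp() hook.
    static class BaseTransactionalTest {
        boolean transactionStarted;

        public final void setUp() {
            onSetUp();
        }

        // Non-empty hook: this is where the "transactional magic" happens.
        protected void onSetUp() {
            transactionStarted = true;
        }
    }

    static class BrokenTest extends BaseTransactionalTest {
        @Override
        protected void onSetUp() {
            // Forgot super.onSetUp() - no transaction is ever started.
        }
    }

    static class FixedTest extends BaseTransactionalTest {
        @Override
        protected void onSetUp() {
            super.onSetUp(); // preserve the base class behaviour first
            // ... then add test-specific set-up here
        }
    }

    public static void main(String[] args) {
        BrokenTest broken = new BrokenTest();
        broken.setUp();
        FixedTest fixed = new FixedTest();
        fixed.setUp();
        System.out.println("broken: " + broken.transactionStarted
                + ", fixed: " + fixed.transactionStarted);
    }
}
```

Exactly this silent-no-op behaviour is why Spring also provides the empty onSetUpBeforeTransaction() and onSetUpInTransaction() hooks: overriding an empty method can't break anything.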

The lesson here - make sure to call the super.onSetUp() method (a good rule for all JUnit set-up methods, not just this example). Or, more specific to this problem, use the provided onSetUpBeforeTransaction() and onSetUpInTransaction() methods, which are empty placeholder methods for subclasses to add behaviour to. This quick change applied, and everything works normally and makes sense again. Finally my brain works and I can get onto some new code :)

Tuesday 26 June 2007

Automated parameterized testing, or testing without the repetition

The world of software testing is huge. Unit testing, integration testing, system testing, user acceptance testing, QA testing, load testing, DR testing and so forth. And good luck finding two people who agree on the definitions of each testing category...

From a developer's view, the water starts to get muddy on the fringes of unit tests. Basic unit tests are easy, and if well designed, more advanced tests with mock objects or stubs can fill out the unit test suite into the powerful and respected tool that many software development mavens proclaim it to be.

But then you find yourself in integration testing territory - where the options and pathways of what could happen tend to grow exponentially with each layer. And we developers love layers and interfaces. So integration testing to cover error conditions and alternate pathways can be tricky, and perhaps redundant if our unit test suite has already covered them.

But one thing I never really use integration tests for enough is automated "poke" testing - grabbing a whole bunch of data and firing it at the server to see if I can break it. Normally this requires live data and a little business understanding to do well - not things developers often possess! Tools like FitNesse are built around this principle. But as a developer I still find myself prodding software manually, rather than automating this "just bung a load of data in" approach.

One of the historical reasons for this has been the lack of parameterized testing frameworks or tools in the open source space to make it easier for me. But now that JUnit 4.0 is generally accepted as "standard", I can make use of the Parameterized test runner that provides this functionality. It's not the cleanest (the TestNG approach looks a little tidier) but it seems to do the trick. Now when I get new test data, I can just throw it onto the stack, and whenever I need to sanity check my code, inject all the test data through the relevant part of the system in one click.
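Under the hood, JUnit 4's Parameterized runner just instantiates the test class once per row of a static data table and runs the same assertions each time. Stripped of the framework (no JUnit on the classpath here, so the "runner" is a plain loop), the idea looks like this - the method under test and the table contents are made-up examples:

```java
public class ParameterSweep {

    // The code under test: a trivial made-up example method.
    static int clampToVarcharLimit(int length) {
        return Math.min(length, 4000);
    }

    // The "parameter table": each row is one test case {input, expected}.
    // With JUnit 4 this would be returned from a static @Parameters method.
    static final int[][] CASES = {
        {0, 0},
        {3999, 3999},
        {4000, 4000},
        {5000, 4000},
    };

    // What @RunWith(Parameterized.class) does for you: run the same
    // assertion once per row of the table, and report how many passed.
    public static int runAll() {
        int passed = 0;
        for (int[] row : CASES) {
            if (clampToVarcharLimit(row[0]) == row[1]) {
                passed++;
            }
        }
        return passed;
    }

    public static void main(String[] args) {
        System.out.println(runAll() + " of " + CASES.length + " cases passed");
    }
}
```

The payoff is exactly the one-click workflow described above: new test data is just a new row in the table, with no new test methods to write.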

In many ways, this approach is obvious. Child's play, almost. But sometimes I get so buried in the wide sea of information on Java and programming in general that I fail to stop, take a couple of minutes and reflect. It's only then that such things become highlighted as so obvious, and with that reflective knowledge I should be able to build up a powerful set of test suites. Just don't ask me which of them are integration tests and which are system tests...