Summary
A recent java.net article by Vikram Veeravelu shows how the Spring framework can simplify JDBC data access. Not only does Spring reduce the amount of plumbing code plain JDBC requires, it also presents database exceptions via a more usable hierarchy than JDBC alone provides.
Although every enterprise application must access a database for persistent storage, there is no single best way or best practice for Java data access. The only common feature among the various solutions is that at some point JDBC is used to interact with the database server.
Although JDBC 4 has made great strides toward an easier-to-use API, writing pure JDBC code is still a rather tedious task. That point is brought home in Vikram Veeravelu's recent java.net article, which compares coding with plain JDBC to using the Spring framework to reduce database code complexity:
Spring JDBC implements the Template design pattern, meaning the repetitive plumbing parts of the code are implemented in template classes. This approach simplifies the use of JDBC, since it handles the creation and release of resources. This helps to avoid common errors like forgetting to close the connection. It executes the core JDBC workflow tasks like statement creation and execution, leaving application code to provide SQL and extract results.
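The lifecycle-management idea behind Spring's template classes is easy to see in miniature. The following is not Spring code: all of the class names below (QueryTemplate, FakeConnection, ResultExtractor) are invented for illustration, and a logging stand-in replaces a real Connection, but the shape is the same as JdbcTemplate's, with acquisition and release owned by the template and only the interesting work supplied by the caller:

```java
import java.util.ArrayList;
import java.util.List;

// A stand-in for a JDBC Connection: a resource that must be opened and closed.
class FakeConnection {
    final List<String> log;
    FakeConnection(List<String> log) { this.log = log; log.add("open"); }
    String fetch() { log.add("query"); return "row"; }
    void close() { log.add("close"); }
}

// The callback the application supplies: just the interesting work.
interface ResultExtractor<T> {
    T extract(FakeConnection con);
}

// The template: owns the boring, error-prone lifecycle code.
class QueryTemplate {
    private final List<String> log;
    QueryTemplate(List<String> log) { this.log = log; }

    <T> T execute(ResultExtractor<T> callback) {
        FakeConnection con = new FakeConnection(log);
        try {
            return callback.extract(con);   // application code runs here
        } finally {
            con.close();                    // always released, even on error
        }
    }
}

public class TemplateDemo {
    public static void main(String[] args) {
        List<String> log = new ArrayList<>();
        QueryTemplate template = new QueryTemplate(log);
        String result = template.execute(con -> con.fetch());
        System.out.println(result + " " + log);  // row [open, query, close]
    }
}
```

Because the `finally` block lives in the template, no caller can forget to close the connection, which is exactly the class of bug the article describes Spring eliminating.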
In Veeravelu's article, the use of Spring's JDBC framework reduces the code required for the simple DAO example by about fifty percent.
In addition to code reduction, Spring's JDBC has a more developer-friendly exception hierarchy than what JDBC alone provides:
When dealing with exceptions, Spring examines the metadata available from a database connection to determine the database product. It uses this knowledge to map SQLException to the correct exception in its own hierarchy. So, we need not worry about proprietary SQL state or error codes; Spring's data access exceptions are not JDBC-specific, so your DAOs are not necessarily tied to JDBC because of the exceptions they may throw.
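Spring's real translator consults driver-specific metadata and error codes; the simplified, hypothetical sketch below keys off the standard SQLState class code alone, just to illustrate the idea of turning a checked, vendor-coded SQLException into a meaningful unchecked hierarchy. The exception class names here are invented in the spirit of Spring's hierarchy, not taken from it:

```java
import java.sql.SQLException;

// A hypothetical unchecked hierarchy, loosely in the spirit of Spring's
// DataAccessException family (these particular class names are invented).
class DataAccessException extends RuntimeException {
    DataAccessException(String msg, Throwable cause) { super(msg, cause); }
}
class DataIntegrityViolationException extends DataAccessException {
    DataIntegrityViolationException(String m, Throwable c) { super(m, c); }
}
class PermissionDeniedException extends DataAccessException {
    PermissionDeniedException(String m, Throwable c) { super(m, c); }
}

public class SqlStateTranslator {
    // The first two characters of an SQLState identify the error class:
    // "23" = integrity constraint violation, "28" = invalid authorization.
    static DataAccessException translate(SQLException ex) {
        String state = ex.getSQLState();
        String cls = (state != null && state.length() >= 2)
                ? state.substring(0, 2) : "";
        if (cls.equals("23"))
            return new DataIntegrityViolationException(ex.getMessage(), ex);
        if (cls.equals("28"))
            return new PermissionDeniedException(ex.getMessage(), ex);
        return new DataAccessException(ex.getMessage(), ex);
    }

    public static void main(String[] args) {
        SQLException dup = new SQLException("duplicate key", "23505");
        System.out.println(translate(dup).getClass().getSimpleName());
        // prints DataIntegrityViolationException
    }
}
```

The payoff is the one the quote describes: a DAO can throw `DataIntegrityViolationException` without its callers ever seeing a JDBC type or a vendor error code.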
Unfortunately, the article fails to mention how Spring's exception hierarchy will work in the context of JDBC 4. As we noted in an earlier Artima article on JDBC 4, the latest JDBC spec provides a much more refined exception hierarchy than its predecessors did. Notable are the divisions of JDBC 4 exceptions into transient and non-transient exceptions. An SQLTransientException indicates failure of an operation that might succeed if retried. For instance, attempting to connect to a database server may result in a transient JDBC exception, indicating to the caller that the connection attempt should be retried.
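SQLTransientException is part of java.sql as of JDBC 4, so the retry idiom it enables can be shown without any database at all. In the sketch below the connection attempt is simulated (it is contrived to fail twice and then succeed); a real version would call DriverManager.getConnection in place of the stub:

```java
import java.sql.SQLTransientException;

public class RetryDemo {
    // Simulated operation: throws a transient error on the first two
    // calls, then succeeds. Stands in for a real connection attempt.
    static int calls = 0;
    static String connect() throws SQLTransientException {
        calls++;
        if (calls < 3) throw new SQLTransientException("server busy, try again");
        return "connected";
    }

    static String connectWithRetry(int maxAttempts) throws SQLTransientException {
        SQLTransientException last = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return connect();
            } catch (SQLTransientException e) {
                last = e;   // transient by definition: worth another try
            }
        }
        throw last;         // exhausted our attempts; give up
    }

    public static void main(String[] args) throws Exception {
        System.out.println(connectWithRetry(5) + " after " + calls + " attempts");
        // prints: connected after 3 attempts
    }
}
```

Note that only the transient subtype is caught; a non-transient failure, such as bad credentials, would propagate immediately rather than being pointlessly retried.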
Reading Veeravelu's article raises the question of why anyone would write high-level application code directly to JDBC when frameworks such as Spring, or even O/R mapping frameworks, so vastly simplify database access.
One reason I can think of is the desire to avoid tying code to any specific framework apart from the "pure" Java stack. But is that a valid reason for incurring almost twice as much code as would be possible with, say, Spring's JDBC support? Are there other reasons for writing application code directly to JDBC?
> One reason I can think of is the desire to avoid tying code to any specific framework apart from the "pure" Java stack. But is that a valid reason for incurring almost twice as much code as would be possible with, say, Spring's JDBC support? Are there other reasons for writing application code directly to JDBC?
You can write your own class to set up, execute, run a callback for each row, deal with errors, and shut things down very simply and quickly compared to taking on the Spring framework.
JDBC can be even easier than that. I just wrote this little annotation processor for myself, which uses the excellent DbUtils from Jakarta. This is all you have to do:
    @Column("person_id") private int personId;
    @Column("fname") private String fname;
    @Column("lname") private String lname;
    @Column("phone") private int phone;
    @Column("address") private String address;
    @Column("count") private int count;

    public static void main(String[] args) {
        SelectUtility su = new SelectUtility();
        ArrayList<Person> people;
        String sql;

        sql = "select count(*) as count, lname " +
              "from person " +
              "group by lname " +
              "order by count desc";
        people = (ArrayList<Person>) su.selectBeanList(Person.class, sql);
        for (Person person : people) {
            System.out.printf("%s people by name %s%n",
                    person.getCount(),
                    person.getLname());
        }
    }
I just wanted to add that my annotation version uses the field type definitions to invoke the correct get* method on ResultSet. I wrote it after longing for plain old SQL after a long night with Hibernate.
> JDBC can be even easier than that. I just wrote this little annotation
> processor for myself, which uses the excellent DbUtils from Jakarta.
> This is all you have to do:
>
>     @Column("person_id") private int personId;
>     @Column("fname") private String fname;
>     @Column("lname") private String lname;
>     @Column("phone") private int phone;
>     @Column("address") private String address;
>     @Column("count") private int count;
>
>     public static void main(String[] args) {
>         SelectUtility su = new SelectUtility();
>         ArrayList<Person> people;
>         String sql;
>
>         sql = "select count(*) as count, lname " +
>               "from person " +
>               "group by lname " +
>               "order by count desc";
>         people = (ArrayList<Person>) su.selectBeanList(Person.class, sql);
>         for (Person person : people) {
>             System.out.printf("%s people by name %s%n",
>                     person.getCount(),
>                     person.getLname());
>         }
>     }
This is interesting.
I'm wondering what your reasons were for rolling your own annotation processor vs. using a data-binding solution that relies on JDBC under the covers. I'm just interested to see whether it was performance, or some other requirement that out-of-the-box data-binding solutions were not able to meet.
This is necessarily reflective, so performance wasn't my key consideration. I learned database programming while working with a database API that populates a typeless map with the associated column values. When I came to Java, I wanted something similar, but populating HashMaps is annoying because you have to cast everything on its way out.
DbUtils includes a BeanProcessor that provides similar functionality, but it depends on column-name matching. Hibernate is a powerful system, but all I really needed was a "database-out" solution, since most of my data was populated in bulk via some sort of CSV loader. The annotation angle makes its configuration easy and implicit: "it just knows" that I want my column in this type. I leave it to runtime exceptions to warn me if I have any problems.
We use Hibernate here at the office, but I wrote the custom processor for my own "one-man" projects. We definitely don't spend our time on JDBC plumbing :)
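For readers curious what such an annotation-driven mapper might look like inside, here is a minimal runtime-reflection sketch. It is not the poster's actual code: the @Column annotation, the Person bean, and the mapRow helper are reconstructed for illustration, and a Map stands in for a ResultSet row (the real version would invoke the type-appropriate ResultSet getter, as the poster describes):

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Field;
import java.util.HashMap;
import java.util.Map;

// Field-level annotation naming the database column to pull from.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.FIELD)
@interface Column {
    String value();
}

class Person {
    @Column("person_id") private int personId;
    @Column("lname")     private String lname;
    int getPersonId() { return personId; }
    String getLname() { return lname; }
}

public class RowMapperSketch {
    // Populate any bean from one "row", driven entirely by @Column annotations.
    static <T> T mapRow(Class<T> type, Map<String, Object> row) throws Exception {
        T bean = type.getDeclaredConstructor().newInstance();
        for (Field f : type.getDeclaredFields()) {
            Column col = f.getAnnotation(Column.class);
            if (col != null && row.containsKey(col.value())) {
                f.setAccessible(true);
                f.set(bean, row.get(col.value()));  // unboxes primitives as needed
            }
        }
        return bean;
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> row = new HashMap<>();
        row.put("person_id", 7);
        row.put("lname", "Smith");
        Person p = mapRow(Person.class, row);
        System.out.println(p.getPersonId() + " " + p.getLname());  // 7 Smith
    }
}
```

The configuration lives entirely on the bean, which is the "easy and implicit" quality the poster describes; mapping errors surface as runtime exceptions rather than compile-time checks.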