Summary
On the surface, REST operations roughly correspond to CRUD actions performed on persistent entities. Based on this apparent similarity, it may be tempting to directly map entity operations to RESTful HTTP verbs when exposing data on the Web. Not so fast, argues David Van Couvering, who illustrates his point with vignettes from database memory lane.
A very large number of enterprise applications arguably exist to expose data stored in a database on the Web, allowing clients to query, update, delete, and even create new instances of that data. The basic CRUD operations performed on persistent entities also happen to correspond to the basic HTTP verbs championed by the REST architectural style.
Based on that similarity, it may be tempting to expose persistent entities directly as REST resources, as some frameworks have done. Most recently, Rails 1.2 introduced RESTful resources, even providing a REST resource generator that can create a database table definition, an entity class, and a RESTful controller, along with associated tests, in one fell swoop.
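In code, this kind of direct entity-to-resource mapping might look roughly like the following sketch. It uses JAX-RS-style annotations and JPA rather than Rails, and the Customer entity, CustomerResource class, and injected EntityManager are hypothetical illustrations, not code from Rails or from Van Couvering's post:

```java
import javax.persistence.Entity;
import javax.persistence.EntityManager;
import javax.persistence.Id;
import javax.ws.rs.Consumes;
import javax.ws.rs.DELETE;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.PUT;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical persistent entity standing in for a database table.
@Entity
@XmlRootElement
class Customer {
    @Id public Long id;
    public String name;
}

// Hypothetical resource: each HTTP verb is wired directly to a CRUD call
// on the entity, with no business tier in between.
@Path("/customers")
public class CustomerResource {

    private final EntityManager em;  // assumed to be supplied by the container

    public CustomerResource(EntityManager em) {
        this.em = em;
    }

    @GET
    @Path("{id}")
    @Produces("application/xml")
    public Customer read(@PathParam("id") Long id) {         // GET    -> SELECT
        return em.find(Customer.class, id);
    }

    @POST
    @Consumes("application/xml")
    public void create(Customer customer) {                  // POST   -> INSERT
        em.persist(customer);
    }

    @PUT
    @Path("{id}")
    @Consumes("application/xml")
    public void update(@PathParam("id") Long id, Customer customer) { // PUT -> UPDATE
        customer.id = id;
        em.merge(customer);
    }

    @DELETE
    @Path("{id}")
    public void delete(@PathParam("id") Long id) {            // DELETE -> DELETE
        em.remove(em.find(Customer.class, id));
    }
}
```

The convenience is obvious: the resource class is little more than a pass-through to the persistence layer, which is precisely the property the discussion below calls into question.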
In his most recent blog post, Mapping Entities to REST - Learning from History, David Van Couvering ponders the relationship between CRUD and REST, and notes why such direct entity-to-resource mapping may not be a good idea:
I was working at Sybase in the late eighties. Sybase was taking off because it had introduced some revolutionary architectures and technologies that significantly improved performance and maintainability of database applications.
One major invention was procedural SQL and stored procedures. Stored procedures introduced the following key benefits:
Your SQL was pre-compiled and stored in the server, so it did not need to be interpreted each time
You could run complex procedural logic over your data in the same process as the database. This reduced the amount of data that was shipped to the client and the overall number of network round-trips you had to perform to accomplish a task.
You could centralize business logic in the server. You didn't have to make sure each client enforced business rules consistently
Stored procedures provide a layer between the database schema and the interface used by your applications. This allows you to modify the database schema (e.g. for performance optimizations) without breaking all your applications...
What does all this history have to do with mapping entities to REST? Well, if you're not careful, you completely short-circuit the business tier, and you can find yourself transported back to the early eighties.
Van Couvering's point is that circumventing the business logic tier couples clients directly to the database schema, causing problems such as:
If you want to perform an operation over two or more resources (tables) as a single unit of work, it's basically impossible. There are no transactional semantics for an HTTP client - each request needs to be its own unit of work. You could maybe devise a transactional API using cookies, but I wouldn't advise it.
You have locked your database schema to your web interface when you do a direct mapping between entities and REST resources. This makes it very hard to migrate your database schema over time.
Web clients become responsible for enforcing business rules, since you're going directly to your database schema. Given that you want to support third-party web clients from who knows where or whom, this is probably a bad idea. OK, a Very Bad Idea...
REST resources are meant to describe real "resources", and when those resources are stored persistently they can often be represented by the exact same data structures as an application's persistent entities. Van Couvering's post, however, illustrates some of the pitfalls of making too much of that similarity.
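One hedged sketch of the alternative Van Couvering argues for: keep a business tier between the resource and the schema, and exchange representations that are not simply the entity itself. The OrderResource, OrderService, and OrderRepresentation names below are hypothetical, using the same JAX-RS-style annotations as the earlier sketch:

```java
import javax.ws.rs.Consumes;
import javax.ws.rs.GET;
import javax.ws.rs.POST;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.xml.bind.annotation.XmlRootElement;

// Hypothetical representation exchanged with web clients; it can evolve
// independently of the underlying tables.
@XmlRootElement
class OrderRepresentation {
    public Long id;
    public String status;
}

// Hypothetical business-tier facade; implementations own transactions,
// validation, and the entity-to-representation mapping.
interface OrderService {
    OrderRepresentation findOrder(Long id);
    void placeOrder(OrderRepresentation order);
}

// The resource delegates to the business tier instead of touching the
// persistent entities directly.
@Path("/orders")
public class OrderResource {

    private final OrderService service;  // assumed to be supplied by the container

    public OrderResource(OrderService service) {
        this.service = service;
    }

    @GET
    @Path("{id}")
    @Produces("application/xml")
    public OrderRepresentation read(@PathParam("id") Long id) {
        // Business rules are enforced server-side; clients never see the schema.
        return service.findOrder(id);
    }

    @POST
    @Consumes("application/xml")
    public void place(OrderRepresentation order) {
        // One request can touch several tables inside a single server-side
        // transaction, rather than asking the HTTP client to coordinate them.
        service.placeOrder(order);
    }
}
```

In this arrangement the database schema can change behind the service interface, and the quoted concerns about transactions and business rules stay on the server rather than leaking out to web clients.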
In your view, what is the best way to think about the relationship between persistent entities and RESTful resources?