Versioning and ease of use
The way to “properly” version REST services, according to most texts I’ve read and people I’ve talked to, is to use custom media types. Implementation-wise I think this is a good solution. Usability-wise I think it sucks, because it makes it very hard to test the service from your browser since you need to specify a custom Accept header.
Instead of rambling about pros and cons here, as many have already done, I made an example using Java (Jersey, Guice, Tomcat) and nginx to illustrate how to implement a media-type-versioned service and how to publish it in a way that also makes it easy to explore with a browser.
It’s all here https://github.com/chids/java-rest-versioning/.
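For a taste of what the repository demonstrates, a media-type-versioned Jersey resource can look roughly like the sketch below. The path, media types and payloads are invented for illustration; they are not the ones used in the repo.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.Produces;

// Hypothetical resource, not taken from the repository.
// The Accept header picks the representation, and thereby the version:
//   Accept: application/vnd.acme.customer-v1+xml  -> version 1
//   Accept: application/vnd.acme.customer-v2+xml  -> version 2
@Path("/customer")
public class CustomerResource {

    @GET
    @Produces("application/vnd.acme.customer-v1+xml")
    public String customerV1() {
        return "<customer version=\"1\"/>";
    }

    @GET
    @Produces("application/vnd.acme.customer-v2+xml")
    public String customerV2() {
        return "<customer version=\"2\"/>";
    }
}
```

A plain browser request doesn’t carry those vendor media types in its Accept header, which is exactly the usability problem the nginx part of the example is there to work around.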
In chapter 4, “Motivation”, of his book Implementation Patterns (Addison-Wesley Professional, 2008), Kent Beck cites Structured Design: Fundamentals of a Discipline of Computer Program and Systems Design (Prentice Hall, 1979) for the formula that puts the cost of software at the cost of developing it plus the cost of maintaining it:
COSTtotal = COSTdevelop + COSTmaintain
The cost of maintenance is then broken down into:
COSTmaintain = COSTunderstand + COSTchange + COSTtest + COSTdeploy
I like this formula and when I recently revisited Implementation Patterns I started thinking about the cost components of software maintenance and how we work with them.
I’ve put together a short illustrated text on my reasoning and published it here.
Here’s the macro, an example of its usage, and the XSLT used to pretty-print.
We document our HTTP-based services in Confluence and I wanted to…
- embed service invocations in wiki pages to show the result of calling a certain URL
- pretty print the output
- not have to write a Java plugin in Confluence
Making an HTTP request from a macro
I started off with the HTML macro and verified that it worked.
Formatting the response
Our services produce XML that is as compact as possible to save space; they don’t pretty-print. Hence I needed to format the service response, and after finding these XSLTs I saved one of them as an attachment to a page within Confluence and paired it with the XSLT macro.
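I won’t reproduce the exact stylesheet I attached, but a typical indenting identity transform – the kind of thing those collections contain – is only a handful of lines:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<!-- Generic pretty-printer: copies the input verbatim, but indented. -->
<xsl:stylesheet version="1.0" xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>
  <xsl:strip-space elements="*"/>
  <xsl:template match="@*|node()">
    <xsl:copy>
      <xsl:apply-templates select="@*|node()"/>
    </xsl:copy>
  </xsl:template>
</xsl:stylesheet>
```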
Varying the URL displayed (and/or used to make the request)
As I’ve written about earlier, we document all of our services in one space and selectively “publish” a subset of them to another space, from which we allow export of PDFs as the deliverable documentation. In order for this to work, the service examples need to show, and use, different domain names depending on whether the documentation is viewed in our internal space or in the space used to export PDF documentation to our clients. The logic for this is the first block of the macro, which looks at which space the macro is rendered in and selects the base URL accordingly.
Due to what appears to be a limitation or bug in the handling of user macros, Confluence won’t render the macro properly if the URL to the service contains an equals sign (=). So, in order to support query parameters, the macro replaces any occurrence of two colons (::) with an equals sign. It’s all illustrated in the Gist showing the macro.
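In the user macro the workaround boils down to a Velocity one-liner along these lines – the parameter name here is made up, the real thing is in the Gist:

```velocity
## '::' in the url parameter stands in for '=', so restore it before the URL is used
#set($url = $paramUrl.replace("::", "="))
```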
What about JSON?
We use this solution for JSON as well, but without the XSLT formatting stage, which leaves our JSON responses butt ugly. If you have a solution for formatting JSON too – feel free to post a comment.
Most of our services respond to HTTP GET requests and produce an XML or JSON response. The format of the response is determined by the HTTP Accept header. This is fine when our users program their clients, but since our APIs are exposed over HTTP, most of the initial sampling and testing can be done simply by pointing a browser at the endpoint and experimenting with query params etc. So how do you play around in your browser and still get to choose whether the result comes back as JSON or XML? Well, set the Accept header using Poster for Firefox, or Simple REST Client, REST Console or Advanced REST Client for Chrome. There are certainly more alternatives available for whichever browser you use. Or you can fire up a console and run cURL.
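With cURL, for instance, picking the format is just a header flag away (the URL below is hypothetical):

```sh
# Same resource, two representations -- only the Accept header differs
curl -H "Accept: application/xml"  "http://api.example.com/things?q=foo"
curl -H "Accept: application/json" "http://api.example.com/things?q=foo"
```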
All clients and command lines are fine, and usually invaluable to have when you’re getting deep into the API. But for those initial play-around tests it’d be nice to just be able to surf to the resource and say: I want this resource to respond with XML or JSON.
Keep it simple…
…therefore we support the file extensions .xml and .json respectively: a service that we expose at its plain URL can also be invoked with a format extension tacked on, as in the hypothetical example below.
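With a made-up endpoint (not one of our actual URLs) the idea looks like this:

```
http://api.example.com/things        format chosen by the Accept header
http://api.example.com/things.xml    always XML
http://api.example.com/things.json   always JSON
```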
Accept header manipulation in Nginx
Now, we’re not really interested in introducing this handling in all our services. They all have automagic response-format serialization driven by the Accept header. Hence we’d simply like to alter the Accept header for requests that provide a file extension, and we already use Nginx as a reverse proxy in front of our application servers – see this gist for an example nginx configuration.
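The gist has the real thing; as a rough sketch of the idea – host names, ports and upstream names below are placeholders – the relevant nginx bits look something like this:

```nginx
# Requests ending in .json or .xml get the extension stripped and the
# Accept header overridden before being proxied to the application servers.
upstream backend {
    server 127.0.0.1:8080;   # placeholder for the application servers
}

server {
    listen 80;

    location ~ \.json$ {
        rewrite ^(.*)\.json$ $1 break;
        proxy_set_header Accept "application/json";
        proxy_pass http://backend;
    }

    location ~ \.xml$ {
        rewrite ^(.*)\.xml$ $1 break;
        proxy_set_header Accept "application/xml";
        proxy_pass http://backend;
    }

    location / {
        proxy_pass http://backend;   # everything else passes straight through
    }
}
```

Since the extension is stripped before the request is proxied, the services themselves never see it and need no changes.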
I’ve written earlier about how we use Atlassian Confluence to document our APIs in general, and about my user macro hack that actually performs service calls and displays the results when a page is rendered (read the section “User macro to render service response” for more details).
Our space structure
We mainly classify services into three groups: public, partner and internal. Public services are ones whose usage we don’t really care about – we might not encourage broad public use but we’re not trying to limit it. Partner services are offered within some form of commercial agreement and require an API key. Internal services are not accessible from outside our internal networks.
This has led me to the following space structure in our Confluence wiki:
- API, the space that contains all of our services
- PARTNER, the space that contains the subset of the API content that is available to partners
The partner space uses the include macro to pull in those pages from the API space that are available to partners. The bread and butter of all our service docs are the various macros that call our services and display the result, to provide concrete examples. These always use the internal address of a service to avoid any API key handling. However, the internal domain should not appear in the examples displayed to our partners.
Context aware includes
The partner space uses includes from the API space, and the page in the API space is the one using the macro to perform and display the service call. Hence I needed the macro to display a different base URL depending on the space key. This wasn’t bloody obvious, but proved very simple once I grasped the Velocity context and the objects it provides. The magic happens with a conditional on the space key at the top of the macro.
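A minimal sketch of the idea – the object path and host names below are illustrative rather than the exact ones from the macro:

```velocity
## Pick the base URL from the space the page is rendered in:
## partner-facing docs get the public host, everything else the internal one.
#if($content.space.key == "PARTNER")
  #set($baseUrl = "https://api.example.com")
#else
  #set($baseUrl = "http://api.internal.example.com")
#end
```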
See the gist for a complete example.
I’ve been a member of Transfer for some time now and thought it might be worth a mention. Specifically, I’d like to hear about similar initiatives elsewhere – if you know of any, please drop me a line. Transfer is a non-profit knowledge transfer network whose aim is to bring professionals from all sorts of diverse areas to Sweden’s upper secondary schools. Teachers can use the Transfer website to request a lecture in an area, and Transfer then matches those requests with the profiles of the affiliated professionals, who can then either accept or decline the request. Sort of like a dating service.
To be fair, I’ve only done two lectures and the number of requests within my area of expertise is somewhat sparse. Nonetheless, giving a lecture to a young audience that’s still in school and usually very curious is challenging and thus very rewarding. I sincerely recommend participating, regardless of what area you work in.
The slides from my latest lecture (given in November 2010) on databases in general are now up on Slideshare.
Basho posted “Data Durability Is Not An After-market Add-on; Announcing KillDashNine” and, since I’m a fan of durable data in general and of Riak and Dry Martini in particular, I decided to do the Stockholm version of #killdashnine. The venue was Little Quarter (the inner bar at restaurant Marie Laveau), home to my favorite bar staff.
Jesper, Sven, Fredrik and I made for only four people in attendance, so we certainly have some room for improvement. Also, we didn’t actually kill any databases on location, although Jesper and I did spend some time last week beating the crap out of ActiveMQ, using kill -9 on the broker while producing and consuming persistent messages. With our very non-scientific, non-exhaustive approach it all looked fine, although some people have experienced problems with KahaDB.
I’m aiming to keep this going on the 9th of the month and hopefully I’ll muster the energy to prep some proper kill-demos for the next gathering.