Sunday, May 17, 2009

Impala 1.0 M6 released

I am pleased to announce the 1.0M6 release of Impala. 1.0M6 is an important release for Impala, as it is the last release before 1.0 final to include major enhancements and API changes. I am now pretty comfortable that the APIs and abstractions are correct and suitable for the 1.0 final release, but as always I welcome any feedback from users on this.

The 1.0M6 release includes a major reworking of the shared service registry and proxying mechanisms, a new Spring namespace for importing and exporting services, and enhancements to the dynamic reloading of modules.

The headline improvements include the following.
  • Configuration of Impala services is now much simpler, as a new Spring 'service' namespace has been provided for easily exporting and importing services.
  • Service export and import can now be done not only by name but also by type or by custom attributes, the latter using a model similar to that used in OSGi.
  • Impala's mechanism for proxying services obtained from the service registry has improved, and is now more easily configurable.
  • It is now possible to export and import Impala services without having to specify an interface - proxying of the service implementation class is now also supported.
  • Impala now supports exporting and importing services based on Spring beans which are not singletons, or which are created using non-singleton factory beans. It does this in a way that is totally transparent to users of the services, effectively allowing clients to treat all beans as singletons.
  • Impala now provides implementations of java.util.List and java.util.Map, dynamically backed by beans imported from the service registry.
The 1.0M6 release also introduces a number of improvements to make dynamic module reloading more robust, particularly when using the module 'auto-reload' feature, as well as other minor enhancements and bug fixes.

For more information on this release see
http://code.google.com/p/impala/wiki/Release1_0M6Announcement.

Saturday, May 16, 2009

Extending Spring MVC's annotation controller

In my latest project I am using Spring MVC's annotation-based controllers. I am definitely a fan of annotations for wiring up web applications and, relatively speaking, can claim to be an early adopter in this area, having created Strecks, an annotation-based framework for Struts.

I must say I am enjoying using the new Spring MVC - it's a massive improvement over the original framework which I found pretty clunky, especially with regard to form handling.

Setup

Configuring the application is a doddle. All you need to do is register the annotation HandlerAdapter (which does the main request processing work) and HandlerMapping (which maps URLs to your controllers). You can do this using Spring config like:

<bean class="org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping"/>

<bean class="org.springframework.web.servlet.mvc.annotation.AnnotationMethodHandlerAdapter"/>

and then you're ready to go. Controller definitions can be found automatically via classpath scanning, or added explicitly to the Spring config files, which is what I prefer to do.
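For illustration, either approach can be wired up along the following lines - the package and controller class names here are just placeholders, and the classpath-scanning variant assumes the Spring 'context' namespace is declared:

<!-- option 1: pick up @Controller-annotated classes by scanning the classpath -->
<context:component-scan base-package="com.example.warehouse.web"/>

<!-- option 2: declare the controller bean explicitly -->
<bean class="com.example.warehouse.web.WarehouseController"/>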

One really nice thing about the controllers is the simple way to map URLs to methods, as well as to provide arguments to those methods, both using annotations. An example is shown below:
@RequestMapping("/warehouse/postProductsSubmit.htm")
public String postProductsSubmit(
      Map model,
      @ModelAttribute("command") PostProductsForm command,
      BindingResult result) {
//do stuff

//redirect when finished
return "redirect:postProductsForm.htm";
}

So what's missing?

There were still a few bits I felt needed to be added to make the Spring MVC annotation support truly usable for my application. Here they are.

Missing annotations for obvious argument types

Spring MVC's annotation handling recognises a whole bunch of argument types. Many of these are recognised automatically from the Servlet API, including HttpServletRequest, HttpServletResponse, ServletRequest, ServletResponse, HttpSession, Principal, Locale, InputStream, Reader, OutputStream and Writer. Others are recognised via Spring MVC annotations, such as @ModelAttribute and @RequestParam (which binds a request parameter).

What would be nice would be some built-in annotation types with which you could extract other kinds of information from the Servlet API environment in a non-intrusive way. Here I am thinking of the following:
  • @SessionAttribute: extract and bind a named session attribute.
  • @RequestAttribute: do the same for a named request attribute.
  • @RequestHeader: extract a request header.
  • Plus various others
Luckily, there is an easy way of creating your own annotations for extracting information from requests and binding it to method arguments, by implementing the WebArgumentResolver interface. I have created a number of these in the Impala MVC extensions project. See for example the SessionAttributeArgumentResolver and its associated @SessionAttribute annotation. A rough sketch of the approach is shown below.
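To give a flavour of how this works, here is a minimal sketch of a custom annotation and its resolver. This is not the actual Impala implementation - the annotation, class and attribute names are made up for the example:

import java.lang.annotation.*;

import org.springframework.core.MethodParameter;
import org.springframework.web.bind.support.WebArgumentResolver;
import org.springframework.web.context.request.NativeWebRequest;
import org.springframework.web.context.request.RequestAttributes;

//illustrative annotation, marking a method argument to be bound from a named session attribute
@Target(ElementType.PARAMETER)
@Retention(RetentionPolicy.RUNTIME)
@interface MySessionAttribute {
    String value();
}

//illustrative resolver: looks up the named attribute in session scope
class MySessionAttributeArgumentResolver implements WebArgumentResolver {

    public Object resolveArgument(MethodParameter methodParameter, NativeWebRequest webRequest) throws Exception {
        MySessionAttribute annotation = methodParameter.getParameterAnnotation(MySessionAttribute.class);
        if (annotation == null) {
            //not our annotation - let the standard argument resolution carry on
            return WebArgumentResolver.UNRESOLVED;
        }
        return webRequest.getAttribute(annotation.value(), RequestAttributes.SCOPE_SESSION);
    }
}

The resolver is then plugged in via the customArgumentResolvers property of the AnnotationMethodHandlerAdapter bean shown in the setup section.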

Flash Scope

Flash scope, popularised initially by Rails, is a mechanism for transferring state from one request to the next without having to pass it via URLs. It is implemented through a session-scoped attribute which is removed as soon as the value is consumed in the subsequent request. It works particularly well with redirect-after-post.

Flash scope is especially convenient for certain use cases because it offers the convenience of session-based attributes without the overhead of having state hanging around in the session over a long period.

Spring MVC annotations currently don't support flash scope, so I added an extension to AnnotationMethodHandlerAdapter which supports it. Basically, you can use it as follows: in your controller method, simply set a model attribute with the prefix "flash:". The attribute will then be available in the next request using the @RequestAttribute annotation. The example below demonstrates this; a rough sketch of how such an extension can be implemented follows the example.
@RequestMapping("/submit.htm")
public String submit(
      Map model) {
//do stuff

//redirect when finished
model.put("flash:mydata", mydataObject);
return "redirect:show.htm";
}

@RequestMapping("/show.htm")
public void(@RequestAttribute("mydata") MyDataClass mydata) {
//if you redirected using flash, mydata 
//will contain the mydataObject instance
//from the last call
}
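
The extension itself lives in the Impala MVC extensions project and isn't reproduced here. As a rough illustration of the underlying idea only, the same behaviour can be sketched with a plain HandlerInterceptor rather than an AnnotationMethodHandlerAdapter subclass - all class and attribute names below are made up for the example:

import java.util.HashMap;
import java.util.Iterator;
import java.util.Map;

import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

import org.springframework.web.servlet.ModelAndView;
import org.springframework.web.servlet.handler.HandlerInterceptorAdapter;

//illustrative sketch: moves "flash:" model attributes into the session after a request,
//and exposes them as request attributes on the following request before removing them
public class FlashScopeInterceptor extends HandlerInterceptorAdapter {

    private static final String FLASH_PREFIX = "flash:";
    private static final String SESSION_KEY = FlashScopeInterceptor.class.getName() + ".FLASH";

    @SuppressWarnings("unchecked")
    public boolean preHandle(HttpServletRequest request, HttpServletResponse response, Object handler) {
        HttpSession session = request.getSession(false);
        if (session != null) {
            Map<String, Object> flash = (Map<String, Object>) session.getAttribute(SESSION_KEY);
            if (flash != null) {
                //expose the previous request's flash attributes, then consume them
                for (Map.Entry<String, Object> entry : flash.entrySet()) {
                    request.setAttribute(entry.getKey(), entry.getValue());
                }
                session.removeAttribute(SESSION_KEY);
            }
        }
        return true;
    }

    @SuppressWarnings("unchecked")
    public void postHandle(HttpServletRequest request, HttpServletResponse response, Object handler, ModelAndView modelAndView) {
        if (modelAndView == null) {
            return;
        }
        Map<String, Object> flash = new HashMap<String, Object>();
        Map<String, Object> model = modelAndView.getModel();
        for (Iterator<Map.Entry<String, Object>> it = model.entrySet().iterator(); it.hasNext();) {
            Map.Entry<String, Object> entry = it.next();
            if (entry.getKey().startsWith(FLASH_PREFIX)) {
                //strip the prefix and move the value out of the model into flash scope
                flash.put(entry.getKey().substring(FLASH_PREFIX.length()), entry.getValue());
                it.remove();
            }
        }
        if (!flash.isEmpty()) {
            request.getSession(true).setAttribute(SESSION_KEY, flash);
        }
    }
}

An interceptor like this would be registered on the handler mapping; the actual extension hooks into AnnotationMethodHandlerAdapter instead, but the idea of routing "flash:" attributes through the session is the same.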

No subclass hooks for manipulating model

When I first started with the annotation-based controller I found it a little frustrating that there were no subclass hooks in the provided AnnotationMethodHandlerAdapter for manipulating the model. The only places you can do this are in the mapped request methods themselves and in special @ModelAttribute methods, which also live in your controllers. An example is below:

@ModelAttribute("command")
public PostProductsForm getPostProductsForm() {
return new PostProductsForm();
}
 

I'm not sure whether this is still a problem, because I have since found quite acceptable workarounds for the problems I was trying to solve, without having to resort to such a technique. Nevertheless, it does strike me as a sensible thing to be able to do, provided it is done in a well-defined and controlled way.

Concluding Remarks

Spring MVC annotations have added a great deal of convenience to Spring MVC without sacrificing any of the flexibility which has always been its true strength. It's not perfect, but with a few fairly minor extra features it is easy to use and very productive to work with. And of course - here comes the obligatory shameless plug! - it works even better when you use it with Impala.

Monday, May 4, 2009

Why developers don't just jump at OSGi

On paper, the choice to use OSGi should be an easy one. After all, OSGi offers an escape from the "jar classpath hell" that Java developers have been living with for years. The ability to build systems from modules that can be composed and loaded dynamically promises a solution to a range of important problems that enterprise developers have been grappling with, unsuccessfully, for years.

Yet the take-up of OSGi has been quite slow. I was curious enough the other day to take a look on Jobserve at job postings requiring OSGi, and I found only a handful of positions available. While there seems to be an inexorable movement towards OSGi, driven in particular by application server vendors (who, remember, also drove the adoption of the now infamous EJB 1 and 2) and a few evangelists, we are yet to see a groundswell of enthusiasm from the mainstream developer community of the kind we saw with, for example, technologies like Grails and, before it, Spring.

This is a shame, because I believe the ideas that underpin OSGi are fundamentally important to writing flexible systems which can remain manageable as they grow in size and complexity.

I'd like to comment on some of the reasons why OSGi has still not taken off in a way which cements its role as the foundation for Enterprise Java applications, and also to explain why I haven't used OSGi as the basis of Impala. My intention is not to spread FUD, but to identify some of the perceptions (and potential misconceptions) held about OSGi and to give my interpretation of the extent to which they are justified.

Some people just don't "get it"
Not everybody thinks that the idea of partitioning an application into modules is a good one. Some developers are happier just to lump all classes together under a single source directory, and don't see how an application can benefit from modules. Maybe they haven't worked on projects that really require this kind of partitioning, or have suffered from a botched attempt to modularise an application. Clearly, these developers are not going to be early adopters of OSGi or, for that matter, a technology like Impala.

OSGi won't really help the productivity of my development
Clearly, there is more work involved in setting up an OSGi application than a regular Java application. You need to ensure that all your jars, both for your application and for the third-party libraries it uses, are OSGi compliant. For your application's jars, you'll be responsible for the bundle manifest yourself, making sure that its content fits in with the structure and organisation of your application (a sketch of what such a manifest looks like is shown below). You'll definitely want some tool to make this job easier. Also, you'll have to source third-party libraries which are OSGi compliant, or, in the worst case, add the necessary OSGi metadata yourself.
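For readers who haven't seen one, the OSGi metadata in question lives in the jar's META-INF/MANIFEST.MF and typically looks something like the following - the bundle and package names here are purely illustrative:

Bundle-ManifestVersion: 2
Bundle-SymbolicName: com.example.orders
Bundle-Version: 1.0.0
Export-Package: com.example.orders.api;version="1.0.0"
Import-Package: org.apache.commons.logging,
 com.example.customers.api;version="[1.0.0,2.0.0)"

Keeping the Import-Package and Export-Package entries accurate by hand quickly becomes tedious, which is exactly where tooling earns its keep.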

The productivity advantages of dynamically updatable modules will probably kick in at some point, but not until you have a smooth running development environment set up. You can accelerate this process with the help of an OSGi-based framework such as SpringSource's dm Server, or ModuleFusion.

While OSGi will undoubtedly help you write better and more flexible applications, you don't get many wild claims that OSGi will allow you to build your applications dramatically faster. Developers who come to OSGi with those kinds of expectations will probably be disappointed.

OSGi requires a complex environment
Enterprise Java has a reputation for being complex. Not only do you need to know the Java language, you need to know Spring, Hibernate, web technologies, relational databases, and so on. You need to know all sorts of related technologies: test frameworks, Ant or Maven, and more. And this is just to write traditional Java applications.

To write OSGi-based Enterprise applications, there is much more to know. You'll need a good conceptual understanding of how OSGi works - both in the way that it manages class loaders and in the way services are exported and consumed. Not Java 101 stuff. You'll also need a practical understanding of the idiosyncrasies of your own OSGi environment. There will be differences in the way you build and deploy applications, and in the way you manage the tools and runtime, depending on which OSGi containers and helper frameworks you use. You won't need to be a rocket scientist to figure this all out, but you will need some time, patience and experience. The wave of books coming out on OSGi will definitely help, but don't expect the junior members of your team to be able to jump straight into an OSGi project and hit the ground running.

How do I know it will all work?
Some people might be put off OSGi by lingering fears that they will run into difficulties getting their applications to work in an OSGi environment, especially those with large existing code bases.

Some of the most commonly used frameworks out there are not very OSGi-friendly, typically either because they are designed and packaged in a not very modular way, or because they use class loaders in a way which does not align with the OSGi specification, for example, by using the thread context class loader to load classes. Naive use of these libraries in an OSGi environment will lead to unexpected problems.

You'll need to find a way to work around these issues. The hard way will be to try to do it yourself. The easy way will be to rely on a packaged OSGi solution, again such as dm Server or ModuleFusion. But remember, even here, there are trade-offs. In the case of the dm Server, you'll be very closely tied in to SpringSource as a vendor, and with ModuleFusion, you may need to accept a technology stack which does not include your favourite frameworks.

OSGi applications are difficult to test
This, in my opinion, is a real Achilles heel of OSGi. Because OSGi applications need to run in an OSGi container with class loading managed in a very specific way, you cannot run low-level integration tests without the tests themselves running in a container. This makes testing OSGi applications particularly challenging.

The only serious attempt I am aware of to address this problem is the Spring Dynamic Modules test framework, which dynamically creates a test bundle using your application code, launches an OSGi container, deploys the test bundle to the OSGi container (as well as the bundles you need to test plus some infrastructure bundles), and runs your test code. It's not especially pretty, but there's no substitute for real integration tests as opposed to unit tests or tests using mock objects.

For me, ease of testing is of fundamental importance in choosing technologies - it is a large part of the reason for the emergence of Spring. I certainly have no appetite for a return to the days of EJB 1 and 2, when applications could only be tested in a container.

Some concluding remarks

Let me make my position clear. I am not an OSGi evangelist. I prefer to think of myself as OSGi-neutral. I have deliberately chosen not to base Impala on OSGi, but I have designed it in a way which accommodates OSGi - indeed I even have a working example of Impala running on OSGi. As OSGi gains traction - and if users demand it - Impala will provide much better support for OSGi, and even offer a simple migration route to OSGi which users can choose to adopt on a per-project basis.