DroidCon Berlin – 2013

Last week I went to DroidCon, a community-driven conference. The venue was the Kosmos (in former East Berlin), a small but nice conference center. There were over 500 attendees. The event started on Sunday with a Hackathon session (with some predefined topics). I did not participate in the hackathon because I still had some last-minute changes that needed to be made to the NFC app.

On Monday there was a BarCamp. The idea behind a BarCamp is that at the beginning of the day the attendees propose some talks, which then get scheduled throughout the day. A lot of people present something they learned while creating their apps.

Tuesday and Wednesday were the real conference days. The DroidCon sessions were 30 minutes long, which had its good and bad sides. The good: when a session was bad, it only took half an hour. The downside was that the really good talks were too short.

My personal highlights:

  • New Android Build System: This talk was given by the founder of Gradleware, the company behind Gradle. The talk confirmed that it was actually a good idea to look at Gradle for our builds.
  • ProGuard: ProGuard is the optimizer and obfuscator provided with the Android SDK. An interesting talk about the tool and why you should use it, even when you don't care about obfuscation.
  • RoboGuice: Dependency injection for Android. Now we can use the same injection mechanisms as we do for Java Enterprise applications.
  • Robolectric: Robolectric is a JUnit runner that lets you run your unit tests without starting the emulator. But we also learned to look out for the pitfalls (it's not for everything).

In general I was quite happy, but I think the percentage of good talks should be a bit higher: about 50% of the talks I followed were good or OK. All in all I was happy that we went, because the really good talks compensated for the bad ones. Next time I'll need to prepare my own talk for the BarCamp though.


Trouble with the clock

You know what they say: time is money. But for us engineers, time also needs to be correct. When it isn't, you lose a lot of it, and thus money. Trouble with time can even pop up in unexpected places: your lab.

The lab is a playground for software engineers where they get things done. It is viewed by corporate IT as a necessary evil and blocked off from the rest of the world. IT will generally give little to no support for machines in the lab, so the engineers are left maintaining the machines themselves. Although they are creative, they are not the best operators.

The latest thing I came across in our lab was trouble with the clock. Over the last months I noticed that, when we had rapid build/test cycles, the latest build was sometimes not picked up by the machines we were running the tests on. In general it was OK, but when you needed it the most it sometimes failed to pick up the most recent build.

We have a fairly complex setup with different servers and agents. Our Bamboo CI server pushes our artifacts to the repository. On our build agents we let Maven poll the repository for the latest artifacts. The Maven parent POMs on our build agents are slightly different from the POMs that the developers have on their machines: the agent POMs allow getting SNAPSHOT builds, so we can move our not-yet-released artifacts to our agents for further processing and testing.

Over the past months the problem got worse, so it was time to start investigating. The cause was quickly found: clock drift. Because our agents were in an IP range that had no access to the internet, the default time sync servers were not reachable. In about half a year we had a drift of about 30 minutes. So if we had a build/test cycle in that 30-minute slot, we were testing an older release. Let's go over what happened:

Cycle 1:

  1. Bamboo Server (14:30) -> Repo (14:30)
  2. Agent (14:35 + 30m drift) asks for a new artifact: "my last is from 9:12"
  3. Repo: "I have a newer file (14:30)"
  4. Agent downloads, and saves to disk (14:35 + 30m drift = 15:05)

Cycle 2:

  1. Bamboo Server (14:45) -> Repo (14:45)
  2. Agent (14:50 + 30m drift) asks for a new artifact: "my last is from 15:05"
  3. Repo: "my file is older, it's from 14:45"
  4. Agent uses the previous, older version
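The two cycles above can be sketched in a few lines of Java. This is a simplified model of the newer-than timestamp check, not Maven's actual code:

```java
import java.time.Duration;
import java.time.LocalTime;

public class ClockDriftDemo {
    // the agent's clock runs 30 minutes fast
    static final Duration DRIFT = Duration.ofMinutes(30);

    // Download only when the repository copy is newer than the local one.
    static boolean shouldDownload(LocalTime repoTime, LocalTime localTime) {
        return repoTime.isAfter(localTime);
    }

    public static void main(String[] args) {
        // Cycle 1: repo artifact from 14:30, agent still has a 9:12 copy
        LocalTime repo = LocalTime.of(14, 30);
        LocalTime local = LocalTime.of(9, 12);
        System.out.println(shouldDownload(repo, local)); // new build fetched

        // ...but the agent stamps the file with its drifted clock: 15:05
        local = LocalTime.of(14, 35).plus(DRIFT);

        // Cycle 2: a newer artifact lands at 14:45, yet 14:45 < 15:05
        repo = LocalTime.of(14, 45);
        System.out.println(shouldDownload(repo, local)); // stale build reused
    }
}
```

The second check returns false, so the agent happily keeps the older artifact, exactly the failure we saw.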

The biggest problem was that this went unnoticed for a long time, because it only occurred when a new cycle was started on the same agent (we have multiple agents) within the slot created by the clock drift.

The solution is simple though: use one of your servers that does have access to the standard time servers, and use that as a delegate for the sync requests.

# if not yet installed
sudo yum -y install ntp
# sync the time from your internal NTP server
sudo /usr/sbin/ntpdate -v
# edit the ntp config and start/restart the NTP daemon
sudo vim /etc/ntp.conf # set server to
sudo /etc/init.d/ntpd start
sudo /sbin/chkconfig ntpd on


Making sure that the clocks of your machines are in sync is not only a matter for the production servers; it's also important in your lab. It would be easier if IT didn't disconnect the lab from the rest of the world, but that is just reality. We added a new item to our agent build checklist: make sure the machine is synced automatically with a time server it can access. It could be an interesting feature for our Atlassian Bamboo CI server though: send a notification to the admins when the agents have clock drift.

java, software engineering

Aggregated code coverage using Maven, Clover and Bamboo

Finally we got it working… We wanted to know what our total test coverage on our product was, after all the effort we put in over the last year adding new tests. It wasn't easy, because once you leave the path of simple unit testing you enter the terrain of multi-server setups. With multiple servers it's not easy anymore to collect all your metrics and get a decent report. But we pulled it off, so here's our story.

We're using the complete Atlassian tool chain, so this story involves their tools: Clover (the code coverage tool), Bamboo (the continuous integration server) and Maven (as our build tool). Let's start with Clover. Clover is quite a clever coverage tool. It integrates into your build tool to create a special version of your product: it changes the source by injecting code to do the instrumentation. At the same time it builds a database with a collection of all the methods that are instrumented. With the special build of your product and the database, it collects metrics while your tests are running. So with a single Maven command line we had a complete coverage report of our unit tests.

But the tight integration with our build tool comes at a price. That price is not knowing what magic is going on behind the scenes. And you need to know how everything fits together whenever you want to get more out of your tools. And that's what we wanted… Let's start at the beginning and build our instrumented server.

Building the instrumented server

As I said, the Clover plugin for Maven does a lot of magic, and after a normal Clover Maven build you get a nice report of the code coverage of your unit tests. Getting this into Bamboo (our integration server) is easy: start a source code checkout, do the Clover Maven build as you would on the command line, and publish the report as an artifact. Set this up first before you go any further. This will be the basis for our next phase and will already provide some useful insights. Then it's time to think about what we need if we want to get metrics from our server running on different machines. Let's have a look at the components.
  • Our instrumented product. That's the easy part: the instrumented server will be located at the usual place in your Maven project after a build. Just make sure you don't deploy this special version to your local or remote repository, so limit yourself to package (this still runs the unit tests).
  • This special build needs the clover.jar to be present on the application server. In our JBoss server it's enough that it's present in the lib folder. Make sure to look at the server logs when you deploy your EARs. If you find Clover-related NoSuchMethod exceptions, the jar is in the wrong place.
  • The clover.db is the last thing we need to get on the server; without this database you will not get any metrics.
  • The last component we need to worry about is how to get the instrumentation metrics in a consistent manner, so we can easily get them from the remote server back to our Bamboo agent for later processing.

Looking at the list, the biggest problem is the clover.db: by default each Maven module within a Maven project has its own database. This would be a pain if we wanted to distribute those to a different server. Luckily we can force Clover to build a single database by providing a path to the database. This had the strange effect that the report was generated in an unstable location (it's actually the last module in the Maven reactor), but you can force the report path as well. Here is an example of the modification in our parent POM:


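A minimal sketch of what that parent-POM modification could look like. The plugin version and the exact database path are assumptions; the clover-db directory is the symlinked shared location used by the scripts below:

```xml
<plugin>
  <groupId>com.atlassian.maven.plugins</groupId>
  <artifactId>maven-clover2-plugin</artifactId>
  <version>3.1.11</version>
  <configuration>
    <!-- one shared database instead of one per module -->
    <singleCloverDatabase>true</singleCloverDatabase>
    <cloverDatabase>${user.dir}/clover-db/clover.db</cloverDatabase>
  </configuration>
</plugin>
```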
The most important thing to note here is the database path. This is a shared location we created that is available on each Bamboo agent. The user the agents are running as needs access to that location, so don't forget to set the correct access rights. This location is outside of the normal Bamboo working directory because we're going to manage it ourselves. It's that working location that we are going to replicate on our test servers as well.

To create the first job of our master plan, we started out with our normal Maven Clover build. The build, with our adaptations, already produces the database, the instrumented server and the metrics of the unit tests. Now we only need to add scripts to manage our central location. A cool feature of Bamboo is its task infrastructure. A job can comprise different tasks; while jobs can run in parallel, tasks run sequentially within a job. A lot of task types are provided, or you can write your own. One of the tasks used here is the inline script task. It is very useful for prototyping your CI build, and you can still decide later to put the script in your source repository (Bamboo is able to do multiple source checkouts within one job), as the checkout is modeled as a task. Our pre-Clover Maven build prepares our shared location, basically cleaning up leftovers and recreating the structure we need.

rm -rf /opt/work/clover/example/db
mkdir -p /opt/work/clover/example/db
rm -f clover-db
ln -s /opt/work/clover/example/db clover-db

The next task in our “Build Instrumented Server” job is running Maven. Always start by cleaning everything, followed by clover2:setup, which builds the database and modifies the source code. Then do a package, which builds the server, runs the unit tests and outputs the metrics. Finally, clover2:clover generates the report for the unit test part.

clean clover2:setup package clover2:clover -Dclover.reportPath=${bamboo.build.working.directory}

The nice side effect of having the single Clover db is that all the metrics are saved at that location as well. That makes it easy to create the post-execute script. The script is another inline Bamboo task that removes the database and tars the data into a single file.

# Remove old data
rm -f clover-instr.tar.gz
rm -rf clover-instr
# Copy from symbolic link, and delete clover.db
cp -RL clover-db clover-instr
rm -f clover-instr/clover.db
# Archive and compress
tar cvzf clover-instr.tar.gz clover-instr/

Finally we publish the clover database, instrumented ears and the metrics as a shared artifact that will be used in a later stage.

Running the integration tests

Stage two in our plan is running the different test suites. The different suites are jobs grouped together in a single stage. That makes it possible to run the tests concurrently on different agents. Building the job for the integration tests can get a bit tricky. Not only are there two machines that come into play, now we need to use the instrumented server and the Clover database as well.

As the clover.db and the instrumented server were published in the previous stage, they are available to all subsequent stages. We only need to specify the artifacts we are interested in and the location where we would like to have them. Once we have our artifacts we can use them in our tasks. Here is the list of tasks we have in one of our jobs:

  1. Checkout of the server. Needed for generating the scoped report after the run.
  2. Checkout of the test suite, in a sub-directory so it doesn't conflict with the server.
  3. Inline script to clean up old data, create a mirror of our working directory on the remote server, and install the instrumented server and the clover.db on the remote server.
  4. Maven build that runs the integration tests.
  5. Inline script to get the metrics back from the remote server to our agent.
  6. Maven build that uses the metrics and our server source checkout to generate a report (scoped to these tests).
  7. Script to remove the clover.db from our working directory and pack the metrics data in a tar (just like in our server build).

Quite a number of tasks. Let's show one of them in detail. The inline script we're showing here is the script that prepares everything and sends everything to the remote server.

# Remove all old Clover data
rm -rf /opt/work/clover/example/db
mkdir -p /opt/work/clover/example/db

# Recreate symbolic links
rm -f clover-db
ln -s /opt/work/clover/example/db clover-db

# Unpack clover database
cp ext/clover-db/* clover-db

# Prepare and send instrumented server
rm -rf deploy
mkdir -p deploy
find ./ext/server -iname 'example*.ear' -exec cp \{\} deploy \;
tar c deploy | ssh -i /opt/secure/qa-fat.private root@qa-fat tar x -C /opt/example/server/default/

# copy clover db
ssh -i /opt/secure/qa-fat.private root@qa-fat rm -rf /opt/work/clover/example/db
ssh -i /opt/secure/qa-fat.private root@qa-fat mkdir -p /opt/work/clover/example/db
scp -i /opt/secure/qa-fat.private /opt/work/clover/example/db/clover.db root@qa-fat:/opt/work/clover/example/db/

The rest of the scripts are trivial: just collect the data from the remote server and continue. Don't forget to set up your remote server by adding the clover.jar to the libraries of your app server, and add a VM parameter to tell Clover where to find the database: -Dclover.initstring.basedir=/opt/work/clover/example/db. The other test suites have similar setups, so we're thinking about adding the scripts to a small git repo so they can be reused in different jobs, with an extra checkout.

Creating the aggregated report

The final stage is not that difficult. In this stage we want to collect all the information gathered throughout the previous stages. Again we use the artifacts shared by the previous stages, put them in the desired place and start a script to unpack them alongside the Clover database. Because the metric data file names are always unique, they will not clash with the files produced on the other machines. So we only need to place them in one place and start the Maven Clover report build again, but now on all the metrics (unit, api and ui tests). Here's an example of what the unpack script could look like:

# Remove all old Clover data
rm -rf /opt/work/clover/example/db
mkdir -p /opt/work/clover/example/db

# Recreate symbolic links
rm -f clover-db
rm -f clover-instr
ln -s /opt/work/clover/example/db clover-db
ln -s /opt/work/clover/example/db clover-instr

# Unpack instrumentation data
cp ext/clover-db/* clover-db
tar xvzf ext/instr/unit/clover-instr.tar.gz
tar xvzf ext/instr/api/clover-instr.tar.gz
tar xvzf ext/instr/ui/clover-instr.tar.gz


Creating this plan took a while, but I think the information you can collect from the code coverage is worth the cost. With this information we can adapt our test plan to write extra tests for the code that isn't covered but we thought was. A special thanks to Francois and Bert for dedicating some of their time to make this plan possible.

java, software engineering

Evolve your REST representation with a MessageBodyWriter

A lesser-known component of the JAX-RS spec is the MessageBodyWriter. That's a pity, because it's one of the most important building blocks if you plan on evolving the representation of your REST resources. A MessageBodyWriter will also keep the representation logic out of your annotated web-resource classes.

Built-in support

A JAX-RS implementation already supports a number of conversions from types to different representations. The classes responsible for the conversions are all implemented as MessageBodyWriters and Readers. Here are a few:

  • byte[]
  • java.lang.String
  • javax.xml.transform.Source
  • javax.xml.bind.JAXBElement
  • MultivaluedMap

It's already quite a rich set of types that is supported by default. For simple REST interfaces this will probably suffice. The support for JAXB is especially popular for creating XML representations. A DTO/JAXB implementation could look like this:

@GET
@Path("companies/{companyId}")
public XmlCompany getCompany(@PathParam("companyId") String companyId) {
  return XMLAdapter.toXML(companies.getId(companyId));
}

@POST
@Path("companies")
public Response createCompany(@Context UriInfo uriInfo, XmlCompany jaxbCompany) {
  Company company = XMLAdapter.fromXML(jaxbCompany);
  company = service.create(company);
  ResponseBuilder responseBuilder =
      Response.created(uriInfo.getAbsolutePath().resolve(company.getId()));
  return responseBuilder.entity(XMLAdapter.toXML(company)).build();
}

But if your interface evolves, so will your representation. At some point in time you will even need to support more than one representation, because you have customers who still rely on the old one.

The REST style

Before looking at the implementation we should have a mechanism to make a distinction between the different representations. This can be done by putting our version number in the MIME type. MIME types allow for a vendor namespace where we can create all our private representations. This is what the MIME type could look like:
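Following the scheme described in the next paragraph, a plausible form would be (the dot separators are an assumption):

```
application/vnd.vanboxel.labs.app.v1.company+xml
```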


You are free to define the structure between the application/vnd. and the +xml. Here I've created my private namespace vanboxel.labs for my application app, version v1 and an entity of type company. You can be even more fine-grained and version your individual entities as well, but this adds to the complexity. And finally, don't forget to finish your MIME type with the real format; this could be +xml, +json or something else.

A reflex now is to add new methods to handle the new conversions, and add the MIME type to the @Consumes and @Produces annotations in your resource class. But remember that by doing this you will be multiplying the methods by the number of representations. This will just add clutter to the classes. You should keep the representation logic out of the resource classes; let them handle the paths, caching, ETags, etc…


What you could do is write a MessageBodyWriter that handles the conversion to the different representations (be it format or version). Writing one is easy: you need to implement the three methods of the interface. The writeTo method is obvious: it will be called with the object returned by the resource class, and here you do the actual conversion. The getSize method is called before the writeTo method, to determine the size up front. If you don't know the size before the conversion, just return -1.

Now the most interesting method is isWriteable. Here you write the code that detects whether this is the correct writer for the object. The JAX-RS implementation will iterate over each known MessageBodyWriter and call the isWriteable method to determine if the class is capable of converting the object.

public class XMLMessageBodyWriter implements MessageBodyWriter<Object> {

  public long getSize(Object object, Class<?> clazz, Type type,
      Annotation[] annotations, MediaType mediaType) {
    return -1; // size is unknown before the conversion
  }

  public boolean isWriteable(Class<?> clazz, Type type,
      Annotation[] annotations, MediaType mediaType) {
    // only our own DTOs, and only our vendor MIME types
    if ("be.vanboxel.labs.app.dto".equals(clazz.getPackage().getName())
        && mediaType.getType().equals("application")
        && mediaType.getSubtype().startsWith("vnd.vanboxel.labs")) {
      return true;
    }
    return false;
  }

  public void writeTo(Object object, Class<?> clazz, Type type,
          Annotation[] annotations, MediaType mediaType,
          MultivaluedMap<String, Object> map,
          OutputStream out) throws IOException, WebApplicationException {
    if (object instanceof Company) {
      marshal(out, XMLAdapter.toXML((Company) object));
    } else if (object instanceof Customer) {
      marshal(out, XMLAdapter.toXML((Customer) object));
    }
  }
}
In this example we use only two of the supplied parameters to decide if the class is capable of the conversion. The first is the class of the object: we are only interested in converting our own DTO classes. They are all packaged together, so the package name is verified. The other parameter we check is the MIME type: here we check if it's the private namespace, which includes our version number.

Now the only thing left is to remove the conversion code from the methods in the resource class and add the @Produces annotation with the MIME type.

@GET
@Produces("application/vnd.vanboxel.labs.app.v1.company+xml")
public XmlCompany getCompany(@PathParam("companyId") String companyId) {
  return companies.getId(companyId);
}

Say the REST interface evolves to a newer and richer representation that is not compatible with the current one, and adds a JSON representation on top. We only need to write the two new MessageBodyWriters (and MessageBodyReaders) for the new XML representation and JSON. Once the classes are written, you only need to add the @Produces and @Consumes MIME types to the resource classes.

@Produces({ "application/vnd.vanboxel.labs.app.v1.company+xml",
            "application/vnd.vanboxel.labs.app.v2.company+json" })
public XmlCompany getCompany(@PathParam("companyId") String companyId) {

If you have already written a bunch of adapters to convert your DTOs to your XML representation, you could consider making them MessageBodyWriters/Readers by implementing the interfaces there.

java, software engineering

JAX-RS: Life-cycle, part 1

I know, we all do it. We start to play with new technology, start with some example, take shortcuts, read blogs, google when a problem pops up… But what we almost never do is read the spec. Well, working in a company where we make products that comply with specs, I learned a few things about those specs. For starters, they are almost always boring, but they are full of useful information.

So to prepare for a presentation about JAX-RS, I picked up the spec and read it. And as expected I found lots of interesting stuff, and I now understood why, while preparing the examples, I had some problems. So I decided to take some of the concepts found in the spec and use them in my presentation.

Well, now my presentation is over, I will try and share some of it here on my blog. So, to start off: JAX-RS is the latest addition to the Java EE 6 spec for building RESTful Web Services. But if you're reading this you probably already know that, and this article is not an introduction but a look at the internals of the specification.

Let's have a look at the life-cycle of a Resource in JAX-RS. Understanding the life-cycle will help you troubleshoot problems in your application. So what is the life-cycle of a simple resource? An example:

@Path("lifecycle/constructor/{segment}")
public class LifeCycleResource {
  private String _segment;
  private int _qPage;
  private int _qEntries;

  @HeaderParam("Accept") private String _accept;

  public LifeCycleResource(
      @PathParam("segment") String segment,
      @QueryParam("page") int page,
      @QueryParam("entries") @DefaultValue("10") int entries) {
    _segment = segment;
    _qPage = page;
    _qEntries = entries;
  }

  @GET
  public String get() {
    String val = " segment: " + _segment + "\n"
      + " start: " + _qPage + "\n"
      + " entries: " + _qEntries + "\n\n"
      + " Accept: " + _accept + "\n";
    return val;
  }
}

OK, we have our example. Now let's look at the spec and its description of the life-cycle.

  1. Paths are matched with Resource classes.
  2. Constructor is called.
  3. Dependencies are injected.
  4. Appropriate method is called.
  5. Resource is garbage collected.

So when accessing the resource through the URI web-app/lifecycle/constructor/something?page=2, we know the following steps are happening:

  • All classes with a @Path annotation are searched for the matching path. Our example Resource will be found, because lifecycle/constructor/something is the closest match to lifecycle/constructor/{segment}.
  • Next the constructors of the class are scanned for the closest match, we have one, so that was easy.
  • Now the class is instantiated with that constructor, looking at the annotations on the parameters and filling in the appropriate values. In this case a part of the path goes to segment, the query parameter to page, and because entries has a default value it will get the value 10.
  • Now the resource is constructed, and the injection can begin. It scans the fields for known annotations and initializes those fields (note: remember this point).
  • And finally the methods are scanned for annotations (another path annotation or the HTTP verbs). Here the get method is found, because it is annotated with @GET, and that one is called. It will return a value and that value will be sent to the client.
  • The resource is de-referenced and is prepared to be garbage collected.

We've gone over the sequence of one call from the client. What can we tell about what is safe? Well, it's safe to use the fields, without worrying about concurrent calls, except static fields. This shows that each resource doesn't match a servlet; rather, we have one servlet that manages the life-cycle of these resources.

But there is a gotcha as well, and that is the injection. Because the injection happens after the object is constructed, you can't use injected fields in the constructor. In our example we can't use the _accept field there, because at that time it is still null. So don't wonder why you get NPEs when you use them there. You can use them afterwards, when the framework calls the method.

That’s enough for now, in the next article I’ll look at sub-resources.

java, software engineering

Writing a basic Service Provider loader

Last week I was at the Devoxx conference, following the talk about the new features of Java EE 6. One of the new features is that a framework can now implement a few Service Provider Interfaces and the application server will register it automatically. No hassle with adding the configuration information to the web.xml; just add the framework's jar to your webapp.

Well, it inspired me to use the same kind of mechanism in my benchmark framework. Why should I have to specify a class name in my scripts when I could let the framework discover all the providers by implementing the standard Service Provider mechanism?

Since most people don’t come in direct contact with Service Provider interfaces, it’s not a well understood concept. That’s a pity, because it’s an easy way to implement extensible applications.

My framework needs to run on J2SE 1.5 onwards, so I can't use the ServiceLoader utility that is available in 1.6. So I decided to write a basic service provider loader from scratch. And now it's time to dissect the loader.

But before looking at the loader, let's take a peek at what it takes to create an actual Service Provider. Not much, actually: you need an interface that describes the provider (java.sql.Driver is a famous example) and an implementation of that interface that is the actual provider. If you installed JavaDB with your JDK you can take a look at “derbyclient.jar”, which is an actual service provider, in this example the Derby client JDBC driver. The class that implements the Driver interface is org.apache.derby.jdbc.ClientDriver.

Now the only thing left to do to make this a real service provider is adding one little file. The filename is the name of the interface, and the content is the name of the class implementing that interface.

The location is also important: META-INF/services/. In our example we have a file named java.sql.Driver with the content org.apache.derby.jdbc.ClientDriver.
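So inside derbyclient.jar the layout looks like this:

```
derbyclient.jar
└── META-INF/
    └── services/
        └── java.sql.Driver   (contains the line: org.apache.derby.jdbc.ClientDriver)
```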

Now, having everything in place, we can start implementing our loader. It will be a basic loader that starts its search and loads the Class for the requested provider at construction time. Let's make it Iterable, so we can use it in the enhanced for loops available since SE 5.

public class SPILoader implements Iterable<Class> {

  private LinkedHashMap<String, Class> _spis = new LinkedHashMap<String, Class>();

The following code does the actual heavy lifting. Let’s go over it step by step. I prefer calling the constructor with the actual interface, in our example java.sql.Driver.class.

Once we have its canonical name we can start searching every jar or path known to the current classloader for instances of the file META-INF/services/java.sql.Driver (the getResources call).

The only thing left to do is enumerate over every file and create a Class from its content (the Class.forName call), and store it in our HashMap.

  /**
   * Try to load all known provider classes.
   * @param spiClass
   * @throws Exception
   */
  public SPILoader(Class spiClass) throws Exception {
    this(spiClass.getName());
  }

  /**
   * Try to load all known provider classes.
   * @param spiClass
   * @throws Exception
   */
  public SPILoader(String spiClass) throws Exception {
    ClassLoader classLoader = getClass().getClassLoader();
    Enumeration<URL> resourceEnumeration =
        classLoader.getResources("META-INF/services/" + spiClass);

    while (resourceEnumeration.hasMoreElements()) {
      URL url = resourceEnumeration.nextElement();
      Reader reader = new InputStreamReader(url.openStream());
      char[] line = new char[1024];
      int length = reader.read(line);
      reader.close();
      if (length <= 0) {
        throw new RuntimeException("Error loading SPI: Error loading service file");
      }
      // copy the class name (identifier characters and dots) into the buffer
      StringBuffer buffer = new StringBuffer();
      for (int ix = 0; ix < length
          && (Character.isJavaIdentifierPart(line[ix]) || line[ix] == '.'); ix++) {
        buffer.append(line[ix]);
      }
      // Class.forName throws ClassNotFoundException when the class is missing
      Class spi = Class.forName(buffer.toString());
      _spis.put(buffer.toString(), spi);
    }
  }

Next up is writing a method to get one of our discovered providers. I needed one that returns only a single provider, so I opted to write one that accepts a list of preferred providers, if they are available. I have my favorite providers, you know 🙂

  /**
   * Get a SPI provider class, optionally providing an ordered list of preferred providers.
   * @param preferred
   * @return the first preferred provider found, any known provider otherwise, or null
   */
  public Class get(String... preferred) {
    if (preferred != null) {
      for (String p : preferred) {
        Class clazz = _spis.get(p);
        if (clazz != null) {
          return clazz;
        }
      }
    }
    Iterator<Class> iterator = _spis.values().iterator();
    if (iterator.hasNext()) {
      return iterator.next();
    }
    return null;
  }

To top the loader off, let's implement the iterator method so we can loop over all discovered classes.

  /**
   * Iterate over the known provider classes.
   */
  public Iterator<Class> iterator() {
    return _spis.values().iterator();
  }
}

Now it’s time to use our loader, with some examples. Let’s search for all JDBC drivers that are available in the current classpath.

    SPILoader jdbcDrivers = new SPILoader(java.sql.Driver.class);
    for (Class<?> driverClass : jdbcDrivers) {
      System.out.println(driverClass);
    }

If you added the derbyclient.jar mentioned at the beginning of this post to the classpath, the code should output something like this:

class sun.jdbc.odbc.JdbcOdbcDriver
class org.apache.derby.jdbc.ClientDriver

And if you're only interested in one provider (for example a SAX parser, where you only need one), use the get method.

    SPILoader loader = new SPILoader(labs.SPIClass.class);
    Class<?> clazz = loader.get("labs.SPIImpl1", "labs.SPIImpl2");
    if (clazz != null) {
      // create an instance of your favorite SPI implementation
      labs.SPIClass inst = (labs.SPIClass) clazz.newInstance();
    }

If you are lucky enough to have J2SE 6 as your minimal platform, you have java.util.ServiceLoader available to you. I just don't understand why it took until SE 6 to have such a utility class in the JDK. Anyway, I hope this walkthrough gives you some insight into how auto discovery works for service providers. Have fun writing your own.