Saturday, 7 November 2009

Introducing Taban - The JSON REST Database for OSGi

I've been toying for a while with the idea that some web based applications (call them RIAs, Web 2.0, whatever) don't really need much in the way of 'server' infrastructure; a lot of the time they just need somewhere to persist their data. It is easy to make this complex, but since the UI often knows the model it wants to use for persistence, it could be a lot simpler. And even when an application does need something complex, rapid development of UI prototypes often excludes the persistence layer anyway.

Here is where I introduce 'Taban - The JSON REST Database for OSGi'. At this stage I don't know if I'll actually use it, or if anyone else will find it useful, but I am also using it to practise my OSGi coding (not that I don't do enough) and to play around with some web client technologies, namely Apache Pivot (I am a PPMC member and minor committer) and GWT (for no other reason than just because, really).

The idea behind Taban is that a web application will simply use HTTP methods (GET, PUT and DELETE) to perform the usual CRUD operations required for persistence, using JSON as the data exchange format. In fact, the default implementation of Taban will pretty much store the JSON as-is into a db4o database, interrogating the JSON only to allow 'indexing' of the objects for the purpose of querying/filtering.

Alternative implementations could convert the JSON to a real object model and store that, again either straight into db4o (which is the absolute easiest option) or, if they really want the pain, in an RDBMS via some ORM technique. Taban will facilitate all these options, but will be fully usable 'out of the box'.

Since REST is more of a concept than a specification I have designed the following approach to URI handling, but please consider that this is still a work in progress:

GET - If the URI ends in a forward slash, the default action is to return a list of children as a JSON array. However, if the URI does not end in a forward slash the JSON content at that location (if it exists) will be returned directly.

PUT - If the URI ends in a forward slash the JSON content will be inserted and an ID will be automatically generated. ID generation is pluggable, but the default will be to use an integer-based auto-increment for that particular URI. If the URI does not end in a forward slash the JSON content will overwrite any content at that address, if it exists, inserting it otherwise.

DELETE - Only URIs without a forward slash are supported by the DELETE method.

POST, TRACE, OPTIONS - Not likely to be used, though I am toying with the idea that a POST might support 'updating' the JSON at that location rather than replacing it. How and what to update doesn't seem trivial to determine at this point, though I have done similar work on Apache Pivot's resource handling, which is JSON based.
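To make that concrete, here's a rough sketch of how a client might talk to Taban once it exists - the host, port and 'notes' collection are made up for illustration:

import java.io.InputStream;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;

public class TabanClientSketch {

    public static void main(String[] args) throws Exception {
        // PUT to a URI ending in a slash - Taban generates an ID for us
        HttpURLConnection put = open("http://localhost:8080/taban/notes/", "PUT");
        put.setDoOutput(true);
        OutputStream out = put.getOutputStream();
        out.write("{\"title\":\"hello\",\"body\":\"world\"}".getBytes("UTF-8"));
        out.close();
        System.out.println("PUT response: " + put.getResponseCode());

        // GET on a URI ending in a slash - returns the children as a JSON array
        dump(open("http://localhost:8080/taban/notes/", "GET").getInputStream());

        // GET on a URI without a trailing slash - returns the JSON stored there
        dump(open("http://localhost:8080/taban/notes/1", "GET").getInputStream());

        // DELETE is only supported on URIs without a trailing slash
        HttpURLConnection del = open("http://localhost:8080/taban/notes/1", "DELETE");
        System.out.println("DELETE response: " + del.getResponseCode());
    }

    private static HttpURLConnection open(String uri, String method) throws Exception {
        HttpURLConnection conn = (HttpURLConnection) new URL(uri).openConnection();
        conn.setRequestMethod(method);
        return conn;
    }

    private static void dump(InputStream in) throws Exception {
        int b;
        while ((b = in.read()) != -1) {
            System.out.print((char) b);
        }
        System.out.println();
        in.close();
    }
}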

In addition to the HTTP methods listed, there will be a number of additional HTTP headers used in order to allow finer-grained control over the response, as well as facilitating filtering and limiting of results.

Taban is (will be) implemented as OSGi bundles and will call out to/depend on a number of OSGi services, namely:
  • Configuration Admin (optional)
  • Event Admin (optional, but will allow other bundles to do stuff with the JSON that is being stored)
  • HTTP Service (required)

It will also depend on:
  • db4o to provide the default implementation of the database, though it will be designed such that any persistence layer could be implemented and replace this dependency
  • Arum Glue, an interface/convention based Inversion of Control and Dependency Injection manager
  • Jackson JSON, which is a fantastic and super-fast JSON library

Finally, securing the database is also pluggable but Taban will not be shipped with an authentication module, at least initially. I envisage it would be very easy to add BASIC or Digest authentication support, but for the purpose of getting Taban off the ground this is a low priority for me.

So, applications served from the web which have a rich client (by 'rich' I mean most or all of the functionality is loaded into the client/browser, as opposed to a click-request-response approach) can use Taban to persist their application model data with very little effort spent setting up 'backend' services, and yet Taban will allow developers to make the server as complex as their business requirements dictate. We make no distinction between these front-end technologies, so long as they can support a full range of HTTP interactions (sorry, Flex people).

At this stage, it is likely to be a programming exercise, so I am very interested to hear from people if they think this is something they might find useful. If there is enough demand I will probably push my work to github.com.

Oh, I should also point out that Taban is a Celtic/Gaelic name that means 'genius' - not blowing my own trumpet, but more of an expression of the simplicity and flexibility of the product.

Wednesday, 28 October 2009

Weak References in Java

There seems to be confusion about when weak references should be used. I'm going to present two cases - the first one is when NOT to use them, the second demonstrates what they should be used for.

Don't use weak references for listener management

So firstly - don't use weak references for handling 'listeners'. Say you have an object that allows listeners to be added to it; it may occur to you that those listeners might never get removed. The reason they don't get removed is either:
a) because they aren't supposed to be
b) the programmer is lazy and/or doesn't understand 'lifecycle management'

It would be easy to think that storing the listener as a weak reference would avoid these potential memory leaks, but what actually happens is that you introduce seemingly random behaviour to your application, since the listener could be garbage collected *at any time*.

The solution is - learn how to clean up after yourself.
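For example, something like this (made-up Ticker/TickListener types; the point is the symmetry of add and remove):

import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

interface TickListener {
    void tick(long time);
}

class Ticker {
    private final List<TickListener> listeners = new CopyOnWriteArrayList<TickListener>();

    void addTickListener(TickListener l)    { listeners.add(l); }
    void removeTickListener(TickListener l) { listeners.remove(l); }

    void fire() {
        long now = System.currentTimeMillis();
        for (TickListener l : listeners) {
            l.tick(now);
        }
    }
}

class TickerPanel implements TickListener {
    private final Ticker ticker;

    TickerPanel(Ticker ticker) {
        this.ticker = ticker;
        ticker.addTickListener(this);    // acquire
    }

    public void tick(long time) {
        // ... update the display
    }

    void dispose() {
        ticker.removeTickListener(this); // release - deterministic, unlike a weak reference
    }
}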

Do use weak references for objects you don't care about

For instance, let's say your application exposes a front end to a database. The user is traversing your application, instantiating objects based on data loaded from the database. In order to speed things up a little you might keep a handle to these objects for future reference, but actually you don't care whether the object is available or not, because you can reload it from the database as and when it is needed again in the future.

What does that sound like ... that's right - a cache!

Weak references are ideal for this scenario.
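A minimal sketch of that idea (Record and loadFromDatabase are placeholders for whatever your data layer provides):

import java.lang.ref.WeakReference;
import java.util.HashMap;
import java.util.Map;

public class RecordCache {

    private final Map<Long, WeakReference<Record>> cache =
            new HashMap<Long, WeakReference<Record>>();

    public Record get(long id) {
        WeakReference<Record> ref = cache.get(id);
        Record record = (ref == null) ? null : ref.get();
        if (record == null) {
            // either never cached, or the GC has reclaimed it - we don't care,
            // because we can always reload it from the database
            record = loadFromDatabase(id);
            cache.put(id, new WeakReference<Record>(record));
        }
        return record;
    }

    private Record loadFromDatabase(long id) {
        // placeholder for the real database lookup
        return new Record(id);
    }

    public static class Record {
        final long id;
        Record(long id) { this.id = id; }
    }
}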


Sunday, 11 October 2009

Google Wave Robot - Twitter Wave Bot

Just a quick one.

I got my Google Wave invite the other day. I'm enjoying it, but it is a little bit too quiet for me to use productively at the moment. When I get some invites myself, I'll send them out to the people who've asked for them.

What I really wanted to post about was the robot I wrote. I was quite impressed by the ease of the API and the integration with Google App Engine, which makes writing a robot really easy to do. My robot, Twitter Wave Bot, takes #hashtags and @usernames and simply links them up to twitter. It took me about 8 hours to implement and you can see the code here:

http://code.google.com/p/twitterwavebot/

To get this robot working, add 'twitterwavebot@appspot.com' to your google contacts, and then add it to any wave and it'll start working for you straight away.
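Stripped of the Wave API plumbing, the heart of the robot is just a text transformation, something along these lines (a simplified sketch with made-up names, not the actual robot code - the real thing works on wave blips):

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class TwitterLinker {

    private static final Pattern HASHTAG = Pattern.compile("#(\\w+)");
    private static final Pattern USERNAME = Pattern.compile("@(\\w+)");

    public static String linkify(String text) {
        // turn #hashtags into twitter search links
        Matcher m = HASHTAG.matcher(text);
        text = m.replaceAll("http://twitter.com/search?q=%23$1");
        // turn @usernames into twitter profile links
        m = USERNAME.matcher(text);
        return m.replaceAll("http://twitter.com/$1");
    }

    public static void main(String[] args) {
        System.out.println(linkify("loving #googlewave, thanks @brindy"));
    }
}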


Wednesday, 19 August 2009

Arum DataEye™ as a Service

I've been fairly busy recently with a couple of exciting activities.

Firstly, in my spare time I have continued to increase my experience with the Android platform; more Android stuff can be found on my Android projects home page. I plan on submitting at least one, possibly two, applications to the Android Developer Challenge.

For my paid job at Arum, I have been exploring the possibility of using DataEye in a hosted environment. Symetriq.com were kind enough to allow us to beta their new Cloud product and, after a minor false start which with the help of Symetriq I got over quickly enough, it didn't take long to get DataEye up and running on their cloud infrastructure.

You can see DataEye in action right now by clicking here:
Arum DataEye Demo

Once the application loads you can log in with the user "E:Benson" and password "p" (without the quotes).

DataEye as a hosted service presents both an opportunity and a challenge.

For organisations who are light on infrastructure or internal IT support, the hosted solution means they don't have to worry about looking after DataEye, backing up the data and those other IT tasks, reducing the cost of ownership considerably. However, the challenge is that we still have to get their management information data into DataEye.

To address this challenge I was able to get even more leverage out of the fact that DataEye is based on OSGi. DataEye is constructed using what is known as "the whiteboard model". Simply put, this means that components register services in preference to looking them up. This level of decoupling then makes it easy to replace services with alternative implementations.

So to cut a long story short, I created a bundle that can accept DataPoint information over the web (via HTTP) as JSON. The bundle decodes the JSON and then inserts the objects into our embedded db4o database via the registered DataPointRegistry interface.

On the other end of the wire, which would be in the customer's infrastructure, I created a minimal OSGi environment using Equinox and then installed a bundle which registers an implementation of the DataPointRegistry interface that is responsible for pushing the data to the remote DataEye server.

By using the same interface on the server and client, we can then build integration bundles on customer site as if DataEye was hosted locally. If we later decided to move DataEye in to the customer's environment the same integration bundles will work without *any* change, because they are talking to the same interface.
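As a rough sketch of the client half of that (the DataPointRegistry interface here is a simplified stand-in for the real one, and the URL is obviously made up):

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;

// Illustrative version of the shared interface - the integration bundles
// only ever see this, never the transport behind it.
interface DataPointRegistry {
    void register(String dataPointJson);
}

// Client-side implementation that pushes each data point to the hosted
// DataEye server as JSON over HTTP.
class RemoteDataPointRegistry implements DataPointRegistry {

    private final String serverUrl;

    RemoteDataPointRegistry(String serverUrl) {
        this.serverUrl = serverUrl;
    }

    public void register(String dataPointJson) {
        try {
            HttpURLConnection conn =
                    (HttpURLConnection) new URL(serverUrl).openConnection();
            conn.setRequestMethod("POST");
            conn.setRequestProperty("Content-Type", "application/json");
            conn.setDoOutput(true);
            OutputStream out = conn.getOutputStream();
            out.write(dataPointJson.getBytes("UTF-8"));
            out.close();
            if (conn.getResponseCode() != HttpURLConnection.HTTP_OK) {
                throw new RuntimeException("server said " + conn.getResponseCode());
            }
        } catch (Exception e) {
            throw new RuntimeException("failed to push data point", e);
        }
    }
}

// The client bundle registers the implementation whiteboard-style, so local
// and remote deployments look identical to the consuming integration bundles.
class ClientActivator implements BundleActivator {
    public void start(BundleContext ctx) {
        ctx.registerService(DataPointRegistry.class.getName(),
                new RemoteDataPointRegistry("http://dataeye.example.com/datapoints"),
                null);
    }

    public void stop(BundleContext ctx) {
        // the registration is cleaned up automatically when the bundle stops
    }
}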

At this point it feels appropriate to talk about Distributed OSGi. Distributed OSGi is a change to the current specification which lays out my approach above in a standard, container-supported way, without enforcing any particular transport protocol.

The main advantage of the new specification is apparent in my DataPointRegistry changes above, where I had to hard-code the ability to receive JSON data over HTTP. With the new specification I would simply define the interface and drop in a provider which does the 'remoting' for me. While the above change only took a couple of days to implement, the new specification would reduce that to almost nothing.

Once the new specification is widely available and in use, we will move DataEye to an OSGi server based on the latest OSGi specification, which then also gives us access to a raft of new, enterprise-focused features in the OSGi container.

Wednesday, 24 June 2009

JSR 294 - why is this still going on?

A few random thoughts about JSR 294:
  • After Gosling's recent comments about OSGi (you can find them for yourselves) it is pretty obvious that the Sun crowd don't actually get modularity. 
  • The JSR took a step backward this week when someone asked why annotations weren't being used.
    • Alex Buckley promptly tried to re-close that Pandora's box but failed.
    • Has any progress actually been made? Doesn't look like it to me.
  • Gradually, Guice has started appearing alongside OSGi in the JSR-294 mails as an example of another module system.
    • Another example of why we don't need a module system as part of the language.
    • Why can't Java remain the platform it is? It is already flexible enough to support a number of initiatives to create module systems.
  • Where are the requirements for JSR 294? 
    • The JSR itself is vague
    • The original 'requirements' (some seemingly random blog posts) appear to be quite far from what is being discussed (super-packages, anyone?)
  • 'package' as an access modifier ... ? I suggested that! Kind of.
  • Why is this JSR, potentially the most damaging change to be proposed to Java, being allowed to continue?
    • The whole JCP approach is fundamentally flawed
    • Oracle should step in and kill the process, or at least JSR 294.


Monday, 8 June 2009

osgijc and osgibb moved to eclipseosgitools

I decided to merge the osgijc and osgibb projects into a single project which can be found here:

http://code.google.com/p/eclipseosgitools/

I need to do something about the Service Provider (internal Sun API) dependency in osgijc, but otherwise they are both usable and I use them all the time for live projects.

I also plan to add more tools for assisting with OSGi projects built with Eclipse. I don't like to specify meta information in more than one place. MANIFEST.MF is readable enough for me, and even easier when using PDE; however, there's a lack of tools for actually doing anything with that project outside of Eclipse, e.g. headless building. osgijc can compile your Java code based on the information in your project, and likewise osgibb can build your project into an OSGi bundle (Jar file).

The next tool might be something along the lines of (or along side of) Ivy for dependency management. I have found Ivy to be really useful (especially with the http://www.ops4j.org repository) but it annoys me that I yet again have to declare meta information about my bundles in Ivy files - all that information is already in the files generated by Eclipse!

Friday, 29 May 2009

Code Coverage of Junit in Ant without instrumentation

Until today we were using Cobertura to do our code coverage report. This required a two-phase build process in which the first phase involved compilation, instrumentation and testing, the second phase being compilation and packaging.

Arum DataEye was taking 11 minutes 31 seconds to build and this felt a little too long, even though we're building and testing 26 OSGi bundles and two pure ActionScript libraries. This also violates Fowler's 'keep the build fast' rule.

I've been using the EclEmma plugin in Eclipse to give me coverage reports on the fly. The integration is simple but effective, always the best way. It occurred to me this morning that this plugin must be using something to do the coverage, and that's when I discovered Emma.

So I decided to set about converting our Ant-based build to use Emma instead of Cobertura. I hit a snag - you still need to do instrumentation to run it with the JUnit task ... or at least it appears you do.

Emma provides a number of Ant tasks; most relevant to this discussion is the emmajava task, which essentially just replaces the java task, setting up Emma support automatically. However, this wasn't going to be enough to get on-the-fly instrumentation running with JUnit.

To cut a long story short, you simply have to trick the JUnit Ant task into running the Emma command line runner. This is done with the following Ant snippet:
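Something along these lines - the jar location, report format and classpath property are placeholders for whatever your build defines, and test.classpath needs to include JUnit, Ant's JUnit runner and your test classes:

<junit fork="yes" forkmode="once">
  <!-- these jvmargs turn the forked command line into
       'java -cp emma.jar emmarun -r xml -cp <test classpath> <Ant's JUnit runner> ...'
       so Emma instruments classes on the fly as they are loaded -->
  <jvmarg value="-cp" />
  <jvmarg value="${emma.dir}/emma.jar" />
  <jvmarg value="emmarun" />
  <jvmarg value="-r" />
  <jvmarg value="xml" />
  <jvmarg value="-cp" />
  <jvmarg value="${test.classpath}" />
  <formatter type="plain" usefile="false" />
  <batchtest>
    <fileset dir="${test.src.dir}" includes="**/*Test.java" />
  </batchtest>
</junit>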



The important thing here is that the JUnit task is forked. We then fool JUnit into running the Emma runner for each fork and voila, no need to instrument.

As a result, we were able to reduce our build process to a single phase of compilation, testing and packaging, reducing the total build time of Arum DataEye to 4 minutes 33 seconds - a massive saving of nearly 7 minutes!

This (fairly old now) post will get you a good chunk of the way and of course there's the Emma user guide.

Thursday, 28 May 2009

OSGi ClassCastException with Equinox 3.4

Edit - problem solved thanks to Stuart McCulloch. I needed to refresh the framework as well. So stopping both bundles, updating them and then calling refresh before starting them fixed the problem.
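For reference, from the Equinox console the full sequence looks something like this (assuming Bundle A has id 2 and Bundle B has id 3 - your ids will differ):

osgi> stop 2
osgi> stop 3
osgi> update 2
osgi> update 3
osgi> refresh
osgi> start 2
osgi> start 3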

--

OK, I'm hoping this isn't going to make me sound like too much of a noob but I've been experiencing ClassCastExceptions with some bundles I've been playing around with.

This is using Equinox 3.4 on Mac.

Essentially this is the situation:

Bundle A exports an interface and is designed to load and create instances of classes from other bundles that have a certain manifest header and implement this interface.

Bundle B has a class which implements the interface and the classname specified in the required manifest header.

Bundle A creates an instance of Bundle B's class - using Bundle B's loadClass method to get the class. No problems thus far.

However, OSGi is a dynamic environment so maybe Bundle A needs to go away or be updated for some reason:

Update Bundle A and a ClassCastException is now thrown when loading Bundle B's class. That seems reasonable because I haven't updated Bundle B, though I am surprised it is still in the STARTED state. Since it had a dependency on the now-updated Bundle A, I kind of expected it to be in the RESOLVED state.

So I take more severe action:

Stop Bundle A (now in STOPPED state).

Stop Bundle B (now in STOPPED state).

Update Bundle A (now in INSTALLED state).

Update Bundle B (now in INSTALLED state).

Start Bundle A (Bundle B now in RESOLVED state).

Start Bundle B and I still get a ClassCastException while trying to create an instance of Bundle B's classes even though both bundles have been updated.

Same thing happens even if I completely uninstall Bundle A and/or B.

What I'm asking is should I get the ClassCastException after both bundles have been updated?

It is as though Bundle B is still using the class definition of the interface in Bundle A that was loaded right at the beginning of this exercise.

After an update of both bundles (and certainly a reinstall) I would have expected Bundle B to be using the interface definition from the recently updated Bundle A.

Any thoughts?

Thanks in advance.

--
The obvious answer is to create a third bundle that contains the interface in question, but then I have two bundles for my functionality and I only want one, especially since this is such a lightweight activity. Also, if I then update the bundle containing the interfaces I suspect that will mess things up for even more bundles.

Also, I really should try this on a different OSGi container such as Felix or Knopflerfish.

Wednesday, 13 May 2009

thoughts on jsr294 - v0.0.3

It's been a while since I posted some thoughts on JSR 294. I was quite vocal on the observer list some time ago but then decided that I should shut up since I am not on the expert group.

In all honesty, I find the JSR 294 mails confusing. The mails that appear on the EG mailing list (copied to the observer mailing list) seem to go round in circles and contain contradictory information, yet the spec lead seems determined to push forward and still references vague and informal specifications that were published seemingly years ago, while ignoring the pleas of other members of the EG.

Frankly, reading the JSR mails makes me feel a little ill about the future of Java. JSR 294 is going to be a vague specification that breaks Java just enough to affect the leading module system (OSGi) in a very negative way. It wouldn't be so bad if I could see that Java as a whole would be unaffected - because existing module system(s?) could just carry on and ignore the spec - but that just isn't clear.

My main worry is that 'modules' in Java will become overly complex to achieve, regardless of the module system, thanks to what appear to be minimal (and low-value) changes to the language.

Thursday, 7 May 2009

MoneyTracker 1.3

Made a minor update to MoneyTracker which can be downloaded from the previous URL and now from the Android market, though I'm not sure how to find a public link for it yet.

Basically added a custom icon and correctly specified the minimum SDK requirements.

Next up, I'm going to look at graphics, animation and general Android games stuff... right now I'm wondering if you can talk to another handset via Bluetooth or worst case on a (WIFI) LAN, though I'm always willing to consider enhancements to MoneyTracker. :)

Wednesday, 6 May 2009

Android App : MoneyTracker

Here's my first (public) Android app:
http://files.brindy.org.uk/MoneyTracker.apk

And here's the source:
http://github.com/brindy/MoneyTracker/tree/master

Basically you can specify a 'disposable' amount and then add expense items showing you how much is remaining. That's it, pretty noddy. It isn't a million miles from the notepad tutorial, the most significant difference being that I use db4o as the data storage instead of SQLite. Unfortunately, that increases the apk file from 20k to 413k, but I am going to investigate getting that down to something more reasonable.

That said, being able to use db4o is much nicer than having to mess about with database tables and SQL, though I did have to write my own data-to-view mapping code (more on that below).
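To give a flavour of why it's nicer, storing and loading objects with db4o goes roughly like this (Expense here is a made-up stand-in for MoneyTracker's real model class):

import com.db4o.Db4o;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

public class ExpenseStore {

    public static class Expense {
        String description;
        double amount;
        Expense(String description, double amount) {
            this.description = description;
            this.amount = amount;
        }
    }

    public static void main(String[] args) {
        // on Android the file would live under the app's private storage
        ObjectContainer db = Db4o.openFile("expenses.db4o");
        try {
            db.store(new Expense("coffee", 2.50));   // no tables, no SQL
            ObjectSet<Expense> all = db.query(Expense.class);
            while (all.hasNext()) {
                Expense e = all.next();
                System.out.println(e.description + ": " + e.amount);
            }
        } finally {
            db.close();
        }
    }
}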

I haven't actually tried this on an Android device yet, as I don't own one, so if someone wants to give it a try and let me know how it goes, I'd be grateful.

Obvious disclaimers apply - if your phone goes kaput, don't blame me. :)

This is also my first foray into using Git and github.com, which so far has been relatively easy to get on with. I guess I should have tagged/forked/branched(?) my changes from SQLite to db4o, but I don't really know how just at the moment.

One of the things you get out of the box when using SQLite is the ability to use the SimpleCursorAdapter, which automatically maps some given data to the views in a layout. I of course had to do this manually, but given the number of lines of code I'd deleted because of using db4o, adding these few lines of code was not a problem.

Android vs iPhone : The winner is ... Android

A friend of mine finally convinced me to have a play with the Android SDK. Since the 1.5 SDK is now available and 1.5 updates are slowly winging their way to Android handsets I decided to get stuck in and have a play.

I've had a MacBook Pro for a while, specifically for the purpose of building Mac OS and iPhone apps, but the one thing I've had to contend with is the Apple SDK. I can handle Objective C, trust me, but the examples are out of date and there are no simple tutorials which give you a broad overview of the device's capabilities. I love my Mac and the apps that I'm seeing become available for it are awesome and obviously developed with the Apple SDK. The iPhone has access to the very same SDK and even shares the same SDK documentation, with notes and annotations when something is not available for iPhone.

All the Apple SDKs are low level. Programming in C was my first job, but rapid application development shouldn't require knowledge of pointers, and there's no way to avoid this even with Objective C. The one thing that is slightly better is the UI builder. The Android UI builder is not quite so slick, but eventually I think a UI for building a UI can become a hindrance, and I certainly end up defining the UI programmatically and not using the WYSIWYG features of the toolset. This is unavoidable with the Apple SDK because it is creating binary resources for you as you build your UI.

As a Mac user it is also easy for me to forget that you have to use Mac OS to develop with the Apple SDK. The Android SDK will run on any platform supported by Eclipse, which I believe includes Mac, Linux and Windows. A much wider community of developers can instantly get stuck in.

The first thing that hit me about the Android SDK was how easy it was to get going. The download page has detailed instructions on how to get set up, including downloading Eclipse, installing the plugins and installing the Android SDK. There are then two tutorials - a very quick Hello World and a more in-depth Notepad example. Neither are particularly broad, but they introduce you to the essential concepts.

Note that you don't even need to use Eclipse - but it is a great tool and the Android peeps have provided some great tooling based on it. Likewise I can imagine how you might not actually need to use Xcode to develop Mac / iPhone apps, but I would think Xcode makes it a lot easier.

(Once you've done these tutorials I recommend reading this: http://developer.android.com/guide/topics/fundamentals.html it blew my mind - in a good way!)

So I was able to do in a couple of evenings with Android what I had failed to do in several months of having the Apple SDK at my fingertips... but of course, that's just me. I believe there are several other reasons why Android 'wins'.

As a techy and developer, the most important feature that Android has, which iPhone doesn't, is the ability to run multiple applications at the same time. This is actually very clever life-cycle management (see above link), but the iPhone only allows one foreground application and iTunes to run, which is very limiting and inflexible. The Android SDK encourages collaboration, which seems to be the Google way.

After a bit more investigation I find that the way Android handles applications is just mind-blowing. If my Android application (as complicated as I want, e.g. with multiple views, called Activities) exposes a service, other applications can access that service. No big deal, until you find out that Android is able to instantiate *just* the service part of my application if it is not already running.

There's a whole bunch of other amazing technical things happening under the covers, and it is all laid out bare on the http://developer.android.com/ page. In contrast, while all the technical information for the iPhone is available online, I find it difficult to read: obtuse, out of date and with unclear examples.

The next reason Android 'wins' is that it isn't restricted to a single device. OK, iPhone apps work on the iPod Touch as long as they don't use GPS, mic or camera, but Android is already appearing on multiple handsets (the G1, the Samsung i7500 and HTC Magic) and on multiple providers (T-Mobile and Vodafone). There is no way the likes of Orange will let something like Android elude them, so it won't be long before all the carriers jump on board and before we know it Android is everywhere. Where will iPhone be ... well, still on the iPhone and on O2 of course, though I can't imagine O2 not having an Android phone as well, unless they've agreed not to with Apple.

Anything else? Well, one more thing - the marketplace. Apple's App Store is rapidly gaining a reputation as difficult for developers to interact with. Android's approach is typically Google-like in that it is more open. You don't have to use the store to get your apps onto devices, and Google isn't as restrictive about what apps it allows. Furthermore, the Android marketplace has a refund policy: return the app within 24 hours for a full refund.

Does iPhone have anything going for it? Well yes, a few things, but I think they are actually pretty minor. Firstly, it is ultra-cool looking, like all Apple gear. Secondly, it has multi-touch. As far as I know multi-touch isn't supported on Android.

Lastly, games. As weird as this is, given how hard it is to get games for the Mac, games for iPhone are appearing all the time and by big-name developers, e.g. Spore, and I recently purchased the arcade classic Silent Scope. With people like John Carmack of id Software saying things about the iPhone like "more powerful than a Nintendo DS and PSP combined", it is easy to see that people's attention will be directed at the iPhone/iPod Touch as a mobile games platform. I'm not going to disagree, because I love playing games on my iPod Touch. id Software recently released Wolfenstein 3D Classic for the iPhone - pretty cool.

I would definitely be interested to see a comparison of the graphical capabilities of iPhone vs Android. Of course, the advantage that Android has is that someone could create a piece of hardware that runs Android with gaming specifically in mind - the iPhone still has to wear its one-size-fits-all glove.

So in conclusion: while I think the iPhone is the 'coolest' handset on the market at the moment, and seems to be a really good gaming platform gaining popularity with games developers, from a phone application point of view the iPhone doesn't do anything Android can't do (apart from multi-touch) and in fact Android does so much more. In addition, by being handset independent (there are rumours of an Android-driven 'web pad' on the horizon), it makes more sense for developers to target Android than it does iPhone, especially since it is easier and less restrictive for developers to get their Android applications out there.

I love my MacBook Pro for computing, my iPod Touch for music and cool games on the go and am waiting to get my hands on an Android phone for everything handset related.

Tuesday, 5 May 2009

eBay Flash Error

This isn't the kind of thing you like to see just after you've made a payment on eBay.
SecurityError: Error #2121: Security sandbox violation: LoaderInfo.content: https://securertm.ebaystatic.com/3/RTMS/Image/UK_RTM-ME_ph1_NGXO_UAT_Apr20_06.swf cannot access http://rtm.ebaystatic.com/3/RTMS/Image/UK_RTM-ME_ph1_NGXO_UAT_Backup_Apr17_05.swf. This may be worked around by calling Security.allowDomain.
at flash.display::LoaderInfo/get content()
at com.ebay.merch.widgets.deals::Application/onBackUpLoaded()
I'm sure it is innocent enough though. Having the debug version of Flash Player installed is a pretty scary business.

Sunday, 3 May 2009

Scottish Developer Day - Developer Developer Developer!

Yesterday was the Developer Developer Developer event in Glasgow, a Microsoft-sponsored event arranged by the Scottish Developers group.

There were four tracks of talks, A, B, C and SQL Bits.

jQuery Deep Dive In
Andy Gibson

I've been waiting ages to get to a jQuery talk and find out what it's about, so this was the first talk I went to. The speaker and most of the audience seemed pretty in awe of this library, but I wasn't all that impressed. I'm sure it is an impressive piece of work under the hood, but the first thing that turns me off is the list of supported browsers, or rather the fact that there is a list of supported browsers. Just another reason why I think HTML/CSS/JavaScript is a dying breed. No one supports standards in the same way. The most interesting thing, I suppose, is the fact you can use CSS 3-like selectors to find the exact element you want in a DOM tree and then manipulate it. I guess this is handy if you're doing this HTML/CSS/JavaScript stuff all the time, but it doesn't really have much impact on me.

The next thing I didn't like was all this passing of functions around. People seem to love it, but it just makes the code hard to read and less maintainable. However, within the constraints of an individual web page I suppose it doesn't matter so much.

TDD? I don't have time
Craig Nicol

Didn't really elicit anything new about Test Driven Development, except that the tools for working with Microsoft are not free like they are for other platforms. However, I did like the big flashy arrow saying things like "Do something here" and pointing at the code, though I think that novelty would wear off quite quickly.

SQL Server Optimisation: Best Practice for Writing Efficient Code and Finding and Fixing Bad SQL to Improve Performance
Iain Kick


This was a great session. We use SQLServer with one of our DataEye customers, so this was of interest.

Iain started by going through the native tools available for looking at performance on SQL Server then ended by showing Quest's own product, which was frankly very impressive and I'll be talking to our DataEye customer to find out if they've used it.

What is functional programming?
Barry Carr


I was a little worried about this, especially since I'd more or less had enough Microsoft spin on everything all day, but Barry actually used Scala to talk about functional programming, so this was pretty good as Scala runs on both Java and .NET. Have to admit that I learned more from Barry's talk than I did from Ted Neward at The Serverside Java Symposium.

However, I'm still not sure where I see Scala, or in fact any functional programming language, fitting into the enterprise. That said, the paradigm suits a parallel computing project quite well, since immutability means that small chunks of work can easily be delegated to other processors, either on the same hardware node or a remote one.

Apparently some people say that Scala will replace Java on the VM. I very much doubt that. It is a different way of thinking and the syntax is not straightforward. If anything I think that Groovy will have a bigger impact in the Java community, as it isn't asking developers to deviate from anything they already know and can slot into existing projects quite easily.

Scrum Pain: Lessons learned swimming against the current
Abid Quereshi

Finally, this was also a great session and I will definitely be taking some feedback to my employer. Courage!

Conclusion

Overall a great day, especially for the price (free). I even got a couple of free Microsoft t-shirts (great for painting, so I'll be letting my wife have them) and a free ball thing. Good stuff.

Friday, 17 April 2009

Simple OSGi Bundle Visualisation with Flare - Updated to 1.0.1

I've created a Google project to store my wee bundle projects and have added the source and an updated version of my visualisation bundle:
http://code.google.com/p/brindybundles/

Further updates can be followed from there or my twitter @arumbrindy

Simple OSGi Bundle Visualisation with Flare

Last night I knocked together a simple OSGi Bundle Visualisation using Flare, a visualisation framework for ActionScript/Flex released under the BSD license. You can see the fruits of my labour by following the instructions below. My work here is released under no specific license, so you're free to copy, modify, redistribute, whatever, as you see fit.

You can see a demo of some static data generated from my container by clicking here.

Requires:
1) A running HttpService

Installation:
1) Download the bundle from here. Source is available here.
2) Install it in your OSGi container
3) Start it
4) Access http://localhost/uk.org.brindy.osgi.fvis/DGApp.html where http://localhost is the root of your container's HttpService (e.g. http://localhost:8080 in a lot of cases)

Usage:
The visualisation currently shows a list of the installed bundles displayed in a radial layout. Mouse over a bundle to see where it imports packages from (red) and where other bundles import packages from it (green). Click once on a bundle to show the import connections, click again to show the export connections, click again to hide connections. Click anywhere not on a bundle to reset the vis.

Note that you can also tab through the bundles and use enter to 'activate' them.


Method:
Firstly I register a servlet which exports all the bundle information as JSON at the following URL:
http://localhost/uk.org.brindy.osgi.fvis/bundles.json

This was frustratingly more complex than I had anticipated. I had hoped the OSGi APIs would expose which packages are imported/exported, but I actually had to go through each Bundle's headers in order to work it out, and this becomes a parsing nightmare. In the end I took and modified some code from another OSGi project (namely QuotedTokenizer from Felix) to handle the parsing of the header values.
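The gist of the servlet's JSON generation, minus the header-value parsing that caused all the grief, is along these lines (hand-rolled JSON for brevity; the real code parses the header values properly):

import org.osgi.framework.Bundle;
import org.osgi.framework.BundleContext;

public class BundleJson {

    // builds a crude JSON array describing each bundle and its raw
    // Import-Package/Export-Package headers
    public static String toJson(BundleContext context) {
        StringBuilder json = new StringBuilder("[");
        Bundle[] bundles = context.getBundles();
        for (int i = 0; i < bundles.length; i++) {
            Bundle b = bundles[i];
            if (i > 0) {
                json.append(',');
            }
            json.append("{\"name\":\"").append(b.getSymbolicName()).append('"');
            json.append(",\"imports\":\"").append(header(b, "Import-Package")).append('"');
            json.append(",\"exports\":\"").append(header(b, "Export-Package")).append('"');
            json.append('}');
        }
        return json.append(']').toString();
    }

    private static String header(Bundle bundle, String name) {
        Object value = bundle.getHeaders().get(name);
        return value == null ? "" : value.toString();
    }
}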

The visualisation itself is based on the DependencyGraph example that comes with Flare. I modified it for my Flex environment and then copied it to a class called OSGiGraph. This class simply adds each bundle to the dependency tree (as a child of the root) and then creates edges between bundles by looping through each imported package and finding an export package. Apart from changing a few other minor variables the result is as you can see.

Limitations and future enhancements:

- Does not consider package versions
- Does not consider 'uses'
- Does not consider 'Require-Bundle'
- Does not display imported and exported packages
- Does not display bundle states
- Restricted to a single layout (CircleLayout)
- No high level grouping (e.g. vendor/category)
- Does not display services registered or used

I've tested the visualisation with a container that has 19 bundles in it and it looks OK. However, I can imagine that as that number increases it would become increasingly difficult to read. For instance, if LinkedIn's 400+ bundles were to be displayed it would quickly become a lot of static noise. To address this I will probably create a more complicated bundle hierarchy based on another property from the bundle's manifest, for instance Bundle-Category (which appears to be widely under-used, IMHO) and/or Bundle-Vendor. A text input filter could also help here.

I also plan on allowing the visualisation layout to be changeable on the fly. The radial view looks OK, but there's nothing to say it has to look like that. Representations of the bundles floating around randomly are just as representative, IMHO, though perhaps sorted by name rather than truly random. Alternative layouts might allow extra useful information to be displayed, such as the packages imported/exported and services registered/used.

Conclusion:

While this is a simplistic visualisation of OSGi bundles it demonstrates Flare's capabilities, though somewhat crudely, and is also a starting point for a more advanced drop-in visualisation of an OSGi container's state.

Thursday, 9 April 2009

OSGi™ Users' Forum UK

I had the pleasure of attending the first meeting of the OSGi™ Users' Forum UK on Tuesday evening. We have over 40 members and around 30 of them attended the first meeting.

Mike Francis of Paremus kicked off with some introductions and then Neil Bartlett took the stand to go over some of the things discussed at OSGi DevCon at EclipseCon.

After Neil, Dave Savage, also of Paremus, then stepped up to discuss OSGi tooling.

After a bit of open floor discussion we then headed around to the Viaduct public house for a few beers. Thanks to Paremus for sponsoring the alcohol consumption! =)

Overall it was a good evening of information exchange and networking. Thanks to all involved and especially Mike who has really done all the hard work in getting it going.

Wednesday, 25 March 2009

How do I update twitter and LinkedIn status at the same time?

... and why should I?

The answer to the first question is ... http://ping.fm. Ping.fm actually lets you update a whole bunch of social networking, blogging and messaging applications with a single message.

Even better, I can use twhirl, an AIR-based twitter client, to update Ping.fm, so I don't even need to go to the website; I can do it straight from my desktop.

So why should I?

Well, I have two twitter accounts. One is for business and technical use and the other is for my personal use. My tweets appear on my company profile on my company's website:
http://arum.co.uk/team.php#chrisbrind

And because of Ping.fm those tweets are also appearing on LinkedIn.com:
http://www.linkedin.com/in/chrisbrind

What this is doing is keeping my contacts up to date with what I am up to, but more importantly it is keeping me in my contacts' peripheral vision in the hope that I can provide them with some service further down the line and they'll think of me first for related activities.

Thursday, 26 February 2009

uid property on ActionScript object breaks mx:List

I don't know if this is expected behaviour, but this evening I noticed, much to my annoyance, that having a uid property on an Object causes mx:List to stop handling rollovers and selections properly. Take the following extremely simple example (compared to the code I was actually working on):

<?xml version="1.0" encoding="utf-8"?>
<mx:Application xmlns:mx="http://www.adobe.com/2006/mxml" layout="absolute">

<mx:ArrayCollection id="sampleData">
<mx:Object uid="" label="Sample 1" />
<mx:Object uid="" label="Sample 2" />
<mx:Object label="Sample 3" />
<mx:Object uid="" label="Sample 4" />
<mx:Object uid="" label="Sample 5" />
</mx:ArrayCollection>

<mx:List dataProvider="{sampleData}" />

</mx:Application>

Only the 3rd and last items in the list receive rollover events, and while selection does actually appear to work, the highlighting of the selected item doesn't.

You can see it working (or not) here.

backups

It seems to me that backups are a fundamental requirement for any IT professional. I'm not talking about those monolithic backups for various disparate systems; I'm talking about backing up your day-to-day working environment, or even just those essential documents that you happen to keep on your machine. If things go wrong this can cause a lot of hassle, extreme embarrassment, or worse - loss of income.

The Good

Mac. One of the things I love about my Mac(s) is Time Machine. Having a Time Capsule also helps, but I suspect that one could even manage without that. I don't have to worry about what I need to back up; I worry about what I don't need to back up. Time Machine gives me incremental backups and a fantastic UI to go hunting for those old files when I need to do a restore.

It has already proved its worth to me when I cocked up my Eclipse environment. Setting up Eclipse, various plugins and Flex Builder can take several painful hours, but I just headed back to the last known good version of Eclipse that was backed up and simply restored it.

And in a really bad situation, for instance if my MacBook Pro were to go kaput and I had to get a new one, I simply select a full restore from my Time Capsule when I set up the OS for the first time. It might take an hour or so because it'll be several GB in size, but it's easier than reconfiguring the OS and reinstalling all that software from scratch.

The Bad

Windows. I was never able to work out a really satisfactory backup solution. I used AlwaysSync for some stuff and xcopy batch scripts for others. I had to maintain what was backed up and when, and the OS just didn't seem to come with a straightforward backup solution. At a minimum you should synchronize important folders *daily* and preferably to multiple locations.

The Ugly

Not having any backup. Using Google Documents (as I do) and Google Mail (as I do) means that actually I don't have a lot to worry about, but having important documents on your computer and not backing them up is unforgivable. Even if your machine doesn't outright die, you risk corruption or virus infection.

So ask yourself - how quickly could you get back up and running if your machine died right now, or if your important document folder somehow got deleted? Have you tested restoring from your backup strategy? Give yourself some peace of mind; get a backup strategy and test it. Make it the next thing on your to-do list.

Backups - come on ... that's IT 101!

Thursday, 5 February 2009

thoughts on jsr294 - v0.0.2

It occurred to me overnight that the solution I proposed in my last entry does not address the problem. The main reason is that there is nothing stopping a developer from writing a class and giving it the same package as one of my classes. As a result, that class now has access to any of the classes with the new modifier I was proposing.

But what is to stop someone writing a class and marking it as part of my module, thus granting it access to those classes I wanted to restrict access to?

I think my brain is catching up with the reason for some of the discussions I've been reading in the observer mailing list.

Wednesday, 4 February 2009

thoughts on jsr294 - v0.0.1

I've been trying to follow the recently renewed discussions around JSR 294. The discussion can be followed here:
http://cs.oswego.edu/mailman/listinfo/jsr294-modularity-observer

I'm not sure how you get to contribute to the JSR itself, by being a member of the Expert Group, I suppose, but you do seem to be able to post to the observer mailing list and have separate discussions which the main contributors may or may not read/choose to respond to.

At first I thought the JSR seemed somewhat vague, but after reading it and understanding what it is really saying, I think I'm starting to get it. These are my thoughts on what I understand so far, and these thoughts are likely to change as time goes on.

Firstly, one thing that concerns me is that so far the main people contributing to the discussion seem to be talking at cross purposes, putting forward how they think modularity should work without really addressing what the JSR sets out:

Today, an implementation can be partitioned into multiple packages. Subparts of such an implementation need to be more tightly coupled to each other than to the surrounding software environment. Today designers are forced to declare elements of the program that are needed by other subparts of the implementation as public - thereby making them globally accessible, which is clearly suboptimal.

Alternately, the entire implementation can be placed in a single package. This resolves the issue above, but is unwieldy, and exposes all internals of all subparts to each other.

The language changes envisioned will resolve these issues. In particular, we expect to introduce a new notion of modules (superpackages) at the language level where existing public access control would apply only within a language level module and access to API's from outside a module would be restricted to API's the module explicitly exports.

It may not be immediately obvious what this means, so here's a quick explanation.

If you have class Foo in package com.mypack and you want class Foo to access class Bar which is in package com.mypack.support then class Bar has to be public for class Foo to access it.

Foo.java:
package com.mypack;

public class Foo {

private Bar bar = // ... initialise the bar instance variable somehow

// ... methods to do something with the bar instance variable

}

Bar.java:
package com.mypack.support;

public class Bar {

// ... further definition of Bar

}

That seems to be a fairly typical example. However, maybe this is a utility library of some kind that you want to release into the public domain, and you only want Foo to be the access point into the library. The bad news is that users have direct access to Bar as well, because it is public.

The JSR then points out that a workaround is to put all classes in the same package, but this is undesirable since developers tend to group classes together into packages (especially if trying to maintain low coupling and high cohesion, not to mention that structuring the code helps maintain a mental picture of it).

So the answer is a module system. This solves the problem by allowing the developer to explicitly state which artifacts (classes or packages or maybe both, I'm still not clear) are exported to other modules for direct access.

I would hope that this is optional and that code that hasn't been 'modularised' just behaves in the same way - or that's backwards compatibility out of the window. I will address this later, when it is clearer how modules are going to be marked up.


How does this differ from OSGi?

OSGi does not do this at compile time. Details about which packages are exported and imported are part of the meta information that accompanies the 'code container', typically a standard JAR file with some additional headers in the MANIFEST.MF and collectively called a bundle.

The discussions around JSR 294 seem to be proposing that this module information will be stated in the code itself and statically checked at compile time. When I first read this, I gave out a little sigh, but now I am seeing how important this actually is.

OSGi is effectively oblivious to how you compile and build your bundles, so long as the headers correctly define the dependencies. You only need to compile your class with the right classes in the classpath; the compiler does not do any checking of whether you're compiling against the right version or not. (That said, Eclipse PDE can enforce usage of the right package version during the development process, but that is a feature of the tool, not the language.)

For example, with OSGi I might decide to import a package from a library which I know will be available in my deployment environment. I still need to compile my class against the classes in the package I'll be using, so I download a version of that library to compile my class against and then specify in my meta information that I'm importing package com.x.y version 1.2. After installing my bundle into my OSGi container and starting it, I get an error: while the package I depend upon is being exported and my bundle is importing it, I actually compiled my class against the wrong version of the library, which contains an extra method not available in my deployment environment.

Of course, realistically I would download the version I expect to be available and specify the version of the package I want to use explicitly. In OSGi it is possible to import a package and not specify a version, but that can be risky if you are not in full control of your development environment.
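For reference, the import side of that example amounts to a couple of headers in the bundle's MANIFEST.MF (the symbolic name here is made up):

Manifest-Version: 1.0
Bundle-ManifestVersion: 2
Bundle-SymbolicName: org.example.consumer
Bundle-Version: 1.0.0
Import-Package: com.x.y;version="1.2"

Note that in OSGi a bare version like this means '1.2 or anything higher'; a range such as "[1.2,1.3)" pins it down more tightly.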

Are JSR 294 and OSGi compatible?

Yes, I can't see why not, so long as the implementation remains sensible and the language-level modularity remains optional, i.e. it remains backwards compatible with the millions of libraries that will not have any idea about JSR 294 modularity.

Additionally, as more and more libraries start implementing JSR 294 modularity, it also acts as an additional safety net for OSGi bundles, ensuring at compile time that they are explicit about their dependencies. However, it will mean that this information is duplicated in the MANIFEST.MF.

One thing that OSGi has, which is not touched on at all by this JSR, is dynamism. However, given the JSR's definition of modularity, I don't see that any discussion on dynamism will have any purpose, but if it crops up, or I have any additional thoughts I'll be sure to address them here.

My Solution

My solution addresses the needs stated by JSR 294 directly.

I would simply add a new keyword which identifies that a class is accessible from its parent package. Let's face it, developers create package hierarchies to structure their collections of classes, so this isn't such a big leap.

The class visibility keywords we have right now are:

Access Levels

Modifier      Class   Package   Subclass   World
public        Y       Y         Y          Y
protected     Y       Y         Y          N
no modifier   Y       Y         N          N
private       Y       N         N          N

Source: http://java.sun.com/docs/books/tutorial/java/javaOO/accesscontrol.html

So add an extra column and row to that table:
Access Levels

Modifier      Class   Package   Subclass   Parent Package   World
public        Y       Y         Y          Y                Y
xxx           Y       Y         Y          Y                N
protected     Y       Y         Y          N                N
no modifier   Y       Y         N          N                N
private       Y       N         N          N                N
So in my Foo Bar example, Bar's class modifier would be my new keyword (i.e. xxx in the above table). This would protect the class from the rest of the world, but leave it exposed to any class in the parent package or same package, in much the same way as the protected modifier.
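Bar.java would then look something like this, where xxx stands in for whatever the keyword would actually be (illustrative only - it obviously won't compile on any real javac):

package com.mypack.support;

xxx class Bar {

// visible to com.mypack (the parent package) and to com.mypack.support,
// but not to the world

}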

I'm sure this solution can be picked apart, and I suspect that it comes across as quite naive to think that developers package their classes in the same way, or that a class in a sub-package might not want to access a class in its parent package, but that isn't really my point. By simply adding this new keyword, public can be public and mean it.

You may also note, I don't mention version control, but that's because the JSR doesn't mention it either.

So, perhaps JSR 294 isn't describing modularity at all, but a requirement for a new access modifier. That is, rather than adding a new paradigm ("modularity"), perhaps JSR 294 should look to extend the paradigms that already exist in the Java language, namely access modifiers.

If the JSR is implying more than that, it needs to be clarified or the JSR process needs to be reviewed.

Wednesday, 28 January 2009

notes on SVN and Eclipse Team usage

This is just a few notes about SVN and Eclipse Team usage - stuff that has come up recently which I've had to explain and it occurred to me is probably useful to new and existing SVN/Eclipse users.

When using SVN in Eclipse we use the SVN plugin 'org.tigris.subversion.subclipse_1.2.4' which I believe is fairly old now, but seems to work without any problems. We have tried to upgrade to newer versions, but it has caused problems and I don't like to fix things that aren't broken as a general rule.

Note, the 'quoted' actions are SVN actions, but only on the local working copy.

Decorators

There are a number of decorators which indicate the status of files stored under SVN:

When decorated like this, it means that the file, folder's contents or sub-folder's contents have not been changed locally.

--

This decoration indicates that the folder's contents have changed. This could mean that something new is present or one of the files (in any sub-folder) has been changed.

--

This decoration indicates that the file is unknown to the local SVN working copy. It should be 'added' (using Team -> Add to Version Control) and then 'committed' (Team -> Commit). However, if you commit the parent folder a confirmation and comment dialog will appear with a list of resources that are about to be committed. New files are usually left unchecked, as SVN does not know what to do with them, and they will not be pushed to the remote repository as part of the commit. At this point you can check the new files and they will automatically be 'added' and 'committed' in one fell swoop.

--

This decoration shows that a resource has been 'added' but not 'committed'. In most cases new resources are added automatically when committed if you select them during a commit of the parent folder, but if you follow the process steps explicitly, this is what you'll see. You will also most likely see this decoration after a refactoring operation. For instance, renaming a package will:
- create a folder
- 'add' new folder
- 'move' resources from original folder to new folder
- 'delete' original folder

The resources in the new package will be decorated as having been changed and needing to be 'committed'. All of the above can be rolled back, but I'll describe two methods for that further on.

--

This decoration indicates that a folder has been 'deleted'. This is quite important and is different from how resources (e.g. files) are handled.

If you delete a resource or folder then Eclipse effectively issues the SVN 'delete' command. Resources (files) are removed from the file system, but folders are not. In both cases the working copy needs to be committed and the decoration changes to reflect this.

--

That's essentially it. Eclipse uses decorations to indicate the 'state' of resources and folders.

Rolling Back Changes

To roll back a change there are a number of methods, but from within Eclipse only the following should be used.

Right click on the resource (or parent folder) and select Replace With. There are three options, but to roll back local changes use either Base Revision or Latest from Repository. Only select a branch/tag if you know what you're doing!

Replacing with the Base Revision is often the safest option. SVN stores a copy of all resources so that changes can be undone without having to go back to the server, by simply copying the resource back into the desired location and then updating the state of the working copy. However, there is a downside - if you've 'moved' folders or deleted certain resources, they will not have the supporting SVN data to restore them without contacting the server. You'll get a message like 'resource is not in working copy'. In this instance, you'll have to replace with the Latest from Repository...

Replacing with the Latest from Repository should be handled with care because you'll get any changes that have been committed to the file/folder which might not be compatible with the rest of the resources in your project. For instance, if someone has refactored a method on an interface and you decide to get the latest version of that interface without updating the rest of the project, you might end up with compilation errors.

I tend to use Base Revision for rolling back individual file edits and Latest from Repository for rolling back major refactoring operations.

Copying SVN Controlled Resources

My final note is regarding the copying of resources that are under source control.

Each folder has a .svn folder within it which contains various SVN-related data (e.g. the base revision and other metadata). Unix operating systems tend to hide files that start with a period, and on Windows this folder seems to have the hidden attribute set. In Finder (Mac) and Windows these files and folders are hidden by default. On Windows, change the View options to show hidden files (and I tend to apply that to all views). This tip shows how to change a default system property on Mac in order to display these files.

Also note, last time I checked you couldn't create files or folders starting with a period through Windows Explorer or the Command Prompt, though it is possible from within programs; I haven't checked for years, so that could be different now. If that's still the case, it makes dealing with these folders a little tricky, though you should never have a need to create one during normal usage scenarios.

If you copy a folder that is under source control and then add that folder to another project you'll copy the .svn folder as well.

This will confuse Eclipse, but probably not until you come to commit, when you'll get bizarre messages about the resource already being under source control. Even running SVN cleanup might not fix the problem for you because, from SVN's point of view, the folders are quite legitimately under source control.

Again, there are a number of solutions:
  • export the source project, which seems to automatically exclude source control meta data
  • disconnect the source project from source control (Team -> Disconnect) and choose to 'Also delete the SVN meta information from the file system'. This approach is often undesirable as you then have to check out that project again before you can continue working on it.
  • copy the source folder using Finder (Mac) or Windows Explorer and manually delete the .svn folders after enabling your UI to make them visible (see the one-liner below). Be sure to find them all or you risk partially committing a project in various states, though SVN seems to be quite good at recovering from that if you commit all resources at once (i.e. it seems to be atomic on the repository side).
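On Mac (or any Unix-like system) the manual deletion can be done in one go from a terminal; something like this, assuming the copy lives at /path/to/copied/folder - check the path carefully before running anything involving rm -rf:

find /path/to/copied/folder -type d -name .svn -prune -exec rm -rf {} \;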
Once you've got your folder under your project, refresh Eclipse. Issuing an SVN update command should show you if your SVN working copy is in order.