Monday, 10 December 2007

flex and server modularisation

I read the linked article and agreed with most of what it said, but when I thought about it further I had an issue with the counter-argument to point 8.

I've been getting up to speed recently with Flex as well as the back-end product called LiveCycle Data Services. I have to admit it is so easy to create applications with these APIs it is almost silly! For instance, I can call methods on a POJO running on my server from Flex with almost no effort: just a couple of lines in my server config and a couple of lines in my Flex application. When called from the UI my POJO can then do whatever it needs to on the back end. Since it is running in an enterprise application I suppose a logical approach to accessing an RDBMS (which is the scenario I am most likely to have to implement) would be to look up and delegate to session beans.

The thing that bugs me, though, is that this means I have to use an application server and I have to package my POJOs inside a web application or enterprise application. I'm averse to this because I don't believe it's an approach that gives much flexibility.

In the kind of work I am likely to be doing soon, we want to be able to build a suite of components that we can plug together, just like LEGO, depending on our customers' requirements. Using the approach above, we would end up deploying a single large heavyweight application that may be slightly different for each customer, and that means multiple branched codebases! Nightmare.

Flex supports modularisation and components, so that is part of the way there, but right now I am still unclear how we can deliver an application to a customer that contains only the software they actually need and no more, yet still allows us to give them access to more software later without rebuilding and redeploying their entire application. The situation is even worse with web applications, though simply packaging up POJOs and the components they depend on into a single web application probably isn't as bad as it sounds and can be done with some clever build scripts.

However, what you end up with is a project per customer. I imagine we'll end up with a very complicated structure in our source code repository consisting of Flex components, Flex modules, server POJOs, web components and ultimately a web application containing a Flex application, all of which could have specific implementations for each different customer.

Not exactly LEGO, and it all has to be resolved at build time. This isn't good, because if a part high up in the build hierarchy changes then I may need to re-package each customer's application to cope with the change, even if that change was for a single customer! Or you end up with copies and branches of your components with specific changes in them, and that becomes a nightmare to maintain.

I think the counter-argument in the linked article is probably correct for an application that is only deployed in a single infrastructure and is built on the SOA pattern. In that case you can change the server or the UI without it necessarily affecting the other. However, this doesn't work very well when you want to deliver a subset of a larger collection of components to different customers, possibly with parts of the application customised for those customers.

The solution is to work to interfaces, build the components and be able to deploy them dynamically and at runtime. This is what OSGi is all about and I would love to be able to bridge OSGi and Flex.

It is possible to build something OSGi-like within Flex - all the tools are there and it has already been done inside Esri's ArcWeb Explorer product, but unfortunately it doesn't look as though you can get to it without having the whole product (or at least some kind of "key" for their API), so I am keeping my eye out for something more open and lightweight.

Furthermore, Flex allows you to call an HTTP service, WebService or "RemoteObject" (the POJO approach talked about above), and the Cairngorm framework allows you to abstract this into the Command pattern with a Business Delegate so that your main Flex code remains decoupled from the chosen approach.
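Cairngorm itself is ActionScript, but the decoupling it buys is just the Business Delegate pattern, which is easy to sketch in plain Java. All the names below (ProductDelegate and friends) are invented for illustration, and the method bodies stand in for real remote calls:

```java
// Business Delegate sketch: the UI-facing command depends only on this
// interface, never on the transport (HTTPService, WebService, RemoteObject).
interface ProductDelegate {
    String fetchProductName(int id);
}

// One concrete delegate per transport; swapping them is a wiring change,
// not a UI change. These bodies are stand-ins for real remote calls.
class RemoteObjectProductDelegate implements ProductDelegate {
    public String fetchProductName(int id) {
        return "remote-product-" + id; // would call the POJO via LCDS here
    }
}

class HttpProductDelegate implements ProductDelegate {
    public String fetchProductName(int id) {
        return "http-product-" + id; // would issue an HTTP request here
    }
}

// A command only sees the interface, so the chosen transport is invisible.
class ShowProductCommand {
    private final ProductDelegate delegate;

    ShowProductCommand(ProductDelegate delegate) {
        this.delegate = delegate;
    }

    String execute(int id) {
        return delegate.fetchProductName(id);
    }
}
```

Swapping HTTPService for RemoteObject then becomes a one-line change where the command is wired up, which is exactly the decoupling Cairngorm is after.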

This means we could call into an HTTP service running on an OSGi container, though I would prefer to call into a WebService. In fact, my preference would be to deploy server-side code as bundles and have each bundle automatically expose its service(s) as a Web Service that my Flex UI can call into. However, I've yet to find a lightweight Web Service implementation that uses OSGi's HttpService which can do this.
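I haven't found such an implementation, but the shape of the bridge is straightforward to sketch. The code below is a hedged illustration that uses the JDK's built-in com.sun.net.httpserver as a stand-in for the OSGi HttpService (in a real bundle the activator would call HttpService.registerServlet instead), and GreetingService is an invented example service:

```java
import com.sun.net.httpserver.HttpExchange;
import com.sun.net.httpserver.HttpHandler;
import com.sun.net.httpserver.HttpServer;
import java.io.IOException;
import java.io.OutputStream;
import java.net.InetSocketAddress;

// The service a bundle would normally register with the OSGi registry.
interface GreetingService {
    String greet(String name);
}

class GreetingServiceImpl implements GreetingService {
    public String greet(String name) {
        return "Hello, " + name;
    }
}

// Stand-in for a bundle activator: wrap the service in an HTTP endpoint so
// a Flex HTTPService (or a SOAP layer built on top) can reach it.
class ServiceHttpBridge {
    static HttpServer expose(final GreetingService service, int port) throws IOException {
        HttpServer server = HttpServer.create(new InetSocketAddress(port), 0);
        server.createContext("/greet", new HttpHandler() {
            public void handle(HttpExchange ex) throws IOException {
                // Treat the raw query string as the name, e.g. /greet?world
                String name = ex.getRequestURI().getQuery();
                byte[] body = service.greet(name).getBytes("UTF-8");
                ex.sendResponseHeaders(200, body.length);
                OutputStream os = ex.getResponseBody();
                os.write(body);
                os.close();
            }
        });
        server.start();
        return server;
    }
}
```

A proper Web Service layer would marshal SOAP/XML rather than plain text, but the wiring (bundle service in, HTTP endpoint out) is the same idea.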

In conclusion, using the traditional method of packaging an application as a WAR/EAR means that changes to dependencies really could force you to re-package (if not re-build) your entire application, especially when supporting multiple products/customers from a single codebase. To make a truly dynamic application we need to use OSGi technology in both the server and the UI. The APIs are available, but we're still missing open and easily accessible implementations:
  • An open OSGi implementation for Flex
  • Some way to expose a bundle's service(s) as a WebService using just the HttpService within the OSGi framework

Tuesday, 27 November 2007

I haven't blogged for a short while, so I thought I'd better just make a quick note about a few of the things I've been up to.

OSGi ant tools

osgijc is an ant task that replaces javac for use when compiling OSGi bundles created with Eclipse. This avoids having to use the PDE and makes integration with a continuous integration environment much simpler.

I should point out that there is already a tool that can do a lot more than this called bnd, but I found it complicated to use and it seemed to clash with the way I like to work in Eclipse. I also felt it was unnecessary to repeat all the information stored in the MANIFEST.MF in a separate file just so it could be rebuilt elsewhere, but this is because of the way you work with OSGi projects in Eclipse. bnd is no doubt highly appropriate if you don't use Eclipse to create your plugins and don't have a MANIFEST.MF throughout your development cycle.

Anyway, a minor change, but as of today osgijc now reads the source folders and destination folder from the file in the project. In the past these needed to be set in the ant task.

Also note that there is osgijc's sister tool, osgibb, which will bundle Eclipse OSGi projects into bundle jars.

By the way, neither tool has been tested with plugins that contribute to the Eclipse UI, but I can't see a reason why that wouldn't work either.

In summary though, I would say use osgijc and osgibb if you want to use ant and your projects are Eclipse plugin projects; otherwise, use bnd.

Rapid Rich Internet Application Development

The main thing I've been up to recently, outside of regular working hours, is getting up to speed with Flex and LiveCycle Data Services.

I have to say, Flex combined with LiveCycle Data Services and db4o is absolutely the fastest way I've ever created an internet application with full data support in the back end. Flex itself makes UI development a doddle. LiveCycle Data Services makes accessing data in the back end as simple as calling methods on a POJO (real POJOs); combine this with db4o so that you don't have to worry about RDBMS issues and you're really flying in no time.

Flex + LCDS + db4o = truly rapid development

I finish my current employment on Thursday, with Friday left for me to sort out some stuff before I start my new role on Monday. In my new employment I will be using Flex extensively. I am hoping I will also be able to make use of db4o, but since I won't be working in a green-field environment and am most likely to be working with existing RDBMS I think this is unlikely.

I'm far from an expert with Flex at the moment, but I have done enough work to be up to speed with most of the UI side of things. I've been writing a little game to try and use as many of the different aspects as possible. If I get it finished maybe I'll put it online, but I suspect I'll probably just leave it once I start working on something real.

One thing I am still unsure about is how to secure access to the back end through LiveCycle Data Services. Flex can easily call through to WebServices as well, which I do have secured examples of. Anyway, I'm sure I'll pick this up quickly enough on the job and with the training I'll be receiving. Always good to have a head-start though!

Tuesday, 13 November 2007

hibernate hell

I'm afraid I don't like "Hibernate". Hibernate, when used incorrectly, can add a massive overhead to your project in terms of runtime efficiency and maintainability.

Imagine the scenario where the architects of an application decide to create the database schema first. This isn't necessarily a bad thing until you add Hibernate into the equation. EJB3, cool! So let's generate our objects from our database schema... a bad mistake.

Hibernate is an O/R mapping tool. That means it allows developers to map objects to a relational database schema. By generating objects based on a database schema, what you are actually doing is creating relational database objects. What I mean is, rather than creating a valid object model which works well, you are simply creating a bad object model that reflects your database schema. This might sound great and handy BUT... what you end up with won't be an object model that is very usable. Hibernate will add annotations to your objects and will assume all kinds of fetching strategies (the JPA spec defaults to eager fetching for single-valued associations and lazy fetching for collections, but I'm not sure what the generator actually does), and may also add associations between classes that are unnecessary or undesirable. For instance, when you add new use cases, are the fetching strategies still appropriate? Possibly not! So, if you take this approach you need to go through the generated model with a fine-tooth comb, check that it supports all your currently known use cases and try as best you can to make sure it can support your unknown future use cases (a very difficult task!).
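To make the fetching-strategy point concrete, here is a plain-Java sketch (no Hibernate involved, all names invented) of the difference between eager and lazy loading. The question is when the extra query runs, and a generated model fixes that answer before your future use cases are known:

```java
import java.util.ArrayList;
import java.util.List;

// Plain-Java illustration of fetch strategies: the question is *when* the
// secondary query runs, not *whether* it runs. All names are invented.
class LazyExample {
    static int queriesRun = 0;

    // Pretend this is the expensive "select * from order_lines" query.
    static List<String> loadLines() {
        queriesRun++;
        List<String> lines = new ArrayList<String>();
        lines.add("line-1");
        return lines;
    }

    // Eager: the collection is fetched as soon as the owner is loaded,
    // whether or not the use case ever touches it.
    static class EagerOrder {
        final List<String> lines = loadLines();
    }

    // Lazy: the query is deferred until the collection is first touched.
    static class LazyOrder {
        private List<String> lines;

        List<String> getLines() {
            if (lines == null) {
                lines = loadLines();
            }
            return lines;
        }
    }
}
```

A use case that lists a thousand orders without their lines wants LazyOrder; one that always renders the lines wants EagerOrder. A generator can't know which you'll need.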

Personally, I don't like the "database up" design approach. It implies a waterfall approach from the outset, as usually once the database schema has been settled on it is very hard to change. A better approach is to think only in terms of your object model and not worry about your database schema until you really need to. At this point I can see Hibernate being more useful, in that it will be able to generate your schema for you. However, I wouldn't recommend this either, as it will most likely generate a schema that rigidly mirrors your current object model. Not only will you be stuck with your schema, but your object model won't be very flexible either.

I prefer to work with my object model and leave the RDBMS until the last possible moment. Since I do a lot of modular work, my database schema design ends up being very modular as well. This doesn't suit everyone, so a suggestion would be to take the object model in small chunks and design the schema at agreed stages during the project lifecycle. You will naturally group objects together in packages where cohesion between classes is high or the classes fall into a natural domain. Why should your database schema be any different? A lot of the time, though, this is hard if not impossible because of referential integrity - and rightly so! But if you want the same flexibility with your RDBMS schema as you get with your objects grouped into packages (low coupling), you end up with a lot of join tables. I don't have a problem with this, but tools like Hibernate cannot work that out for you.

Another option, if you really must use an RDBMS, is to work with an ODBMS until later in your project, when your object model has settled down and the risk of change in designing a database schema has been reduced. If you use something like db4o, for instance, it will encourage you to respect your object model. With a sensible level of abstraction it is not hard to create a layer that replaces db4o with the data access code for your RDBMS schema.
(Furthermore, consider using stored procedures in order to further remove the dependency on your database schema.)

So you may be thinking it isn't fair for me to dislike Hibernate, but the reason I do is mainly because it allows developers/architects to take certain shortcuts which add complexity and a certain amount of rigidity to a project. In turn this makes it difficult to be agile, encourages a waterfall approach and makes it really hard to change your project's code at a later date, even if you just want to add something new.

It doesn't have to be that way, but given the availability of the tools, I suspect most developers (under project pressures) will take the easy way out just to satisfy their manager's continual (and understandable) desire to reduce timescales, but it is a false economy.

If you're about to embark on something like this, either generating objects based on a database schema or generating the schema based on objects, consider NOT using the tools. Continue to use Hibernate, yes, but don't take the shortcuts, as in the end they'll only add overhead rather than saving time and money.

Your turn... have you had a good experience with Hibernate generation tools? Did you have to spend a long time tweaking the objects/schema that was generated? Did you use Hibernate but manually annotate the objects/create mapping files? What approach worked best for you? What nightmares have you had?

Sunday, 11 November 2007

extending a line

This post is more of a math post really. I have to admit, my maths is not that great! I even bought the book "Mathematics and Physics for Programmers" to help me out. When I used to work with Derek I had it easy with the maths really. The team had several good math brains, with Matthew "Bayesian" Bayes probably topping the list of reputable mathematicians in the group. I was given the math on a plate and just had to code it up, most of the time. When I moved on from there I found very little need for math; from GIS programming to server-side Java my math usage went from daily to less than monthly, at a guess.

So I was coding something on a little side project of mine and realised I had a need to "extend a line". Usually I would email my old colleague and friend David Waters, but decided to try and work this one out for myself. I had a flick through my math book I mentioned above and didn't find a direct solution to my problem. However, I did have a good read through the section on Vectors.

Given I had two points and a distance I needed to extend one of those points by, I realised that I needed the following components:
  • a vector (subtract point 1 from point 2)
  • the magnitude of the vector (i.e. the distance between the two points, or equivalently between the origin and the vector)
Once I had the magnitude, I worked out the ratio of the new line length (the original length plus the extension distance) to the original length. This gave me a value which I used to multiply the vector. Once I had done that, I simply added point 1 back onto my newly scaled-up vector in order to give me my new end point.

The code looks like this:

/**
 * @param p1 the start of the line
 * @param p2 the end of the line to extend
 * @param distance how far to extend the line by
 * @return the extended line
 */
private Line2D createAndExtendLine(Point2D p1, Point2D p2, int distance) {

    double length = p1.distance(p2);

    Point2D vec = new Point2D.Double(p2.getX() - p1.getX(),
            p2.getY() - p1.getY());

    double multiplier = (length + distance) / length;

    Point2D p3 = new Point2D.Double((vec.getX() * multiplier) + p1.getX(),
            (vec.getY() * multiplier) + p1.getY());

    return new Line2D.Double(p1, p3);
}

It appears to work in my test cases, so I'm happy with it, but of course I'd love to hear from the math wizards if it could be better or simpler!

Tuesday, 6 November 2007

USB Mobile Internet

There seem to be a few mobile internet deals popping up these days.

Vodafone's "best" deal appears to be £49 for the modem (+VAT), then £25 per month (+VAT) with a fair-usage cap of 3GB on an 18-month contract at up to 7.2Mbps (faster than I can get at home). Shame, because I like Vodafone, but that's just too expensive.

T-Mobile has a similar deal, but with a free modem, and you can get a 10GB data allowance for around £39; their prices include VAT, unlike Vodafone's. However, I've never liked T-Mobile, though I can't put a finger on why.

The best deal so far appears to be "3", whose prices include VAT and appear to be more transparent: 7GB per month including the modem for £25 inc. VAT. I need a demo, to see if it's as easy as the Vodafone modem (literally plug it into your laptop and off you go), and I need some assurance that if it turns out to be rubbish I can cancel my contract. I don't live in London, so coverage isn't likely to be as good.

I really want this as I use my laptop a lot when I'm on the move. Certainly I spend an hour or more a day on the train, so having internet would be really helpful. However, I'm starting a new job in December and think they will be giving me a new mobile phone. So I'll either be transferring my new number (and they pay the bill) or just dumping my existing contract altogether. Either way it'll free up some money for one of these gadgets, which means I can work and play on the move!

For now, I'll leave it and see what happens. Prices may even come down in a month and the deals may change now that two big vendors are online with it. Come on Orange, where are you?

Sunday, 4 November 2007

installing flex builder 3

My next "big thing" is Flex. I'll be starting a new job in December and will be working with this great bit of Adobe technology. I decided to get a head start and get it installed. Took me a while, but eventually I worked it out thanks to the above link.

So, to cut a long story short;
  • get the latest version of Eclipse 3 (I'm using
  • get the latest version of Flex 3 Builder beta Plugin (I'm using Flex 3 Builder public beta 2)
  • install Eclipse as usual
  • run the Flex installer
  • use the shortcut under Programs -> Adobe to start Eclipse
If you can run the New Flex Project wizard, everything should be ready to rock. I was using Eclipse 3.0 and it just wasn't happening, but once I got the latest software it was fine.

Well, that's as far as I've got so far; I'm sure there'll be plenty more flexing going on here in the future.

Tuesday, 30 October 2007

is struts that great?

People rave on about Struts and a lot of architects choose it for their technology stack, but I have to wonder why. J2EE already has a framework: it's called Servlets and JSPs. It uses centralised file-based configuration (web.xml), and if you use it correctly there's no reason you can't apply an MVC model to your application (i.e. apply some common sense and good practices). In fact, you could argue that Servlets and JSPs are just the "V" part of the MVC model, with Session Beans being the "C" part and Entity Beans being the "M" part.

Likewise, Struts is the view part of the MVC model; you still have to code the control and model parts yourself. Actions are not the controller part, they are the link between the view and the controller.

The reference article states the following advantages of Struts:
  • Centralized File-Based Configuration. Just like a regular J2EE application then, with its web.xml.
  • Form Beans. Not exactly difficult to code up yourself, and what Struts gives you doesn't really offer much re-use.
  • Bean Tags. OK, so Struts gives you this tag library and a few other cool tag libraries, but they are trivial to implement and could easily be put into a re-use library within your own company.
  • HTML Tags. Ditto.
  • Form Field Validation. Yeah OK, pretty useful, but have you seen the implementation? What a mess.
  • Consistent Approach. As I said in one of my earlier comments, you still have to code a lot of stuff yourself; a consistent approach is about programming in a sensible fashion. Furthermore, templating engines (e.g. Velocity) and MDA are being used more extensively, and these provide a consistent approach. Struts does not actually enforce any kind of consistency. The configuration is a mess and actions are simply Servlets with a different name.
So, I partially agree with a couple of the points, but I fully agree with all the disadvantages mentioned.

I really feel it is a waste of time getting up to speed with Struts. You're going to lose focus on real programming and become swamped by messy configuration and inappropriate framework code when there already exists a simple and effective framework in the form of Servlets and JSPs. The only good frameworks are the ones that encourage good programming practices and almost feel as though they are an extension of the language, or a filler of the gaps in the language. (Yes, of course I am talking about OSGi)

That all said, I have no experience with Struts2. Does the same spaghetti configuration exist? Are actions still bloated, glorified Servlets? To be honest, I can't even be bothered trying to find out!

In conclusion, Struts (like other bloated frameworks, read Hibernate) is a waste of time and just adds unnecessary overhead and footprint to any project. Get back to basics and keep things simple. Think about re-use as you write your code and you will end up with a "framework" that works for you and your company, but don't go looking to create one!

Thursday, 25 October 2007

Gaikokugo - a wizard in 30 minutes

I was trying to enter some vocabulary into Gaikokugo, the language learning suite I have been developing, and realised that it was a bit tedious. The vocabulary appears in a sheet, and to enter new vocabulary I had the following process:
  • double click the sheet to create a new row
  • click on the new row and edit the values in the properties view (using the mouse to move between properties)
  • tab out of the properties view and double click the sheet to start again
This wasn't optimal, so I decided to implement the following:
  • Select "New Entry..." from the "View" menu OR double click the sheet.
  • A wizard opens and focus is set to the "local" word input. There are also inputs for the foreign words, categories and level, and a check box (which is checked) so that the wizard opens again once "finished".
  • Now I can quickly enter vocabulary simply by entering the local word(s) and foreign word(s) and hitting enter to have the wizard save the input and re-open to start again, or I can tab through and also update the categories and level, which default to whatever is selected in the CategoryView.
This took about 30 minutes to implement, which is basically the length of my commute into Glasgow. Once you get over the learning curve, menus, forms, wizards and so on are very easy to implement with Eclipse RCP.

I have some refactoring to do. I have a VocabularySheet and the NewVocabularyWizard. I need to abstract these into generic "LanguageEntry" components (LanguageEntry being the super-class of Vocabulary, Phrase and Conversation) so that I can reuse the same code in the other perspectives.

One thing I find confusing about Eclipse is the terminology it uses. Even as a user of Eclipse I find the concepts of pages, perspectives and views difficult, so my users certainly would. As a result I have called perspectives "views" for the sake of the user. They select either the Vocabulary, Phrase or Conversation view from the view menu, but what they are really doing is changing perspective and seeing a bunch of different Eclipse views.

Wednesday, 17 October 2007

OSGi and UML - again

I think I'm happy with it now, though it has taken some time getting here. I suspect Peter Kriens could pick holes in this (please do), but I feel this is a good representation of how the bundles in Gaikokugo, my language learning application, look:

This morning I decided to email my friend and mentor John Skelton, co-author of Schaum's Outlines: UML, to ask for some advice. As I was explaining the situation to him I made the breakthrough: I realised it simply isn't important to show the bundle that the interface (the service) is declared in, and everything else just fell into place. Looking back, I think Peter had been trying to drill this into my thick head. =)

For instance, in a more complicated example I've been using at work, I have the interface out on its own. The bundle that declares the interface also uses it, but the interface is not shown within any bundle; it is just shown out there being used by various bundles. By drilling down into the UML model you can see which bundle the interface belongs to, and so I have shown this in a separate package (class) diagram. These diagrams are going to become part of the software architecture document I'm writing (which is a new one for me as I usually hate writing them, but am enjoying it this time - perhaps I'm getting old?).

So in the above diagram, I have used an absolutely bog-standard UML2 implementation diagram (aka component diagram). I think it's pretty clear that the "db" bundle exposes the interface, that the "db.db4o" bundle implements the interface and that the "rcp" and "db4o.test" bundles use the interface. There's no mention of a dependency on the bundle that exposes the interface, just the interface itself, which is the best approach for keeping the system as loosely coupled as possible. In this example the interface is only shown inside the bundle that contains it because that bundle doesn't do anything with the interface itself.

The one thing that is missing in the diagram above is the service listener that Peter mentions in his article (as part of the register, get, listen operations that can be performed on a service), but I am happy this can be represented with a simple dependency connector stating that the relationship is one of listener.

In conclusion, this is perhaps not as elegant as Peter's notation, but I'm happy that I can get the message across using UML. I can imagine that the diagrams could get complicated, but at the same time that kind of forces the architect to break things down into more manageable chunks. That makes it harder to get a view of the big picture, so it is a matter of swings and roundabouts.

Oh, and I did find a free tool for doing drawings, it was in Open Office! It's the first time I've had office software on my personal hardware for ages as I have been using Google Docs for some time now. Hopefully Google will come up with a nice online collaborative drawing tool next - hint hint! ;-)

Now, my next problem is exposing my OSGi services as WebServices, but I'll leave that one for another post.

roles in the software world

I was pondering what certain "roles" within software companies should actually mean. I think it's a grey area, though I'm not sure it even matters really. Here are a few that I encounter fairly often and how I interpret them. I have also written them up in terms of my perceived "pecking order" within a.n.organisation.

Please feel free to post other ones and/or your interpretations. Which one of these is closest to your title? Do you feel you meet my description? If not, is your description appropriate to the role you actually do, not just the name?


Programmer

Quite rare these days, I feel, but someone with this title would basically just write code. No design or interpretation; they take a spec and make it work. They would most likely contribute to a piece of software, but not be directly responsible for the final deliverable, that being handled by someone more senior.

Analyst Programmer

Like the programmer, but able to take requirements and assess the feasibility of what is required then create a spec and basically code it up.

('Software' | specialisation) Developer

(e.g. Software Developer, Java Developer, Senior Java Developer, etc)

Like the analyst programmer, however, a developer would be able to produce the final deployable result. They may also be required to contribute to estimates and mentor more junior members of the team. The design output from a developer is most likely minimal. After the spec, they code.

The senior version may manage a small team of other developers, delegating tasks throughout the project lifecycle. They would no doubt have a hand in the planning of the project's deliverables and milestones.

('Software' | specialisation) Engineer

A specialist version of the Developer who will also produce high-quality software design documentation at a level of detail that can be understood by other members of the team. An engineer may also be required to capture requirements and produce performance analyses, high-level designs and detailed designs.

The senior version is similar to the "Senior" Developer in that they will be leading small teams, mentoring, delegating work, contributing to project plans and so on.

I'd like to think this is where I am at now.

Technical Architect (TA)

My feeling here is that there's quite a bit of overlap between this role and the Engineer role. The major difference is that the TA probably won't touch any code unless absolutely necessary. They may be involved in the design of the architecture for multiple projects at the same time. The role is primarily one of documentation creation and work validation; their biggest contribution would most likely be a software architecture document, for instance. In an organisation with a TA and Senior Engineers, these roles would probably work together, with the TA delegating the responsibility of delegating tasks to the Software Engineer.

Solutions Architect (SA)

My view of what the SA should be is someone who is ultimately responsible and accountable for one or more projects. The SA collates all the information about the final solution: deployment environments, software versions, managing non-functional requirements and so on. It is also their responsibility to validate the technical architecture of the software solution, though not necessarily to have a hand in designing it. They will typically add all the guff to the software architecture document that is not actually to do with the software - you know, the stuff that everyone skips and that turns a succinct 30-page document into 120 pages.

While the role is perceived as "higher" than that of a TA, the SAs I have met have usually been less technically competent than the senior developers/engineers on the team. A good SA should be able to bridge the gaps between the technical people, business people, project management and quality assurance. They are natural leaders and amicable.

Monday, 15 October 2007

OSGi services

I think I'm finally starting to "get" where Peter Kriens was coming from in his blog entry on OSGi design techniques.

When initially designing the system it is important to define what services are going to be used (get, register, listen). I've been trying to force this into a UML component diagram, but that is implementation specific, which is what UML is good for, as Peter says, and not so useful for the level of analysis that Peter's design technique is aimed at.
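To make the register/get/listen vocabulary concrete, here is a toy stand-in for the service registry in plain Java. This is purely illustrative and all names are invented; real bundles would use BundleContext.registerService, getServiceReference and ServiceListener instead:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// A toy stand-in for the OSGi service registry, just to make the three
// operations on a service (register, get, listen) concrete.
class ToyServiceRegistry {
    interface Listener {
        void registered(Class<?> type);
    }

    private final Map<Class<?>, Object> services = new HashMap<Class<?>, Object>();
    private final List<Listener> listeners = new ArrayList<Listener>();

    // register: a providing bundle publishes an implementation of a service.
    <T> void register(Class<T> type, T impl) {
        services.put(type, impl);
        for (Listener l : listeners) {
            l.registered(type);
        }
    }

    // get: a consuming bundle looks a service up by its interface only.
    <T> T get(Class<T> type) {
        return type.cast(services.get(type));
    }

    // listen: a bundle reacts when services come and go.
    void listen(Listener l) {
        listeners.add(l);
    }
}
```

The point for the diagrams is that all three interactions are with the registry and the interface, never with the providing bundle itself.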

I also found this article from IBM describing a UML profile for services, which may be useful.

I really need to get to grips with UML profiles!

For the next 6 weeks I will be documenting the design of a new OSGi-based system that I am architecting at work, as well as continuing that design and giving it some detail, so I think I will have plenty of chance to get up to speed with UML properly. My aim is to prove the design that is in my head (and partially implemented) before spending too much (more) time writing code to support it. At that point I will also be getting a couple of team members to help me out, so I want the design of the system to be clear in order to aid in delegating the implementation work.

Bit of a ramble there, but I think my conclusion is... the OSGi design technique is going to be a useful analysis tool, and UML is going to be good for showing how it needs to be/has been implemented. Of course, now I need to find a (preferably free/open source) tool that can help me do the types of diagrams that Peter produces.

Thursday, 4 October 2007

Gaikokugo - high level design

I decided to try to document my approach with Gaikokugo. My first problem is describing OSGi services in UML. Peter Kriens touched on this in his blog [here], and it looks like he gave up. I like Peter's simple notation, and the more complicated diagram further down his post is likely to get very messy in a UML environment. However, I'm determined to use UML (as there are so many tools to hand and it is pretty much an industry standard), so here is what I came up with to describe the core components of the Gaikokugo Language Platform as it currently stands, using a simple component diagram.

The components represent OSGi bundles or Eclipse plugins, as appropriate.
  • gaikokugo.common contains common classes used throughout and exposes no services. Right now it contains a single class, GaikokugoException, which is an unchecked exception that can be used to wrap checked exceptions, or thrown when it is unreasonable for the application to cope with the current situation or checked exception.
  • gaikokugo.rcp is the UI part of the application, an Eclipse specific plugin. All of the UI code is in here at the moment. This may change and I'm already considering adding an extension point to allow a re-usable exception handler to be plugged in to handle the unhandled exceptions and report them back to me via email, or directly into Bugzilla, etc. This is likely to be a separate project and an extension point is probably not even required, now that I think about it.
  • gaikokugo.db is a standard OSGi bundle that exposes the persistence interfaces for the application. One of my goals for this application was to avoid having the user "save" their work as they go along, so my design forces the implementation of the db to be persistent throughout. I do talk in terms of a database, but the GaikokugoDBService's only method opens a file on the filesystem. How this file is read and written is up to the persistence implementation.
  • gaikokugo.db.db4o is a persistence implementation based on db4o and registers itself as an implementation of the GaikokugoDBService interface exposed in the db bundle. I was using the db4o OSGi bundle, but I just couldn't work out what benefits it gave me over including the libraries in the bundle itself, and I had problems getting Eclipse to detect it as a plugin. Anyway, all of the db implementation code specific to db4o is in this bundle, so no problems with classloaders or anything like that.
  • gaikokugo.db.db4o.test doesn't yet exist, but will contain unit tests for the db4o implementation. These will most likely be executed using ProSyst's Test Platform... but even though it is excellent it isn't a free piece of software, so I may build my own OSGi based unit testing framework.
  • likewise gaikokugo.rcp.test does not exist yet, but I envisage having UI unit tests in this bundle, as well as an implementation of the db bundle specific to the UI tests. I'm not sure how I am going to do this yet, but I believe it is possible to hook into the UI programmatically in order to simulate user activities.
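As an aside, the GaikokugoException mentioned above is about as simple as a class gets. A sketch of the shape I have in mind (the exact constructors here are illustrative, not the actual source):

```java
// Hypothetical sketch of gaikokugo.common's GaikokugoException: an unchecked
// exception that can wrap a checked exception, or carry its own message when
// the application cannot reasonably cope with the current situation.
class GaikokugoException extends RuntimeException {

    GaikokugoException(String message) {
        super(message);
    }

    // Wraps a checked exception the application cannot reasonably handle.
    GaikokugoException(String message, Throwable cause) {
        super(message, cause);
    }

    GaikokugoException(Throwable cause) {
        super(cause);
    }
}
```

Callers can then rethrow, e.g. `throw new GaikokugoException(ioException);`, without declaring the checked exception on every method signature up the call chain.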
There is still lots to be done, but at least I have done some design... I have started listing the features of the project as "Mylyn Tasks" within my Eclipse IDE. I probably need a better way to do this as I frequently switch between my laptop and my desktop PC and I don't think the tasks come along for the ride. I should probably set up Bugzilla or something similar.

"old" technology

I've been so fascinated by new technologies recently, I had forgotten just why people use the "old" stuff. Yes, technology is moving along at a rapid pace, but consider this n-tier architecture:
  • SQL database (Oracle, MySQL, SQLServer, etc)
  • Hibernate 2
  • EJB 2
  • Struts 1.3.8
Nothing amazing or innovative there... at least not by today's standards, but only a few years ago the latter three were hot technology. Today they are stable, reliable and simply work exactly how you expect them to and there are lots of tools to help you work with them very effectively. So, I'll be revisiting some of this technology in the next few days (though I am off to Spain for a week on Sunday) to get back up to speed, just in case I happen to need it again.

Monday, 1 October 2007

new personal project: gaikokugo

I start my Japanese lessons on Tuesday and in a strange attempt to help myself learn better I've started a new project called "gaikokugo", which I believe (and hope) is Japanese for foreign language.

More details can be found at the above link, but to summarise: this is going to be an Eclipse based RCP application using db4o as the underlying database, with three perspectives (vocabulary, phrases and conversations), and it will include an automatic testing facility (where feasible).

Thursday, 27 September 2007

POJOs and annotations

What is a POJO? It is a plain old Java object. I believe the definition allows inheritance, but I would like to think that it discourages use of factories, home interfaces and other such malarkey (think EJB2).

But what really disturbs me is that people think they can take a POJO, annotate it, and it is still a POJO. Well, that's just not true!

By annotating your POJO you have coupled it tightly to whatever framework supplies the class definitions for the annotations. For instance, EJB3. People rave about being able to use POJOs, but you're still annotating your classes with EJB specific code.
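To make the coupling concrete, here is a small self-contained sketch. The real annotation would be javax.ejb.Stateless from the EJB API jar; I've stood in a local annotation purely so the example compiles on its own, but the mechanics are identical: the compiled class file carries a reference to the annotation's class.

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Stand-in for javax.ejb.Stateless -- in a real EJB3 project this annotation
// class is supplied by the EJB API jar, which is exactly the coupling at issue.
@Retention(RetentionPolicy.RUNTIME)
@interface Stateless {
}

// The "POJO": compiling this bakes a reference to the Stateless annotation
// type into the class file, so the class is no longer framework-free.
@Stateless
class GreetingBean {
    String greet(String name) {
        return "Hello, " + name;
    }
}
```

Delete the jar that supplies the annotation class and GreetingBean is no longer the innocent standalone class it appears to be.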

The programming model is similar to POJO, in that you now don't have to worry about that extra EJB rubbish you had to produce in earlier versions (never understood why that was needed in the first place!), but it's still an EJB.

As far as I am aware (please correct me if I am wrong), you can't take that EJB annotated class out of the J2EE environment and use it elsewhere without having to include the classes for the annotations. I believe you will get a "class not found" (or "no class def found") error if you try to use a class that has been annotated but don't have the annotation classes on your class-path.

Which to me makes annotations pretty much useless.

Wednesday, 26 September 2007

Coding Kata

I was recently introduced to Coding Katas. It turns out I've been a practitioner of this without realising it had a name, though I haven't been working through the examples on that website; I'd just been doing my own exercises using new technologies.

I recently undertook "Kata 14" and pumped out a solution in an evening. Good fun! And the output is actually quite interesting too. While I had a specific reason for doing this, I might actually be tempted to keep it updated and improve it and see if I can get it to produce even better results.

In other news, I'm currently working on a task management system based on OSGi, GWT and db4o in my spare time (currently my train journeys to and from work). It is actually coming together very quickly. Programming the front end with an easy API like GWT helps a lot, as does not having to worry about mapping my objects to an RDBMS. It just works. Can't say I'll have it available any time soon as I haven't bothered to outline any specific requirements for it; I'm just throwing it together to see what the technologies can do and how they can help me.

I may well shelve it and work on a game instead, though I have no idea what!

I'm also quite looking forward to EclipseCon 2008 - if I can get my employer to cough up for it. Not holding my breath though as I suspect it will fall right in the middle of some busy work period. :(

Saturday, 22 September 2007

I don't watch TV

Seriously. I find it intrusive and time consuming. I tend to put aside some time to watch my favourite shows on DVD and of course I'll sit and watch movies on DVD, but that's not what I want to talk about.

There appears to be a growing trend of people publishing technical articles and interviews as podcasts and online videos. While I'm working I haven't got time to watch a video, and I can't listen to someone rambling on about something when I'm trying to concentrate on what I'm doing.

I'm sure I'm missing out on a lot of good stuff, but I much prefer text and pictures on a web page. I can scan read, and the pictures, if appropriate, will aid my understanding of what I have scan read. If I pick up on something that requires more attention I bookmark it and come back to it later when I can read it in detail. I can quickly assess the importance and relevance of a document while I'm working, without it interrupting me too much.

Videos and podcasts don't allow for that kind of interaction. They are also not searchable by search engines.

I feel as though the technical community is taking a step backwards from what the internet is supposed to be. For me the internet is a fantastic place to find information in a randomly accessible way. At least consider posting a transcript of your video or podcast somewhere accessible. Thanks!

Wednesday, 19 September 2007

Joel vs GWT?

I don't generally read "Joel on Software": I can't help but hear a self-righteous tone and a fat, patronising, American accent as I read a bunch of fluff that I generally already know. The odd article is spot on though, and I have friends who read him religiously... almost.

Somebody over on the GWT Google group pointed out Joel's latest rant. I wasn't sure at first; I thought he was trying to imply GWT was doomed to be out-done by some "NewSDK" and that people are wasting their time building AJAX applications. But then I re-read it and realised that GWT *is* the "NewSDK" he's referring to, so unless I'm missing some half-baked American irony, Joel is a bit behind the times on this one.

Tuesday, 18 September 2007

GWT and OSGi

I decided I would try to build a GWT app and deploy it in my OSGi container using the Http service.

Deploying GWT as a client-only app was straightforward enough. I converted my Eclipse project to an OSGi bundle in the usual way and created an Activator, which in turn created a ServiceTracker and customiser for the HttpService. In my customiser I register the location of the GWT files and it just works. Great!
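For anyone wanting to do the same, this is roughly the shape of the Activator I mean. A sketch only: the "/myapp" alias and the /www resource folder are made-up names for illustration, and error handling is kept minimal.

```java
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;
import org.osgi.service.http.HttpService;
import org.osgi.service.http.NamespaceException;
import org.osgi.util.tracker.ServiceTracker;

public class Activator implements BundleActivator {

    private ServiceTracker tracker;

    public void start(final BundleContext context) {
        // Track the HttpService and register the compiled GWT output
        // (assumed to be packaged under /www in this bundle) when it appears.
        tracker = new ServiceTracker(context, HttpService.class.getName(), null) {
            public Object addingService(ServiceReference reference) {
                HttpService http = (HttpService) super.addingService(reference);
                try {
                    http.registerResources("/myapp", "/www", null);
                } catch (NamespaceException e) {
                    throw new IllegalStateException("alias already in use", e);
                }
                return http;
            }

            public void removedService(ServiceReference reference, Object service) {
                ((HttpService) service).unregister("/myapp");
                super.removedService(reference, service);
            }
        };
        tracker.open();
    }

    public void stop(BundleContext context) {
        tracker.close();
    }
}
```

With that in place, pointing a browser at /myapp on whatever port the framework's HttpService is listening on serves up the GWT app.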

So I decided to take the next step and deploy a client/server GWT application.

At first I couldn't work out what was going on. My server side class which extended RemoteServiceServlet was being called (I had overridden the service method to make sure) but nothing appeared to be happening and my client was reporting an error on the server. Everything was working fine through hosted mode, of course.

Eventually I discovered it is because RemoteServiceServlet calls request.getContextPath(), which is a Servlet 2.2 method. OSGi only supports the 2.1 specification, and the framework I am using simply throws a ServletException when Servlet 2.2 methods are called. So I copied the code of RemoteServiceServlet into an "OSGiRemoteServiceServlet" and replaced the call to request.getContextPath() with the 2.1 equivalent, request.getRequestURI(). This fixed the problem and my GWT app now runs fine in my OSGi container, which is fantastic.

Hopefully, I won't run in to any more Servlet Spec 2.1 issues, but if things don't work as expected, especially compared to hosted mode, that'll be the first thing I check!

Friday, 14 September 2007


2-5 years, j2ee will be a thing of the past.

i believe it has failed in achieving its aims. i'm not sure what those aims were exactly, but i find j2ee hard to use. people have tried to make it easy to use by adding further frameworks on top, e.g. spring/struts. i always believed that servlets and jsps were enough, with a good bit of design to separate display logic, business logic and database logic. that's the kind of design you should apply in any type of application (e.g. .net, php), not just j2ee.

let's face it, ejbs were always a bit rubbish. ejb3 has tried to become the silver bullet object-relational-mapping solution but i'm just not convinced. there's very little modularity about ejbs: your application is tightly coupled to the interfaces they declare and there's no getting away from that. if the implementations of those interfaces are not available your application will not work. ok, so an ejb may be used by several applications, but in reality... i've never seen that! furthermore, it isn't as though you can have code dynamically contributing application functionality at runtime in a web app. and don't get me started on the hibernate nightmare!

so step aside j2ee and welcome osgi. i believe it's already starting to happen. some of the big app servers are already moving over to an osgi based architecture; ibm's websphere was already there, of course. my worry is that they'll do it wrong and it's such a simple thing to get right. rather than producing app-servers they should be producing high quality, enterprise capable osgi compliant frameworks. creating a web app as an osgi plugin is already easy since there should be an http-service running in your framework. you can just look it up, register your servlet and away you go. the ibms and weblogics of the world should focus on making it happen fast and reliably. i think you can forget sun on this one though. my impression is they see osgi as a threat and are deliberately trying to avoid it, though before long osgi or something like it will become part of the java specification.

and anyway, back to databases. since this is java we're talking about shouldn't we be doing things with objects and not having to worry about mapping them to a relational schema? which leads me to my next prediction...

5-10 years, rdbms will be a thing of the past.

rdbms are great for procedural languages, but pretty much every modern language deals with objects. what a waste of time mapping objects to a relational schema when you could just be storing your objects directly into an oodbms. oracle 9i (i think, definitely 10) has object support in it, but you still access the database in a relational way. i think the team over at db4o have the right idea and their solution works for all versions of java and .net. great stuff! and it runs in an osgi container. even better.
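to show just how little ceremony is involved, here is a rough sketch of db4o's query-by-example style, written from memory of the 2007-era api (the Task class and file name are my own inventions, so treat this as a sketch rather than gospel):

```java
import com.db4o.Db4o;
import com.db4o.ObjectContainer;
import com.db4o.ObjectSet;

public class Db4oSketch {

    // a plain object -- no mapping files, no annotations, no schema
    static class Task {
        String name;
        Task(String name) { this.name = name; }
    }

    public static void main(String[] args) {
        ObjectContainer db = Db4o.openFile("tasks.db4o");
        try {
            db.set(new Task("write blog post"));        // store the object as-is
            ObjectSet results = db.get(new Task(null)); // query by example
            while (results.hasNext()) {
                System.out.println(((Task) results.next()).name);
            }
        } finally {
            db.close();
        }
    }
}
```

no object-relational mapping anywhere in sight, which is exactly the point.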