The amazing adventures of Doug Hughes

Archive for October, 2008

Double-click your Flex DataGrid rows

NOTE: In the comments below, someone named Chris left a note about using the doubleClickEnabled attribute of the DataGrid to accomplish the same thing I did by using the initialize event and the itemRenderer. I took Chris’s suggestion and applied it to my application, and it does in fact accomplish the same thing, so there’s a simpler way to deal with the problem. I would imagine that, internally, the two approaches are roughly equivalent, and while there’s a more straightforward solution, it’s nice to know why things work, which is as much what this post is about as it is what to do to get it working in the first place.

No matter which approach you take to solving the problem, this is, hopefully, the last time someone will have to spend hours finding the solution. It would seem that at some point in the past 2 years I’d have had to make a Flex DataGrid respond to double-clicks on its rows, but apparently I haven’t, because over the weekend I was working on a project and had to do just that. Since the solution proved to be poorly documented (at least according to my Googling) and took me quite a while to work out, I figured I’d blog what I did to make it work.

It all started with a DataGrid and the itemDoubleClick event handler… You see, the Flex 3 DataGrid component has a built-in event to deal with double-clicks on its rows, but (and this I found referenced a LOT in Google searches on the subject) even though you’ve implemented its handler, nothing happens. This is because the rows (which are instances of a subclass of InteractiveObject) have double-click support, but it’s disabled by default. So you have to turn it on for each row in the grid… but the question is HOW?

The rows are all held in DataGrid.rendererArray (which isn’t even documented!), which is read-only; the creation of the rows is hidden behind closed doors in the DataGrid itself; and the particular classes used by this process are effectively hard-coded into the DataGrid. So how on earth do you turn doubleClickEnabled on in the first place? The answer, it turns out, is fairly simple (or blowin’ in the wind, depending on your iTunes playlist for a Monday morning)… you use the DataGrid’s itemRenderer property to apply doubleClickEnabled to each object that it creates.

Let me explain. DataGrid.itemRenderer is actually a class factory (an instance of mx.core.ClassFactory), which in turn has its own property called, of all things, “properties”. By default it’s null, but if you add name/value pairs to it (by assigning it an instance of Object), each name/value pair is copied into every object created by the factory. So all you have to do is something like this:

<mx:TitleWindow xmlns:mx="http://www.adobe.com/2006/mxml"
    title="DataGrid DoubleClick Example">
    <mx:Script>
        <![CDATA[
            import mx.collections.ArrayCollection;
            import mx.controls.Alert;
            import mx.events.FlexEvent;
            import mx.events.ListEvent;

            private var data:ArrayCollection = new ArrayCollection();

            private function get dataProvider():ArrayCollection {
                if (data.length < 1) {
                    data.addItem({ Column1: "This is the first row",
                        Column2: "This is row 1 column 2",
                        Column3: "column 3 from row 1" });
                    data.addItem({ Column1: "This is the second row",
                        Column2: "This is row 2 column 2",
                        Column3: "column 3 from row 2" });
                }
                return data;
            }

            private function doubleClickHandler(event:ListEvent):void {
                Alert.show("Column1 is: " + event.itemRenderer.data.Column1);
            }

            private function gridInit(event:FlexEvent):void {
                event.currentTarget.itemRenderer.properties = {
                    doubleClickEnabled: true
                };
            }
        ]]>
    </mx:Script>
    <mx:DataGrid id="myGrid" left="10" top="108" bottom="10" right="10"
        dataProvider="{dataProvider}"
        itemDoubleClick="doubleClickHandler(event);"
        initialize="gridInit(event);" />
</mx:TitleWindow>

First off, check out the gridInit() method, where I’m assigning event.currentTarget.itemRenderer.properties an object with the name/value pair I want to end up on all my grid rows. Since the initialize handler is on the grid, currentTarget is the grid itself and I can work with all its properties as though I were using this.myGrid. I have come to love the Flex built-in events. They’re not perfect, but most excellent nonetheless. Let me hit the high points for you:

Using the initialize event on the DataGrid component to apply an object to the itemRenderer property means that the itemRenderer exists but hasn’t done anything yet, so you don’t get any of the errors or omissions I hit when trying to do this anywhere else. If you do it in the parent component’s creationComplete handler, it blows up with compiler errors referring to class casting from IFactory, and if you do it from the grid’s creationComplete event you get no errors but only a few of the rows are actually double-clickable. Very odd, but using initialize on the grid itself works just fine.

Setting itemRenderer.properties to {doubleClickEnabled: true} means that every object created by the itemRenderer will have doubleClickEnabled = true applied to it as part of the object-creation process. Since the factory exists to create all the rows in the grid, every row will have double-click turned on, and since the itemDoubleClick event has a defined handler, that handler will be called when any row is double-clicked.

One more bit of niftiness: When you double-click a row, the ListEvent that gets fired has an itemRenderer property, which is the actual itemRenderer you double-clicked on. It has a data property, which represents the entire item from the dataProvider. An example of where this is nice is in the case of AIR, working with a DataGrid that’s populated from a query against a local database. Say your query has 10 columns, but your DataGrid’s DataGridColumn tags are only configured to deal with 3 of them. The event.itemRenderer.data property still contains the entire row from the query (the query result being an ArrayCollection), and event.itemRenderer.listData is the individual cell in the DataGrid that you double-clicked on. I really like the fact that you have access to the whole row from the original dataset.

So the next time you have to make the rows in a grid accept double-clicks and you go googling for it, I hope this comes up. This article would have saved me a few hours over the weekend. Who knows? Maybe it’ll be another 2 years before I have to do this again and it’ll save me another couple hours when I go Googling for it myself!

Data Validation vs. Object Validation

Back when we put together Validat, one of the topics of discussion was whether validation should occur at the object level or at the data level (i.e. a collection of user-submitted data). This topic came up again last week on Ben Nadel’s blog with his postings regarding his experience in Hal Helms’ class. The comment Ben made that really got me thinking again was this:

An object can only ever exist in a valid state. An object should never be created or allowed to enter a state that is not valid within the domain.

While his discussion of Human and Vehicle objects is nice and illustrates the point fairly well, most times with web applications we are dealing with a much more “data centric” scenario, for lack of a better word. A lot of the functionality we are building is fairly basic CRUD… reading and writing data. So, let’s take a shopping cart as an example. A shopping cart has a Product object, and that Product object is fairly simple, containing basic data like a name, description, picture, etc. From the administration interface, when the shop owner is creating and editing products, they have a simple form to enter this data. They submit the form and then what? Obviously the submitted data needs validating, but how?

To slip off topic for a second, Ben’s posting seems to hint that he and/or Hal are advocating putting the form-level validation into the controller of the application. This is one of those areas where it is probably more personal opinion than right vs. wrong, but I am still of the belief that the controller layer of an application should merely be the traffic cop in the intersection, guiding bits of information from point A to point B, and not really do anything more.

In one of the projects we are currently working on, the topmost service layer has a saveProduct() method (to keep with our example). This method simply takes a collection of data and then, using Validat, validates that collection of data against a set of rules to see if it represents a valid object. At no point have we created a business object to do this validation. If the data collection is valid, then the business object is populated and passed on to various functions within the application model for processing. If the data collection is invalid, then a collection of errors is returned to the controller, and thus the controller simply has to check the result of the saveProduct() method call and, based upon that result, either redisplay the form with errors or display a save confirmation message to the user.
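A minimal CFML sketch of that flow, assuming a validator object with a validate(ruleSet, data) method and a productGateway for persistence (both are hypothetical names for illustration; Validat’s real API differs in its details):

```cfml
<cffunction name="saveProduct" access="public" returntype="struct" output="false">
    <cfargument name="formData" type="struct" required="true" />
    <cfset var errors = "" />
    <cfset var product = "" />

    <!--- Validate the raw data collection against the product rule set
          BEFORE any business object exists --->
    <cfset errors = validator.validate("product", arguments.formData) />

    <cfif arrayLen(errors)>
        <!--- Invalid: the controller gets the error collection back
              and simply redisplays the form --->
        <cfreturn { success = false, errors = errors } />
    </cfif>

    <!--- Valid: only now create and populate the business object --->
    <cfset product = createObject("component", "Product").init() />
    <cfset product.populate(arguments.formData) />
    <cfset productGateway.save(product) />
    <cfreturn { success = true, errors = arrayNew(1) } />
</cffunction>
```

The point of the shape: the business object is only ever constructed from data that has already passed validation, so its getters/setters can keep their strict type checking.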

So, to bring this back to the question at hand: even though we put it into Validat, I still see no reason why you would ever want to stuff potentially invalid data into a business object. Not only does the result potentially not represent a valid business object, but it requires you to do away with some of the basic type and validation checking of arguments on your get/set methods in order to allow arbitrary data into that business object. It just seems much cleaner to validate the user-submitted data at the topmost service layer (not the controller) and then, if it is valid, create and populate the business object.

SVNKit, ColdFusion, and you… very cool Java+CF stuff

On a bit of my own time last evening, I was looking over the beta version of Broadchoice Workspace (which is pretty durn cool in and of itself, even if it has a ways to go before it’s completely ready for prime time) and decided to post the suggestion that they integrate document versioning into the product. For the things I’d use it for, versioning would be pretty important.

My suggestion was to do something like integrate SVNKit (the only pure Java library for accessing Subversion servers) into the application and, programmatically, create a new repo folder and a new working copy in the webroot to hold uploaded files; it’s practically built-in versioning, really.

Incidentally, if you use Subversion and Eclipse, it’s likely that you have used SVNKit because it’s one of two ways (and until relatively recently the ONLY way on OSX) to get Subclipse working… SVNKit is bundled with Subclipse. More on getting it running in a minute, but first, some thoughts…

Think of this in terms of something simple, like an image gallery. As you add images, they’re added to source control if they’re not already under source control. If they are, then nothing’s done at upload.

Then, using cfthread, you fire a commit for every file you upload, each with an atomic commit message. When you go to do back-ups on the server, you don’t back up the working copy, you back up the repository. This would give you the ability to use files from disk as though nothing unusual were happening, but it would also allow you to do clever things like have preconfigured repository locations that your application depends on and checks for onApplicationStart(). If they’re missing, they’re simply checked out to the correct folder, most likely under the webroot.
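Sketched in CFML, that per-upload commit might look like the following; application.svnService and its commit() method are hypothetical stand-ins for whatever wrapper you build around SVNKit’s commit client:

```cfml
<!--- Fire the commit in the background so the upload request returns
      immediately; attributes are copied into the thread's own scope --->
<cfthread action="run"
          name="commit-#createUUID()#"
          filePath="#uploadedFilePath#">
    <!--- Each upload gets its own atomic commit message --->
    <cfset application.svnService.commit(
        attributes.filePath,
        "Uploaded: #attributes.filePath#"
    ) />
</cfthread>
```

Passing the file path as a thread attribute (rather than reading shared scope inside the thread) keeps each commit independent of later uploads.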

The possibilities really are endless. Let’s take blogCFC as an example (kudos to Ray). Let’s say that when you go to download the application, what you really get is an installer that asks you where you want the root folder of the application set up (or the webroot into which you want it installed) and your CF Admin password. It then goes on to use SVNKit to check out the source from the server, CF’s AdminAPI to create the datasource, and cfquery to create the db tables.

One of the first functions I played with last night was Repository.getRepositoryVersion(), which returns the current repo version from the SVN server. Since you can also get the repository version of the working copy, we now have a pretty foolproof built-in mechanism to manage either auto-updates or a “check for updates” button in the admin section of the application.

Now imagine what it would be like if something like this were added to the CF Administrator under a ColdFusion Applications section. The UI gives you fields for the install destination, the SVN repository information (URL, repo path, username, and password), and a radio button for doing an export from the repo or a checkout. If Adobe were to get really clever, they could publish a whole application distribution system that allows you to include a post-checkout file that could create databases and DSNs, create tables and folders within that DSN, add mappings, etc. And since you’re already inside the CF Admin when you do this, you’re a trusted user! Beyond that, CF Admin could give you buttons to check for updates and replace from repo, the ability to specify an interval for background updates, and a reload URL for the application to be fired off once an update has been pulled from the repo. And if you’ve got write privileges to the repository, it could even let you commit local changes to the server.

You know what would happen if Adobe did this? People would complain that they can’t swap SVN out for CVS or Perforce. OK, I kid (sort of). I think something like this would turn sites like RIAForge.org into powerhouse software distribution systems and give ColdFusion an even greater claim to “make hard things (like installing server software) easy”. Seriously, though… it’s things like this that get my imagination running and my coding juices flowing and make me really want to see them put to heavy use in the ColdFusion world.

As for usage, there are 2 things you need: the SVNKit standalone distribution (for its jar files) and Mark Mandel’s JavaLoader (so you can load those jars without adding them to the server’s classpath).

Next, you’ll simply need a code block that looks something like this:

<!--- Load the SVNKit jars via JavaLoader --->
<cfset cpRoot = "/path/to/download/svnkit-">
<cfset classPath = [cpRoot & "svnkit.jar", cpRoot & "svnkit-cli.jar", cpRoot & "trilead.jar", cpRoot & "jna.jar"]>
<cfset loader = createObject("component", "JavaLoader").init(classPath)>

<!--- Register the http/https (WebDAV) protocol with SVNKit --->
<cfset loader.create("org.tmatesoft.svn.core.internal.io.dav.DAVRepositoryFactory").setup()>
<cfset svnRepoFactory = loader.create("org.tmatesoft.svn.core.io.SVNRepositoryFactory")>

<!--- Parse your repository URL (placeholder shown here) --->
<cfset svnRepoUrl = loader.create("org.tmatesoft.svn.core.SVNURL").parseURIEncoded("http://your.svn.server/path/to/repo")>

<!--- Authenticate against the server --->
<cfset svnWcUtil = loader.create("org.tmatesoft.svn.core.wc.SVNWCUtil")>
<cfset svnAuthManager = svnWcUtil.createDefaultAuthenticationManager("username","password")>

<cfset repository = svnRepoFactory.create(svnRepoUrl)>
<cfset repository.setAuthenticationManager(svnAuthManager)>

<cfset latestRevision = repository.getLatestRevision()>
<cfdump var="#variables#" />

At the very least, if you check out a working copy of the application on production, it would be mind-numbingly easy to set this up so that it does an SVN update as part of the application reload routine. So, all in one step, you update from the repo if the working copy’s revision number is lower than the server’s, clear the trusted cache, and reload the frameworks and libraries necessary to run the application.
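As a sketch, assuming hypothetical wrapper functions (getRepository(), getWorkingCopyRevision(), updateWorkingCopy(), clearTrustedCache(), and reloadFrameworks() are illustrative names here, not real SVNKit or ColdFusion APIs):

```cfml
<cffunction name="onApplicationStart" returntype="boolean" output="false">
    <cfset var repo = getRepository() />
    <cfset var latest = repo.getLatestRevision() />
    <cfset var local = getWorkingCopyRevision(expandPath("/")) />

    <cfif local LT latest>
        <!--- Working copy is behind the repo: update it, then do the
              usual reload chores in the same pass --->
        <cfset updateWorkingCopy(expandPath("/")) />
        <cfset clearTrustedCache() />
        <cfset reloadFrameworks() />
    </cfif>
    <cfreturn true />
</cffunction>
```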

Kick me in the knee and call me a geek… I like it, and I can’t wait for an opportunity to use this in a project. Yeah, I get excited about int’restin stuff.

What Do You Want Model-Glue To Do?

It’s no secret that at Alagad we like Model-Glue. We frequently use the framework on projects, and several of us have contributed to the project over the last few years. Some of us were even privy to Model-Glue before it actually made it out into the world.

Joe has decided that he would like to have more community input into the roadmap of Model-Glue, as well as get a better understanding of how Model-Glue is being used and which features are most valuable to the developers who use it. Help mold the path of Model-Glue by taking some time to fill out the survey mentioned here.

This is your chance to be heard, so don’t pass up the opportunity!

Object Oriented Purity vs. Reality

One of the issues I have dealt with in the past, and continue to deal with in our object management system, is the difference between object-oriented purity and reality. For example, suppose that I have an Order object, or better yet, an Order Id. Now, my display requires that I show the Manufacturer details for a given Product that is contained in this Order.

In an ideal object oriented world, servers would have unlimited memory and never need a reboot, thus all of our objects and their data would live in memory. Therefore, when I need to do something like getOrder().getProduct().getManufacturer().getName(), there is no problem as all of the data is in memory.

However, when you add a database to the picture for data persistence, you introduce a question. If I treat my low-level business objects as pure objects, an Order obviously has no idea what the Manufacturer name is for a given Product within that order. The same would apply to the database design if it is properly normalized. So, when I use the previously mentioned chain of objects, I am potentially making three separate queries against the database: one to get the Order details, one to get the Product details, and one to get the Manufacturer details.

In most cases, these are all very small, light queries, so it is not a big problem. However, it bugs me that we are making three separate queries where a nice simple join would have retrieved all of the necessary data with a single hit on the database. On the other hand, if we were to use that join, our data tier would become dependent on, or at least aware of, the requirements of our controller/view tier, which is not a good idea. If the data requirements were ever to change, instead of just changing a service object to return a slightly different set of data based upon objects in the system, we would actually be changing database queries.
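For comparison, here is what the single-hit version looks like when a service object owns the join; the table and column names are assumptions for the sake of illustration:

```cfml
<!--- One database hit instead of three, at the cost of a query that
      now knows what the view happens to need --->
<cfquery name="orderDetail" datasource="#variables.dsn#">
    SELECT  o.id,
            p.name AS productName,
            m.name AS manufacturerName
    FROM    orders o
            INNER JOIN order_products op ON op.orderId = o.id
            INNER JOIN products p ON p.id = op.productId
            INNER JOIN manufacturers m ON m.id = p.manufacturerId
    WHERE   o.id = <cfqueryparam value="#arguments.orderId#" cfsqltype="cf_sql_integer" />
</cfquery>
```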

So, at this point, I am still sticking with the purer object oriented route and taking the minor performance hit when doing things like this. How do other people handle this? Do you take the performance hit, break encapsulation, or is there some middle ground out there?

At Least One Good Reason to Get Aptana: TailView

This may seem silly to some, but one of the more difficult-to-find, and more important, daily-use pieces of software tooling is a decent cross-platform log reader that provides a decent, if small, set of features and doesn’t cost anything.

Finally, after literally years of hoping to find a log reader that was useful without getting in the way, the other day I came across something that’s delightful and, better yet, free. TailView, apparently included with Aptana, has all the fundamental features I’d look for in a log-reading tool and one or two that are just plain cool:

  • It’s an Eclipse plugin, so it’s cross-platform and goes nicely with the Console, Problems, and Search Results views in the lower-right tabset of my screen, just below the main editor window. Aptana’s instructions to access their Eclipse update/install site are here.
  • Within its tab, it has its own tab bar, one for each log you have open
  • Start/Stop capture buttons, so you can leave a logfile open and just ignore it
  • Add/remove log files (so you can use it to tail any text-based log on your computer)
  • It bolds a log’s tab when that tab has been updated with new data
  • Clear the log file in the UI (so you can watch new entries roll in)
  • Delete logfile from disk
  • COOLEST FEATURE: regex-based color coding, so you can tell it to turn anything that matches ^\[(\w| |:)+\] \[error\] purple when it is displayed. Actually, that one comes built in; it’s the regex that matches an Apache log’s error entry.

Why on earth would anyone get excited enough about a log file reader to write a blog post about it? Well… if knowing how to read logs (and sometimes even just where to find them on the machine: JRun logs on CF Standalone, anyone?) will make or break your day, then a decent tail utility (more about tail in a sec) will take your made day and just add gravy. Yeah, mixed metaphors… life goes on, deal with it.

The point is this: every time I use this thing, I am momentarily transported back to the days when I would try to diagnose Apache problems on Windows using Notepad as my logfile viewer, and every time that happens, I get a bit of a smile on my face because this is soooo much nicer and so much easier.

Briefly, by the way, tail is an old UNIX command that opens a log file and refreshes its own screen when new lines are added. Oldest entries at the top, newest entries at the bottom, it just sits there pushing every new log line to the screen so you can see things as they happen instead of having to re-open the file to get fresh lines. Eventually it came to be synonymous with live-update log-file reading in general, and now pretty much anything that will show you new log data as it comes in is loosely called a “tail utility”, whether, like the original, it’s a command-line, text-mode console application or a full-featured, standalone app. Console on OS X (/Applications/Utilities/Console) is one such application, but it has no Windows equivalent.
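If you’ve never used it, here is a minimal, safe-to-run demonstration on a throwaway file (the path /tmp/demo.log is just an example):

```shell
# Create a small fake log file, then print its last two lines
printf 'one\ntwo\nthree\nfour\n' > /tmp/demo.log
tail -n 2 /tmp/demo.log   # prints "three" then "four"

# To watch the file live as new lines are appended (Ctrl-C to stop):
#   tail -f /tmp/demo.log
```

The -f (“follow”) mode is the behavior described above: the command keeps the file open and streams each new line to the screen as it arrives.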

In fact, I’ve been (casually) looking for a decent cross-platform GUI-based tail utility for (literally) years, with little success until I came across TailView the other day. I’d downloaded a different tail plugin for Eclipse and was checking to see if it had installed correctly so I went to the Window > Other Views… panel and typed Tail in the little filter field… and boom! There was TailView, from Aptana. I strongly recommend you check it out.

Because a decent log reader will really make or break your day, or even your week… seriously.

Come Learn About Derby

Have you wondered what Apache Derby is all about?

Have you thought about how you might be able to incorporate Derby into your applications?

If you have, then join me on Wednesday, October 22, 2008 at 7:30 PM at the Hartford Adobe User Group meeting, where I will be giving a presentation that will dispel some myths about Derby, show how you can use Derby to create applications that are truly platform independent, and discuss some tips for getting started, as well as some tools to make the job easier.

The meeting will also be available via Connect; details will be available here.

Hope to see you there!
