The amazing adventures of Doug Hughes

Archive for June, 2009

Where Oh Where Did My Bottleneck Come From?

At one point in my career I thought I knew what performance tuning was… I would have described it as “writing tight code.” (Remember that phrase? “Tight code”?) And if I were tasked with fixing performance issues, I would have jumped on my computer, gotten a local copy of the app going, started hitting pages to find the slow ones, and then looked at the code for ways to fix the issue. That, however, was before I knew what performance engineering really is.

The problem with that is the fact that no application runs in a vacuum… that’s the whole point of the web. Between your users and your application are myriad other systems: browsers and the hardware that runs them, copper, fibre, switches and routers, oh my! Even between the local gateway and your application, there is probably a web server, maybe a load balancer, a switch or two, and probably some sort of box to hook your 1Gb Ethernet backbone up to the real interwebs. Essentially, your application server is an island in the midst of a massive sea of technology, any part of which can involve itself in the performance and reliability of your application. If that weren’t the case, my old methods would work fine… but it ain’t that way.

Meanwhile, one of the burning questions we have to answer is “what is performance?” and its evil twin “what is poor performance?” What do we use to benchmark whether or not an application performs acceptably? That, however, is a pretty easy question to answer: User perception of response times. At least initially. There are other more technical issues to be resolved down the line, but the first sign of trouble is when a user calls the support desk and complains that the login is taking 30 seconds to process.

So, where to start? As I said before, originally I would have started with the code, run the processes being complained about, “tightened up the code”, and pushed it to prod without a great deal more thought. Only later would I start to consider the DB connections, the NIC, switches, etc., but if one of those items is the problem then I just wasted three days rewriting a block of code for what? Nothing. That code performed well enough as it was… and I never solved the problem.

I have heard it said that, by the time you’re ready to fix the problem, you should have 99% confidence that you know exactly what it is… and it’s true. 1% is a reasonable margin of error… I mean, there are always going to be times like when you installed the new lightbulb in the basement and flipped the switch but nothing happened. Who thinks to check the fuse panel when a lightbulb goes out? The thing is, that’s a cheap mistake. Spending days implementing a performance fix that misses the mark is not.

So we’re back to process, process, process. And it all starts with user perception. So we hook up JMeter to run a test plan that models real-world usage patterns (and we base those on web server logs, right?) and we do see a performance issue… what does it tell us? Heh, well that depends, doesn’t it?

Yes, I’m about to talk about JMeter again… because it gives you great info: min/max/mean processing times, error rates, etc., all aggregated across as many requests as your test plan made. So if you run a JMeter test plan against the application from the same machine that’s hosting it, you should be seeing the application at its peak performance capability… your baseline benchmark.
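If you want those baseline numbers without the GUI in the way, JMeter can also run a saved test plan from the command line. A minimal sketch, assuming your plan is saved as baseline.jmx (the file names here are hypothetical):

 jmeter -n -t baseline.jmx -l baseline-results.jtl

The -n flag runs JMeter in non-GUI mode, -t names the test plan, and -l writes the raw results to a file you can aggregate afterward.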

It’s often enough to confirm that the application itself is or is not the culprit, but if you don’t see the problem at the server, just move to the next network connection closer to the internet until you’re hooked up to the WiFi at Starbucks. Have a cup of coffee, eat a scone or three, or… well, bummer. You found a dying NIC and never made it to Starbucks. At some point you should definitely see a dramatic, unreasonable slowdown and, at the least, narrow the field of potential causes. The most critical part, however, is to record your findings so that you have real, hard numbers to work with. I’ve actually taken to using a spreadsheet to track the numbers, with the added benefit that I can use charts and graphs to demonstrate the issue to stakeholders who often can’t tell a “long-running query” from a crescent wrench. Plus I can run many different sets of benchmarks, recording their results over time, and compare them all based on anything from time of day to season of the year.

Ultimately, though, this is about “reasonable performance”. It takes work to build a performant application delivery system… there’s no sense in building it to serve 10,000,000 hits a day with <20ms response times when it only needs to serve 10,000 requests a day and response times are allowed to be up to 100ms. Until you can’t meet that target, you don’t have a problem… what we’re trying to prevent is a user going to your domain, suffering through long and painful response times, and deciding that your website sucks. We need to concern ourselves with the perception that your application is horrendous and should be replaced with a teletype in a small room full of well-trained monkeys, however unfair that may be. Which is why, if you know what you’re doing, a tool like JMeter is really the place to start.

The point is that JMeter has the capacity to place a realistic load on your application, simulating as many users as you need, so you can see how your application performs not just under load but from any of several different locations. You can run a tool like JMeter from a remote machine (you can even take your laptop home and use it over a consumer internet connection), from the local machine (to get a baseline of your application’s actual performance for comparison with other locations), and from pretty well anywhere in between.

Because the best application in the world running on a broken network is nothing more than a broken application (from the user’s perspective!)… know what I mean?

The G1 garbage collector

The G1 garbage collector has been made available in the 1.6.0_14 release of Sun’s JDK. There was some controversy initially when this was released due to some wording in the license, but the wording has since been updated. Sun was trying to indicate that no support would be provided for the G1 garbage collector, yet it initially came across as though you could not even use it unless you had a paid support contract (you can read more about the licensing changes here).

This new garbage collector is meant to replace the current ‘mark and sweep’ collector that most people are using. The G1 collector is targeted at server environments with multi-core CPUs and large amounts of memory, and aims to minimize delays and ‘stop the world’ collections, replacing them with concurrent garbage collection while normal processing is still going on. With the current collector, all processing has to be halted for the GC to iterate through the heap and mark items for collection, with the next garbage collection run actually ‘sweeping’ the items out of the heap. The G1 collector instead organizes memory into smaller regions, and as items survive each garbage collection and become older, they are compacted toward the older side of the heap. This helps minimize the pauses that occur with the mark and sweep collector, and should show good performance improvements with long-running applications.

To enable the G1 garbage collector, add the following to your jvm.config file after java.args=

 -XX:+UnlockExperimentalVMOptions -XX:+UseG1GC 
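
While I’m on the subject: if you want to see what the collector is actually doing, the standard HotSpot GC logging flags apply to G1 as well. Adding something like the following (these are stock JVM options, not G1-specific) will print collection details so you can compare pause times before and after switching collectors:

 -verbose:gc -XX:+PrintGCDetails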

While I have not done extensive testing, running with the 1.6.0_14 release and the G1 collector seems quite zippy, and it has been working great with CF 8.01, Model-Glue:Unity, Transfer, and ColdSpring, so I would say things look pretty good for the ColdFusion community and the latest from Sun. I plan on releasing some benchmarks that show the long-term performance of the G1 collector; look for that in posts to come.

It's Probably Not The Code: Server Performance vs Application Performance

I was recently confronted with an application that suffered from abysmal performance… and by “abysmal” I mean “really, really bad”. My task was simple: make the website go fast! It was an interesting challenge, really, and it got me thinking about the difference between server performance and application performance. All tools and best practices aside, the thing that really hit home for me was the sheer number of places I found that needed fixing. Yes, the code was an issue, but before I even started looking at the code (and through the judicious application of appropriate tools), I found at least three issues with the server itself.

So the difference between server performance and application performance can be summarized pretty much like this:

Server performance is fundamental to the performance of every application on the server and includes concepts like request tuning; thread and DB connection pooling (both of which affected the app in question); the performance of the connection to the DB and any other external resources (and the performance of those resources themselves); JVM settings for things like garbage collection; basically, anything that happens on the server but outside any single application. In the ColdFusion world, most of the time this stuff is a matter of configuration, not code (unless you’re a GlassFish contributor, heh), because ColdFusion takes it upon itself to manage most of these things for you.
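
To make that concrete, here is a hypothetical jvm.config java.args line; the values are made up for illustration, but the flags themselves are standard HotSpot options:

 java.args=-server -Xms512m -Xmx512m -XX:MaxPermSize=192m -XX:+UseParallelGC

Heap sizing, permanent generation sizing, and collector choice all live here, on the server and outside any single application’s code.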

Application performance, on the other hand, is trickier… because the application runs on the server, if the server is misconfigured somehow, the application may take the blame for something that’s actually a server tuning issue. Application performance may actually involve a particular datasource that’s performing poorly, meaning that applications can be affected by external resources as well… however, it can also mean poor choices in application architecture or bad coding practices. But the fundamental difference remains: you can have one poorly performing application on a well-tuned server, but if your server is poorly tuned you’ll suffer performance penalties in pretty much every application.

As I was working through this issue, I was using a process of benchmarking, analyzing, adjusting, and re-benchmarking. Each incremental improvement brought with it a new set of questions and a new set of problems. The benchmarks I was running led me to start looking at server settings first and, once those were solved, I was able to work through some application issues (including some adjustments to code)… but the key was starting in the right place. Running the benchmarks gave me real numbers to work with, both to judge whether or not I was done and to compare against the previous run to judge the improvement. (Hint: JMeter gives you awesome stats and a good, stable server load, so you can gather performance information from the web server and JVM on up to your application code itself.)

What’s the main take-away from this experience? Hrm… there are many, but I think the key one is that server and application performance tuning aren’t voodoo or black magic; they need to be approached scientifically (read: methodically and consistently). Working in terms of “user time” (the perceived time a user spends waiting for responses from a website), using JMeter to simulate load and watch response times, looking at your system realistically (because the web server, app server, and application all contribute), and starting objectively from the same point every time you deal with a performance issue will all work in your favor. Using the right tools to gain measurable, trackable information from every level of the application will make you a hero.

Over the course of time we’ll probably be blogging about the tools we use for these things, but underneath the tools is the more important part: The Process. Having a methodology for performance tuning is critical, and keeping the difference between the performance of your application server and your application itself in mind will help you make sure you start checking in the right place every time and never miss a step.

Because the best application in the world running on a broken server is nothing more than a broken application… know what I mean?

Apache's JMeter Part II – Recording a Test Script with the Proxy Component

As a follow-up to my previous post about Apache’s JMeter, today I will go over using the JMeter proxy component to record your activities in Firefox and turn them into a test plan. The first thing we need to keep in mind as we build this plan is our goal of actually simulating users. Sure, we could spin up a simple HTTP sampler, set up a 40-thread group, and tell it to loop 100 times, but would that really simulate user load? Real users have pauses between their actions; they may use your site search with a variety of keywords, or they may just wander aimlessly. You may also notice trends in your log files where you have two or more general types of users. Let’s set up a scenario today to simulate two theoretical user groups, defined as follows:

Your site has an average of 100 concurrent users.
65% of your users follow Scenario A.
35% of your users follow Scenario B.

Let’s start with a new test plan. The first thing we want to do is create thread groups to represent our different scenarios. Right-click your test plan (the root element), then Add -> Thread Group. Name the first group ‘Scenario A’ and set it to 65 threads, a 15 second ramp-up, and repeat forever. Set up another thread group called ‘Scenario B’, set it to 35 threads, a 15 second ramp-up, and also repeat forever.

Next, let’s add the proxy server by right-clicking on your WorkBench (root object), choosing ‘Non-Test Elements’, then ‘HTTP Proxy Server’. I set the server to listen on 8085 (any open port should work fine), and I chose the type ‘HTTP Request HTTPClient’. Now under ‘Target Controller’, choose ‘Test Plan > Scenario A’, and under Grouping choose ‘Put each group in a new controller’. This tells the proxy server to simulate a connected browser, place all recorded requests under the Scenario A thread group, and group each page request into a controller (which just makes it easier to follow what is going on in the plan). Now press start, and let’s switch to Firefox.

You need to set Firefox to use your new proxy server. Go to Preferences -> Advanced -> Network -> Settings. Set Firefox to connect for HTTP or HTTPS to localhost, on port 8085 (or whichever port you chose). Be sure to remove anything from the ‘No proxy for’ input box if you are testing locally, or this won’t work =)

[screenshot: Firefox proxy configuration]

Now you are ready to perform the actions that will represent a user in Scenario A. Browse the site of your choice (I will be picking on Adobe.com for this test; please don’t everyone run out and run 100 threads against Adobe, I may become unpopular). Carefully perform only the actions that you want replicated in your test, then switch back to JMeter and turn off the proxy to ensure no additional rogue web requests get captured. You should now have something that looks like this (I named my controllers just to make it easier to follow my plan):

[screenshot: recorded test plan]

Now go back into your proxy object and configure it to record actions to the controller ‘Scenario B’. Restart the proxy and perform some other browsing actions that a typical user might do.

Now we are ready to apply some ‘realistic’ touches to our test plan. Looking at your test plan, you now see several controller groups under ‘Scenario A’, each of which represents a web action. As I am sure you know, when you click on a link or go to a web page, other pages or assets often get loaded as well. Each sampler (they look like an eyedropper) under the controller represents an actual HTTP request that occurred during the action you recorded. Now we want to add some human randomness here, because a machine slamming in web requests every 1 ms is really not a realistic test of your servers! Right-click on an individual sampler object, choose ‘Add Timer’, then ‘Gaussian Random Timer’. Set your ‘constant delay offset’ to 2000 ms, then set the deviation to 1000 ms. This will add a random delay to each action, mostly in the range of 1000 ms – 3000 ms (one to three seconds), which will more accurately represent a real person browsing your site.

Now all that is left to do is add your listener elements and run your tests! Remember, for good testing you always want to first get a baseline of current performance, then test again after making any site modifications. One of the hardest parts of creating a test is accurately modeling your real traffic. Search through user feedback, Apache logs, Google Analytics exit logs, any data you can get, to help you decide how to construct your test plan. The plan will do you no good at all if it’s not ‘real world’! Next time, I will discuss adding CSV config elements so that user actions can pull search or post criteria from a file.
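
As an aside, if you are curious what that Gaussian Random Timer is actually doing, the math is simple: each delay is the constant offset plus a normally distributed random value scaled by the deviation. Here is a minimal Java sketch of that calculation (my own illustration of the idea, not JMeter’s actual source):

 import java.util.Random;

 public class GaussianDelaySketch {
     public static void main(String[] args) {
         Random rng = new Random();
         double offset = 2000.0;    // 'constant delay offset' in ms
         double deviation = 1000.0; // 'deviation' in ms

         // Print a handful of simulated think times. Most values land
         // within one deviation of the offset (1000-3000 ms), though a
         // Gaussian can occasionally stray further out.
         for (int i = 0; i < 5; i++) {
             long delay = Math.round(offset + rng.nextGaussian() * deviation);
             System.out.println("simulated think time: " + delay + " ms");
         }
     }
 }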

No Object Oriented Cookbooks Here

With all of the recent talk regarding object oriented development in ColdFusion, a common theme appears among developers attempting to make the jump, one I have seen many times in the past. You see comments like “where are the OO tutorials?” or “where can I find an OO book?” What people don’t seem to grasp is that object oriented development is not something you can just follow a checklist for and be an expert at. There is no step A, step B, now you have a world-class object oriented application.

Imagine a carpenter who only has a hammer in his toolbox. This carpenter can get quite a bit done with just that one tool, but to him, everything looks like a nail. This is where procedural development is very useful and practical for getting things done. For this carpenter, the hammer can take care of quite a few tasks – “hammering” them out quickly – just maybe not in the best way possible.

However, if you take that same carpenter and give him some more tools, over time and with experience with those new tools, he can start to create much more refined projects. Don’t skip the part about time and experience though.

Object oriented development is more of an umbrella over a very large collection of tools and best practices for application development. With object oriented development, you have things like encapsulation, inheritance, and design patterns. Just having these tools in your toolbox does not make you an object oriented developer or make your application any better.

It is just as easy to create a bad object oriented application as it is to make a bad procedural application. What sets apart a “guru” developer from an average developer is the ability to know the tools in his/her toolbox and to be able to use those tools in the right place at the right time.

I have heard a quote many times regarding design patterns where a developer asks, “I have worked all of the design patterns into my application but one; can you help me get this last one in?” Just because you have the tools does not mean that using every one of them makes you a better developer or your application a better application. If you go into the transition to object oriented development expecting to read a book and be an expert tomorrow, you are in for a headache at the very least.

I started my programming career in object oriented development over 12 years ago, and I am still learning new tools and how to better utilize them. While I could just be slow, what that really means is that there is a lot out there to learn and a lot of experience to gain.

As a fellow developer stated a few days back, if you ever quit learning, this industry is going to pass you by. With a concept as big as object oriented development though, you are much better off taking small bites and learning one tool at a time.

Adobe Flash Builder 4 'Gumbo' and Flash Catalyst In the Wild

Adobe just released Adobe Flash Builder 4, code-named ‘Gumbo’, and Flash Catalyst on their Labs website. What are you doing here? Go grab them at: http://labs.adobe.com/

You need more persuasion? Here is some semi-accurate information on all the new features:

Flash Builder 4 (formerly Flex Builder)

While the layout of the application, built on the Eclipse platform, will largely go unchanged, there are a few new trinkets worth checking out.

The new Flash Builder 4 has a new service inspector panel. This tool should largely replace the need for third-party web debugging proxies like Charles.

Flash Builder 4 has a new unit testing panel. Gumbo has engulfed the popular open-source Fluint unit testing framework and supports additional popular unit testing frameworks.

The last view that I find interesting is the client data management panel. This view is largely for assistance in CRUD development and will be supported for languages like PHP, .NET, Java, and ColdFusion. In addition to the panels, Flash Builder is getting some coding enhancements. These are much needed and include: getter and setter generation, a package explorer, improvements to AS documentation (including MXML documentation), and template creation for AS, MXML, and CSS. I’m pretty sure I’ll use every single one of these on a daily basis, and I say ‘Bravo’, Adobe.

Flash Catalyst

With Flash Catalyst, Adobe expects to make a play to bridge the developer-designer workflow.

Using a new declarative graphics markup called FXG, Flash Catalyst will allow export of component parts or entire RIA compositions from Photoshop/Illustrator into MXML components that can be consumed by the Flex framework.

Don’t start growing out a ponytail just yet; Flash Catalyst and the new designer-developer workflow will really be geared toward the Flash 10 player. So I’m guessing we’ll still see the same ole halo theme floating around for a couple more months while we wait for Flash 10 adoption rates to max out.

Overall, I’m still excited about the direction the Flex framework is heading.

We should see better looking and better functioning Flex applications, and the tools are slowly starting to catch up.

I could warn you about the perils of developing with Beta software, but I’ll save that for another post.
