The amazing adventures of Doug Hughes

I am really trying hard not to make this blog post boring and, as a dyed-in-the-wool server engineer, that is fairly difficult for me. This is, though, a fairly important topic, in fact very important to anyone thinking about High Availability (HA). In reality, all deployed websites have to have some aspect of HA. By the essence of what they are and do, they need to be available when needed, so they need to be “highly available”. What I am considering in this post are two related yet different overall concepts. The first is called “N-Tier”, and here is a simple diagram showing a possible HA infrastructure…

[Diagram: N-Tier HA infrastructure]

The first point of conjecture here is how many tiers there actually are. My view is that where data passes from one device or segment to another, we can class that as a tier. That is marginally inaccurate, though, because by that token the public Internet alone could contain many tiers. Since we do not know exactly how many hops data will take in passing to and from End Users, we can really only consider the Internet as a single tier. Here is some explanation of each tier as I have encountered them in the work I have done over the years.

  1. This is the End User tier, where the request originates. Typically this is a person, although not always; it could also be machine generated. Usually a request is issued to get return data of some kind, but in some cases a request may be issued to start, pause or kill a process.
  2. The Public Internet in this case, though it could also be an intranet or extranet network. The way I would characterize this tier is that it contains no servers or data manipulation entities that return data to the End Users, except for possible network failure messages.
  3. This tier is the gateway or entrance to the application location. It contains routing, data protection (firewall), traffic control and clustering devices. Again, there are no servers in this tier in the sense that they would feed requested data back to the End Users, although they could, once again, send back status or error messages.
  4. This tier is where the web-application servers sit, and these are the servers which process the requests that come in from End Users. Here is where my reference to the “clone” really becomes an essential element. In a clustered and perhaps load-balanced environment it is essential that all code is replicated in real time, otherwise if an End User is moved around different servers they could obviously get different results. In addition, maintaining state, and any variables that relate to it, in server memory is also an immense challenge. Imagine, for instance, a shopping cart in the End User’s session scope. If the End User is once again moved around different servers they could lose the cart altogether. If the infrastructure is a J2EE/JavaEE one, there is something called Buddy servers: session information can be clustered, although this can be tough to set up initially (there is a small sketch of this just after the list). The main point I am emphasizing here is that this typical High Availability (HA) environment is challenging to maintain, yet nevertheless typical.
  5. These routers sit between the web-application servers and the database servers, and their job is to add a further layer of security to the data stored in the databases in tier 6.
  6. This is where the database servers sit and, as with the web-application servers, they are clustered. In my experience, clustering databases is considerably trickier than clustering web-application servers. Typically the clustering is done via algorithms which come with the database software itself. If the requirement is an optimal active-active, load-balanced cluster where each DB server is in use, then mirroring of the data between DB servers is necessary. In the case of the system shown here, that means mirroring between four servers (there is a second sketch touching on the application side of this below).
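
Returning to tier 4 for a moment, here is a minimal sketch of the session-state point, assuming a plain Java servlet container with session replication switched on (for example via the <distributable/> element in web.xml). The ShoppingCart class and servlet are names I have made up for illustration; the key detail is that anything placed in session scope must be Serializable, or the container has no way to copy it to the other nodes in the cluster.

```java
import java.io.Serializable;
import java.util.ArrayList;
import java.util.List;

import javax.servlet.http.HttpServlet;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;
import javax.servlet.http.HttpSession;

// Hypothetical cart object. It must be Serializable so a container
// configured for session replication can ship it to other cluster nodes.
public class ShoppingCart implements Serializable {
    private static final long serialVersionUID = 1L;
    private final List<String> itemIds = new ArrayList<String>();

    public void addItem(String itemId) {
        itemIds.add(itemId);
    }

    public List<String> getItemIds() {
        return itemIds;
    }
}

// Hypothetical servlet that updates the cart held in session scope.
class AddToCartServlet extends HttpServlet {
    @Override
    protected void doPost(HttpServletRequest req, HttpServletResponse resp) {
        HttpSession session = req.getSession();
        ShoppingCart cart = (ShoppingCart) session.getAttribute("cart");
        if (cart == null) {
            cart = new ShoppingCart();
        }
        cart.addItem(req.getParameter("itemId"));
        // Setting the attribute again tells the container the session has
        // changed, which is what prompts replication to the buddy node(s).
        session.setAttribute("cart", cart);
    }
}
```

None of this removes the operational pain I describe above; it simply means that when the load balancer does move an End User to a different server, their cart has a fighting chance of still being there.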

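On the tier 6 side, here is an equally hedged sketch of how an application server might connect to an active-active database cluster. I am using MySQL Connector/J’s load-balancing JDBC URL purely as an example of the idea; the host names, database and credentials are invented, and other database vendors have their own equivalents. The driver only spreads connections across the nodes, it does not do the mirroring; that still has to be in place between the four servers as described above.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class ClusterConnectionExample {
    public static void main(String[] args) throws Exception {
        // Hypothetical: four mirrored DB servers addressed through a
        // load-balancing JDBC URL (MySQL Connector/J syntax as an example).
        String url = "jdbc:mysql:loadbalance://db1:3306,db2:3306,db3:3306,db4:3306/appdb";

        try (Connection conn = DriverManager.getConnection(url, "appuser", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery("SELECT 1")) {
            while (rs.next()) {
                System.out.println("Connected, got: " + rs.getInt(1));
            }
        }
    }
}
```
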
So in this illustration we have looked at a typical High Availability (HA) environment in an “N-Tier” paradigm. This is the most common kind of HA infrastructure, and it mostly varies only in having more or less hardware. There is no doubt that equipment like this offers good levels of availability, although that comes with challenges, some of which I have listed above. One thing that is certainly impacted by this sort of infrastructure, in my experience, is performance. The more pieces of hardware that data has to pass through, the less efficient the performance: every firewall, router and load balancer adds its own small amount of processing and latency, and those costs accumulate across the round trip. I realize this may sound somewhat counter-intuitive, or at odds with other points in this series of articles; if so, that is not my intent. There is another school of thought emerging which takes a different view of application-data architecture and flow. This is called “N-Layer” and in essence it concentrates on the kind of data, or the state of data, as it passes through an application. That will be the subject of my next post in this series, and it introduces some controversial notions and concepts.
