The amazing adventures of Doug Hughes

Archive for September, 2008

MSSQL Driver Issue with CF 8.0.1

This is one of those cases where being on the leading edge of technology causes issues. I just got a new workstation that is running the 64-bit version of Windows Vista. I installed the 64-bit version of ColdFusion 8.0.1 with the latest hotfixes etc. and everything was going well. That is, until I tried setting up an MSSQL datasource. Apparently there is something wrong with the drivers that ship with CF 8.0.1 as I was not able to connect to a local MSSQL server or a remote MSSQL server. Any time I tried creating a new data source, JRun would crash.

So, after a bit of digging, Jared Rypka-Hauer and Jason Delmore both suggested trying the latest JDBC drivers from Microsoft. Microsoft just released a CTP of version 2.0 in July which can be found here …

http://www.microsoft.com/downloads/details.aspx?FamilyId=F914793A-6FB4-475F-9537-B8FCB776BEFD

For those who have not installed a JDBC driver before (I had not, for MSSQL), here are the steps you will need to follow.

  1. Expand the zip that you get from Microsoft and grab the sqljdbc4.jar file.
  2. Copy that file to the JRun\servers\lib folder in your multi-server CF install.
  3. Restart ColdFusion.
  4. Create a new data source, selecting ‘other’ as the type, and use these settings:
     JDBC URL = jdbc:sqlserver://localhost:1433;databaseName=[Your Database Name]
     Driver Class = com.microsoft.sqlserver.jdbc.SQLServerDriver
     Driver Name = MSSQL

Submit that along with your database username and password and hopefully you get a green Ok result. I will update if I get any new information from Jason.
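
If you want a quick sanity check from CFML once the data source verifies, a query like the one below will do it. This is just a sketch; it assumes you saved the data source under the name "mssqlTest", which is made up.

<!--- Hypothetical test page: "mssqlTest" is whatever you named the new data source --->
<cfquery name="versionCheck" datasource="mssqlTest">
	SELECT @@VERSION AS sqlVersion
</cfquery>
<cfoutput>#versionCheck.sqlVersion#</cfoutput>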

Update: In less than 24 hours, Jason from Adobe has confirmed that they can reproduce this and will be addressing the issue. So, thanks Jason and Adobe for quickly looking into this!

Truncate Most of the Tables

Recently I was in need of a script for SQL Server 2005 that would go in and truncate all of my tables in a given database. Not too tall an order one would think. Just write the darn thing and off you go. Not so fast, Bucko. What about those pesky foreign-keys? What about tables I don’t want truncated? What about the tedium in putting this thing together?

Well, after Googling and Googling and Googling some more I found this little jewel written by pvsramu in a forum entry over at SQLTeam.com. I thought I had found the answer…and I did, almost.

It did everything I wanted, except that I also wanted to be able to exclude certain tables from the truncation. So mods were in order. I started looking for the best way to create a variable I could use to list the tables to be excluded. All I found was frustration: SQL Server is not very kind when it comes to using a comma-separated list variable in an “IN” clause. After looking around for a while and, frankly, running out of time, I decided the best course of action was brute force. Not pretty, but it gets the job done. With the whole brute force thing in mind, all it takes to remove tables from the truncation process is to add a list of them to each of the four queries in the script. Like I said, it’s not too elegant.

I certainly welcome any thoughts you all might have as to how to get SQL Server to deal with a list of values in a variable so that I could have just one list in the script.
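
One possible approach, which I have not folded into the script below, is to split a comma-separated variable into a table variable and then reference it with a subquery in each NOT IN clause. Here is a rough, untested sketch; the table names in the list are made up.

	-- Sketch: split a comma-separated exclusion list into a table variable.
	-- Assumes table names do not themselves contain commas.
	DECLARE @excludeList varchar(1000)
	DECLARE @excluded TABLE (table_name sysname)
	DECLARE @pos int

	SET @excludeList = 'Lookup_Countries,Lookup_States,AppSettings' -- made-up names

	WHILE LEN(@excludeList) > 0
	BEGIN
		SET @pos = CHARINDEX(',', @excludeList)
		IF @pos = 0
		BEGIN
			INSERT INTO @excluded (table_name) VALUES (LTRIM(RTRIM(@excludeList)))
			SET @excludeList = ''
		END
		ELSE
		BEGIN
			INSERT INTO @excluded (table_name) VALUES (LTRIM(RTRIM(LEFT(@excludeList, @pos - 1))))
			SET @excludeList = SUBSTRING(@excludeList, @pos + 1, LEN(@excludeList))
		END
	END

	-- Each of the four queries could then use:
	-- AND T.table_name NOT IN (SELECT table_name FROM @excluded)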

As for this script, we use it daily. It’s been very handy and I hope some of you might find it handy as well.

Until later…

set ANSI_NULLS ON
set QUOTED_IDENTIFIER ON
go
-- =============================================
-- Author:		<Author,,Name>
-- Create date: <Create Date,,>
-- Description:	<Description,,>
-- =============================================
CREATE PROCEDURE [dbo].[truncate_tables]
	-- Add the parameters for the stored procedure here

AS
BEGIN
	-- SET NOCOUNT ON added to prevent extra result sets from
	-- interfering with SELECT statements.
	SET NOCOUNT ON;

/*

	This batch t-sql deletes data from all the tables in the database.

	Here is what it does:
	1) Disable all the constraints/triggers for all the tables
	2) Delete the data for each child table & stand-alone table
	3) Delete the data for all the parent tables
	4) Reseed the identities of all tables to its initial value.
	5) Enable all the constraints/triggers for all the tables.

	Note: This is a batch t-sql code which does not create any object in database.
	If any error occurs, re-run the code again. It does not use TRUNCATE statement to delete
	the data and instead it uses DELETE statement. Using DELETE statement can increase the
	size of the log file and hence used the CHECKPOINT statement to clear the log file after
	every DELETE statement.

	Imp: You may want to skip CHECKIDENT statement for all tables and manually do it yourself. To skip the CHECKIDENT,
	set the variable @skipident to "YES" (By default, its set to "NO")

	Usage: replace #database_name# with the database name (that you wanted to truncate) and just execute the script in query analyzer.

	*/

	-- use [#database_name#]

	SET NOCOUNT ON

	DECLARE @tableName varchar(200)
	DECLARE @tableOwner varchar(100)
	DECLARE @skipident varchar(3)
	DECLARE @identInitValue int
	DECLARE @tableExclusions varchar(200)

	SET @tableName = ''
	SET @tableOwner = ''
	SET @skipident = 'NO'
	SET @identInitValue=1
	/*
	Step 1: Disable all constraints
	*/

	EXEC sp_MSforeachtable 'ALTER TABLE ? NOCHECK CONSTRAINT ALL'
	EXEC sp_MSforeachtable 'ALTER TABLE ? DISABLE TRIGGER ALL'

	/*
	Step 2: Delete the data for all child tables & those with no relationships
	*/

	WHILE EXISTS
		(
		SELECT	T.table_name
		FROM	INFORMATION_SCHEMA.TABLES T
				LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
				ON T.table_name=TC.table_name
		WHERE	(TC.constraint_Type ='Foreign Key'
				OR TC.constraint_Type IS NULL)
				AND	T.table_name NOT IN ('dtproperties','sysconstraints','syssegments')
				-- add the other tables we want to exclude
				AND T.table_name NOT IN ('')
				AND Table_type='BASE TABLE'
				AND T.table_name > @TableName
		)

	BEGIN
		SELECT TOP 1 @tableOwner=T.table_schema,
				@tableName=T.table_name
		FROM	INFORMATION_SCHEMA.TABLES T
				LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
				ON T.table_name=TC.table_name
		WHERE	(TC.constraint_Type ='Foreign Key'
				OR TC.constraint_Type IS NULL)
				AND	T.table_name NOT IN ('dtproperties','sysconstraints','syssegments')
				-- add the other tables we want to exclude
				AND T.table_name NOT IN ('')
				AND Table_type='BASE TABLE'
				AND T.table_name > @TableName
		ORDER BY t.table_name

		--Delete the table
		EXEC('DELETE FROM '+ @tableOwner + '.' + @tableName)

		--Reset identity column
		IF @skipident = 'NO'
		IF EXISTS
			(
			SELECT * FROM information_schema.columns
			WHERE COLUMNPROPERTY(OBJECT_ID(
			QUOTENAME(table_schema)+'.'+QUOTENAME(@tableName)),
			column_name,'IsIdentity')=1
			)
		BEGIN
			SET @identInitValue=1
			SET @identInitValue=IDENT_SEED(@tableOwner + '.' + @tableName)
			DBCC CHECKIDENT (@tableName, RESEED, @identInitValue)
		END

		CHECKPOINT
	END

	/*
	Step 3: Delete the data for all Parent tables
	*/

	SET @TableName=''
	SET @tableOwner=''

	WHILE EXISTS
		(
		SELECT	T.table_name
		FROM	INFORMATION_SCHEMA.TABLES T
				LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
				ON T.table_name=TC.table_name
		WHERE	TC.constraint_Type ='Primary Key'
				AND T.table_name <>'dtproperties'
				-- add the other tables we want to exclude
				AND T.table_name NOT IN ('')
				AND Table_type='BASE TABLE'
				AND T.table_name > @TableName
		)

	BEGIN
		SELECT TOP 1 @tableOwner=T.table_schema,
				@tableName=T.table_name
		FROM	INFORMATION_SCHEMA.TABLES T
				LEFT OUTER JOIN INFORMATION_SCHEMA.TABLE_CONSTRAINTS TC
				ON T.table_name=TC.table_name
		WHERE	TC.constraint_Type ='Primary Key'
				AND T.table_name <>'dtproperties'
				-- add the other tables we want to exclude
				AND T.table_name NOT IN ('')
				AND Table_type='BASE TABLE'
				AND T.table_name > @TableName
		ORDER BY t.table_name

	--Delete the table
	EXEC('DELETE FROM '+ @tableOwner + '.' + @tableName)

	--Reset identity column
	IF @skipident = 'NO'
	IF EXISTS
		(
		SELECT	*
		FROM	information_schema.columns
		WHERE	COLUMNPROPERTY(OBJECT_ID(QUOTENAME(table_schema)+'.'+QUOTENAME(@tableName)),
				column_name,'IsIdentity')=1
		)
	BEGIN
		SET @identInitValue=1
		SET @identInitValue=IDENT_SEED(@tableOwner + '.' + @tableName)
		DBCC CHECKIDENT (@tableName, RESEED, @identInitValue)
	END

	CHECKPOINT

	END

	/*
	Step 4: Enable all constraints
	*/

	EXEC sp_MSforeachtable 'ALTER TABLE ? CHECK CONSTRAINT ALL'
	EXEC sp_MSforeachtable 'ALTER TABLE ? ENABLE TRIGGER ALL'

	SET NOCOUNT OFF
END
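
Once the procedure is created in the target database, running it is just:

	EXEC [dbo].[truncate_tables]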

Tinkering with CF Admin: Make your own login system!

A while back, a colleague emailed us asking if anyone knew how to get CF Admin to use LDAP for authentication instead of its own private data store. It wasn’t a question any of us had any experience with, and I was curious, so I did a bit of digging and found something interesting. I’ve been meaning to blog about it for ages, so I’m happy to finally have the chance.

Here’s the deal: the ColdFusion Administrator (at this point) is restricted to using its own datastore, its own login code, and its own security subsystem. Kind of like the Model-T’s color scheme, you can use any login mechanism you want, as long as it’s the one that comes baked in. This can be a problem, especially if you want your team to be able to access the Administrator via a login scheme like LDAP or a SQL database full of user/group information and divide up access to different parts of the Administrator. Now that the Admin has multi-user capability this is even more important, but the restriction still applies: there’s no built-in mechanism to swap out the datastore that CF Admin uses for users, roles, and activities.

That doesn’t mean, however, that you can’t make this work… The key files in the Administrator that have to do with gaining access and setting up the environment are login.cfm and enter.cfm. Login is the form (dur) and it posts to enter.cfm which sets up the environment (including instantiating the whole AdminAPI). After IMing with Ben Forta about it a bit, I was able to come up with something that was, if not perfect, at least a workable solution. You see, while both of those pages are encrypted using cfencrypt, there are no rules against replacing the built-in files with files of your own creation. Well, there’s actually a caveat there: it seems that enter.cfm does a great deal of “stuff” to set up the Administrator’s environment and I just wouldn’t want to replace it without knowing more about what it does first.

The simplest login.cfm code required to gain access to the CF Admin is:

<cfset createObject("component","CFIDE.adminapi.administrator").login("pwd","username") />
<cflocation url="http://localhost/CFIDE/administrator/index.cfm" addtoken="false" />

Since an uncredentialled user is always forwarded to login.cfm, and since enter.cfm (or whatever file sets up CF Admin’s environment) is run on every request, putting that in login.cfm will essentially remove any sort of login process at all. If you hit any page in the Administrator, you’ll always see it. Try it… change the name of login.cfm to login.cfm.old and put a new login.cfm file containing the above code (with a valid username/password combination, obviously) and hit a page in the Administrator. BOOM! Right on thru. Obviously this isn’t a great idea, but it illustrates the fact that you can use login.cfm as a hook into the front end of logging into the Administrator and replace it with functionality of your design, so long as it ends with adminapi.administrator.login() and forwards back to index.cfm.

Incidentally, this all has to happen within the context of the Admin’s folders… apparently the application scope is involved in figuring out who’s logged in and who isn’t and, therefore, if you call administrator.login() from another application it will NOT gain you access to CF Admin’s UI. So be sure that you’re working with login.cfm and enter.cfm to make this work.

There are a few primary ways I’ve thought of that this could be accomplished, and I thought I’d share them here:

  • Fetch the credentials from your data store along with role/privilege information and use the AdminAPI to programmatically create an account for them. The advantage here is that since they’re logging in via your data store, you can disable their account before they ever get to the admin without having to use CF Admin to do anything.
  • Create a set of accounts with specific sets of roles and privileges, have your users’ credentials verified against your datastore, and then call adminapi.administrator.login() using the appropriate account for that user’s login (see the sketch after this list).
  • Have login.cfm post to itself, analyze the user’s state in the DB, and display various content based on various switches. If, for example, you have a DB with all sorts of users, only some of whom are allowed into the Administrator, a user with a valid account but no access to the admin could be presented with a page that says “You’re not authorized, click here to request access”, etc., all by having login.cfm post to itself. Yeah, the code would have to be old-school, but it works. Hasn’t everyone, at least once, written a data-entry page that posts to itself with a block of code at the top of it to handle the POST data?
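
As a rough sketch of the second approach, a replacement login.cfm might look something like this. Everything here is hypothetical: the datasource, table and column names, and the shared admin account are all made up, and the credential check could just as easily be a cfldap call.

<!--- Hypothetical replacement login.cfm: verify the submitted credentials
      against your own user table, then log into the Admin via the AdminAPI.
      Datasource, table, columns, and the shared admin account are made up. --->
<cfparam name="form.username" default="" />
<cfparam name="form.password" default="" />

<cfquery name="checkUser" datasource="myUserDB">
	SELECT username
	FROM users
	WHERE username = <cfqueryparam value="#form.username#" cfsqltype="cf_sql_varchar" />
	AND passwordHash = <cfqueryparam value="#hash(form.password)#" cfsqltype="cf_sql_varchar" />
	AND adminAccess = 1
</cfquery>

<cfif checkUser.recordCount>
	<!--- Log into CF Admin using a pre-created admin account --->
	<cfset createObject("component","CFIDE.adminapi.administrator").login("adminPassword","adminUser") />
	<cflocation url="index.cfm" addtoken="false" />
<cfelse>
	<!--- Fall through to a login form of your own design --->
	<cfoutput><p>Login failed.</p></cfoutput>
</cfif>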

In any case, there are many ways to deal with this on either end of the login system… you could integrate the AdminAPI into a user-management application, you could use a bulk create/update scheduled task, you could… well, you could do many things. I guess the point is that even with the encrypted code in the Administrator’s directories, there are still ways to tinker with and modify CF Admin’s functionality to get it to do what you want it to.

I am going to end this post with a question: Knowing what you now know, how would you accomplish the task of using an alternative source of credentials for the CF Administrator?

Discounted Pricing on Model-Glue Training

I’ve already announced that I’ll be holding a Model-Glue training program in Raleigh, NC from October 6th to 9th. Unfortunately, early bird pricing has expired. However, if you still want a discount on the training program, you can register before September 26th and save $400 per seat. If you have any questions about the training program, please feel free to contact us.

How To Make a Backup Plan for SQL Server 2005

Over the weekend I received a question from a blog reader asking how to create a backup script for SQL server. Rather than answering this in email I thought I’d share this knowledge with the world at large.

First off, you don’t need to do anything too complicated to create a backup script. SQL Server Management Studio has everything built in that you need.

To start creating a backup script, first check that SQL Server Agent is running. If not, you’ll get an error saying that the ‘Agent XPs’ component is turned off. To turn SQL Server Agent on, simply open SQL Server Management Studio, right click on SQL Server Agent, and click Start. SQL Server Agent will start and you should no longer get the error about ‘Agent XPs’.

Unlike in SQL 2000, creating a backup plan in SQL 2005 is not really very intuitive. In SQL 2000 you could simply use a wizard to create a maintenance plan. In SQL 2005, however, you actually create a SQL Server Integration Services package which runs the backup for you. (This is the same basic thing you did in SQL 2000, but the interface was nicer.)

First, click to expand the Management node under your server and then right click on the Maintenance Plans node and click New Maintenance Plan.

Give the new plan a name and click ok. This opens a new maintenance plan. On the surface the new interface looks completely useless.

However, you actually have a fair amount of power. The first thing you may notice is that you can give your plan a description and set the schedule for it. I’ll come back to scheduling this in a bit. Also note the Connections button. This is where you can select which servers your backup script will apply to. This automatically uses localhost, which is what I want for my script. You may wish to change this to another server for your script.

Look under the Object Explorer and you’ll see a Toolbox window. This is where you can decide what you want to do in your Maintenance Plan. Your options include backing up your databases, checking database integrity, and more. All of the tasks essentially follow the same pattern as well. I’ll show you how to back up your databases and run integrity checks. After that you should know enough to start playing with some of the other tasks available to you.

Let’s start by dragging the Back Up Database Task to the big open area in the center of your window. Once it’s there, right click on it and click Edit.

Now you will be looking at the properties for the Back Up Database Task. If you’ve ever manually backed up a database this will look familiar.

Select which databases you want to back up from the databases drop-down. I tend to choose all user databases.

Once you have selected the databases you want to back up, choose a destination. I’m assuming you’re backing up to disk. I also have the task create a backup file for every database, with a subdirectory for each database.

Note that when you select the directory you want to back up into, you may expect to see the directory structure of the computer you’re currently using. If you are not working from the console of your SQL server, you may be surprised to instead see the directory structure of your SQL Server. Keep in mind that in this case SQL Server Management Studio is actually connecting you to a server process that is on another computer. For this reason you’ll see that server’s directory structure. You also can’t browse to any network shares. For the purposes of this tutorial I’m simply going to select a directory local to the SQL server. However, if you want to back up to a network share, you’ll need to make sure that the SQL Server Agent (I believe) is running under a user account that has access to that network share. At that point you can manually type in the path to the network share you want to back up to.

Lastly I always check the backup integrity.
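
Under the hood, the Back Up Database Task boils down to BACKUP DATABASE statements, one per database. If you ever want to script it by hand, the T-SQL looks roughly like this; the database name and path below are placeholders.

	-- Roughly what the Back Up Database Task runs for a single database.
	-- Database name and backup path are placeholders.
	BACKUP DATABASE [MyDatabase]
	TO DISK = N'D:\Backups\MyDatabase\MyDatabase.bak'
	WITH INIT, NAME = N'MyDatabase full backup'

	-- The "verify backup integrity" option then checks the file that was just written.
	RESTORE VERIFYONLY
	FROM DISK = N'D:\Backups\MyDatabase\MyDatabase.bak'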

Click OK to return to your plan.

Next let’s add a Check Database Integrity Task by dragging it into your backup plan.

Right click on this task and click Edit to view its properties. For my plan I selected to check all user databases.
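
For the curious, the Check Database Integrity Task is essentially running DBCC CHECKDB against each database you select. A rough equivalent, with a placeholder database name:

	-- Roughly what the Check Database Integrity Task runs per database.
	DBCC CHECKDB (N'MyDatabase') WITH NO_INFOMSGS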

The last thing you need to do is tell your backup plan what order to run the backup and integrity check tasks in. This is actually very simple and (once you’ve done it once) intuitive. I want my backup task to run first, so I’m going to select my Back Up Database Task. Note that once it’s selected, a green arrow hangs down from it. Just select the tip of that arrow and drag it to the Check Database Integrity Task. This sets the sequence so that the backup task runs first, followed by the Check Database Integrity Task.

Save your plan by clicking the save icon or using the File menu. If you now refresh your Maintenance Plans node in the Object Explorer you’ll see your plan listed.

To run your plan, you simply right click on it and click Execute.

The last thing you’ll want to do is create a schedule for your plan. To do so, click on the ellipsis (…) next to the schedule box under the description of your plan.

Use the dialog to control when you want backups to run. I’m setting mine to run once a day at 1:00 AM.

And that’s it. Your databases will now be backed up to the location you specified according to the schedule you specified.

Of course, this was a simple example. You can get a lot more complicated if you dig around in the various tasks you have at your disposal.

Transfer, Model-Glue and makeEventBean()

One of my favorite methods in Model-Glue is makeEventBean(). For those who may not know about it, here is the definition from the Model-Glue documentation:

Loops over a CFC instance, looking for any methods whose names start with set. For any setter methods where a like-named value exists in the viewstate, the setter will be called, passing this value.

For example, let’s say you have a user object, and that user object has methods named setFirstName() and setLastName(). When you want to update this user through a form submission, if you name the form fields the same as the properties they represent, then makeEventBean( user ) (where user is an instance of your user object) will call the setFirstName() and setLastName() methods on the user object, passing in the correct values.
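
In other words, the form driving the update only needs its field names to line up with the setters. Here is a hypothetical sketch; the event name and fields are made up.

<!--- Hypothetical form: field names match the setters makeEventBean() will call --->
<form action="index.cfm?event=user.save" method="post">
	<input type="text" name="firstName" />
	<input type="text" name="lastName" />
	<input type="hidden" name="userId" value="0" />
	<input type="submit" value="Save" />
</form>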

This is an example of code that might exist in your controller.

<cfset var user = getUserService().getUser( arguments.event.getValue( "userId", 0 ) ) />
<cfset arguments.event.makeEventBean( user ) />

MakeEventBean() is really handy when you have large forms to process as it eliminates the need to call all the setters manually.

When using Transfer, if you have relationships defined (such as a manyToOne) you must set those relationships before you can save the original object. For example, if you are creating a new user, but this user has a property that is another object (company for example), you will need to set the company property in your user Transfer object before you can save the user.

In this case, your code may look like this.

<cfset var user = getUserService().getUser( arguments.event.getValue( "userId", 0 ) ) />
<cfset var company = getCompanyService().getCompany( arguments.event.getValue( "companyID", 0 ) ) />
<cfset user.setCompany( company) />
<cfset arguments.event.makeEventBean( user ) />

I will admit that this does not add a lot of code, but if you have a lot of these kinds of relationships defined it can get burdensome. Also, to me, it feels like some logic or business process is creeping into our controller that maybe should not be there.

One solution I have started to implement on objects that have manyToOne relationships defined is to make use of Transfer decorators to simplify setting the properties that are other objects. I create methods in the user decorator with names like setCompanyId() that get executed when makeEventBean() is called. These methods then take responsibility for setting the appropriate property with the corresponding object.

Here is an example of what the setCompanyId() method in our user decorator might look like:

<cffunction name="setCompanyId" access="public" returntype="void" output="false">
	<cfargument name="companyId" type="numeric" required="true" />
	<cfset var company = getTransfer().get("company", arguments.companyId) />
	<cfset setCompany( company ) />
</cffunction>

Now, when makeEventBean() is called for the user and companyId exists in the event, this new method will take that value and set the correct company object inside our user object.

This has been working well on a fairly large and complex project I am involved in, and I plan on using this technique moving forward on projects that utilize Transfer. It has saved us quite a bit of time, and code.

Hello World

Hello everyone from sunny Phoenix. My name is Layne Vincent. While I’m not the newest member of the Alagad team, I am the newest to the blog-o-sphere.

My career in tag-based meta-languages started back in the late 80’s when a colleague asked if I would like to learn SGML. Over the course of the next several years I wrote hundreds of DTDs and FOSIs (there’s a term you won’t hear much anymore) while also teaching the fundamentals of document analysis and SGML implementation to clients such as Lockheed, Bell South, Alcatel and GTE.

During the mid-nineties I decided that HTML was the wave of the future and being that HTML is just a subset of SGML, it seemed a perfect fit. I went to work for Nortel and while I was there I came upon a requirement for an employee database. After looking around for a good way to implement it I found this new application called ColdFusion. It was version 1.0. Well, after spending my entire adult life wondering what I wanted to be when I grew up, I had finally found it.

After Nortel I moved on to Hewlett Packard where I enjoyed many years of ColdFusion development. One day a young intern named Jeff Chastain walked into my office and asked if I knew of a good language for designing a web site with a database backend. The answer was obvious, and now you know the beginnings of Jeff’s career in CF.

As many developers do, I decided it was time to go it alone and form my own company. As those of you who are going it alone probably know, it can be both rewarding and frustrating. About a year and a half ago I decided to throw in the towel and sign on with Alltel. Alltel is a terrific company to work for and I have nothing but rave reviews for them. I have a question, though: “When Doug Hughes asks you to join his team, how do you say no?” The answer is, you don’t.

So here I am at Alagad, diving deeply into all things OO and loving every minute of it. In the future I hope to write articles on anything and everything I can think of. Hopefully some of you will find them interesting or helpful or perhaps both.

Well, time to get back to work,

See you all soon.

Layne
