@Shazwazza

Shannon Deminick's blog all about web development

Deploying to Azure from VSTS using publish profiles and msdeploy

October 26, 2017 06:39

In almost all of the examples online showing how to deploy various services to Azure, the listed approach is the super easy one: authenticate your current account against your Azure subscription, which then grants your VSTS build permission to do all sorts of things… The problem is that not everyone has the security clearance to use these super easy tools in VSTS.

When you attempt to use these nice tools in VSTS you might get an error like this: “Failed to set Azure permission ‘RoleAssignmentId: some-guid-goes-here’ for the service principal … does not have authorization to perform action ‘Microsoft.Authorization/roleAssignments/write’ over scope”. This is because these nice VSTS tools actually create a custom user behind the scenes in your Azure subscription, but your account doesn’t have access to authorize that.

Luckily there’s a workaround.

MS Deploy … sigh

Maybe there are other workarounds, but this one works, though it’s not the most elegant. I thought I’d post my findings here because it was a bit of a pain in the ass to get this all correct.

So here’s the steps:

1. Download the publish profile

You need to get the publish profile for the app service that you want to deploy to. This can be a website, a staging slot, an Azure Function (and probably a bunch of others).


The downloaded file is an XML file containing a bunch of info you’ll need.
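For reference, the downloaded .PublishSettings file looks roughly like this (a sketch; the values are made up, and there are typically several publishProfile entries, of which the MSDeploy one is the one we want):

<publishData>
  <publishProfile profileName="YOUR_SITE - Web Deploy"
                  publishMethod="MSDeploy"
                  publishUrl="your-site.scm.azurewebsites.net:443"
                  msdeploySite="your-site"
                  userName="$your-site"
                  userPWD="********"
                  destinationAppUrl="https://your-site.azurewebsites.net">
    <databases />
  </publishProfile>
</publishData>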

2. Create a release definition and environment for your deployment

This assumes that you are pretty familiar with VSTS

You’ll want to create an empty environment in your release definition. Normally this is where you could choose the built-in fancy VSTS deployment templates like “Azure App Service Deployment” … but as above, this doesn’t work if you don’t have security clearance. Instead, choose ‘Empty’.


Then in your environment tasks, add a Batch Script task.


3. Setup your batch script

There are 2 ways to go about this and both depend on the msdeploy build output. This output is generated by your build in VSTS if you are using a standard VSTS Visual Studio solution build, which will create msdeploy packages for you and put them in your artifacts folder. Along with the msdeploy packages it will also generate a cmd batch file that executes msdeploy, plus a readme file telling you how to execute it, which contains some important info that you should read.

So here’s 2 options: Execute the cmd file, or execute msdeploy.exe directly

Execute the cmd file

There’s a bit of documentation about this online, but most of it is based on using the SetParameters.xml file to adjust settings… and I just don’t want to use that.

Here’s the Path and Arguments that you need to run:

$(System.DefaultWorkingDirectory)/YOUR_BUILD_NAME/drop/YOUR_MSBUILD_PACKAGE.deploy.cmd
/y "/m:https://${publishUrl}/MSDeploy.axd?site=${msdeploySite}" /u:$(userName) /p:$(userPWD) /a:Basic -enableRule:DoNotDeleteRule "-setParam:name='IIS Web Application Name',value='${msdeploySite}'"

The parameters should be added to your VSTS Variables: $(publishUrl), $(msdeploySite), $(userName) and $(userPWD), and these variables correspond exactly to what is in the publish profile XML file that you downloaded. These values need to be pretty much exact: any misplaced quote, a missing https, etc. will cause this to fail.

Important: the use of -enableRule:DoNotDeleteRule is totally optional. If you want to reset your site to exactly what is in the msdeploy package, you do not want this. If, however, you have user generated images, content or custom config files that exist on your site and you don’t want them deleted when you deploy, then you need to set this.

I’m unsure if this will work for Azure Functions deployment (it might!) … but I used the next option to do that:

Execute msdeploy.exe directly

If you execute the CMD file, you’ll see in the VSTS logs the exact msdeploy signature used which is:

"C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" -source:package='d:\a\r1\a\YOUR_PROJECT_NAME\drop\YOUR_MSDEPLOY_PACKAGE_FILE.zip' -dest:auto,computerName="https://YOUR_PUBLISH_URL/MSDeploy.axd?site=YOUR_PROFILE_NAME",userName=********,password=********,authtype="Basic",includeAcls="False" -verb:sync -disableLink:AppPoolExtension -disableLink:ContentExtension -disableLink:CertificateExtension -setParamFile:"d:\a\r1\a\YOUR_PROJECT_NAME\drop\YOUR_MSDEPLOY_PACKAGE_FILE.SetParameters.xml" -enableRule:DoNotDeleteRule -setParam:name='IIS Web Application Name',value='YOUR_PROFILE_NAME'

So if you wanted, you could take this and execute it directly instead of the CMD file. I use this method to deploy Azure Functions, but the script is a little simpler since that deployment doesn’t require all of these parameters. For that I use this for the Path and Arguments:

C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe
-verb:sync -source:package='$(System.DefaultWorkingDirectory)/YOUR_BUILD_NAME/drop/YOUR_MSDEPLOY_PACKAGE.zip' -dest:auto,computerName="https://$(publishUrl)/msdeploy.axd?site=$(msdeploySite)",UserName='$(userName)',Password='$(userPWD)',AuthType='Basic' -setParam:name='IIS Web Application Name',value='$(msdeploySite)'


Hopefully this comes in handy for someone ;)

Importing SVN to Mercurial with complex SVN repository

November 2, 2010 21:14
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
Here @ TheFARM, we’ve been moving across to Mercurial (on BitBucket) for our code repositories. In many cases our SVN repositories are structured ‘normally’:
  • trunk
  • tags
  • branches

When your SVN repository is structured this way, the ‘hg convert’ command line will import your trunk into the Mercurial ‘default’ branch and your branches/tags into named branches, along with all history and revisions. From there, you can merge as you wish to structure your Mercurial repository the way that you want.
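For a standard-layout repository the conversion is a one-liner (a sketch; the server path and target folder name are made up):

hg convert svn://my.svn.server.local/MyProject MyProject-hg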

However, in some cases we have more complicated repositories. An example of this is a structure like the following:

  • trunk
    • DotNet
    • Flash
  • tags
  • branches
    • v1.1-DotNet
    • v1.2-Flash

In the above structure, we’ve actually branched the trunk/DotNet & trunk/Flash folders separately into their own branches. Unfortunately, Mercurial doesn’t operate this way, so it doesn’t really understand creating branches from folders. There are a couple of different ways that you can get this from SVN into Mercurial whilst maintaining all of your history…

One way is to run ‘hg convert’ on the entire repository. You’ll end up with 3 branches in Mercurial: default, v1.1-DotNet & v1.2-Flash. The problem is that if you try to merge the named branches into default, you’ll end up with a mess since the branches don’t have the same folder structure as default. To overcome this, you can restructure each named branch to follow the same folder structure as default. To do this, we use the ‘rename’ method in Tortoise Hg. So for instance, suppose we had this folder structure inside of v1.1-DotNet:

  • BuildFiles
  • MyProject.Web
  • MyProject.Config

So that we can merge this with default we need to restructure this into:

  • DotNet
    • BuildFiles
    • MyProject.Web
    • MyProject.Config

So we just need to right click each folder separately, and select the rename option from the Tortoise Hg sub menu.


Then we prefix the folder name with the new folder location, which will ‘move’ the file.

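If you prefer the command line over the Tortoise Hg dialogs, the same restructure is just hg rename (a sketch, assuming the folder names above and that the v1.1-DotNet branch is checked out):

hg update v1.1-DotNet
hg rename BuildFiles DotNet/BuildFiles
hg rename MyProject.Web DotNet/MyProject.Web
hg rename MyProject.Config DotNet/MyProject.Config
hg commit -m "restructure v1.1-DotNet to match default"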

Now that the named branch v1.1-DotNet is in the same folder structure as default, we can perform a merge.

The other way to import a complicated SVN structure to Mercurial is to convert individual branches to Mercurial repositories one by one. The first thing you’ll need to do is run an ‘hg convert’ on the Trunk of your SVN repository. This will create your new ‘master’ Mercurial repository, into which you will push the other individual Mercurial repositories. Next, run an ‘hg convert’ on each of your SVN branches. For example: hg convert svn://my.svn.server.local/MyProject/Branches/v1.1-DotNet.
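As a sketch (the target folder names are made up), those conversions and the force push described in the next paragraph look like this:

hg convert svn://my.svn.server.local/MyProject/Trunk MyProject-master
hg convert svn://my.svn.server.local/MyProject/Branches/v1.1-DotNet MyProject-v1.1-DotNet
cd MyProject-v1.1-DotNet
hg push -f ../MyProject-master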

Once you have individual repositories for your branches, you can force push these into your ‘master’ repository. To do a merge of these branches, the above procedure will still need to be followed to ensure your branches have the same folder structure as default. HOWEVER, because we’ve force pushed changesets into Mercurial, it has no idea how these branches relate to each other (in fact, it gives you warnings about this when you force push). When you try to do a merge, you’ll end up getting conflict warnings for every file that exists in both locations, since Mercurial doesn’t know which one is newer/older. This can be a huge pain in the arse, especially if you have tons of files. If we assume that the branch files are the most up to date and we just want to replace the files in default, then there’s a fairly obscure way to do that. In the merge dialog, you’ll need to select the option “internal : other” from the list of Merge tools.


This tells Mercurial that for any conflict you want to use the ‘other’ revision (which is your branch revision since you should have default checked out to do the merge).
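The command-line equivalent is to name that merge tool explicitly (a sketch, with default checked out):

hg update default
hg merge --tool internal:other v1.1-DotNet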

We’ve had success with both of these options for converting SVN to Mercurial and maintaining our history.

VisualSVN server on SVN protocol

September 20, 2010 21:53
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
I’m sure I’m not the only one who has noticed that running SVN over the HTTP protocol using VisualSVN is REALLY slow in comparison to running SVN using the file:/// or svn:// protocol. Having the HTTP protocol is nice: you can browse your repositories in your browser, allow external access to them without opening up another port on your firewall, and apply Windows security to your repositories. However, it is really, really slow. After some Googling on how to get VisualSVN Server to run using the SVN protocol, it turns out this is not possible, but you can run the SVN protocol as a service in tandem with VisualSVN, which gives you the best of both worlds. Luckily for us, VisualSVN installs all of the necessary files for us to do this. Here’s how:
  • Create a batch file in your VisualSVN bin folder (normally: C:\Program Files\VisualSVN Server\bin) called something like: “INSTALLSVNPROTOCOL.bat”
    • You’ll need to edit the below script to map your svn repository folders properly. Change the “E:\YOUR-SVN-REPOSITORY-ROOT-FOLDER” to the path of your svn repository root folder.
echo ---Install the service
REM the sc create command below should all be on one line!
sc create SVNPROTOCOLSERVICE binpath= "\"c:\Program Files\VisualSVN Server\bin\svnserve.exe\" --service --root \"E:\YOUR-SVN-REPOSITORY-ROOT-FOLDER\" " displayname= "SVN Service" depend= Tcpip
echo ---Config to auto-start
sc config SVNPROTOCOLSERVICE start= auto
  • Next, run your batch file.
    • This will install a windows service to host your repositories on the SVN protocol
  • Update your windows service to run as Administrator, or a user that has the permissions to run the service
    • Start Menu –> Administrative Tools –> Services –> Find the “SVN Service” that was just created –> Right click –> Properties –> Log On tab –> Change “Log on as:” to use your Administrator account.
  • Start the windows service
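If you prefer to script those last two steps as well, something like this should work (a sketch; the account name and password are placeholders):

sc config SVNPROTOCOLSERVICE obj= ".\Administrator" password= "YOUR-PASSWORD"
sc start SVNPROTOCOLSERVICE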

You’re done! You can now access your repositories via the SVN protocol using something like:

svn://yourservername.yourdomainname.local/YOUR-REPOSITORY-NAME

 

Ok, to uninstall:

  • Create a batch file in the same folder as your install batch file called something like “UNINSTALLSVNPROTOCOL.bat”
echo --remove svn service
sc stop SVNPROTOCOLSERVICE
sc delete SVNPROTOCOLSERVICE
  • Run the batch file

Automated website deployment with PowerShell and SmartFTP

September 3, 2010 00:56
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
SmartFTP is a fantastic FTP application which handles syncing files very effectively. This means that when you upload your entire website, SmartFTP will automatically detect changes and only upload what is required (instead of overwriting all of the files like some FTP applications do). For each project at TheFARM we have build scripts which run and create a time stamped ZIP package for each deployment environment, with all of the necessary files formatted appropriately for that environment. Our deployment process then involves unzipping the contents of this file, opening up SmartFTP, connecting to the deployment destination and transferring all of the deployment files up (which SmartFTP synchronizes for us).

I thought it would be much more efficient if we automated this process. So we did some investigation and it turns out that SmartFTP conveniently has an API! So we decided to see if we could write a PowerShell script that uses the SmartFTP API to automagically transfer/sync all of the deployment files in our ZIP package to the necessary FTP site, and with a bit of trial and error we managed to do it! Now, I’m no PowerShell expert, and in fact this was my very first PowerShell script ever written, so I’m sure this could all be done a bit better, but it works! I’m not going to go into detail about the SmartFTP API or how to write PowerShell stuff; the script will just work given some basic requirements:

  • You need both PowerShell and SmartFTP installed
  • Currently this only supports the standard FTP protocol, but if you need SFTP, etc… you can just change the $fav variable’s ‘Protocol’ property
  • The parameters, in this order, are:
    • destination
      • the IP address, or host name, of your FTP server
    • user
      • the username used to log in to the FTP server
    • password
      • the password used to log in to the FTP server
    • port
      • the FTP port to use, default is 21
    • source
      • the source folder to copy to the FTP site; if not specified, the current directory that the PowerShell script is run from is used
    • path
      • the FTP path where you want your files to go on your FTP server

Example usage:

FTPSync.ps1 123.123.123.123 MyUserName MyPassword 21 "C:\MyWebsiteFolder" "/websites/MyWebsite"

or you can just double click on the ps1 file and it will prompt you for these details.

So without further ado, here’s the script!

#requires -version 2.0

# Define inputs
param (
	[parameter(Mandatory=$true)] [string] $dest,
	[parameter(Mandatory=$true)] [string] $user,
	[parameter(Mandatory=$true)] [string] $pass,
	# not mandatory, so the default of 21 can actually apply
	[parameter(Mandatory=$false)] [ValidatePattern('\d+')] [int] $port = 21,
	[parameter(Mandatory=$false)] [ValidateScript({ Test-Path -Path $_ -PathType Container })] [string] $source,
	[parameter(Mandatory=$true)] [ValidatePattern('\/+')] [string] $path
)

# get current folder
$currFolder = (Get-Location -PSProvider FileSystem).ProviderPath;
# set current folder
[Environment]::CurrentDirectory = $currFolder;

# if the source isn't set, then use the current folder
if ($source -eq "") { $source = $currFolder; }

Write-Host "------------------------------------------------------" -foregroundcolor yellow -backgroundcolor black
Write-Host ("{0,-20}{1,20}" -f "Destination", $dest);
Write-Host ("{0,-20}{1,20}" -f "User", $user);
Write-Host ("{0,-20}{1,20}" -f "Pass", "********");
Write-Host ("{0,-20}{1,20}" -f "Port", $port);
Write-Host "";
Write-Host "Source:";
Write-Host $source;
Write-Host "";
Write-Host "Path:";
Write-Host $path;
Write-Host "------------------------------------------------------" -foregroundcolor yellow -backgroundcolor black

# Create application
$smartFTP = New-Object -comObject SmartFTP.Application;
$smartFTP.Visible = [bool]0;
$smartFTP.CloseAll();

# create temp favorite item
$fav = $smartFTP.CreateObject("sfFavorites.FavoriteItem");
$fav.Name = $user + " @ " + $dest + " (temp favorite by cmdInterface)";
# 1 = FTP standard protocol
$fav.Protocol = 1;
$fav.Host = $dest;
$fav.Port = $port;
$fav.Path = $path;
$fav.Username = $user;
$fav.Password = $pass;
# forces it not to be saved
$fav.Virtual = "true";

# Add temporary favorite to SmartFTP's FavoritesManager
$favMgr = $smartFTP.FavoritesManager;
$rootFolder = $favMgr.RootFolder;
$rootFolder.AddItem($fav);

# Get the transfer queue
$queue = $smartFTP.TransferQueue;
# stop the queue if it isn't already (Stopped = 1)
if ($queue.State -ne 1) { $queue.Stop(); }
# clear the queue
foreach ($item in $queue.Items) { $queue.RemoveItem($item); }
# set the thread count for the queue
$queue.MaxWorkers = 20;
# enable logging
$queue.Log = "true";
$queue.LogFolder = $currFolder + "\LOG";

# create new transfer item
$newItem = $smartFTP.CreateObject("sfTransferQueue.TransferQueueItem");
# set the item as a folder and copy operation
$newItem.type = 2;       # FOLDER = 2
$newItem.Operation = 1;  # COPY = 1
# Set the source
$newItem.Source.type = 1; # LOCAL = 1
$newItem.Source.Path = $source;
# Set the destination
$newItem.Destination.type = 2; # REMOTE = 2
$newItem.Destination.Path = $path;
$newItem.Destination.FavoriteIdAsString = $fav.IdAsString; # links up to our connection favorite
# and finally add it
$queue.AddItemTail($newItem);

Write-Host "STARTING" -foregroundcolor yellow -backgroundcolor black;
$queue.Start();
while ($queue.Items.Count -ne 0) {
	Write-Host "Processing...bytes transferred: " $queue.TransferredBytes;
	Start-Sleep -s 2; # wait 2 seconds
}

# store the total bytes
$totalBytes = $queue.TransferredBytes;

# cleanup smartftp app
$queue.Quit();
$smartFTP.Exit();

# parse logs
# regex to find "[DATE/TIME] STOR FILENAME" which indicates a file transfer
$regex = New-Object System.Text.RegularExpressions.Regex("\[[\w\-\:]*?\]\sSTOR\s(.+?)\[", [System.Text.RegularExpressions.RegexOptions]::SingleLine);
$totalFiles = 0;
Write-Host "Files Transferred" -foregroundcolor cyan -backgroundcolor black
Get-ChildItem $queue.LogFolder -include *.log -Recurse | ForEach-Object {
	$currFile = Get-Content $_.fullname;
	$match = $regex.Matches($currFile);
	if ($match.Count -gt 0) {
		foreach ($m in $match) { Write-Host $m.Groups[1]; }
		$totalFiles += $match.Count; # count every transferred file found in this log
	}
	Remove-Item $_.fullname -Force -Recurse;
}
Write-Host "COMPLETED (total bytes: " $totalBytes ", total files: " $totalFiles ")" -foregroundcolor cyan -backgroundcolor black;
Write-Host "------------------------------------------------------"

# cleanup COM
Remove-Variable smartFTP

TSQL CASE statement in WHERE clause for NOT IN or IN filter

April 17, 2010 02:58
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
There’s a ton of articles out there on how to implement a CASE statement in a WHERE clause, but I couldn’t find one on how to implement a CASE statement in a WHERE clause that gives you the ability to use a NOT IN or IN filter. I guess the only way to explain this is to use an example, and I am fully aware that the use of this may not be best practice and is most likely required because of poor database design/implementation, but hey, when you inherit code, there’s really no other choice :)

Suppose I have a stored proc that has an optional value:

@OnlyNonExported bit = 0

I want to return all items from the MYTRANSACTIONS table if @OnlyNonExported = 0, but if this value is 1 I want to return only the items from MYTRANSACTIONS that have not been tracked in my TRACKEDTRANSACTIONS table. The original theory is to use a NOT IN clause to achieve the latter requirement:

SELECT * FROM mytransactions m 
WHERE m.id NOT IN (SELECT id FROM trackedtransactions)

So if I wanted to use a case statement for this query, one would think you could do something like this:

SELECT * FROM mytransactions m 
WHERE m.id NOT IN 
	CASE WHEN @OnlyNonExported = 0 
		THEN  (SELECT -1) 
		ELSE  (SELECT id FROM trackedtransactions) 
	END

But SQL doesn’t like this syntax, and it turns out that you cannot use IN or NOT IN conditions with a CASE statement in a WHERE clause; you can only use = or != conditions. So how do you achieve the above? Well, the answer is even more dodgy than the above:

SELECT * FROM mytransactions m 
WHERE m.id != 
	CASE WHEN @OnlyNonExported = 0 
		THEN  (SELECT -1) 
		ELSE  COALESCE((SELECT id FROM trackedtransactions t WHERE t.id = m.id), -1)
	END

So basically, when we want to return all transactions, we return all rows whose id does not equal –1 (assuming that your IDs start at 1), and when we want to filter the results based on whether or not these IDs exist in another table, we only return rows whose IDs don’t match the same ID in the tracked table. BUT if an ID doesn’t exist in the tracked table, the subquery returns an empty result set and the id won’t be matched against it, so we need the COALESCE function, which will return a –1 value when the result set is empty.
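For what it’s worth, the same result can be written without the CASE trick at all. A simpler equivalent (a sketch using the same tables) is to push the flag into the WHERE clause with NOT EXISTS:

SELECT * FROM mytransactions m 
WHERE @OnlyNonExported = 0 
	OR NOT EXISTS (SELECT 1 FROM trackedtransactions t WHERE t.id = m.id)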

Hopefully you’ll never have to use this but if you do, hope this saves you some headaches :)

Changing the hostname of a SharePoint site

January 9, 2010 02:44
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
We’ve recently set up a SharePoint server here at TheFARM which will run parts of our intranet and be our document management system.

So it got installed, but the problem was that it was installed onto a machine called mars. I made the obligatory jokes about ‘life on mars’ (admittedly I may have made the joke a few too many times :P), but at the end of last year we ran a competition to name the new intranet.

There were some fun names like SkyNet, and Randall, but ultimately the winning entry was TheBarn, which is very aptly farm-based.
But we had a problem, we don’t want to rename the server from mars (plus I’ve done that on SharePoint before, baaaaaaaaaaaaaaaaaaaaaaaaaaaaad idea), so how do you get SharePoint to accept http://thebarn when that’s not the machine name?

Unlike standard sites in IIS, just adding a host header isn’t going to work; SharePoint will redirect you to the one it knows about, so although we were coming in via http://thebarn we’d end up at http://mars.

Hmmm…

Luckily it is actually very easy to do, because SharePoint has the ability to Extend a web application.


So you navigate here, choose the ‘Extend an existing Web application’ option, select your site and enter the hostname (and set the port back to 80).


Now you’ll have a SharePoint site which listens on your new host header. You can go and delete the old one if you want (Remove SharePoint from IIS Web site) and then you’re done.

Wildcard mapping in IIS 7 classic pipeline = web.config!

December 9, 2009 00:34
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
After foolishly pulling out my hair trying to find out why my wildcard mapping was disappearing in IIS 7 using classic pipeline mode, I realized it was my own fault!! I followed the instructions on this site: http://learn.iis.net/page.aspx/508/wildcard-script-mapping-and-iis-7-integrated-pipeline/ and unfortunately just skipped over the message about how this modifies your web.config… oops! So basically, every time I deployed, my handler mapping would be removed… Doh!

Unfortunately, the method to add a wildcard mapping in that article will actually remove the inheritance of standard handlers from the root of IIS and your machine.config and just make copies of them. This might not be the best approach, but I suppose sometimes it’s necessary. We only need the wildcard mapping for URL rewriting, so I decided to see if I could simply add just the ISAPI wildcard mapping, have the rest of the handlers inherit from the root, and see if it works… turns out it does!

So instead of having to modify IIS itself, I just needed to add this to my web.config:

<handlers>
	<remove name="ASP.Net-ISAPI-Wildcard" />
	<add name="ASP.Net-ISAPI-Wildcard"
		path="*"
		verb="*"
		type=""
		modules="IsapiModule"
		scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
		resourceType="Unspecified"
		requireAccess="None"
		allowPathInfo="false"
		preCondition="classicMode,runtimeVersionv2.0,bitness64"
		responseBufferLimit="4194304" />

Too easy! No fussing around with IIS, and now at least I won’t override my changes accidentally.

Testing Outgoing SMTP Emails - So Simple!

July 16, 2009 23:29
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
At the Umbraco retreat before CodeGarden 09 in Denmark, Aaron told me an extremely handy tip about testing outbound emails in your .Net applications. I'm not sure why I've never heard about this before, and the funny thing is that none of the .Net developers working in our office (including contractors) had ever seen it either! It's so incredibly simple and built into .Net, so if you don't know about this already you'll want to be using it in the future.

If your application needs to send emails for whatever reason and you’re testing locally, you generally have to make sure that you're only sending emails to your own address(es) so you’re not spamming a bunch of random people. This is an easy way to get around that, and it lets you view all of the emails sent. Just change (in our case, add) a deliveryMethod attribute in your smtp settings, set to SpecifiedPickupDirectory:

<system.net>
  <mailSettings>
    <smtp from="noreply@localhost" deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\maildrop" />
    </smtp>
  </mailSettings>
</system.net>

Now all emails that are sent just get saved to the specified folder, and you can view them with Windows Live Mail, Outlook Express, Thunderbird, or whatever.
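Nothing in your code has to change for this to work. For example, a parameterless SmtpClient picks these settings up from config automatically; here's a minimal sketch (the addresses are made up):

using System.Net.Mail;

class MailDropDemo
{
    static void Main()
    {
        // the parameterless SmtpClient reads <system.net>/<mailSettings>
        // from the config file, so with the settings above this message is
        // written to c:\maildrop as a .eml file instead of being delivered
        var client = new SmtpClient();
        client.Send("noreply@localhost", "someone@example.com", "Test subject", "Test body");
    }
}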

Nice!!

Guide to installing Cold Fusion 8 on Windows Server 2008 (IIS 7) 64 bit

May 8, 2009 00:27
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
After a lot of trial and error I finally figured out how to get CF 8 running on Windows Server 2008 x64 in IIS 7. So I figured I'd write a post about it, since there's pretty much no documentation covering this that I could find.

Installation

  • Take a backup of IIS
    • C:\Windows\System32\Inetsrv\AppCmd add backup "backupname"
  • Install CF 8 Enterprise
    • Select Multiserver
    • Keep default paths
    • DO NOT attempt to configure anything for ColdFusion until the update is applied
  • Install CF 8.1 Update
    • Configure for Multiserver
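If anything goes wrong later on, the IIS backup taken in the first step can be restored with AppCmd as well (a sketch, using the same backup name as above):

C:\Windows\System32\Inetsrv\AppCmd restore backup "backupname"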

Web Site/Server Configuration

  • Give the IIS users/groups (IUSR, IIS_IUSRS) full control over your JRun install folder (C:\JRun4\lib\wsconfig)
    • After looking at the logs, it seems that the configuration tool is trying to set IIS_WPG permissions on this folder which is for Server 2003, not 2008
  • Create a new application pool called ColdFusion
    • Under advanced settings, enable running in 32 bit mode and make Managed Pipeline mode Classic instead of Integrated
    • CF will not run without 32 bit and Classic enabled (according to my experience so far)
  • Create a new website and ensure it is assigned to the ColdFusion application pool
    • For testing, create a website pointed to your default CFIDE install folder
  • Launch the Web Server Configuration Tool from Start Menu
    • Click Add
    • Select "coldfusion" from the JRun Server drop down list (not "admin")
    • Ensure the Web Server has IIS selected
    • Select the website you just created from the IIS Web Site drop down list (Do not check All, or be prepared to restore IIS if you're running other .Net apps!)
    • Check "Configure web server for ColdFusion 8 application"
    • Click Advanced...
      • Check Enable verbose logging for connector if you want detailed request logs for debugging
    • Save changes and click yes to restart the web server (this will restart IIS!!!)

Testing

  • If you configured a test site to point to your CFIDE folder, go to the website in your browser to the /install.cfm path
    • This should show you a Congratulations screen
  • If you configured your site with your own CF files, test those instead

Debugging

  • After some trial and error, I figured out the above procedure, but there are logs to refer to.
  • The CF web site config tool creates web site configuration structures at this location:
    • \JRun4\lib\wsconfig\(some number)
    • Each (some number) corresponds to a different website configured with the tool
    • In each folder is a LogFiles folder that contains logs that you can use to debug the installation
  • There's also a log file at: \JRun4\lib\wsconfig\wsconfig.log

Un-configuring a site

  • If a site needs to be un-configured or re-configured, the web configuration tool seems to always fail when trying to remove a site.
  • To remove a site manually:
    • Stop the website in IIS
    • Stop the CF server and CF admin services in the Services administration tools
    • Delete the folder: \JRun4\lib\wsconfig\(some number)
      • where (some number) corresponds to the site you want to remove
    • Edit the \JRun4\lib\wsconfig\wsconfig.properties file and remove the lines referring to the (some number) of the site folder that you deleted in the previous step
    • Start the CF admin and CF server services
    • Run the web configuration tool and re-add the site you want configured
    • Start the site in IIS
    • Start the CF admin and CF server services
    • Run the web configuration tool and re-add the site you want configured
    • Start the site in IIS