@Shazwazza

Shannon Deminick's blog all about web development

Importing SVN to Mercurial with complex SVN repository

November 2, 2010 21:14
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
Here @ TheFARM, we’ve been moving across to Mercurial (on BitBucket) for our code repositories. In many cases our SVN repositories are structured ‘normally’:
  • trunk
  • tags
  • branches

When your SVN repository is structured this way, running the ‘hg convert’ command line tool will import your trunk into the Mercurial ‘default’ branch and your branches/tags into named branches. This also imports all history and revisions. From there, you can merge as you wish to structure your Mercurial repository the way that you want.
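As a rough sketch, a conversion looks something like this (the server URL and target folder names are hypothetical, and the bundled convert extension needs to be enabled first):

# in your mercurial.ini / .hgrc, enable the bundled convert extension
[extensions]
hgext.convert =

# then run the conversion
hg convert svn://my.svn.server.local/MyProject MyProject-hg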

However, in some cases we have more complicated repositories. An example of this is a structure like the following:

  • trunk
    • DotNet
    • Flash
  • tags
  • branches
    • v1.1-DotNet
    • v1.2-Flash

In the above structure, we’ve actually branched the trunk/DotNet & trunk/Flash folders separately into their own branches. Unfortunately, Mercurial doesn’t operate this way, so it doesn’t really understand creating branches from folders. There are a couple of different ways that you can get this from SVN into Mercurial whilst maintaining all of your history…

One way is to run ‘hg convert’ on the entire repository. You’ll end up with 3 branches in Mercurial: default, v1.1-DotNet & v1.2-Flash. The problem is that if you try to merge the named branches into default, you’ll end up with a mess since the branches don’t have the same folder structure as default. To overcome this, you can restructure each named branch to follow the same folder structure as default. To do this, we use the ‘rename’ function in TortoiseHg. So for instance, suppose we had this folder structure inside of v1.1-DotNet:

  • BuildFiles
  • MyProject.Web
  • MyProject.Config

To merge this with default, we need to restructure it into:

  • DotNet
    • BuildFiles
    • MyProject.Web
    • MyProject.Config

So we just need to right click each folder separately, and select the rename option from the TortoiseHg sub-menu:

[Image: the TortoiseHg context menu with the rename option]

Then we prefix the folder name with the new folder location, which will ‘move’ the folder:

[Image: the TortoiseHg rename dialog with the folder name prefixed by its new location]

Now that the named branch v1.1-DotNet is in the same folder structure as default, we can perform a merge.
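The same restructure can also be sketched on the command line with ‘hg rename’ (using the folder names from the example above), committed before merging:

hg update v1.1-DotNet
hg rename BuildFiles DotNet/BuildFiles
hg rename MyProject.Web DotNet/MyProject.Web
hg rename MyProject.Config DotNet/MyProject.Config
hg commit -m "Restructured v1.1-DotNet to match default"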

The other way to import a complicated SVN structure into Mercurial is to convert individual branches to Mercurial repositories one by one. The first thing you’ll need to do is run an ‘hg convert’ on the Trunk of your SVN repository. This will create your new ‘master’ Mercurial repository, into which we will push the other individual Mercurial repositories. Next, run an ‘hg convert’ on each of your SVN branches. For example: hg convert svn://my.svn.server.local/MyProject/Branches/v1.1-DotNet.
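Laid out as a sketch, the conversions look like this (the target folder names here are hypothetical):

hg convert svn://my.svn.server.local/MyProject/Trunk MyProject-hg
hg convert svn://my.svn.server.local/MyProject/Branches/v1.1-DotNet v1.1-DotNet-hg
hg convert svn://my.svn.server.local/MyProject/Branches/v1.2-Flash v1.2-Flash-hg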

Once you have individual repositories for your branches, we can force push these into your ‘master’ repository. To do a merge of these branches, the above procedure will still need to be followed to ensure your branches have the same folder structure as default. HOWEVER, because we’ve force pushed changesets into Mercurial, it has no idea how these branches relate to each other (in fact, it gives you warnings about this when you force push). When you try to do a merge, you’ll end up getting conflict warnings for every file that exists in both locations, since Mercurial doesn’t know which one is newer/older. This can be a huge pain in the arse, especially if you have tons of files. If we assume that the branch files are the most up to date and we just want to replace the files in default, then there’s a fairly obscure way to do that. In the merge dialog, you’ll need to select the option “internal : other” from the list of Merge tools:

[Image: the merge dialog with “internal : other” selected in the list of Merge tools]

This tells Mercurial that for any conflict you want to use the ‘other’ revision (which is your branch revision since you should have default checked out to do the merge).
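On the command line, the equivalent force push and merge would look roughly like this (the repository folder names are hypothetical, and this assumes default is checked out in the master repository):

# push the converted branch changesets into the master repository (-f forces unrelated changesets in)
cd v1.1-DotNet-hg
hg push -f ..\MyProject-hg

# merge the named branch, resolving every conflict with the 'other' (branch) revision
cd ..\MyProject-hg
hg update default
hg --config ui.merge=internal:other merge v1.1-DotNet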

We’ve had success with both of these options for converting SVN to Mercurial and maintaining our history.

VisualSVN server on SVN protocol

September 20, 2010 21:53
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
I’m sure I’m not the only one who has noticed that running SVN over the HTTP protocol using VisualSVN is REALLY slow in comparison to running SVN using the file:/// or svn:// protocol. Having the HTTP protocol available is nice: you can browse your repositories in your browser, allow external access to them without opening up another port on your firewall, and apply Windows security to your repositories. However, it is really, really slow. After some Googling on how to get VisualSVN Server to run using the SVN protocol, it turns out this is not possible, but you can run the SVN protocol as a service in tandem with VisualSVN, which gives you the best of both worlds. Luckily for us, VisualSVN installs all of the necessary files for us to do this. Here’s how:
  • Create a batch file in your VisualSVN bin folder (normally: C:\Program Files\VisualSVN Server\bin) called something like: “INSTALLSVNPROTOCOL.bat”
    • You’ll need to edit the below script to map your svn repository folders properly. Change the “E:\YOUR-SVN-REPOSITORY-ROOT-FOLDER” to the path of your svn repository root folder.
echo ---Install the service
REM the sc create command should all be on one line!
sc create SVNPROTOCOLSERVICE binpath= "\"c:\Program Files\VisualSVN Server\bin\svnserve.exe\" --service --root \"E:\YOUR-SVN-REPOSITORY-ROOT-FOLDER\" " displayname= "SVN Service" depend= Tcpip
echo ---Config to auto-start
sc config SVNPROTOCOLSERVICE start= auto
  • Next, run your batch file.
    • This will install a windows service to host your repositories on the SVN protocol
  • Update your windows service to run as Administrator, or a user that has the permissions to run the service
    • Start Menu –> Administrative Tools –> Services –> Find the “SVN Service” that was just created –> Right click –> Properties –> Log On Tab –> Change “Log on as:” to use your Administrator account.
  • Start the windows service
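If you’d rather not open the Services console for that last step, sc can start the service as well:

sc start SVNPROTOCOLSERVICE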

You’re done! You can now access your repositories via the SVN protocol using something like:

svn://yourservername.yourdomainname.local/YOUR-REPOSITORY-NAME
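For example, checking out a working copy over the new protocol (the local path here is hypothetical):

svn checkout svn://yourservername.yourdomainname.local/YOUR-REPOSITORY-NAME C:\Work\YOUR-REPOSITORY-NAME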


Ok, to uninstall:

  • Create a batch file in the same folder as your install batch file called something like “UNINSTALLSVNPROTOCOL.bat”
echo --remove svn service
sc stop SVNPROTOCOLSERVICE
sc delete SVNPROTOCOLSERVICE
  • Run the batch file

Automated website deployment with PowerShell and SmartFTP

September 3, 2010 00:56
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
SmartFTP is a fantastic FTP application which handles syncing files very effectively. This means that when you upload your entire website, SmartFTP will automatically detect changes and only upload what is required (instead of overwriting all of the files like some FTP applications do). For each project at TheFARM we have build scripts which run and create a time stamped ZIP package for each deployment environment, with all of the necessary files formatted appropriately for each. Our deployment process then involves unzipping the contents of this file, opening up SmartFTP, connecting to the deployment destination and transferring all of the deployment files up (which SmartFTP synchronizes for us).

I thought it would be much more efficient if we automated this process. So we did some investigation and it turns out that SmartFTP conveniently has an API! So we decided to see if we could write a PowerShell script that uses the SmartFTP API to automagically transfer/sync all of our deployment files in our ZIP package to the necessary FTP site, and with a bit of trial and error we managed to do it! Now, I’m no PowerShell expert, and in fact this was my very first PowerShell script ever written, so I’m sure this could all be done a bit better, but it works! I’m not going to go into detail about the SmartFTP API or how to write PowerShell stuff; the script just has some basic requirements:

  • You need both PowerShell and SmartFTP installed
  • Currently this only supports the standard FTP protocol, but if you need SFTP, etc… you can just change the $fav variable’s ‘Protocol’ property
  • The parameters, in this order, are:
    • destination
      • the IP address, or host of your FTP server
    • user
      • the username used to login to the FTP server
    • password
      • the password used to login to the FTP server
    • port
      • The FTP port to use, default is 21
    • source
      • The source folder to copy to the FTP site; if not specified, uses the current directory that the PowerShell script is run from
    • path
      • The FTP path of where you want your files to go on your FTP server

Example usage:

FTPSync.ps1 123.123.123.123 MyUserName MyPassword 21 "C:\MyWebsiteFolder" "/websites/MyWebsite"

or you can just double click on the ps1 file and it will prompt you for these details.

So without further ado, here’s the script!

#requires -version 2.0

# Define inputs
param (
	[parameter(Mandatory=$true)]
	[string] $dest,
	[parameter(Mandatory=$true)]
	[string] $user,
	[parameter(Mandatory=$true)]
	[string] $pass,
	[parameter(Mandatory=$false)]
	[ValidatePattern('\d+')]
	[int] $port = 21,
	[parameter(Mandatory=$false)]
	[ValidateScript({ Test-Path -Path $_ -PathType Container })]
	[string] $source,
	[parameter(Mandatory=$true)]
	[ValidatePattern('\/+')]
	[string] $path
)

# get current folder
$currFolder = (Get-Location -PSProvider FileSystem).ProviderPath;
# set current folder
[Environment]::CurrentDirectory = $currFolder;

# if the source isn't set, then use the current folder
if ($source -eq "") { $source = $currFolder; }

Write-Host "------------------------------------------------------" -foregroundcolor yellow -backgroundcolor black
Write-Host("{0,-20}{1,20}" -f "Destination", $dest);
Write-Host("{0,-20}{1,20}" -f "User", $user);
Write-Host("{0,-20}{1,20}" -f "Pass", "********");
Write-Host("{0,-20}{1,20}" -f "Port", $port);
Write-Host "";
Write-Host "Source:";
Write-Host $source;
Write-Host "";
Write-Host "Path:";
Write-Host $path;
Write-Host "------------------------------------------------------" -foregroundcolor yellow -backgroundcolor black

# Create application
$smartFTP = New-Object -comObject SmartFTP.Application;
$smartFTP.Visible = [bool]0;
$smartFTP.CloseAll();

# create temp favorite item
$fav = $smartFTP.CreateObject("sfFavorites.FavoriteItem");
$fav.Name = $user + " @ " + $dest + " (temp favorite by cmdInterface)";
# 1 = FTP standard protocol
$fav.Protocol = 1;
$fav.Host = $dest;
$fav.Port = $port;
$fav.Path = $path;
$fav.Username = $user;
$fav.Password = $pass;
# forces it not to be saved
$fav.Virtual = "true";

# Add temporary favorite to SmartFTP's FavoriteManager
$favMgr = $smartFTP.FavoritesManager;
$rootFolder = $favMgr.RootFolder;
$rootFolder.AddItem($fav);

# Get the transfer queue
$queue = $smartFTP.TransferQueue;
# stop the queue if it isn't already (Stopped = 1)
if ($queue.State -ne 1) { $queue.Stop(); }
# clear the queue
foreach ($item in $queue.Items) { $queue.RemoveItem($item); }

# set the thread count for the queue
$queue.MaxWorkers = 20;
# enable logging
$queue.Log = "true";
$queue.LogFolder = $currFolder + "\LOG";

# create new transfer item
$newItem = $smartFTP.CreateObject("sfTransferQueue.TransferQueueItem");
# set the item as a folder and copy operation
$newItem.type = 2;      #FOLDER = 2
$newItem.Operation = 1; #COPY = 1
# Set the source
$newItem.Source.type = 1; #LOCAL = 1
$newItem.Source.Path = $source;
# Set the destination
$newItem.Destination.type = 2; #REMOTE = 2
$newItem.Destination.Path = $path;
$newItem.Destination.FavoriteIdAsString = $fav.IdAsString; #links up to our connection favorite
# and finally add it
$queue.AddItemTail($newItem);

Write-Host "STARTING" -foregroundcolor yellow -backgroundcolor black;
$queue.Start();
while ($queue.Items.Count -ne 0)
{
	Write-Host "Processing...bytes transferred: " $queue.TransferredBytes;
	Start-Sleep -s 2; #wait 2 seconds
}

# store the total bytes and the log folder before shutting SmartFTP down
$totalBytes = $queue.TransferredBytes;
$logFolder = $queue.LogFolder;

# cleanup smartftp app
$queue.Quit();
$smartFTP.Exit();

# parse logs
# regex to find "[DATE/TIME] STOR FILENAME" which indicates a file transfer
$regex = New-Object System.Text.RegularExpressions.Regex("\[[\w\-\:]*?\]\sSTOR\s(.+?)\[", [System.Text.RegularExpressions.RegexOptions]::SingleLine);
$totalFiles = 0;
Write-Host "Files Transferred" -foregroundcolor cyan -backgroundcolor black
Get-ChildItem $logFolder -include *.log -Recurse | ForEach-Object {
	$currFile = Get-Content $_.fullname;
	$match = $regex.Matches($currFile);
	if ($match.Count -gt 0)
	{
		foreach ($m in $match) { Write-Host $m.Groups[1]; }
		$totalFiles++;
	}
	Remove-Item $_.fullname -Force -Recurse;
}
Write-Host "COMPLETED (total bytes: " $totalBytes ", total files: " $totalFiles ")" -foregroundcolor cyan -backgroundcolor black;
Write-Host "------------------------------------------------------"

# cleanup COM
Remove-Variable smartFTP

TSQL CASE statement in WHERE clause for NOT IN or IN filter

April 17, 2010 02:58
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
There’s a ton of articles out there on how to implement a CASE statement in a WHERE clause, but I couldn’t find one on how to do it in a way that gives you the ability to use a NOT IN or IN filter. I guess the only way to explain this is to use an example, and I am fully aware that the use of this may not be best practice and is most likely required because of poor database design/implementation, but hey, when you inherit code, there’s really no other choice :)

Suppose I have a stored proc that has an optional value:

@OnlyNonExported bit = 0

I want to return all items from MYTRANSACTIONS table if @OnlyNonExported = 0, but if this value is 1 I want to return all items from MYTRANSACTIONS that have not been tracked in my TRACKEDTRANSACTIONS table. The original theory is to use a NOT IN clause to achieve the latter requirement:

SELECT * FROM mytransactions m 
WHERE m.id NOT IN (SELECT id FROM trackedtransactions)

So if I wanted to use a case statement for this query, one would think you could do something like this:

SELECT * FROM mytransactions m 
WHERE m.id NOT IN 
	CASE WHEN @OnlyNonExported = 0 
		THEN  (SELECT -1) 
		ELSE  (SELECT id FROM trackedtransactions) 
	END

But SQL doesn’t like this syntax, and it turns out that you cannot use IN or NOT IN conditions with a CASE statement in a WHERE clause; you can only use = or != conditions. So how do you achieve the above? Well, the answer is even more dodgy than the above:

SELECT * FROM mytransactions m 
WHERE m.id != 
	CASE WHEN @OnlyNonExported = 0 
		THEN  (SELECT -1) 
		ELSE  COALESCE((SELECT id FROM trackedtransactions t WHERE t.id = m.id), -1)
	END

So basically, when we want to return all transactions, we return all rows whose id does not equal -1 (assuming that your IDs start at 1), and when we want to filter the results based on whether or not these IDs exist in another table, we only return rows whose IDs don’t match the same ID in the tracked table. BUT if an ID doesn’t exist in the tracked table, the subquery returns an empty result set and the id won’t be matched against it, so we need the COALESCE function, which will return a -1 value if there is an empty result set.
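As an aside, if you’d rather avoid the CASE trick entirely, the same filtering can be expressed with the flag and a NOT EXISTS test; this is a sketch of an equivalent alternative rather than the approach described above:

SELECT * FROM mytransactions m 
WHERE @OnlyNonExported = 0 
	OR NOT EXISTS (SELECT 1 FROM trackedtransactions t WHERE t.id = m.id)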

Hopefully you’ll never have to use this but if you do, hope this saves you some headaches :)

Changing the hostname of a SharePoint site

January 9, 2010 02:44
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
We’ve recently set up a SharePoint server here at TheFARM which will run parts of our intranet and be our document management system.

So it got installed, but the problem was that it was installed onto a machine called mars. I made the obligatory jokes about ‘life on mars’ (admittedly I may have made the joke a few too many times :P) but at the end of last year we ran a competition to name the new intranet.

There were some fun names like SkyNet and Randall, but ultimately the winning entry was TheBarn, which is very aptly farm-based.
But we had a problem: we didn’t want to rename the server from mars (plus I’ve done that on SharePoint before, baaaaaaaaaaaaaaaaaaaaaaaaaaaaad idea), so how do you get SharePoint to accept http://thebarn when that’s not the machine name?

Unlike standard sites in IIS, just adding a host header isn’t going to work; SharePoint will redirect you to the one it knows about, so although we were coming in via http://thebarn we’d end up at http://mars.

Hmmm…

Luckily it is actually very easy to do, because SharePoint has the ability to Extend a web application:

[Image: the Extend an existing Web application option in SharePoint Central Administration]

So you navigate here, choose Extend an existing Web application, select your site and enter the hostname (and set the port back to 80):

[Image: the Extend Web Application settings with the new hostname and port 80]

Now you’ll have a SharePoint site which listens on your new host header. You can go and delete the old one if you want (Remove SharePoint from IIS Web site) and then you’re done.

Wildcard mapping in IIS 7 classic pipeline = web.config!

December 9, 2009 00:34
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
After foolishly pulling out my hair trying to find out why my wildcard mapping was disappearing in IIS 7 using classic pipeline mode, I realized it was my own fault!! I followed the instructions on this site: http://learn.iis.net/page.aspx/508/wildcard-script-mapping-and-iis-7-integrated-pipeline/ and unfortunately just skipped over the message about how this modifies your web.config… oops! So basically, every time I deployed, my handler mapping would be removed… Doh!

Unfortunately, the method to add a wildcard mapping in this article will actually remove the inheritance of standard handlers from the root of IIS and your machine.config and just make copies of them. This might not be the best approach, but I suppose sometimes it’s necessary. We only need the wildcard mapping for URL rewriting, so I decided to see if I could simply add the ISAPI wildcard mapping only, have the rest of the handlers inherit from the root, and see if it works… turns out it does!

So instead of having to modify IIS itself, I just needed to add this to my web.config:

<handlers>
	<remove name="ASP.Net-ISAPI-Wildcard" />
	<add name="ASP.Net-ISAPI-Wildcard" path="*"
	verb="*" type="" modules="IsapiModule"
	scriptProcessor="C:\Windows\Microsoft.NET\Framework64\v2.0.50727\aspnet_isapi.dll"
	resourceType="Unspecified"
	requireAccess="None"
	allowPathInfo="false"
	preCondition="classicMode,runtimeVersionv2.0,bitness64"
	responseBufferLimit="4194304" />
</handlers>
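For what it’s worth, if your application pool is running the 32 bit worker process instead, I’d expect the equivalent mapping to just swap the bitness precondition and the Framework path; note this variant is an untested assumption on my part:

<handlers>
	<remove name="ASP.Net-ISAPI-Wildcard" />
	<add name="ASP.Net-ISAPI-Wildcard" path="*"
	verb="*" type="" modules="IsapiModule"
	scriptProcessor="C:\Windows\Microsoft.NET\Framework\v2.0.50727\aspnet_isapi.dll"
	resourceType="Unspecified"
	requireAccess="None"
	allowPathInfo="false"
	preCondition="classicMode,runtimeVersionv2.0,bitness32"
	responseBufferLimit="4194304" />
</handlers>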

Too easy! No fussing around with IIS, and now at least I won’t wipe out my changes accidentally when deploying.

Testing Outgoing SMTP Emails - So Simple!

July 16, 2009 23:29
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
At the Umbraco retreat before CodeGarden 09 in Denmark, Aaron told me an extremely handy tip about testing outbound emails in your .Net applications. I'm not sure why I've never heard about this before, and the funny thing is all of the .Net developers working in our office (including contractors) had never seen it either! It's so incredibly simple and built into .Net, so if you don't know about it already you'll want to be using it in the future.

If your application needs to send emails for whatever reason and you’re testing locally, you generally have to make sure that you're only sending emails to your address(es) so you’re not spamming a bunch of random people. This is an easy way to get around that, and it lets you view all of the emails sent. Just change (in our case add) the deliveryMethod attribute in your smtp settings to SpecifiedPickupDirectory:

<system.net>
  <mailSettings>
    <smtp from="noreply@localhost" deliveryMethod="SpecifiedPickupDirectory">
      <specifiedPickupDirectory pickupDirectoryLocation="c:\maildrop" />
    </smtp>
  </mailSettings>
</system.net>

Now all emails that are sent just get saved to the specified folder, and you can view them with Windows Live Mail, Outlook Express, Thunderbird, or whatever.
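To see it in action, any standard System.Net.Mail send will pick these settings up automatically from the config file; here’s a minimal sketch (the addresses are made up) that drops a .eml file into c:\maildrop instead of actually sending:

using System.Net.Mail;

class MailDropTest
{
	static void Main()
	{
		// SmtpClient reads system.net/mailSettings from the application's
		// config file, so this "send" writes a .eml file to c:\maildrop
		// rather than hitting a real SMTP server
		SmtpClient client = new SmtpClient();
		client.Send("noreply@localhost", "someone@example.com",
			"Test subject", "Test body");
	}
}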

Nice!!

Guide to installing Cold Fusion 8 on Windows Server 2008 (IIS 7) 64 bit

May 8, 2009 00:27
This post was imported from FARMCode.org which has been discontinued. These posts now exist here as an archive. They may contain broken links and images.
After a lot of trial and error I finally figured out how to get CF 8 running on Windows Server 2008 x64 in IIS 7. So I figured I’d write a post about it, since there’s pretty much no documentation covering this that I could find.

Installation

  • Take a backup of IIS
    • C:\Windows\System32\Inetsrv\AppCmd add backup "backupname"
  • Install CF 8 Enterprise
    • Select Multiserver
    • Keep default paths
    • DO NOT attempt to configure anything for ColdFusion until the update is applied
  • Install CF 8.1 Update
    • Configure for Multiserver

Web Site/Server Configuration

  • Give the IIS users/groups (IUSR, IIS_IUSRS) full control over your JRun install folder (C:\JRun4\lib\wsconfig)
    • After looking at the logs, it seems that the configuration tool is trying to set IIS_WPG permissions on this folder which is for Server 2003, not 2008
  • Create a new application pool called ColdFusion
    • Under advanced settings, enable running in 32 bit mode and make Managed Pipeline mode Classic instead of Integrated
    • CF will not run without 32 bit and Classic enabled (according to my experience so far)
  • Create a new website and ensure it is assigned to the ColdFusion application pool
    • For testing, create a website pointed to your default CFIDE install folder
  • Launch the Web Server Configuration Tool from Start Menu
    • Click Add
    • Select "coldfusion" from the JRun Server drop down list (not "admin")
    • Ensure the Web Server has IIS selected
    • Select the website you just created from the IIS Web Site drop down list (Do not check All, or be prepared to restore IIS if you're running other .Net apps!)
    • Check "Configure web server for ColdFusion 8 application"
    • Click Advanced...
      • Check Enable verbose logging for connector if you want detailed request logs for debugging
    • Save changes and click yes to restart the web server (this will restart IIS!!!)

Testing

  • If you configured a test site to point to your CFIDE folder, browse to the /install.cfm path on that site
    • This should show you a Congratulations screen
  • If you configured your site with your own CF files, test those instead

Debugging

  • After some trial and error, I figured out the above procedure, but there are logs to refer to.
  • The CF web site config tool creates web site configuration structures at this location:
    • \JRun4\lib\wsconfig\(some number)
    • Each (some number) corresponds to a different website configured with the tool
    • In each folder is a LogFiles folder that contains logs that you can use to debug the installation
  • There's also a log file at: \JRun4\lib\wsconfig\wsconfig.log

Un-configuring a site

  • If a site needs to be un-configured or re-configured, the web configuration tool seems to always fail when trying to remove a site.
  • To remove a site manually:
    • Stop the website in IIS
    • Stop the CF server and CF admin services in the Services administration tools
    • Delete the folder: \JRun4\lib\wsconfig\(some number)
      • where (some number) corresponds to the site you want to remove
    • edit the \JRun4\lib\wsconfig\wsconfig.properties file and remove the lines referring to the (some number) of the site folder that you deleted in the previous step
    • Start the CF admin and CF server services
    • Run the web configuration tool and re-add the site you want configured
    • Start the site in IIS