Saturday, January 30, 2010

FUSE Labs and Social Media – Producing great tools for the wrong people?

I never told many people about this, but last year I got pretty anxious for a while about the informal way that personal data had become readily available through social networking sites such as Facebook, LinkedIn and Twitter.  This was reinforced when rich clients such as TweetDeck and Fishbowl started coming out and I could see the real power of these sites' APIs in full view.  Even more so when I decided to take a peek at the APIs that Facebook makes available to developers and started to understand that I might be sharing personal information about my relationships or my location without even being fully aware of who I was giving it to.

I remember at the time being able to completely visualize what sort of tools the bad guys would be capable of creating if they really put their minds to it.  Not to belittle the efforts of the people who wrote TweetDeck and Fishbowl, but it’s getting really easy to build very rich dashboards to represent social graphs, and, with a little additional logic, I could see that you could pivot around on this data in some really interesting ways.

Fast forward 8 or 9 months and the game just keeps getting a whole lot more interesting, with services such as Twitter allowing applications to store geolocation along with the rest of the tweeted data.  This technical shift opens the door for new innovations in how we visualize information based on the location where it was created.  The following image shows a picture of Bing’s new Twitter Map tool:

image

As you can see from this picture, I can view location-based tweets over time (the past 7 days) and then use filters to isolate certain data (e.g. view only tweets made by a single user).  Couple that with tools like the new Silverlight 4 Client for Facebook and you start to understand how much intelligence you could build around this stuff:

image

Couple that with the potential reach that you get across a social graph from deploying Facebook applications and you could build a very interesting, location based “listener”.  Pictures, events, places, comments, and times.

Using technology for good

OK, so that’s enough paranoia, let’s look at what I think needs to happen to balance the scales a bit.  We know that technology can be a terrific aid in helping to uphold safety, just as it can be used for bad stuff.  Here is a link to the text from Steve Ballmer’s recent speech at the Worldwide Public Safety Symposium.

In and of itself, Bing Twitter Maps is not massively useful right now.  For example, a relatively small number of people are producing geo-coded tweets, so what you see when you look at this tool is a demonstration of capability.  But think about how useful tools such as this could be in a crisis sometime in the not too distant future.  A tool such as this could help somebody who was in distress to get a message out.  It could be used to monitor activity near the scene of a crime – such as a burglary – and it could help groups of people who are often in disparate locations to understand when they are located near other members of the group.

For some of these types of scenarios to play out, I believe that we need more metadata so that different classes of information could be visualized in better ways.  Consider what could be done if there was a metadata standard for Twitter which was the equivalent of a 911 or emergency message.  Metadata such as this would allow individuals to send out distress signals and to perhaps have them acted upon.

Similarly, what if certain agencies had the ability to annotate their data with information which carried special weight with regard to the public interest?  Fire departments or local councils could start to create messages which plugged in to the raft of tools that are starting to emerge.  Imagine a social network dashboard which was able to interrupt you with an important announcement about an event occurring near you.

I think that as more standards start to emerge in the way that these types of information are transmitted, we will start to see some really amazing experiences being built, including some which carry messages of tremendous social responsibility to us.

Until that day arrives, maybe we will be dazzled by new rich renditions of information, but whether they have great application in our lives is debatable.  With the current tools the way they are, I think that there’s more on offer for petty stalkers than there is for serious digital communities, groups, and individuals.

Tuesday, January 26, 2010

Creating and running an Azure package by hand

Note: there appears to be a bug in this which is currently preventing the web application from "spinning up" on my machine.  I'm not sure what the problem is but I will correct this post when I find it.


Recently I was running a limited version of Visual Studio which didn’t allow me to build Azure packages because the VS and Azure bits were out of sync.  So I ended up having to build the Azure bits by hand and I thought that in this post I would share how that was done.  In this post we will create an ASP.NET MVC application and attach it to an Azure Web Role.  After that, we will deploy it to our development fabric and run it.
Create the ASP.NET MVC Application
  1. Open Visual Studio, click New Project and create a new ASP.NET MVC 2 Web Application called HelloCloudService
    image
  2. Manually add a reference to the Azure .NET assemblies (these are located in the ref folder where you installed the SDK – the default path is C:\Program Files\Windows Azure SDK\v1.0)
    image
  3. Change your Index action so that it looks like this:





    // Requires "using Microsoft.WindowsAzure.ServiceRuntime;" at the top of HomeController.cs
    // (that assembly was referenced from the SDK's ref folder in step 2)
    public ActionResult Index()
    {
        if (RoleEnvironment.IsAvailable)
        {
            ViewData["Message"] = "Welcome to the Cloud!";
        }
        else
        {
            ViewData["Message"] = "Welcome to ASP.NET MVC!";
        }

        return View();
    }



  4. Press F5 to run your web application and you should see that we are indeed NOT running in the Azure cloud fabric.

     image

Create the Web Role

Next we will create our Service Definition (.csdef) and a Service Configuration (.cscfg) which matches it.
  1. Go to the folder which contains your Web Application project folder and create another folder alongside it called WebRole

    image
  2. Open the WebRole folder and create a new text file named HelloCloudService.csdef.  Open the file using Notepad.  You can read about the schema for the Service Definition file here.
  3. Add the following text to your .csdef file and press save:

    <ServiceDefinition name="HelloCloudService"
                       xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceDefinition">
      <WebRole name="HelloCloudServiceWebRole">
        <InputEndpoints>
          <InputEndpoint name="HttpIn" protocol="http" port="80" />
        </InputEndpoints>
      </WebRole>
    </ServiceDefinition>



  4. Open the Windows Azure SDK Command Prompt in elevated mode, type the following command, and press enter (you will have to change any paths to suit your file locations):

    cspack "D:\Sandboxes\2010Wave\HelloCloudService\WebRole\HelloCloudService.csdef"

    /role:HelloCloudServiceWebRole;"D:\Sandboxes\2010Wave\HelloCloudService\HelloCloudService" 
    /generateConfigurationFile:"D:\Sandboxes\2010Wave\HelloCloudService\HelloCloudServicePackage\HelloCloudService.cscfg" 
    /out:"D:\Sandboxes\2010Wave\HelloCloudService\HelloCloudServicePackage" 
    /copyOnly

    Interesting things to take note of here are:



    • We are generating a Service Configuration file named HelloCloudService.cscfg and placing it within the deployment package folder.  This file is based on the Service Definition that we specified in step #3, so had we declared any configuration items in that .csdef file, there would be stubs for their values in the Service Configuration file (a sketch of the generated file is shown below this list).
    • By using the /copyOnly flag, all files are copied into a folder hierarchy whereas omitting this flag would produce a .cspkg formatted file that would be used as a deployment package for deploying to Windows Azure.  That’s what I showed in a previous post on how to deploy into the cloud as part of your build process.
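
    For reference, the generated HelloCloudService.cscfg for this definition should look roughly like the following (the exact contents can vary between SDK versions, so treat this sketch as indicative rather than definitive):

    <?xml version="1.0"?>
    <ServiceConfiguration serviceName="HelloCloudService"
                          xmlns="http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration">
      <Role name="HelloCloudServiceWebRole">
        <Instances count="1" />
        <ConfigurationSettings />
      </Role>
    </ServiceConfiguration>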



  5. At this point you should have created a CloudService package folder and you should find it positioned like so:

    image
  6. Looking at the structure of that folder we see an elaborate hierarchy:

    image
Running the CloudService in the Development Fabric
  1. From the same elevated command prompt we can check whether or not the Dev Fabric is currently running:

    image 
    You can find the reference documentation for CSRun.exe here.
  2. We can see in the above image that my Devfabric service is not running… let’s start it by typing the following command:

    csrun /devfabric:start


  3. At this point you should see the Devfabric service icon (it’s the one which looks like a blue Windows flag) appear in your system tray


    image
  4. Now we can launch our Service on the Devfabric by calling /run, passing in the path to our package and the configuration file that we generated using CSPack:



    csrun /run:"D:\Sandboxes\2010Wave\HelloCloudService\HelloCloudServicePackage";
    "D:\Sandboxes\2010Wave\HelloCloudService\HelloCloudServicePackage\HelloCloudService.cscfg" 
    /launchbrowser

     Things to pay attention to here are:



    • We are passing in the Service Configuration file that we generated previously as part of our packaging command



  5. After running that command, a browser window should appear with your web site hosted within it but now showing a message of “Welcome to the Cloud!”
      
  6. If you right click on the Devfabric icon in your system tray and select Show UI you will also see that your service is now indeed being hosted in the Devfabric

    image

Monday, January 25, 2010

How to build an Azure Cloud Service package as part of your build process

I explained in a previous blog entry how important, and ultimately simple, it is to create a deployment package for SharePoint as a part of your Continuous Integration process and in this post I’d like to show you how to produce deployment packages for Windows Azure.  This article is laid out in the following sections:
  • Deploying to Windows Azure
  • The CSPack Command Line Tool
  • Automating the Creation of your deployment package
Deploying to Windows Azure
Those of you who have deployed solutions to Windows Azure in the past will know the routine.  That is, you must:
  1. Create an account and then login to the Windows Azure portal (this is done through http://azure.com but for the purposes of this article I am going to use the tech preview site).
  2. Create a cloud service project for your account – in the following image I have 3 cloud services in my PDC08 CTP account:

    image
  3. Then click on a Cloud Service project to access the user interface to upload new deployments and toggle them between staging and production:

    image
The packages that you upload into the Cloud Service project are in the Cloud Service Package format and are created by an Azure SDK tool called CSPack.exe.  These Cloud Service Packages have a .cspkg file extension, and creating them is also built into the Visual Studio tooling so that it all happens seamlessly as part of using the Visual Studio tools.  As mentioned in this article, this is as simple as right-clicking on a VS project node and choosing the Publish option.

The CSPack Command Line Tool
CSPack.exe is a tool which comes as part of the Windows Azure SDK and, when installed, it can be found at C:\Program Files\Windows Azure SDK\v1.0\bin\cspack.exe
image
As per the documentation for CSPack, the syntax for using this tool to create a package is:
CSPack <service-definition-file> [options]
And the options that we are interested in are:

  • /out:<file | directory> – This option indicates the output format and location for the role binaries.
  • /role:<rolename>;<role-binaries-directory> – This option specifies the directory where the binaries for a role reside and the DLL where the entry point of the role is defined. The command line may include one /role option for each role in the service definition file.

An actual sample usage would look something like this:

cspack \ServiceDefinition.csdef 
/role:AmbientPlacesWebRole;\CloudService.WebRole 
/out:AmbientPlaces.cspkg

In this example you can see that we are passing in the path to the .csdef file which describes the definition of our service, including the configuration items that we are specifying.  Next we pass through the directories and names for each role that is specified in our Cloud Service Definition (.csdef) file.  Finally, we can optionally be explicit in specifying the name and location of our outputted Cloud Service Package (.cspkg).

Automating the Creation of your deployment package

Building the solution as part of our automated build process is very similar to what we did when we created our SharePoint deployment package in my previous article on the topic:

[SharePointBuildProcess[4].png]

Except that, instead of using WSPBuilder.exe, we would use CSPack.exe in those places.
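
To make that concrete, here is a rough sketch of what the CSPack step might look like inside an MS Build script.  The property names and paths below are placeholders of mine (reusing the AmbientPlaces sample names from above), so adjust them to suit your own SDK install and project layout:

<PropertyGroup>
  <!-- Assumed locations: adjust to match your environment -->
  <AzureSdkBin>C:\Program Files\Windows Azure SDK\v1.0\bin</AzureSdkBin>
  <ServiceRoot>$(MSBuildStartupDirectory)\AmbientPlaces</ServiceRoot>
</PropertyGroup>

<Exec Command='"$(AzureSdkBin)\cspack.exe" "$(ServiceRoot)\ServiceDefinition.csdef"
    /role:AmbientPlacesWebRole;"$(ServiceRoot)\CloudService.WebRole"
    /out:"$(ServiceRoot)\AmbientPlaces.cspkg"'
  ContinueOnError='false'
/>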

You can refer to my previous article to see how we did it for SharePoint packages, but for building Cloud Service packages I would refer you instead to the following blog post, which does an excellent job of explaining one way to do the same thing from an MS Build script:
http://blogs.msdn.com/domgreen/archive/2009/09/29/deploying-to-the-cloud-as-part-of-your-daily-build.aspx
Another option that is available is to use the Windows Azure PowerShell Cmdlets to customize your deployment process:
http://blogs.msdn.com/chabrook/archive/2009/10/26/azure-powershell-cmdlets-announced.aspx

Thursday, January 21, 2010

Reflections on the Microsoft MVP Program

Back in 2004 I was lucky enough to be rewarded for a bunch of community activity that I’d been doing with my first MVP award.  At the time this meant a lot of forums activity on the ASP Messageboard and the newer ASP.NET forums, and I was also developing and managing quite a few open source software projects.  I’ve been re-awarded every year since then – up until this year, that is.  For the last couple of years I’ve focused more and more on my management and hockey coaching skills and this has meant no time for “giving” to the community – unless a couple of thousand tweets per year counts :)  So now I am no longer an MVP for the first time in 6 years.

I’ve reflected on the MVP program many times before but my opinion still has not swayed significantly from what I wrote here back in July 2006 – and yes, giving up the MSDN subscription is going to hit pretty hard, I expect.  I also agree strongly with what @davidlem wrote in his post back in January of that same year.

During my time as an MVP I was lucky enough to work for employers who supported me and this allowed me to travel to Seattle on several occasions to attend MVP Summits.  And I have great memories of those!  Meeting with real Redmondites and making a personal connection with so many of them has really helped me in my career and made it fun along the way too. Justin Rogers, Doug Seven, Kent Sharkey, Duncan MacKenzie, Jonathon DeHalleux… the list goes on.  Meeting and getting to talk with people such as Anders Hejlsberg, Paul Vick, and Don Box and being able to attend keynotes delivered by both Steve Ballmer and Bill Gates were standout highlights.  Of course meeting and making relationships with community leaders (and there are too many for me to start naming them) was also an enriching experience.

So if you are an MVP now, my advice is to use the program as a tool and take the opportunity to attend these conferences because they are truly unique opportunities and you won’t regret it.  And while there, go out of your way to say “hi” to people.  Get to know some of them.  Read their blogs, chat with them, have a beer with them.

There are changes that I think could be made to improve the program but they are largely encapsulated in the previous blog posts that I linked to.  Given the chance to make any single change, probably upping the churn factor would be the thing that I’d like to see.  To give more people opportunity and to force others to cherish their moments in the program while they have them – to take advantage of that time.

So where to now?  Who knows… I’m yet to fully decide how much time to take from business and sporting interests to channel back into technical community activity.  I’d love to, but you’ve got to do things that make you happy for the right reasons.  But I am grateful for what I’ve had and I’d love to personally thank all those who have supported me over that time.

Wednesday, January 20, 2010

How to reference the jQuery libraries from within your SharePoint environment

If you’ve decided that you want to implement a modern client-centric approach to your SharePoint page customizations you may well decide to use jQuery as a major ingredient.  If you do then you will need to decide how to reference the jQuery library scripts from within your pages.  In this article I will show you how to use the AdditionalPageHead delegate control to implement an elegant approach to solving this puzzle.

SharePoint 2007 provides us with a very elegant way to insert items into the head section of our HTML pages. These insertions might typically be referencing additional CSS or JavaScript resources. The mechanism that is provided by SharePoint is a DelegateControl with an ID of AdditionalPageHead. The cool fact about the AdditionalPageHead delegate control is that it allows multiple controls to be injected into it!
You can read the following articles to see examples of using the AdditionalPageHead delegate control to perform these types of common tasks:
A good scenario for thinking about using this type of approach might be if you are developing a WebPart which makes use of jQuery. In this case you will want to deploy your custom WebPart as a feature and have all of the resources packaged correctly so that everything comes together at runtime. These resources include:
  • The jQuery library JavaScript files
  • Your WebPart class
  • JavaScript behaviours for your WebPart
  • CSS styles for your WebPart
You would separate out the resources that are specific to your web part from the jQuery library which should form part of the common infrastructure of your page so that other features can reference it. So we would create 2 features and deploy them separately:
  • CommonPageHeadInfrastructure - A feature that includes a custom web control which writes common JavaScript and CSS references into the page head region and a Feature definition which injects the custom control into the AdditionalPageHead delegate control.
  • CustomWebPartFeature - A feature which encapsulates the functionality of your custom web part.
The web control that you create to write out your common JS and CSS references is a simple control which inherits from Control and writes out the references from within its Render method like so:

public class PageHeadContentControl : Control
{
    protected override void Render(System.Web.UI.HtmlTextWriter writer)
    {
        // Emit the shared jQuery reference into the page head
        writer.Write("<script type=\"text/javascript\" src=\"/_layouts/1033/jquery-1.2.6.min.js\"></script>");
    }
}

And you would deploy that control into the delegate control using a technique similar to what is shown in this article.
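
For reference, the element manifest that performs that injection typically looks something like the sketch below – the Sequence value, namespace, assembly name and public key token here are placeholders of mine, so substitute your own:

<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Inject the custom control into the AdditionalPageHead delegate control -->
  <Control Id="AdditionalPageHead"
           Sequence="100"
           ControlClass="MyCompany.SharePoint.PageHeadContentControl"
           ControlAssembly="MyCompany.SharePoint, Version=1.0.0.0, Culture=neutral, PublicKeyToken=xxxxxxxxxxxxxxxx" />
</Elements>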

As for your custom web part, you would create it as you normally would and, when creating your Child Controls, register your scripts in the typical manner with the ClientScriptManager like so:

var scriptPath = "/_layouts/1033/YourFeatureName/YourScriptFileName.js";
var scriptKey = "YourFeatureNameScriptIncludeKey";

var type = this.GetType();
var cs = Page.ClientScript;

// Only register the script include once per page
if (!cs.IsClientScriptIncludeRegistered(type, scriptKey))
{
    cs.RegisterClientScriptInclude(type, scriptKey, scriptPath);
}

Of course this means that you must deploy the YourScriptFileName.js file as part of your feature. You can learn about the structure for doing that by reading this article.

Tuesday, January 19, 2010

The difference between a great developer and an average one is 6 hours

I explained in my intro post that I was using this blog to immerse myself in the new wave of technologies.  In this article I want to shed some more light on what the primary driver behind this was.

Until a couple of years ago I was very comfortable with the .NET technologies of the day - ASP.NET 2.0, ASP.NET Ajax, WinForms... and a bunch of tools and libraries that shipped within that ecosystem.  I would probably go so far as to put myself out there as an expert (whatever that means) at using them to build stuff.  Then, one by one, new components shipped and I somehow stopped staying at the leading edge of the curve - .NET 3, .NET 4, Silverlight, WPF, and a bunch of agile practices that came with them.  The way we build stuff changed too, and so new arts and disciplines came to support that - PowerShell, TDD, DI, MS Build, and the list goes on.

Let's just say that, during that period, when given a choice between sleeping and staying ahead of the curve... I chose to sleep.

The most important result of all that is not that I can't use these new technologies; it's more about the level of comfort that I have with them.  Probably the thing that I liked best about what I had before was the ability to create a high quality base application in a very short space of time.

This meant that I could fire up Visual Studio, create a blank solution, and in under 6 hours I could have something which:

  • Was housed in a source control system
  • Could successfully build as part of a CI process
  • Used best practices for cross cutting concerns such as configuration, logging and security
  • Used high quality third party libraries where they were needed to add value
  • Had a good consistent approach to naming and design guidelines for things such as namespaces and classes
So in under 6 hours, I had an architecture that I was very comfortable with; I knew that I could extend it, deploy it, customize it, and generally do what I liked with it.

At the time I remember lamenting how long it would take other developers to get to a good, consistent base such as this.  You might assign a developer a task and, when you came back a week or two later, they would hardly have anything resembling a modern, well-structured application.

Getting to your high quality base should be the sharpest tool in your toolkit.  And until you can get to that point, it should be the main focus for your Coding Katas.  

So that's the aim for me this year and that's the journey that I hope that this blog will document.  Base app, best practices, 6 hours!

Monday, January 18, 2010

How to build a SharePoint WSP package as part of your Continuous Integration process

Having a continuous, repeatable, and automated build process is necessary when developing software - and developing custom solutions for SharePoint is no exception. In this article I will describe how to automate the creation of SharePoint WSP Solution files as part of your continuous integration build process. The article is laid out in the following sections:
  • SharePoint Features and Solutions
  • WSPBuilder
  • Automating the Creation of your WSP Solution
If you are familiar and comfortable with SharePoint features and solutions, feel free to skip forward past the first two sections.

SharePoint Features and Solutions
The way that you develop for SharePoint is via Features. SharePoint Features are a collection of resources and XML configuration information which describes to SharePoint how a certain feature is structured. To provide a concrete example, take a look at the following link to see what goes into creating a Feature:

Anatomy of a SharePoint WSS v3 Feature Project in Visual Studio 2005

That's a fair bit of stuff! At a minimum you are typically looking at:
  • A Feature.xml file which defines the Feature and specifies the location of assemblies, files, dependencies, or properties that support the Feature (a minimal sketch is shown just below this list).
  • A signed assembly to be installed in the GAC.
  • An Elements.xml configuration file which defines the elements that the Feature depends upon
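As a rough illustration only – the GUID, title, and element manifest name here are placeholders of mine rather than something from that article – a minimal Feature.xml looks something like this:

<Feature Id="11111111-2222-3333-4444-555555555555"
         Title="My Example Feature"
         Scope="Web"
         xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- Points at the element manifest(s) that describe what the Feature provisions -->
  <ElementManifests>
    <ElementManifest Location="elements.xml" />
  </ElementManifests>
</Feature>
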
In that article there is also a lot of fluff that is required to create a SharePoint Solution - which is what you need to create to actually deploy and install your Feature into SharePoint. The items that are created in that article which exist solely to create the SharePoint Solution are:
  • A Manifest.xml file to describe how to package the Solution
  • A DDF file which defines the structure and contents of the solution package
  • A custom MSBuild task which runs makecab to process the DDF file and create the .wsp Solution file.
WSPBuilder
WSPBuilder is an open source piece of software which really helps to simplify the task of creating SharePoint Features and Solutions. WSPBuilder consists of an executable program and a Visual Studio plug-in which aid developers when developing SharePoint Features. Here's a link to a page with a tonne of links to useful articles which explain WSPBuilder:
WSPBuilder documentation
When using WSPBuilder you do not need to know about the second grouping of objects that were created in the article which I referenced earlier. Namely, you will not need to create a Manifest.xml, a DDF file, or the custom MSBuild task to call makecab.exe.

When using the WSPBuilder Visual Studio add-in, developers get "right-click" > "add feature" functionality which really does help to bootstrap the development of Features. Not only that, WSPBuilder also helps by simplifying the following common tasks:
  • Deploying to the GAC
  • Creating Features
  • Generating SharePoint Solutions
  • Debugging
Each of these features can be accessed from within Visual Studio to help speed up the development process when developing SharePoint Features. The interesting part of this is that these services are actually provided by the underlying WSPBuilder.exe application. And this means that we can use this executable during our build to automatically generate SharePoint .wsp solution files as part of our continuous integration process.

Automating the Creation of your WSP Solution
Having used WSPBuilder, your Visual Studio solution will have a specific structure that conforms to how SharePoint is laid out in the 12 Root, GAC, and 80 folders. And using the VS add-ins, your developers will have been able to easily create and deploy WSPs on their local development environments. Here is a view of the layout that I would use for a typical SharePoint development project:

ProjectStructure

When developing I can simply right click the My.SharePoint.FrontEnd project to get WSPBuilder options for deploying, debugging, or re-deploying assemblies to the GAC. However, in order to maintain order and control over the content of WSP files and have them versioned properly, it is best to generate the WSP files that will get pushed through into production automatically as part of your build process.

Creating a WSP package from your build process simply means that you will invoke WSPBuilder.exe as part of your CI build. Therefore the first thing that you must do is to ensure that you have access to WSPBuilder on your build server.

Once you have WSPBuilder on the build server it is a matter of adding some control flow to your build tasks so that it gets called after the Visual Studio project has been built. The control flow for your continuous build process should look similar to this:

SharePointBuildProcess

In my case I am using MS Build to control the flow of the build process, but you could just as easily use NAnt or whatever build scripts you typically use to build your solutions.

Running through the MS Build script logic we see that, at the head of the MSBuild file, I would usually add some properties to provide runtime pointers to where MS Build can find various locations.

<PropertyGroup>
  <Root>$(MSBuildStartupDirectory)</Root>
  <WSPBuilderPath>c:\Program Files (x86)\WSPTools\WSPBuilderExtensions</WSPBuilderPath>
  <WSPSolutionPath>$(Root)\$(build_branch_name)\MySharePointProject\My.SharePoint.FrontEnd</WSPSolutionPath>
</PropertyGroup> 

In the above snippet I have a property which points to the path of the WSPBuilder executable and another one (WSPSolutionPath) that points to the root path of the Visual Studio project which has my features in it (typically I have 2 projects in my solution: one which has the code assembly and one which only has the feature definitions).

Notice that the WSPSolutionPath is composed by a variable which is passed into my build - $(build_branch_name) - from my continuous integration process. You can hard code this part of the path if you do not need the branch path to be dynamic.

Next up, I use an MSBuild task to compile the solution. This builds the assembly which contains the code for WebParts, Workflows, FeatureReceivers, etc., and which will ultimately reside in the GAC.

<ItemGroup>
  <ProjectToBuild Include="$(Root)\$(build_branch_name)\MySharePointProject\My.SharePoint.sln" />
</ItemGroup>

<MSBuild
  Projects="@(ProjectToBuild)"
  Targets="Build">

  <Output
    TaskParameter="TargetOutputs"
    ItemName="AssembliesBuiltByChildProjects" />
</MSBuild> 

Once the solution has been built, I am able to run WSPBuilder to create a WSP package from the compiled artifacts.

<Exec Command='"$(WSPBuilderPath)\WSPBuilder.exe"
-SolutionPath "$(WSPSolutionPath)"
-12Path "$(WSPSolutionPath)\12"
-GACPath "$(WSPSolutionPath)\GAC"'
ContinueOnError='false' 
/> 


Notice that the GACPath argument expects any assemblies to be in a GAC folder of the feature project, so I used a post-build script in my Core assembly project to copy the assembly across to this folder in my feature project:

xcopy "$(OutputDir)*.dll" "$(SolutionDir)My.SharePoint.FrontEnd\GAC\" /Y /R 

The final thing to do is to publish the WSP package to a release server so that it can be deployed into acceptance and production environments…

<MakeDir Directories="$(deploy_path)\$(build_number)"
         Condition="!Exists('$(deploy_path)\$(build_number)')" />

<ItemGroup>
  <WSPSourceFiles Include="$(MSBuildProjectDirectory)\My.SharePoint.FrontEnd.wsp;$(WSPSolutionPath)\Deploy\*" />
</ItemGroup>

<Copy
  SourceFiles="@(WSPSourceFiles)"
  DestinationFolder="$(deploy_path)\$(build_number)" /> 

As you can see, this is simply a matter of creating a folder and copying the resultant WSP file across.  Importantly, you will note that when I create the release folder I include the build number as part of the path.  This practice makes it very easy to keep your versioning aligned between source code, release notes, and deployable artifacts.

Updates
Jeremy Thake has uploaded a great screencast which shows how to do this (and more) using TFS which you can view here.

How to fix a “clip duration” error when using the Expression Encoder SDK

Tonight I was playing around with the latest Expression Encoder SDK in an attempt to build a small video editing tool.  What I wanted to do was to take a video and then extract only the highlights from it into a smaller “highlights video”.  This is really useful for my side interest as a hockey coach because it would enable me to extract only certain highlights from a hockey match and then send the reduced video around to the players to watch.
So I wrote the following code to get things started:

// Requires a reference to the Expression Encoder SDK assemblies (types from the Microsoft.Expression.Encoder namespace)
MediaItem mediaItem = new MediaItem(@"D:\Videos\CANON\20090202\20090202000204.m2ts");

mediaItem.Sources[0].Clips.Add(new Clip(TimeSpan.FromSeconds(49), TimeSpan.FromSeconds(56)));
mediaItem.Sources[0].Clips.Add(new Clip(TimeSpan.FromSeconds(159), TimeSpan.FromSeconds(167)));
mediaItem.Sources[0].Clips.Add(new Clip(TimeSpan.FromSeconds(290), TimeSpan.FromSeconds(297)));

Job job = new Job();
job.MediaItems.Add(mediaItem);

job.OutputDirectory = @"C:\output";
job.Encode();


When I ran it the following exception was thrown:
EncodeErrorException was unhandled
A clip's duration is smaller then the min clip duration 00:00:00.0170000.
Now if you look at the timings that I entered for each of the Clips in the media’s ClipCollection you will clearly see that they are indeed each greater than the duration shown in the error message, so what gives!?

The fix turned out to be quite simple – and obvious really – but I thought that I’d post the solution because there weren’t any clean hits that came up when I did my own search on the error message.  To fix it, you simply call Clear() on the ClipCollection before you start adding your own Clips to it, like so:

MediaItem mediaItem = new MediaItem(@"D:\Videos\CANON\20090202\20090202000204.m2ts");
mediaItem.Sources[0].Clips.Clear();

...

Job job = new Job();
job.MediaItems.Add(mediaItem);

job.OutputDirectory = @"C:\output";
job.Encode();

Tuesday, January 12, 2010

WaveBot integration with Blog Posts

I'm using this page to try out the blogbot@appspot.com Wave bot.  Using that bot, you can embed your Google Waves into public web pages.  If you have a Wave account, you should see an embedded Wave below.

My initial impression of embedded Waves is that, whilst they are an interesting idea, the currently limited feature-set of Wave means that you might as well simply direct people to your Wave via its public URL.  The reasons for this thinking are that:

  1. Your Waves have to be public to ensure that visitors can see the content - and without the ability to assign roles to participants of a Wave, this means that everybody will have contributor access.
  2. There is no ability to have draft and published versions of content.  So if all of the content that you are sharing is "published", what's the benefit of pushing it through to your blog?  You might as well use the in-built content publishing features of your blog software. 
Positive aspects to using Google Wave as a "multimedia wikichat" content source are:
  1. Google Wave has a kick-ass strength in content versioning, providing users with the ability to replay the evolution of content.
  2. It allows you to embed rich content into your content.  This includes the ability to embed Wave Gadgets so that you could include things such as charts or surveys as well as the ability to use Wave Bots that you could include to automate repetitive tasks or to augment content by providing additional services - such as Dictionary and Thesaurus services for example. 

Wednesday, January 6, 2010

Using Drag and Drop in a Silverlight 4 Application

Today I was working through the excellent Image Browser Lab to teach myself some of the newer features in Silverlight 4.  If you haven’t tried that lab yet, you should; it shows you how to build a really cool image browsing application which supports moving, resizing, and rotating images on a canvas.  It also shows you how to enable the scenario where a user can drag pictures from their hard drive into the Silverlight application.  Very cool stuff!  Here’s a picture of the page in action:

image

As you will notice, the tutorial shows you how to add the yellow drag handles that you see on the middle photo so that you can resize and move the images around.  And, as also mentioned, you can drag images in from your own hard drive.  If you have 5 minutes to spare, here is a link to a video by Jesse Liberty which shows how simple it is to get Drag and Drop working.
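
If you’d rather skim code than watch the video, the basic pattern boils down to something like the following sketch – the ListBox name and the image-loading logic here are my own illustration rather than code from the lab:

// Code-behind sketch: assumes a ListBox declared in XAML as
//   <ListBox x:Name="ImageList" AllowDrop="True" Drop="ImageList_Drop" />
// and using directives for System.IO, System.Windows, System.Windows.Controls
// and System.Windows.Media.Imaging.
private void ImageList_Drop(object sender, DragEventArgs e)
{
    // In Silverlight 4 a file drop arrives as an array of FileInfo objects
    var files = e.Data.GetData(DataFormats.FileDrop) as FileInfo[];
    if (files == null)
    {
        return;
    }

    foreach (var file in files)
    {
        using (var stream = file.OpenRead())
        {
            var bitmap = new BitmapImage();
            bitmap.SetSource(stream);

            // Show each dropped picture in the list
            ImageList.Items.Add(new Image { Source = bitmap });
        }
    }
}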

What happens if you cannot get Drag and Drop to work in your Silverlight application?

OK, call me a simpleton, but this got me today and so I’m going to put it out there to hopefully save some other poor sucker the hour it took me to work out what was going on.  Emboldened after having worked through this exercise, I decided to implement some Drag and Drop goodness in one of my own applications.  I added the few lines of mark up and code that are required to get things working, ran it up with F5, and nothing!  Zip!  Drag and Drop was not working for me.

The long and the short of it is that I had started the instance of Visual Studio in elevated mode.  What this meant was that Windows was protecting me by not allowing me to drag files from my Windows Explorer instance (which was not elevated) into my IE process (which was).

So after about an hour of scratching my head, it turned out to be a simple fix.  And now I have drag and drop functionality in my own application!

Tuesday, January 5, 2010

New Year's Resolution #1 - Buy a Kindle

For the last couple of years I've allowed my interest in technology to take a back seat while I spent more time with Uni studies and gaining knowledge and experience as a hockey coach.  This year however I've promised that I will immerse myself in the latest rounds of technology - hence this blog. 

A big part of my life with technology has always been the software that I create - and this only gets better with the current wave - but I've not really been much of a gadget guy.  This is all changing though and now that we have Kindle in Australia I've decided to invest in one.

This was a big decision for me and I thought long and hard about whether to get a Kindle or to purchase a Netbook.  The attraction of having a device of this type was mostly so that I can extend my reading - especially of technical articles - to the 2 hour daily bus commute.  And the decision point between a Kindle and a Netbook really came down to the fact that you can do a lot more on a Netbook than you can on a Kindle.  For example, if I was reading a PDF tutorial on how to use PowerShell, then using a Netbook I could easily fire up PowerShell and try some of that stuff right there and then.

The thing that swayed me away from buying a Netbook in the end was mostly that the Netbook market is still such a changing landscape and, to get the Netbook that I wanted, I would probably have ended up spending around $700 anyway.

So the final decision was: spend $300 on a Kindle and solve my immediate reading need problem, then tackle a small form-factor PC later in the year.

Oh yeah, and with the $AUD doing so well against the $USD... how could I afford not to buy one! :)

Friday, January 1, 2010

Walkthrough using Autofac as your IoC Container in an ASP.NET MVC application

Note: The Autofac assemblies that I used to write this sample are part of the latest Beta version – which is a bit of a moving target.  I know that the most recent build of that major version has changed since I wrote this stuff, so you might want to keep an eye on Nick’s blog to learn about any changes that are occurring in the codebase.  One significant change you will need to know about if you are planning to work through this article is the change that he has made to how ASP.NET MVC Controllers are registered with the Container.  You can read about that change here.

If you are building properly structured software where you have correctly implemented separation of concerns (SoC) then wiring up dependencies for classes starts to become quite an exercise. Here's an example from a great series of articles on the topic of IoC and DI (which I recommend you read to learn more about the topic) showing how verbose and complex things can become:

IFileDownloader downloader = new HttpFileDownloader(); 
ITitleScraper scraper = new StringParsingTitleScraper();
HtmlTitleRetriever retriever = new HtmlTitleRetriever(downloader, scraper);


Commonly when you use simple dependency injection in this manner you find that some services are reliant upon many other components and their construction becomes very messy. You can imagine what this starts to look like when you have lots of type registration and dependency injection to do right across your application!
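
To make that concrete, here is a rough sketch of how a container takes over that wiring, using the types from the snippet above (which come from that article series).  The registration syntax shown matches the RegisterType/Resolve style used later in this post, but the beta API is a moving target, so treat it as indicative:

var builder = new ContainerBuilder();

// Tell the container which concrete types satisfy which services...
builder.RegisterType<HttpFileDownloader>().As<IFileDownloader>();
builder.RegisterType<StringParsingTitleScraper>().As<ITitleScraper>();
builder.RegisterType<HtmlTitleRetriever>();

using (var container = builder.Build())
{
    // ...and let the container construct the whole object graph for us
    var retriever = container.Resolve<HtmlTitleRetriever>();
}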



In this article I am going to get you up and running using Autofac as the IoC container that will handle all of the lifetime management of your dependencies and do the dependency injection for you. To get the ball rolling, go ahead and create a new solution called SimpleAutofac:



clip_image001



I like to structure my source code so that the main branch of the application lives within a folder named trunk and that my external dependencies are referenced from a folder which lives above that. Create a folder for your Autofac dependencies, grab the latest build of Autofac and add its binaries to the folder you just created.



clip_image002



You should get the following list of assemblies added to your Autofac dependencies folder



clip_image003



Next add a reference from your SimpleAutofac project in Visual Studio to the Autofac.dll, Autofac.Configuration.dll and Autofac.Integration.Web.dll assemblies. Having done that, we can wire up Autofac into our web application and start registering types. The first thing is to take advantage of Autofac.Integration.Web.dll, which contains the AutofacControllerFactory class that can be set as the ControllerBuilder’s current ControllerFactory.



private static IContainerProvider _containerProvider;

protected void Application_Start()
{
    RegisterRoutes(RouteTable.Routes);

    var containerBuilder = new ContainerBuilder();
    containerBuilder.RegisterModule(new AutofacControllerModule(Assembly.GetExecutingAssembly()));
    _containerProvider = new ContainerProvider(containerBuilder.Build());

    ControllerBuilder.Current.SetControllerFactory(new AutofacControllerFactory(_containerProvider));
}


We need to make a minor alteration to the web.config file to include an HTTP module that Autofac will use to dispose of request-scoped resources at the end of each request:



<httpModules>
  <add name="ContainerDisposal" type="Autofac.Integration.Web.ContainerDisposalModule, Autofac.Integration.Web"/>
</httpModules>


The ContainerDisposal module requires us to implement the Autofac IContainerProviderAccessor interface on our Application class, which, in-turn, mandates that we expose the ContainerProvider as a property on the class. Go ahead and add the following lines of code to your Application class:



public IContainerProvider ContainerProvider
{
    get { return _containerProvider; }
}

protected void Application_EndRequest(object sender, EventArgs e)
{
    ContainerProvider.EndRequestLifetime();
}


You can read more about the Autofac MVC integration on the project's Wiki page. That's all there is to it: press F5 and your application should now run.



clip_image004



Note that the Controllers are now being served by our Autofac IoC container, so we can demonstrate the advantages that we get from this by adding some dependencies to our Controller classes and seeing that they get injected at runtime for us.



Create a new interface called IMessageProvider in the Models folder of the web project.  We will use this as a dependency that we’ll then pass to controller classes:



namespace SimpleAutofac.Models
{
    public interface IMessageProvider
    {
        string EchoMessage(string message);
    }
}


And now create a concrete implementation of that interface that we will use in our application. For the sake of giving meaning to the demo, let's name our class SqlMessageProvider and imagine that this class is responsible for retrieving messages from a SQL Server database. Our class will look like this:



namespace SimpleAutofac.Models
{
    public class SqlMessageProvider : IMessageProvider
    {
        public string EchoMessage(string message)
        {
            return string.Format("{0} returned from message provider.", message);
        }
    }
}


Next we will wire up our implementation with our container so that it knows what class to return within our web application whenever an IMessageProvider is required. Go back to the Application class in Global.asax.cs and add the following registration instruction to our container builder:



containerBuilder.RegisterType<SqlMessageProvider>().As<IMessageProvider>();


Finally, go to the HomeController class and create a constructor which takes an IMessageProvider instance and then change the code in the Index action handler so that it gets its message from the IMessageProvider service as opposed to being a raw string:



namespace SimpleAutofac.Controllers
{
    public class HomeController : Controller
    {
        private readonly IMessageProvider messageProvider;

        public HomeController(IMessageProvider messageProvider)
        {
            this.messageProvider = messageProvider;
        }

        public ActionResult Index()
        {
            ViewData["Message"] = this.messageProvider.EchoMessage("Welcome to ASP.NET MVC!");
            return View();
        }

        public ActionResult About()
        {
            return View();
        }
    }
}


Now press F5 to run the application and you should see that our Autofac container did indeed handle the type registration for us and it successfully injected the correct IMessageProvider instance into our HomeController class.



clip_image005



The last thing that I want to show is how to pass a connection string to our SqlMessageProvider instance whenever it is instantiated.



Go back to our SqlMessageProvider class and add a constructor that takes a connection string as an argument. Change the EchoMessage method so that it shows us which connection our message came from:



private string connectionString = "";

public SqlMessageProvider(string connectionString)
{
    this.connectionString = connectionString;
}

public string EchoMessage(string message)
{
    return string.Format("{0} returned from {1} provider.",
        message,
        this.connectionString);
}



Now go back to the Application class and change the IMessageProvider registration so that the container supplies an instance that has been pre-configured with our connection string information:



var connectionString = "DarrensSqlServer";

var containerBuilder = new ContainerBuilder();
containerBuilder.Register(c => new SqlMessageProvider(connectionString)).As<IMessageProvider>();
containerBuilder.RegisterModule(new AutofacControllerModule(Assembly.GetExecutingAssembly()));

_containerProvider = new ContainerProvider(containerBuilder.Build());

ControllerBuilder.Current.SetControllerFactory(new AutofacControllerFactory(_containerProvider));


clip_image006