Thursday, November 13, 2008

Frequently Bought Together



Just remember kids: no-one likes a smart arse.

Wednesday, November 12, 2008

RedGate SqlCompare wins again

...in my highly subjective 'which tool are we going to use for database schema synchronisation?' challenge that is.

RedGate's prices seem to have gone up again ($595 USD for the comparison bundle[1], but if like me you want to use the command line interface you're looking at 2x $595 USD, or 3x if you want the API too). And support and maintenance are on top of that.


So I had a good look around and considered the options:

Visual Studio for Database Professionals - easy option since it's now included in our Team Suite SKU. However even the 2008 version is still pretty crude, with very little option to change the generated delta SQL, and as a result scripting out unnecessary changes (like rebuilding your tables via a temp table just to get the column order 'right') and doubtless doing things with role memberships that I didn't want. So that didn't last long.

SqlDelta I like a lot. It does schema and data compare, and it's got a command line interface all for $330 USD - cheaper than RedGate's most basic non-pro compare bundle. And it's from down under. But it choked on my instead-of triggers on a view (either scripted them as CREATE when they needed to be ALTER or vice-versa). So I had to faff about to get a sync to work. That's an immediate fail.


Then there's ApexSQL Diff. But I didn't really get round to using that. Which is where the 'highly subjective' bit of this review comes in, not to mention the 'use the first product that works, stop playing around and get some work done' voice-of-conscience.

So RedGate it is.

[1] There's an option, not available on their website, to get a Sql Compare Pro bundle, which if you need the pro editions + the API basically means you get them for 2/3 of the price.

Monday, November 10, 2008

Remember to enable MARS when using Snapshot Isolation from SSAS

We started getting this error when processing our cube:

OLE DB error: OLE DB or ODBC error: Cannot create new connection because in manual or distributed transaction mode
It went away when:
  • We changed to using ReadCommitted isolation, rather than snapshot
  • We processed the cube using Maximum Parallel Tasks = 1

Reading the knowledge base article PRB: SQLOLEDB Allows Only One Connection in Scope of Transaction led me to think that SSAS was trying to open multiple connections within a transaction, which isn't allowed.

Which got me thinking about MARS. Not quite sure why it wasn't on to start with, but I enabled it, and then everything was fine again.
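
If you want to see the kind of restriction SSAS is bumping into, it's easy enough to reproduce with plain old SqlClient. This is just a sketch - the server, database and queries are made up - but with MultipleActiveResultSets stripped out of the connection string the second ExecuteReader below throws, and with it left in both readers coexist happily:

# A sketch only: mydbserver / MyDw are made-up names
$connectionString = 'Data Source=mydbserver;Initial Catalog=MyDw;Integrated Security=SSPI;MultipleActiveResultSets=True'
$conn = New-Object System.Data.SqlClient.SqlConnection $connectionString
$conn.Open()

$cmd1 = $conn.CreateCommand()
$cmd1.CommandText = 'select name from sys.objects'
$reader1 = $cmd1.ExecuteReader()           # first result set left open...

$cmd2 = $conn.CreateCommand()
$cmd2.CommandText = 'select name from sys.columns'
$reader2 = $cmd2.ExecuteReader()           # ...a second active command on the same connection needs MARS

$reader1.Close(); $reader2.Close(); $conn.Close()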

Turns out this is actually an RTFM, if you pay attention when reading How to enable the snapshot transaction isolation level in SQL Server 2005 Analysis Services. Of course it's only in Snapshot mode that SSAS attempts to ensure database consistency for the duration of the entire Process operation, which is why it's not a problem using ReadCommitted isolation.

Friday, October 31, 2008

If McCain wins next week...

...I'll leave the US

and I don't even live there!

Thursday, October 30, 2008

Viewing the MDX cellset with WPF

When executing an MDX query there are various bits of useful metadata that can be returned in the cellset over and above the members and dimensions you've explicitly specified in your query. This can include things like the formatted value (as specified by the cube definition) as well as other attribute values for the dimension members you're querying against.

This kind of stuff can be invaluable if you're writing your own front-end app to access OLAP data, not least because it saves a whole heap of faffing about with WITH clauses (query scoped calculated measures). Something like:

with member [measures].TheYear as
  [date].[date].[date].currentmember.properties("Year")
select {
  [measures].TheYear, [measures].[measure]
} on Columns, {
  [date].[date].[date].Members
} on Rows
from Cube
...can just become:

select {
  [measures].[measure]
} on Columns, {
  [date].[date].[date].Members
} dimension properties member_name, member_value,
  [date].[date].[date].Year
on Rows
from Cube
Trouble is you'd really struggle to work this out since all that metadata is helpfully hidden from view when you execute MDX in BIDS (and MDX studio), which makes it all a bit hit-and-miss. So I wrote a little WPF app just to visualise the actual cellset returned. Pretty basic stuff - load the results into a dataset and bind it to a grid. I could fiddle with my DIMENSION PROPERTIES clause with immediate gratification.
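
The guts of the app are only a few lines. Here's roughly the equivalent in PowerShell (a sketch: the server, catalog and query are made up, and it assumes the ADOMD.NET client is installed):

# Sketch: load an MDX result into a DataTable via ADOMD.NET (names below are made up)
[Reflection.Assembly]::LoadWithPartialName('Microsoft.AnalysisServices.AdomdClient') | Out-Null

$conn = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdConnection 'Data Source=localhost;Catalog=MyOlapDb'
$conn.Open()

$mdx = 'select { [measures].[measure] } on Columns, { [date].[date].[date].Members } dimension properties member_name, member_value on Rows from Cube'

$adapter = New-Object Microsoft.AnalysisServices.AdomdClient.AdomdDataAdapter -ArgumentList $mdx, $conn
$table = New-Object System.Data.DataTable
[void]$adapter.Fill($table)
$conn.Close()

# The column names are the fully-qualified unique names the binding has to cope with,
# eg '[Measures].[measure]'
$table.Columns | % { $_.ColumnName }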

But it took me hours to get the binding working. One of the problems is that the MDX columns have names like '[Measures].[MyThing]' and you can't just set that as the property name of your binding and expect the binding infrastructure to cope:

{Binding Path=[Measures].[SomeMeasure]}
The binding infrastructure sees the dot and tries to walk the path, with predictable results:

System.ArgumentException: Measures is neither a DataColumn nor a DataRelation for table Table

[NB: If you had a column simply named SomeMeasure this would work, due to the magic of ICustomTypeDescriptor, but that's another story]

So instead you have to use the indexer syntax on the DataRowView:

{Binding Path=['[Measures].[SomeMeasure]']}
Or (if that made you wince)

{Binding Path=Row['[Measures].[SomeMeasure]']}
But those don't work either:

System.ArgumentException: Column '"[Measures].[SomeMeasure]"' does not belong to table Table

Even whilst the same binding path works 'just fine thanks' in the debugger. It took me a long, long time to realise there's an extra set of quotes in that error message. The WPF binding syntax doesn't require quotes for string indexers:

{Binding Path=Row[[Measures].[SomeMeasure]]}
It looks so wrong but it works.

SharePoint: The final CM frontier

For a long time I thought Biztalk was the elephant in the room when it came to Configuration Management - specifically version control. I'm not talking here about Source Control, I'm talking from a deployment perspective - the 'deploy a given, atomic, integral version into production' problem.

For a traditional .net app (Winforms, ASP.Net, WPF, whatever) it's pretty much sorted. We've had source control integration in the IDE since the dawn of time, CCNet for years, and it's even easier to get CI going now that TFS 2008 supports it out of the box. That's not to say everyone actually does so, but it's there if they want it, right?

Databases are a bit harder, but there are lots of tools around that you can incorporate into your CI cycle, and now that Microsoft's in the space with VSTS for Database Pros (now included with VSTS Dev Edition) the barrier to entry has been dropped again (though it's still a pretty poor story in the SSAS space).

However for Biztalk things are not so rosy. You get source control at least, so you can practice a unified versioning / labelling scheme, but building the project doesn't give you all the artifacts you need to actually deploy into an environment. For that you've got to IDE-deploy to a dev BizTalk instance, configure it and then at least export out the binding file (if not a complete MSI). There are NAnt and MSBuild tasks to do the building, but I've not seen anyone wrap up the whole end-to-end process, so Biztalk deployments languish in the 'manual effort with concentration' department. A direct result of this is that integrating your code with Biztalk orchestrations (via WS / WCF) is increasingly seen as a more manageable approach than embedding your code within them, thus alleviating / sidestepping many of the issues.

I thought Biztalk was the frontier of version management - that is until I started working with SharePoint.

I'd previously accepted that working with SharePoint using the Web UI was working uncontrolled, but I'd always imagined that real SharePoint developers used SharePoint Designer, checked things into source control and had established patterns for migrating content between development and production server instances. How wrong I was. Making my first tentative forays into 'how to do this properly' I was struck by the complete absence of any guidance. It appears this is most definitely not a solved problem, which Jeremy pretty much confirmed last night in his great RDN talk on the subject. There's tools out there to manage the problem, but they're pretty new on the block. The book, quite literally, hasn't been written yet.


This is clearly a pretty major failing on Microsoft's part. I appreciate that the main thrust of version-management with SharePoint is version-management of content rather than version-management of configuration, but clearly if they want developers to embrace SharePoint as a platform they're going to have to do a bit better. Time for a 'developers, developers, developers' rant perhaps?


It seems that the more productivity-orientated the development environment, the less effort has been put into establishing a viable CM story:



(You could substitute 'maintainability' for 'Ease of CM / CI' if you like: one tends to drive the other)


Clearly it's easier to diff / merge lines of code than SharePoint XML manifests (or Workflow XAMLs), but it's a complete abdication to leave these higher-level (ie non-codey) development environments quite so dramatically out in the cold.

Surely there's got to be a better answer than using DiffDog for everything?

Wednesday, September 10, 2008

Getting just the date from Sql Server's datetime

Whilst many still advocate using Convert() to drop the time-bit from a datetime[1], going off to a string like that is nothing like as efficient as the numerical alternative:

CAST( FLOOR( CAST( @dateTime AS FLOAT ) ) AS DATETIME )
Sql stores the days since 1900 in the first 4 bytes, and the time in the last 4 bytes. Throwing away the bit after the decimal place (which is what the above does) just strips those last 4 bytes right off.

But... why AS FLOAT? Why not convert straight to an INT?


CAST( CAST( getdate() AS INT ) as Datetime)
Turns out this gives us tomorrow's date! I guess this is because the starting date is actually Jan 1st 1900, and not Jan 0, which could be a classic off-by-one error if you weren't awake.


[Update]: Because casting direct to an int rounds up in the afternoon! Doh! Doh! Doh! This is exactly why I'm writing this down, because I knew that once and forgot. Anyway, going via a FLOAT is the go, OR the much more legible alternative that Mitch put in the comments: datediff(day, 0, @yourdate)
[/Update]

That being said, if all you want is an integer value that represents a date, just casting datetime to an int seems like a pretty good way to go, provided you never get carried away and cast it back again. Or I guess you could just compensate for the off-by-one and save yourself[2] a lot of pain later on...

If you're really interested, run this and see for yourself:


select
getdate()
,FLOOR( CAST( getdate() AS FLOAT )) -- use floating point
,datediff(dd, '19000101', getdate()) -- right answer, by definition
,Cast(getdate() as int)
,CAST( CAST( getdate() AS INT ) as Datetime) -- tomorrow!







[1] Yes, Sql 2008 has a date-only type. But you and I will still be stripping times from datetimes for many, many years to come.
[2] By which I really mean myself, of course

Thursday, August 07, 2008

Continuous Integration in TFS 2008: Part 2

Ok so in Part I I missed two important features due to a schoolboy cut-and-paste error:
  • The association of a build with the changesets that triggered it - fantastic for trouble shooting
  • Automatically raising a 'bug' work item for 'he who broke the build' to fix it. Sweet.
Maybe I'll go into those in more detail another time, maybe not. They are cool. But here's the not-cool stuff:

Stupid default working directory / issues with path length
Straight out of the box many of my builds failed with excessive path length errors. Unfortunately the build agent starts with its default working directory of C:\Documents and Settings\NetworkService\Local Settings\Temp\ so I've got 61 extra chars on my path before I've even started, and we were already skating pretty close to the 255 chars limit (think BizTalk, over-zealous namespacing, auto-generated code etc...). Easily fixed, but seemed like a silly place to start.

Ok, it's going to be a bit less under Vista / Server 2008: still too long. Is C:\Builds so wrong?

Binaries deployed to shared Staging folder
More problematically, the folder structure created on the staging share is a complete mess, totally useless to use as the source for xcopy deployment to testing environments and the like. I'll show you what I mean:

Here's my solution structure: 4 deployable applications and one shared library:



And here's the staged output from the TFS build:





Pretty disappointing. All the binaries from all of the different applications have been dumped in the same folder. At least the websites have also been spat out in 'xcopy ready' form, but god help me if I wanted to Xcopy the console app one place and the WinForms app somewhere else.

What I'm looking for is more or less what I get out of SNAK right now:



...with any deployable asset (exe, website) neatly packaged to be copied off somewhere.

Same-name content files overwritten
Ok, I guess I could just accept the above and copy everything, only using the EXE that I want, but that's icky, and it does rather suppose there aren't any naming conflicts between the deployable artefacts.

For example: I added some files marked as Content in my HelloWorldConsole app, but they got completely ignored by the deployment process. I had to also mark them as 'Copy to Output Directory' before TFS build stuck them anywhere (which I'm not convinced is correct behaviour, but there you have it), and then it stuck them in the 'right' relative location to the build root folder:



...becomes...



But there are two 'Subfolder1's above, and I only got one out at the end. Predictably, one of them got overwritten.

When would this be a problem? When could different projects possibly have a content file with the same name?! I can think of some examples:

  • We always use a file 'log4net.config' to host our logging configuration, so we can change it on the fly without recycling app pools and the like. Only one project would have got the right configuration.
  • Bundled licence files (Licence.lic) would get mixed up
I'm sure you can think of some more. And yes, using Embedded Resources works fine, but they're not always an answer (eg: log4net.config).

There is a fix. There are some changes you can make to your *proj files to make them preserve the output path hierarchy when deployed to the build drop folder. But it's a per-project fix, and that's - frankly - a bit lame. I know you don't add a new project to your solution on a daily basis, but it's just one more thing that needs to be kept on top of, or things start falling apart. And that's just not how all this is supposed to work.

(Alternatively you could customize everyone's project templates. I guess you could fix it on all their PCs, or you could just put a fixed template on a share somewhere and tell people to use it. Since it's one of the default project types you're amending I guess you probably have to remove the original from their machine too. And hope they don't have to re-install VS anytime...)

Conclusions
Getting VSTS to perform Continuous Integration on your project is now really easy. To be fair, this on its own was pretty easy with CCNet too, but it's even easier now, and we don't have to fight over the CCNet.config file.

But the staged output from the build strikes me as of limited use. It's possible to go and hack about with the generated MSBuild files that actually perform the build and stage, and bend them to my will, but that's just what I don't want to have to do. I want it to 'just work' and I don't think we're there yet.

New MCCS Certification announced

Microsoft will tomorrow announce their new MCCS - Microsoft Certified Certification Specialist certification. This exam will henceforth be a pre-requisite for embarking on any of the various MCAD / MCSD / MCPD upgrade paths.

The course content is not nailed down yet, but most of the detail is on Gerry's blog. In the comments. Obviously.

Friday, August 01, 2008

Miss VMWorld 2008

Every time I get this email...



... I get totally the wrong idea.

Friday, July 25, 2008

MyClass in VB.Net

It's always a bit of a shock when you find something you've missed in a language you've used for years. I'm mostly a C# person, but I thought I knew pretty much all of VB.Net's quirks by now. But I totally missed 'MyClass'.

'MyClass' allows a class to access methods and properties as declared on itself, irrespective of them being overridden further down the inheritance hierarchy. It's like using 'Me' as if all the 'Overridable's were removed.

Since there's no C# equivalent this was a big surprise to me, but it shouldn't have been - it's only doing the same as 'MyBase' does (against a type's ancestor): executing properties / methods by specific type address, not via virtual dispatch. As the IL for this sample shows:


Public Class Class1
    Public Overridable ReadOnly Property Name()
        Get
            Return "Class1"
        End Get
    End Property
End Class

Public Class Class2
    Inherits Class1

    Public Overrides ReadOnly Property Name()
        Get
            Return "Class2"
        End Get
    End Property

    Public Function GetNames() As String
        Dim lines(3) As String
        lines(0) = MyBase.Name
        lines(1) = MyClass.Name
        lines(2) = Me.Name
        Return String.Join(",", lines)
    End Function
End Class

Public Class Class3
    Inherits Class2

    Public Overrides ReadOnly Property Name()
        Get
            Return "Class3"
        End Get
    End Property
End Class
Calling new Class3().GetNames() produces the following (edited for brevity)


// mybase - explicit dispatch to class1
L_000b: call instance object ConsoleApplication1.Class1::get_Name()

// myclass - explicit dispatch to class2
L_001a: call instance object ConsoleApplication1.Class2::get_Name()

// me - virtual dispatch, will resolve to class3's implementation
L_0029: callvirt instance object ConsoleApplication1.Class2::get_Name()
So the output eventually is 'Class1, Class2, Class3'. Nifty. That being said, I can't honestly say I've ever really needed this, so it might go back into the 'curios' collection. Useful in a pinch maybe, but surely it's a smell? As if designing-for-subclassing wasn't hard enough as it is...


PS: Interestingly the Reflector disassembler doesn't understand this either, so it wasn't just me that missed it: Reflector thinks the VB was:

Public Function GetNames() As String
    Dim lines As String() = New String(4 - 1) {}
    lines(0) = Conversions.ToString(MyBase.Name)
    lines(1) = Conversions.ToString(Me.Name) ' got this wrong
    lines(2) = Conversions.ToString(Me.Name)
    Return String.Join(",", lines)
End Function

Thursday, July 17, 2008

Using Extension Methods in .Net 2.0 from VB.Net

So despite what ScottGu originally said, Extension Methods don't 'just work' for VS 2008 projects targeting .Net 2.0.

There's no end of blog posts describing the workaround - add your own ExtensionAttribute class to get it working - but all the samples are in C# (which is interesting in and of itself). So here's the VB.Net version:

Namespace System.Runtime.CompilerServices
    <AttributeUsage(AttributeTargets.Method Or AttributeTargets.Assembly Or AttributeTargets.Class)> _
    Public Class ExtensionAttribute
        Inherits Attribute
    End Class
End Namespace

...and why am I bothering to blog about this rather trivial conversion? Because of the key gotcha: make sure you put this in a project with no root namespace set:



That had me banging my head on the table for too long.

As did the next one: extension methods only show up under the 'All' tab in IntelliSense - obviously too advanced for mere Morts. I gotta remember to turn that off: using VB is bad enough without the IDE patronising you as well.

Interestingly, if you get the AttributeUsage declaration wrong on the attribute, you get this error:



"The custom-designed version of System.Runtime.CompilerServices.ExtensionAttribute ... is not valid"

Fascinating. So this hackery works by design, it's just not really supported as such.

More reading: MSDN documentation on Extension Methods in VB

Tuesday, July 15, 2008

Continuous Integration in TFS 2008: Part 1

Many people now accept the benefits of a regular / continuous integration cycle (even if they don't actually practice it themselves). Picking up the pieces after someone's broken the checked-in source code, especially if it's not picked up for a few days, can be a real time waster.

Like many agile practices, however, the cost / benefit is hard to quantitatively analyse. It's therefore far easier to justify if it's really easy to set up: as the costs tend to zero, the benefits become essentially 'free'. And you could argue that tools like CruiseControl.Net have made it pretty easy.

Personally, having spent significant sections of the last 3 years getting CCNet / NAnt build cycles going on various projects, I'd beg to differ. Sure, it's really easy to set up CCNet / NAnt (or CCNet / MSBuild) to build your solution, but that's only the first step in the process. Typically you also want to do things like:
  • Import the latest built version of external dependencies (ie components maintained outside of the solution being built)
  • Execute unit tests
  • Execute integration tests (so config files pointing at databases etc... have to be in the right place)
  • Package the build outputs nicely ('xcopy ready')
  • Deploy and install into test environments
CCNet and NAnt don't really give you this stuff 'out of the box'. You spend time gluing bits together, inventing your own build process and so on, and maintaining this stuff seems to get out of control very easily. Deploy and install is a particular minefield, because somewhere in there you have to start doing configuration file substitution (put your test server settings in the web.config etc...). And doing all this in XML just rubs salt into the wound.

You can handle most of this by hand on small projects, but the last app I worked on had five or six deployable parts to it (webservices, windows services, Winforms with ClickOnce manifests and the like), each of which had 20 or so settings to change for each of 7 different environments and the differing systems it integrated with. That's hundreds of settings to keep track of, without even getting into the Biztalk artefacts, and that was only one of several projects of similar complexity. Automation's a no-brainer at that point.

My solution to try and scale back the per-project cost of managing this was my own open source project SNAK. This attempted to commoditize a standard build / test / package / deploy process that you could implement on your side by pretty much setting a single variable at the top of a build script. And I think it works reasonably well: but it's clearly not the answer, not least because it took a fair amount of my (and others') time, of which I have very little.

So I was very, very hopeful when I started looking at the CI support in TFS 2008. Microsoft were really bashed over the lack of CI support in 2005, but this time round it looks like they've delivered:



You pretty much pick your solution file:



...your output directory...



...and your build frequency, and off you go:



Given how hard it was to deal with VSTS tests under CI in 2005 (because the test file was always in the wrong place), this screen will be a real pleasure to some:



And if you've tried to implement a build output retention policy in NAnt, you'll really appreciate this:



So up until now, absolutely fantastic. But then I had a few issues, which I'll deal with in Part 2 (so as not to take the gloss off the good bits above).


[I was due to present on this topic at the Perth .Net user group the other week, but a failing laptop saw to that (not the way I was expecting the demo to fail!). Since there's now no slots till Xmas, I've recycled some of the content into this post. The laptop was lobotomized and is recovering well...]

[Readify are doing a Dev Day in Perth on the 29th, with TFS as one of the tracks, so I'd be surprised if they didn't cover this there]

Monday, July 07, 2008

Recycling old posts?

Sorry about that. I re-tagged a few articles over the weekend, and I think Blogger has got confused and bounced them into my feed as if they were new posts. Unfortunately some of them were, so it's all a bit of a mess.

New posts were actually:
* Finally: PowerShell as build language
* Using PowerShell to export the Windows Feeds list


Normal service will resume shortly...

Friday, July 04, 2008

Using PowerShell to export the Windows Feeds list

Moved computers recently, and one of the things I realised I lost was my RSS feeds list. It was probably a blessing (I just tend to accumulate subscriptions otherwise), and maybe I should be using a reading service of some nature, but there you are.

Anyway, given I'm all Mesh'd up, I thought I'd copy my feeds list into my Mesh folder (like my bookmarks), so I'd have a backup and this wouldn't happen again. Only I couldn't find where the feeds list actually lives. Instead there's a whole API for dealing with it...

...which is surprisingly easy to use, and works like a treat in PowerShell (I'm always amazed at its ability to 'just work' with things like COM objects). So I just exported the list instead:

# Dump the contents of the Windows Feeds store as XML (redirect the output to a file to keep it)

$erroractionpreference = "stop"
$feedManager = new-object -com "Microsoft.FeedsManager"

@"
<feeds>
$(
    $feedManager.RootFolder.Feeds | % {
        $feed = $_
        $feedXml = $feed.Xml(-1, 0, 0, 0, 0)
        '<feed Name="{0}">{1}</feed>' -f $feed.Name, $feedXml
    }
)
</feeds>
"@

Easy as. The XML it spits out is overly large (since it includes all the article contents from the cache), but for the MB involved it barely seems worth refining it.

Update 2008-07-17: So like the very next day I realised I could have just sync'd the feed list into Outlook, and asked it to export it as OPML. But syncing into Outlook blew my tiny mailbox quota (these feeds are surprisingly large) so I ended up back doing this again anyway. Then it turned out that IE can export the feed list as OPML too (File \ Import and Export - you'd think I'd have noticed originally) - but I still like having a script because I can schedule it.

Note to self: It is definitely time to find a blog that can cope with XML a bit better

Finally: PowerShell as build language

I've never really got into MSBuild, which surprised some people given how much time in the last four years I've spent mucking about with CCNet / NAnt. It was partly that we did a bit of investigation when MSBuild came out, saw a couple of things we didn't really like about it, and decided to wait for v2 (ie Team Build in TFS 2008). Partly.

More fundamentally however the problem is that MSBuild is just too similar to NAnt, and my considered opinion after years of usage is that NAnt sucks, or to be more specific, XML is a terrible 'language' for writing executable code. Fowler puts it pretty well:
"After all until we tried it I thought XML would be a good syntax for build files"
http://www.martinfowler.com/articles/rake.html
Sure, it's fine for the IDE to write that stuff out (though even then you have to look at it and wince, right?), but for humans who want to customise their build process? Jesus wept. Variable scope: gone. Explicit parameters for subroutines (targets): gone. It's fine when it's simple, but once you start looping and branching and using temporary variables it's just a great big mess of angle brackets that even its mother pities. And debugging? Now there's a story...

There's a time and a place for the angle bracket tax, and this isn't it. Square peg, round hole.

So given how amenable to DSLs PowerShell has proven to be, I've been holding my breath for some kinda PowerShell rake-alike.

And here it is: Introducing PSake

(Also Microsoft themselves are thinking about it, and canvassing for opinions about whether it's a good idea or not.)
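
To give a flavour of why this matters, here's a hand-rolled sketch of the idea (this is not PSake's actual syntax, and the solution / tool names are invented): build steps become plain functions, with real variables, real parameters and real error handling.

# Hand-rolled sketch only - not PSake syntax; MySolution.sln, and msbuild / nunit-console
# being on the path, are all assumptions
$ErrorActionPreference = 'Stop'
$solution = 'MySolution.sln'
$config   = 'Release'

function Invoke-Compile {
    & msbuild $solution "/p:Configuration=$config"
    if ($LASTEXITCODE -ne 0) { throw 'Compile failed' }
}

function Invoke-Test {
    Invoke-Compile
    & nunit-console ".\Tests\bin\$config\Tests.dll"
    if ($LASTEXITCODE -ne 0) { throw 'Tests failed' }
}

Invoke-Test

Compare that with expressing the same dependency chain in angle brackets and you can see the appeal.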

Sadly (actually quite the opposite) I'm not actually having to deal with the build process on my current project, so I don't really have much excuse to play with it. But I dream of a future in which the TFS Team Build project wizard kicks out a PS1 file instead. It'd certainly make fixing some of its shortcomings a whole heap easier (but that's a subject for a future post).


Edit [14/7/08]: Most requested feature for MSBuild? Debugging. Obviously this'll be interpreted by the MSBuild team as a need for a debugger, but maybe they should have used a language that already had one.

Thursday, June 26, 2008

Beyond Compare 3 supports 3 way merge, is totally awesome

Beyond Compare 3 is out in beta. It supports 3 way merges!

I found this out literally minutes before I started what turned into a 2 day mergeathon between two large and divergent branches in TFS, with *lots* of merge conflicts to manually resolve, and I can honestly say I'd probably still be merging if I hadn't downloaded it. It's just fantastic.

I'll probably post some screenshots etc... soon, but if you're struggling merging with BC2 and/or the built-in diff/merge support then you really should check it out.

Monday, June 23, 2008

MSDN Downloads and the fly-out menus trauma

Raymond's just posted about the rationale behind the windows menu show delay, and goes on to point out various web properties that blatantly ignore the underlying usability requirement.

Sadly, finding examples is like shooting fish in a barrel. I remember Jakob Nielsen whinging about this last millennium, but as the technology moved forwards (Director, DHTML, then Flash) the ease with which anyone can design their own UI and distribute it widely over the internet has led to a flood of bad UI. Even as Vista attempts to move forwards, the new Silverlight version of MSDN Downloads re-re-implements the fly-out-menus concept, with almost unusable results.

Maybe this is a necessary pain we have to move through, but it kinda sucks that we can't explore novel and interesting UI concepts without making them totally unusable. I'm no UI designer, but at least I don't pretend to be, or work as one.

The templating within WPF is a great example of an enabling technology here, where the usability can be codified into a control by 'experts', while still delegating most of the 'funky look' to the end-designer. In this case, if WPF / Silverlight had shipped with a decent fly-out menus control, maybe the MSDN Downloads team wouldn't have got it so horribly wrong, and I wouldn't have had to uninstall Silverlight in frustration.

I guess there is hope then that this isn't just another enabling technology that enables people to make a real arse of things.


PS: Check out this bizarro comment on Raymond's blog:
"Let's not get into the "gynaecologist's interface" that is Vista's Start Menu, shall we?"

WTF?

Friday, May 30, 2008

Don't be Stupid

Years ago I was working on a project and I came up with a fantastic idea to help limit the level of regressions in the codebase I was working on. Rather than write unit tests as little throwaway test harnesses, I moved them into the codebase, and created a little app to execute them. It even did this via reflection, so as we added more tests, they got run too.

I thought I was being pretty clever.

I was being very stupid. I'd just re-invented xUnit, and didn't even know. [1]

It's a particular type of stupidity that manifests itself only in those who'd otherwise regard themselves as anything but: we get so wrapped up in our great idea that we never stop to consider that someone else might have done this already. Programmers are particularly badly afflicted by this, mostly because it suits our vanity to create it ourselves.

There was already an automated testing community, that had over time evolved what worked and what didn't, the practical upshot of which - for a .Net developer at the time - was that NUnit already existed. I could have spent the time writing more tests instead. Or better still, more screens, which is what I was actually being paid for.

The last three applications I've worked on have all involved considerable custom frameworks (stupid) including a custom databinding scheme (very stupid). They were written by clever people, most of whom I respect, but they did some stupid things that less able programmers wouldn't have been able to do. Clever isn't always a compliment in the agile camp, and this is why.

Of course 5 years of hindsight is a wonderful thing, and I've written my share of head-slappingly dumb code too. And it's all too easy to succumb to the 'quick fix' fallacy when the boss is breathing down your neck: after all, it's so much easier to get started writing your own framework than to learn to use someone else's.

But once you start down the dark path, forever will it haunt your destiny[2]. Which is why I make this plea to you now:
Please, before you put finger to keyboard again, consider whether what you're about to write has already been written.
Don't be stupid.

[1] To be fair to my erstwhile self, at least I was actually doing some testing, which was more than had been done before on that project
[2] Or that particular project at any rate

Tuesday, May 13, 2008

Enabling multiple RDP sessions in Vista

After many days of frigging around I realised those thegreenbutton.com Vista multiple remote desktop hacks (that you find from google) are all broken by SP1. The page on missingremote.com that is supposed to draw all this together still hasn't been updated with this new info.


However add SP1 to your search and you find this other thread, which works: http://thegreenbutton.com/forums/permalink/242509/255166/ShowThread.aspx#255166

Ah, the joys of subsequent-threads-with-lower-page-rank-than-the-original-now-outdated-info.

Thursday, May 08, 2008

Running ASP.Net webservices under a service account

Most of the time I run websites and webservices in an app pool that's running as Network Service. It just saves a whole truck load of time and hassle:
* no passwords to worry about
* already trusted for kerberos delegation
* can still use it to talk to a database under integrated security (you just grant access to the machinename$ account in the domain).

Hey - this is what this account was *invented* for.
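
For reference, the 'grant access to the machinename$ account' bit is just a normal Windows login / user grant on the database server - something like this (a sketch: the server, domain, machine and database names are all invented, and it assumes sqlcmd is installed):

# Sketch only - mydbserver / MYDOMAIN / MYWEBSERVER / MyAppDb are made-up names
sqlcmd -S mydbserver -E -Q "CREATE LOGIN [MYDOMAIN\MYWEBSERVER$] FROM WINDOWS"
sqlcmd -S mydbserver -E -d MyAppDb -Q "CREATE USER [MYDOMAIN\MYWEBSERVER$] FOR LOGIN [MYDOMAIN\MYWEBSERVER$]"
sqlcmd -S mydbserver -E -d MyAppDb -Q "EXEC sp_addrolemember 'db_datareader', 'MYDOMAIN\MYWEBSERVER$'"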

However, sometimes a specific service account is a must. Reasons include:
* Needing to differentiate access rights between applications running on the same host
* Needing to authenticate back across a one-way domain trust
* Specific policy mandates

Unfortunately you can't just add any account to IIS_WPG and use it, because the ACL on windows\temp is wrong: it grants access to Network Service rather than to the group. Miss this one and you'll just get serialization errors left, right and center.

So I do this:


Net localgroup iis_wpg /add mydomain\myserviceaccount
cacls %systemroot%\temp /E /G IIS_WPG:C


...then when you change the identity of the app pool you won't get 'Service Unavailable'.

Sunday, March 16, 2008

Don't override Equals

A colleague had a problem the other day which turned out to be due to an overridden Equals operator. In this case it was a straightforward bug in the implementation, but after he saw my horror-struck face I had to introduce him to the whole 'don't override Equals' philosophy[1]. On the off-chance that you've not come across it, here's the argument in full:

  • You have two objects that came from different places, and need to know if they represent essentially the same data.
  • You can't override Equals unless you also override GetHashCode. If two objects are equal, they must have the same hashcode, or collections are screwed.
  • GetHashCode must return the same value for an instance throughout its lifetime, or Hashtables are screwed
  • Your object isn't readonly, so you need an immutable field in the instance to base the hashcode on.
  • But if you modify one instance's data to equal another, that field can't change, so the hashcodes are still different.
  • You're screwed
And that's without getting into the problems associated with a correct implementation of Equals in the first place (getting the reflexive, symmetric and transitive bit right). Generally speaking some kind of IsEquivalent method is a whole heap less trouble, but it depends what you're up to. You might think about loading your objects through some kind of registry, so references to the 'same' data actually end up pointing to the same instance. Then everything just works...
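
To make the failure mode concrete, here's a sketch of what happens when the hashcode is based on mutable state (it uses a PowerShell 5 class purely because it's quick to paste into a console; 'Widget' and its Name property are invented):

# Sketch: a class whose hashcode is based on a mutable field (the classic mistake)
class Widget {
    [string]$Name
    Widget([string]$name) { $this.Name = $name }
    [bool] Equals([object]$other) { return ($other -is [Widget]) -and ($other.Name -eq $this.Name) }
    [int] GetHashCode() { return $this.Name.GetHashCode() }
}

$w = [Widget]::new('left')
$lookup = @{ $w = 'payload' }

$w.Name = 'right'   # mutate the key after it's gone into the hashtable
$lookup[$w]         # $null - the entry is still in there, but unreachable by key
$lookup.Count       # 1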

More reading:

UPDATE 10/04/08: Some clarifications: I'm talking about not overriding Equals/GetHashCode for reference types here. It's not such a problem for value types [as IDisposable points out in the comments]. And I've further clarified some of my assertions about GetHashCode in the comments.

[1] PS: Like all advice, this has exceptions. But the chances are they don't apply in your case. No, really.

Thursday, January 31, 2008

Care required passing arrays to .Net methods in Powershell

In Powershell, argument lists for .Net methods are treated as arrays:
$instance.MyMethod($arg1,$arg2);
...which can be confusing if you want to pass an array as a single argument:
$instance.MyMethod($myArray);

New-Object : Cannot find an overload for "MyMethod" and the argument count: ""
Instead, force the array-argument to be contained within a single-member array:
# Note the extra comma below
$instance.MyMethod(,$myArray);
Makes sense when you think about it, but definitely a gotcha.

[In my case, I was caught out with the byte[] overload constructor for a MemoryStream]
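
For the record, the MemoryStream case looks like this (a sketch - the bytes are arbitrary): the byte[] has to arrive as one argument, so it gets wrapped in a one-element array.

$bytes = [byte[]](0x48, 0x69, 0x21)

# Without the comma the array is unrolled into the argument list;
# with it, the whole byte[] is passed as the single constructor argument
$stream = New-Object IO.MemoryStream -ArgumentList (,$bytes)
$stream.Length   # 3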

Wednesday, January 30, 2008

Blobs out with SQL 2008

Recently I re-visited the blobs in / blobs out argument with a colleague. You know the one: one of you says blobs shouldn't be stored in the database (principally because the last time he tried 'blobs in', back in VB6, access to the blob data was a pain in the arse), then the other one says no, they should be in the database (because the last time they tried 'blobs out' all the files got mixed up / out of sync / weren't backed up). Etc...

Anyway, not only has Paul Randal posted a good summary of the pros and cons, but he did so as an intro to a new SQL 2008 data type 'FileStream' that attempts to bridge the two approaches (the 'have your cake and eat it' approach).

I'm cautious. Transactions at the filesystem level are a real mess (as some of the OneNote blogs make clear, especially with non-MS implementations of SMB like SAMBA). Your database backup is presumably still huge and unwieldy (or missing the blob data, which is worse?).

The main advantage of this approach seems to be that SQL can access the blob data faster through NTFS than via its own internal MDF format. But you've apparently still got to go via SQL to get the data; you can't (for example) just serve up images-stored-as-blobs directly via IIS. Or maybe I've missed something. Either way, the upside all seems to be focused on blob streaming performance, which may or may not be the most relevant factor for your app.

So it's possible that next year's arguments will be blobs in vs blobs out vs filestream, and still no one-size-fits all. Ah well.

Thursday, January 03, 2008

Path already mapped in workspace error with CCNet and TFS

Had a problem with CCNet that kept me here till midnight: try as I might, I just couldn't get a build to not fail with the dreaded "Path ... is already mapped in workspace ..." error:
Microsoft.TeamFoundation.VersionControl.Client.MappingConflictException: The path C:\Builds\etc\Working is already mapped in workspace someworkspace
We use a different workspace for every CCNet project to avoid collisions, and to maintain uniqueness we keep the workspace name the same as the CCNet project name. I couldn't find the workspace in question, and was pretty sure I'd already deleted it. In fact I'd used TF Sidekicks to delete all the build user's workspaces, and it still didn't work. So what was up?

Fortunately in a post 'How to handle "The path X is already mapped in workspace Y"' I learnt of the mappings cache file on the client PC, in the user's Local Settings\Application Data\Microsoft\Team Foundation\1.0\Cache\VersionControl.config file. Just nuking workspaces on the server isn't enough!
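
Clearing it is just a case of deleting that one file from the profile of whatever account the CCNet service runs as - something like this (a sketch: 'tfsbuild' is a made-up account name):

# Sketch: clear the client-side TFS workspace mappings cache ('tfsbuild' is made up)
Remove-Item 'C:\Documents and Settings\tfsbuild\Local Settings\Application Data\Microsoft\Team Foundation\1.0\Cache\VersionControl.config'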

So to be sure I blew away the build server's local profile entirely, and that finally fixed it.
