Monday, November 15, 2010

Column Store Indexes in Sql Server Denali

It’s funny because a colleague and I were having a discussion in the kitchen the other day about the whole ‘NoSQL’ movement, and my point to him was that many of the advantages pertain to having a columnar storage model, and (especially in the light of Vertipaq) I didn’t think it would be long before this kind of storage model migrated to mainstream RDBMSs like SQL Server.

And then this, in Denali (Sql Server v-next):

“The columnstore index in SQL Server employs Microsoft’s patented Vertipaq™ technology, which it shares with SQL Server Analysis Services and PowerPivot. SQL Server columnstore indexes don’t have to fit in main memory, but they can effectively use as much memory as is available on the server. Portions of columns are moved in and out of memory on demand.”

MVPs have apparently been able to download CTP1 for a fortnight, which means Mitch has been holding out on me. Damn his poker face.

Thursday, November 04, 2010

Debugging Talk Tonight

Tonight’s talk at the Perth .Net User Group should be pretty good – because it’s me talking! Barring uber-embarrassing stuff-ups, I will be talking about and demonstrating debugging techniques using WinDbg and PowerDbg, and hopefully shedding some light on an area that’s generally under-utilized by many .Net developers.

Join us at Enex 100, Level 3 Seminar room at 5.30pm. More details in the link above.

Thursday, October 21, 2010

Western Power Killed My Pong Clock

No, really. After today’s brown-out my irreplaceable original Buro Vormkrijgers Pong Clock appears to be fried.

Really not happy at all.

Sunday, October 17, 2010

Critical Concepts, Often Confused

These aren’t synonyms, but they’re often treated as such. I don’t think I’ve worked on a project that hasn’t mixed up at least one of these pairs. Sometimes it takes a heap of suffering before you realise what you’ve done…

Estimates vs. Commitments

The estimate is how long you say it’ll take. The commitment is when you say it’ll be done by. These are not the same thing.

Quite apart from catering for resource levelling, adding a sickness / holiday buffer, and allowing for pre-sales, training and all the other stuff, you probably shouldn’t be shooting for a point estimate anyway. Ideally you make a range-based estimate, and aim your commitment at a fairly high confidence level within that (bearing in mind even 95% means you miss your dates 1 time in 20). Mistaking these concepts can, alone, be the root cause of all your delivery problems. See Software Estimation (McConnell).

Domain Invariants vs. Validation

If you put all your validation in your domain model you probably just made them all domain invariants. Congratulations. Now try and implement ‘god mode’, privileged system operations, or special-case this one screen where the logic has to be different…

Validation is often highly contextual. What’s valid in the context of one transaction (one screen) may not be in another, so sometimes you’ll have to accept the reality that some validation belongs to the operation, not to the domain. Eagerly promote all validation to domain invariants at your peril.

(This is one of the things that scares me about frameworks like Naked Objects)
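To make the distinction concrete, here’s a minimal sketch (all type and member names are hypothetical, not from any real project): the invariant lives in the entity and always holds, while the contextual rule lives with the operation:

```csharp
using System;
using System.Collections.Generic;

public class Account
{
    public string Owner { get; private set; }
    public decimal Balance { get; set; }

    public Account(string owner)
    {
        // Domain invariant: true in every context, even 'god mode'
        if (String.IsNullOrEmpty(owner))
            throw new ArgumentException("An account must always have an owner");
        Owner = owner;
    }
}

// Validation: belongs to the operation (the screen / transaction), not the domain
public class CustomerWithdrawalValidator
{
    public IEnumerable<string> Validate(Account account, decimal amount)
    {
        // Valid for the customer self-service screen; a privileged
        // back-office 'force withdrawal' operation might deliberately skip this
        if (amount > account.Balance)
            yield return "Insufficient funds";
    }
}
```

The back-office screen just uses a different (or no) validator against the same entity; no invariant has to be relaxed to make that work.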

Business Owner vs Single-Point-Of-Contact

Critical to have a single business owner, yes? So we can just have one person to ask all our questions to? Wrong.

The business owner is the owner of the project, and the arbiter of the decisions. But that doesn’t let you off the hook from talking to all the other stakeholders in the project. They may, and often will, have very different opinions. If you can’t keep them all happy, the owner decides, but if you don’t even ask them you’re relying on your owner to be the single source of all domain knowledge. That’s a fairly dangerous road to be walking down, even before your owner flips out due to project-overload and goes postal in a feature workshop. Canvass more than one opinion.

Friday, October 01, 2010

Windows Mobile 7 vs. the World

Here’s the scenario: you face an uphill battle to regain some kind of presence in a market where you’ve failed in the past, and now battle the huge incumbent advantage of another player. Do you:

  • Come up with an innovative strategy to outflank the incumbent, find a niche or play to your own unique strengths?
  • Copy exactly what they’ve done. It worked for them, right?

Well, er… it seems to me a lot like Microsoft did the latter. With Windows Mobile 7 they’ve done a great job with the UI, the developer experience looks pretty good, using the cloud as a back-end is starting to make sense, etc… but on features alone it’s kinda hard to see why anyone would favour one of these over an iPhone – they’ve picked exactly the same model:

                                            iPhone     Windows Mobile 7              Android
  Side-loading of apps (not via app store)  No         No                            Potentially, if carrier wants to
  Corporate (restricted distribution) apps  No         No                            As above
  Flash in browser                          No         No (nor Silverlight)          3rd party support available (for OEMs, mind)
  Background apps / multitasking            No         No                            Yes?
  Native Code                               No         No                            Yes
  Video Calls                               iPhone 4   Optional, depends on H/W [4]  No
  Tethering                                 No         No                            No (w/o rooting)

Why no Flash / Silverlight in browser? Various Microsofties and MVPs have tried to tell me it’s a technical limitation, that Silverlight (phone) and Silverlight (browser) are non-overlapping functionality sets. Whilst that’s true, it’s also B.S.: this is – as in Apple’s case – about control. Rich browser apps are a side-loading vector: if you can run a fully-functional GUI app in the browser, the monopoly of the app store goes away.

Microsoft’s gamble of course is that the consumer market is less about tabular feature comparisons, and more about marketing, branding and emotion. And to a certain extent they’d be right, but that’s why Apple went out and bought the Liquid Metal process. So it’s an uphill battle there too.

Most importantly, unlike Apple, Microsoft don’t make phones. So it’s crazy to attempt (as they are) to follow the ‘own the customer experience’ model of Apple, when they don’t actually own it at all. They can specify the hardware to an extent (and have done), but they’re not a vertical: the manufacturer has a stake here too.

Of course Microsoft’s previous model sucked. They provided a platform, left the experience up to the end-vendor, and what we ended up with was the same tired old Today screen for years and years (with the recent exception of HTC). So no-one wants to go back there. But that’s exactly the Android model, and it seems to be working pretty well for them.

With Android users get a different vendor-specific experience on different phones, and with a partner model that’s a good thing. A Sony should be different from a Samsung or whatever: you buy a Sony for the Sony brand, not the freaking OS. And provided the search bar and maps go back to Google, that seems to suit everyone involved just fine. Backs mutually scratched: it’s the partner model, working how it always should have.

So Microsoft’s approach seems neither fish nor fowl. They plan to compete with Apple on Apple’s terms, whilst Google takes their own partner model and shows them how it’s done. They desperately needed to change something, but I think it was the software, not the business model.

 

(Oh, and the really funny thing: Windows Mobile 6.5 isn’t going away – it continues to be Microsoft’s ‘Platform for Corporate Users’ – basically because of the current side-loading limitation [2]. Microsoft have said they’ll consider this later, but…)

[2] http://social.msdn.microsoft.com/Forums/en-US/windowsphone7series/thread/2892a6f0-ab26-48d6-b63c-e38f62eda3b3

[4] http://pocketnow.com/tech-news/windows-mobile-7-device-specs-bigger-screens-multi-touch-and-more-memory

Thursday, September 30, 2010

Rethrowing Exceptions Without Losing Original Stack Trace

Everyone knows you should never ‘throw err’:

    try
    {
        // Do something bad
    }
    catch(Exception err)
    {
        // Some error handling, then…
        throw err;
    }

 

…because you overwrite the original stack trace, and end up with no idea what happened where. If you want to re-throw, you just ‘throw’ within the catch block, and the original exception is re-thrown unmodified (or wrap-and-throw).
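For comparison, the stack-preserving versions look like this (a sketch; the wrapper exception type is just an example):

```csharp
try
{
    // Do something bad
}
catch (Exception err)
{
    // Some error handling, then rethrow as-is,
    // preserving the original stack trace...
    throw;

    // ...or wrap-and-throw, keeping the original as InnerException:
    // throw new InvalidOperationException("Something bad happened", err);
}
```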

But that’s within the catch block. What do you do if you need to re-throw an exception outside the catch, one you stored earlier? This is exactly what you have to do if you’re implementing an asynchronous (APM / IAsyncResult) call, or marshalling exceptions across app domain / remoting boundaries.

The runtime manages this just fine by ‘freezing’ the exception stack trace. When rethrown, the new stack trace is just appended to the old one – that’s what all that ‘Exception rethrown at [0]’ stuff in the stack trace is. But the method it uses to do this (Exception.PrepForRemoting) is internal. So unfortunately, in order to use it, you have to call it by reflection:

    public static void PrepForRemoting(this Exception err)
    {
        typeof(Exception).InvokeMember(
            "PrepForRemoting",
            BindingFlags.Instance | BindingFlags.NonPublic | BindingFlags.InvokeMethod,
            (Binder)null, err, new object[0]);
    }

    /// <summary>
    /// Rethrow an exception without losing the original stack trace
    /// </summary>
    [DebuggerStepThrough]
    public static void Rethrow(this Exception err)
    {
        err.PrepForRemoting();
        throw err;
    }

Evil, I hear you cry? Well suck it up, because that’s exactly what Rx does in System.CoreEx:

[screenshot: Reflector showing Rx’s System.CoreEx calling Exception.PrepForRemoting]

(Tasks in .Net 4 side-step this problem by always wrapping exceptions in a new AggregateException prior to throwing – this also allows a Task to accumulate multiple exceptions throughout its lifecycle, depending on the continuations applied)
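A quick illustration of that Task behaviour (a sketch using the .Net 4 Task API):

```csharp
using System;
using System.Threading.Tasks;

var task = Task.Factory.StartNew(() =>
{
    throw new InvalidOperationException("boom");
});

try
{
    task.Wait();
}
catch (AggregateException aggregate)
{
    // The original exception survives intact as an InnerException
    Console.WriteLine(aggregate.InnerExceptions[0].Message); // boom
}
```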

Sunday, September 19, 2010

Reacting to Rx

I’ve finally got round to spending a bit of time looking at Rx over the weekend, and my head is still spinning as to just how fantastically relevant this is to some of the stuff I’m working on right now. If you have no idea what Rx is, check out these brief Channel 9 videos:

The first will get you interested, the second will make the penny drop[1].

So anyway, I have a class called a MessagePump<T>. Its job is to abstract away a lot of low-level socket guff (fragmentation, parsing etc…) and just deliver messages as they are read off a socket. It basically just sits in a big async loop of BeginRead / EndRead operations, constantly passing itself as the callback (ie never ‘owning’ a thread).

That’s all it does, so to deliver messages into the rest of the system it exposes a MessageReceived event. Sometimes a message might not parse properly – probably someone got out of sync or whatever – so there’s an ExceptionReceived event. Oh, and if you get a zero-byte read from BeginRead that means the socket at the other end closed, so there’s a Disconnected event:

  • MessageReceived(object, EventArgs<T>)
  • ExceptionReceived(object, EventArgs<Exception>)
  • Disconnected(object, EventArgs)

Now compare this to Rx’s IObserver<T> interface:

  • OnNext(T)
  • OnError(Exception)
  • OnCompleted()

It’s like completely the same. I guess there are only so many ways to skin a cat, but I wasn’t expecting it to be quite so aligned. Hopefully I can read this as saying my design is basically sound.

But whatever, what it really means is that dropping in Rx is going to be a bit of a doddle. In fact, because the IObserver<T> and IObservable<T> interfaces (alone) are part of the .Net 4 framework, even without Rx I can implement the pattern (just without the Rx fruit), which makes leveraging Rx later on (e.g. to filter with Linq) an option for the consumer.
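A sketch of what that bare-interfaces implementation might look like (member names are hypothetical, and the real MessagePump<T> socket internals are elided) – the three events collapse onto IObserver<T> directly:

```csharp
using System;
using System.Collections.Generic;

public class MessagePump<T> : IObservable<T>
{
    private readonly List<IObserver<T>> _observers = new List<IObserver<T>>();

    public IDisposable Subscribe(IObserver<T> observer)
    {
        _observers.Add(observer);
        return new Unsubscriber(() => _observers.Remove(observer));
    }

    // These get called from the BeginRead / EndRead loop:
    public void OnMessage(T message)      { foreach (var o in _observers.ToArray()) o.OnNext(message); }
    public void OnParseError(Exception e) { foreach (var o in _observers.ToArray()) o.OnError(e); }
    public void OnDisconnected()          { foreach (var o in _observers.ToArray()) o.OnCompleted(); }

    private class Unsubscriber : IDisposable
    {
        private readonly Action _remove;
        public Unsubscriber(Action remove) { _remove = remove; }
        public void Dispose() { _remove(); }
    }
}
```

Consumers can later wrap the same object with Rx operators without the pump itself ever referencing the Rx assemblies.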

And because the IObserver<T> / IObservable<T> pattern is much more amenable to composition than a raw .net event (which is really, the whole point of Rx), we can use containers like MEF to attach the subscribers at runtime, with (what seems to be) relative ease.

Both temporal and binary decoupling. Cool.

 

[1] For example: did you ever write something like an auto-complete popup? You want to wait a while after each keystroke in case the user didn’t finish typing yet (about 500ms I think). I ended up writing a general-purpose event-buffer class, that only propagated the event after a specified inactivity period (this also worked great for file change notifications). In Rx this is trivial: just use the ‘Throttle’ linq operator over the event sequence. See the hands-on-lab
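In code, the Rx version is about this much (a sketch: the namespaces are per the current Rx packaging, and in older drops FromEvent/Subject lived elsewhere; here a Subject<string> stands in for the keystroke event):

```csharp
using System;
using System.Reactive.Linq;
using System.Reactive.Subjects;

var keystrokes = new Subject<string>();   // in reality, fed from TextChanged

keystrokes
    .Throttle(TimeSpan.FromMilliseconds(500))   // only emit after 500ms of quiet
    .Subscribe(text => Console.WriteLine("autocomplete for: " + text));

keystrokes.OnNext("h");
keystrokes.OnNext("he");
keystrokes.OnNext("hel");   // only this one survives the throttle,
                            // 500ms after the last keystroke
```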

Saturday, September 18, 2010

Problems With Stuff

image

Being charitable you might point out that as a technology becomes increasingly pervasive it inevitably ends up in the hands of less technically savvy users, but I like to think of it as ‘all our stuff is still a bit crap’.

Wednesday, September 08, 2010

.Net 4 not supported on Windows 2008 Server Core

There is an explanation from the .net SKU owner as to why (which I don’t entirely follow), but the bottom line is that what the download page says is right – it’s just not available. So no Distributed Cache either.

Poo.

(It does support a subset of the .Net 3.5 functionality, largely orientated towards ASP.Net support – there’s a basic explanation of which bits here)

Visual Studio 2010 build spew in DebugView

If you’re a fan of DebugView (like me) you’d have been driven spare by the reams of spurious debug output that VS 2010 generates when doing a build: some 15,000 lines (in my case) of repeated cruft that drowns your output:

*** HR originated: -2147024774
*** Source File: d:\iso_whid\x86fre\base\isolation\com\copyout.cpp, line 1302

*** HR propagated: -2147024774
*** Source File: d:\iso_whid\x86fre\base\isolation\com\enumidentityattribute.cpp, line 144
This is a known issue on the forums, and there is a Connect Issue associated with it, so please vote for it. Hopefully it’s not too late to get this fixed in SP1.



(I’m optimistic – the bug was raised by Rusty Miller, an (erstwhile?) tester on the VS team)

Tuesday, September 07, 2010

TechEdAu 2010

It was only the week before last, but already I feel the clarity slipping away like a dream in the morning. Ahem. It was quite an interesting year, because apart from Windows Mobile 7, most of the stuff that was being talked about actually exists at RTM today, which was a nice change from learning about stuff you might get to use in six months’ time.

Memes this year:

  • Devices are ‘windows’ to the cloud [1]
  • Virtualisation, virtualization, virtualization
  • All I want for Christmas is Windows Mobile 7

Anyway, here’s what I went to

Day 1:

Day 2:

Day 3:

And here’s all the sessions I will be catching up on Online (as and when the videos come up):

…and a couple from TechEd North America that looked fairly promising:

Phew.

 

[1] If you think this cloud stuff is finally becoming the William Gibson / Ian M Banks model of pervasive cyberspace, you’d be right.

Saturday, September 04, 2010

Which WPF Framework?

So it’s way past time that I actually started getting used to a WPF framework, rather than keep re-inventing the wheel. But where to start? I thought it was just between Prism and Caliburn, but then I found WAF, and then researching that I found a whole bunch of others.

I suspect I’ll start with WAF because it describes itself as lightweight. Prism comes from the P&P team, who are normally anything but, and Caliburn supports paradigms other than MVVM, which just seems a bit pointless.

Tuesday, August 10, 2010

PowerDbg is search result #7 for ‘WinDbg’

Ok, this is only on MSDN search, but still that seems pretty damn high:

[screenshot: MSDN search results for ‘WinDbg’, with PowerDbg at #7]

Mind you, we’re #38 on Bing, and #14 on Google so we’re not completely inconspicuous.

Time to pull our fingers out and finish off v6 I think.

Thread Safety in MSDN

Just what exactly is the point of even having a ‘thread safety’ comment in the MSDN doco, if it’s just blatant boiler-plate drivel?

Take, for example, System.Text.ASCIIEncoding. Generally speaking there’s only one of these in play at any one time, because the Encoding.ASCII static property is a singleton (as they all are):



public static Encoding ASCII
{
    [TargetedPatchingOptOut(...)]
    get
    {
        if (asciiEncoding == null)
        {
            asciiEncoding = new ASCIIEncoding();
        }
        return asciiEncoding;
    }
}



So you’d better damn well hope it’s thread safe, otherwise all those concurrent write operations you’re doing, they’re screwed, right? But what does MSDN have to say on the subject:




“Any public static (Shared in Visual Basic) members of this type are thread safe. Any instance members are not guaranteed to be thread safe.”




Oh. Really helpful. Thanks a bunch.



Looking at the usage patterns through the Framework Class Libraries, it’s pretty clear they are thread-safe. Encoding.GetEncoding(int) hands out references to the singletons, which are similarly used with gay abandon in System.IO.Ports.SerialPort, System.IO.File.ReadAllLines, various StreamReader overloads etc… (though BinaryReader chooses to new up its UTF8Encoding, heaven knows why). And the sky would have fallen by now if these usages weren’t at least largely correct.
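Here’s a sketch of the bet we’re all implicitly making – concurrent use of instance members on the shared singleton, exactly what the FCL itself does (safe in practice because, as far as anyone can tell, the encodings are immutable after construction):

```csharp
using System;
using System.Text;
using System.Threading.Tasks;

var ascii = Encoding.ASCII; // the same singleton instance everywhere

Parallel.For(0, 100, i =>
{
    // Instance members, called concurrently on the shared instance
    byte[] bytes = ascii.GetBytes("message " + i);
    string roundTripped = ascii.GetString(bytes);
    if (roundTripped != "message " + i)
        throw new InvalidOperationException("encoding state corrupted!");
});
```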



But poking about in Reflector is clearly no substitute for accurate documentation, and the ‘parallel processing revolution’ everyone keeps going on about is clearly not going to work if we just keep trotting out the ‘instance members are not guaranteed to be thread safe’ line.



System.Text.Encodings: believed to be thread-safe.

Tuesday, July 27, 2010

Log4Net Active Property Values via Lambdas

Some years ago I wrote a couple of posts on some nasty problems that you could encounter if using log4net contexts in an environment where you didn’t control the thread lifecycle, say ASP.Net. Judging by the amount of coverage it got at the time (and still) I wasn’t the only person caught out by this.

Anyway I was doing something similar recently, not in ASP.Net, but in a Windows Service application with lots of threads. It’s the same kind of problem: there’s some thread-specific context that always exists, which we want to make available to log4net, but putting it in ThreadLocalContext doesn’t really work very well because we’d have to set them up in all our thread-entry methods, which would be everywhere where a callback gets entered – very messy in our (highly asynchronous) application.

Instead I wanted to put something in log4net’s GlobalContext that resolved to the thread’s context value. And now we’ve got lambdas and all that nice stuff, I was able to come up with a significantly neater implementation of a general-purpose contextual logging property, which basically answers the original ASP.Net problem too:

 

    /// <summary>
    /// Implements a class that can be used as a global log4net property
    /// to resolve an action to a string at event-fixing-time
    /// </summary>
    /// <remarks>With a suitable lambda expression, you can put this
    /// into your log4net.GlobalContext to resolve at logging time to a variety
    /// of stuff you might want to use in your logging statements.
    /// <example>Using threadId (not thread Name) as a property:<code>
    /// log4net.GlobalContext.Properties["threadId"] =
    ///     new Log4NetContextProperty(() => Thread.CurrentThread.ManagedThreadId.ToString());
    /// </code></example>
    /// </remarks>
    public class Log4NetContextProperty : IFixingRequired
    {
        private readonly Func<string> _getValue;

        public Log4NetContextProperty(Func<string> getValue)
        {
            _getValue = getValue;
        }

        public override string ToString()
        {
            return _getValue();
        }

        public object GetFixedObject()
        {
            return ToString();
        }
    }

In this case I wanted ‘threadId’ as a logging property. (log4net exposes the thread name, which is normally fine, but the R# test runner creates whoppingly long thread names that basically hide the actual logging message, and I really just wanted the IDs – hence the example above.) But you can see how you can basically use this to expose any context data to log4net if you wanted to.

Wednesday, July 21, 2010

64 Bit Explained

Look, it’s really not that hard.

Programs are still in the same place, in %ProgramFiles%, unless you need the 32 bit version, which is in %ProgramFiles(x86)%, except on a 32 bit machine, where it’s still %ProgramFiles%.

All those DLLs are still in %SystemRoot%\System32, just now they’re 64 bit. The 32 bit ones, they’re in %SystemRoot%\SysWOW64. You’re with me so far, right? Oh, and the 16 bit ones are still in %SystemRoot%\System – moving them would just be weird.

Registry settings are in HKLM\Software, unless you mean the settings for the 32 bit programs, in which case they’re in HKLM\Software\Wow6432Node.

So the rule is easy: stick to the 64 bit versions of apps, and you’ll be fine. Apps without a 64 bit version are pretty obscure anyway, Office and Visual Studio for example[1]. Oh, and stick to the 32 bit version of Internet Explorer (which is the default) if you want any of your add-ins to work. The ‘default’ shortcut for everything else is the 64 bit version. Having two shortcuts to everything can be a bit confusing, so sometimes (cmd.exe) there’s only the one (64 bit) and you’ll have to find the other yourself (back in SysWOW64, of course). And don’t forget to ‘Set-ExecutionPolicy RemoteSigned’ in both your 64 bit and 32 bit PowerShell environments.

Always install 64 bit versions of drivers and stuff, unless there isn’t one (MSDORA, JET), or you need both the 32 bit and 64 bit versions (eg to use SMO / SqlCmd from a 32 bit process like MSBuild). Just don’t do this if the 64 bit installer already installs the 32 bit version for you (like Sql Native Client).

Anything with a ‘32’ is for 64 bit. Anything with a ‘64’ is for 32 bit. Except %ProgramW6432% which is the 64 bit ProgramFiles folder in all cases (well, except on a 32 bit machine). Oh and the .net framework didn’t actually move either, but now it has a Framework64 sibling.
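If in doubt, .Net 4 will at least tell you where you stand (a quick check from code; the (x86) and W6432 variables only exist on a 64 bit OS):

```csharp
using System;

Console.WriteLine(Environment.Is64BitOperatingSystem);
Console.WriteLine(Environment.Is64BitProcess);

// A 32 bit process on a 64 bit OS sees the redirected locations:
Console.WriteLine(Environment.GetEnvironmentVariable("ProgramFiles"));
Console.WriteLine(Environment.GetEnvironmentVariable("ProgramFiles(x86)"));
Console.WriteLine(Environment.GetEnvironmentVariable("ProgramW6432")); // always the 64 bit folder
```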

I really don’t understand how people get so worked up over it all.

 

[1] Ok, so there is a 64 bit version of Office 2010, but given the installer pretty much tells you not to install it, it doesn’t count.

Monday, July 19, 2010

P/Invoke Interop Assistant

P/Invoke is like a poke in the eye. Sure the P/Invoke wiki made life a lot more palatable, but it’s at best incomplete, at worst inaccurate, and invariably you’ll find yourself hand-crafting signatures based on Win32 API doco and bringing a production server to its knees because of a stack imbalance.

In my idler moments I’ve often thought that surely parsing the source-of-truth Win32 header files and spitting out P/Invoke signatures couldn’t be that hard. Fortunately for everyone, the Microsoft Interop Team thought so too[1], and released the P/Invoke Interop Assistant to Codeplex. Actually that was about 2 years ago, but I only just noticed, so it’s still exciting for me.

As I understand it this has been made easier because Microsoft have been standardizing their header files and adding some additional metadata [2], which makes it possible to generate accurate signatures (and, presumably, to generate MSDN doco).

Sadly of course, none of this does anything to make any of the underlying APIs any easier to use…

 

[1] Actually if you look on Wikipedia, turns out there’s a fair few around.
[2] In retrospect you wonder why managed code took so long to take off as a concept, given how enormously fragile the previous paradigm actually was. SAL’s a great idea, but only highlights how fundamental the problem is.

Friday, June 11, 2010

Converting to Int

You wouldn’t have thought that such a basic operation as turning a double into an integer would be so poorly understood, but it is. There are three basic approaches in .Net:

  • Explicit casting, i.e. (int)x
  • Format, using String.Format, or x.ToString(formatString)
  • Convert.ToInt32

What’s critical to realise is that all of these do different things:

    var testCases = new[] {0.4, 0.5, 0.51, 1.4, 1.5, 1.51};
    Console.WriteLine("Input  Cast   {0:0}  Convert.ToInt32");
    foreach (var testCase in testCases)
    {
        Console.WriteLine("{0,5} {1,5} {2,5:0} {3,5}", testCase, (int)testCase, testCase, Convert.ToInt32(testCase));
    }

Input  Cast  {0:0}  Convert.ToInt32
  0.4     0      0      0
  0.5     0      1      0
 0.51     0      1      1
  1.4     1      1      1
  1.5     1      2      2
 1.51     1      2      2


As my basic test above shows, just casting truncates towards zero (like Math.Truncate) – it loses the fraction. This surprises some people.



But look again at the results for 0.5 and 1.5. Using a format string rounds up[1], to 1 and 2, whereas using Convert.ToInt32 performs bankers rounding[2] (rounds to even) to 0 and 2. This surprises a lot of people, and you’d be forgiven for missing it in the doco (here vs. here):
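If you want the rounding mode to be explicit rather than implied by the mechanism, Math.Round spells it out:

```csharp
using System;

Console.WriteLine(Math.Round(0.5));                                  // 0 - round-to-even (banker's)
Console.WriteLine(Math.Round(0.5, MidpointRounding.AwayFromZero));   // 1 - 'schoolbook' rounding
Console.WriteLine(Convert.ToInt32(0.5));                             // 0 - banker's again
Console.WriteLine((int)0.5);                                         // 0 - truncation
```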



Even more interesting is that PowerShell is different, in that the [int] cast in PowerShell is the same as Convert.ToInt32 (bankers rounding), not a truncation:



> $testCases = 0.4,0.5,0.51,1.4,1.5,1.51
> $testCases | % { "{0,5} {1,5} {2,5:0} {3,5}" -f $_,[int]$_,$_,[Convert]::ToInt32($_) }

Input  Cast  {0:0}  Convert.ToInt32
  0.4     0      0      0
  0.5     0      1      0
 0.51     1      1      1
  1.4     1      1      1
  1.5     2      2      2
 1.51     2      2      2


This is a great gotcha, since normally I’d use PowerShell to test this kind of behaviour, and I’d have seen the wrong thing. (Note to self: use LinqPad more.)



 



[1] More precisely it rounds away from zero, since negative numbers round to the larger negative number.



[2] According to Wikipedia, bankers’ rounding is a bit of a misnomer for ‘round half to even’, and even the MSDN doco on Math.Round seems to have stopped using the term.

Thursday, June 03, 2010

Splatting Hell

Recently both at work and at home I was faced with the same problem: a PowerShell ‘control’ script that needed to pass parameters down to an arbitrary series of child scripts (i.e. enumerating over scripts in a directory, and executing them in turn).

I needed a way of binding the parameters passed to the child scripts to what was passed to the parent script, and I thought that splatting would be a great fit here. Splatting, if you aren’t aware of it, is a way of binding a hashtable or array to a command’s parameters:

# ie replace this:
dir -Path:C:\temp -Filter:*

# with this:
$dirArgs = @{Filter="*"; Path="C:\temp"}
dir @dirArgs

Note the @ sign on the last line. That’s the splatting operator (yes, it’s also the hashtable operator as @{}, and the array operator as @() – it’s a busy symbol). It binds $dirArgs to the command’s parameters, rather than attempting to pass $dirArgs as the first positional argument.

So I thought I could just use this to pass any-and-all arguments passed to my ‘master’ script, and get them bound to the child scripts. By name, mind, not by position. That would be bad, because each of the child scripts has different parameters. I want PowerShell to do the heavy lifting of binding the appropriate parameters to the child scripts.

Gotcha #1

I first attempted to splat $args, but I’d forgotten that $args only holds the ‘left over’ arguments – the ones that didn’t bind to any declared parameter. The ones that did bind go into $PSBoundParameters.

Gotcha #2

…but only the ones that actually match parameters in the current script/function. Even if you pass an argument to a script in ‘named parameter’ style, like this:

SomeScript.ps1 –someName:someValue

…if there’s no parameter ‘someName’ on that script, this goes into $args as two different items, one being ‘-someName:’ and the next being ‘someValue’. This was surprising. Worse, once the arguments are split up in $args they get splatted positionally, even if they would otherwise match parameters on what’s being called. This seems like a design mistake to me (update: there is a Connect issue for this).

Basically what this meant was that, unless I started parsing $args myself, all the parameters on all the child scripts had to be declared on the parent (or at least all the ones I wanted to splat).

Gotcha #3

Oh, and $PSBoundParameters only contains the named parameters assigned by the caller. Those left unset, i.e. using default values, aren’t in there. So if you want those defaults to propagate, you’ll have to add them back in yourself:

function SomeFunction(
    $someValue = 'my default'
){
    # put the default back in, so it propagates when splatted
    $PSBoundParameters['someValue'] = $someValue
    # ...
}

Very tiresome.

Gotcha #4

$PSBoundParameters gets reset after you dotsource another script, so you need to capture a reference to it before that :-(

Gotcha #5

Just when you thought you were finished, if you’re using [CmdLetBinding] then you’ll probably get an error when splatting, because you’re trying to splat more arguments than the script you’re calling actually has parameters.

To avoid the error you’ll have to revert from an ‘advanced’ function to a ‘vanilla’ one, but since [CmdLetBinding] is implied by any of the [Parameter] attributes, you’ll have to remove those too :-( So it’s back to $myParam = $(throw 'MyParam is required') style validation, unfortunately.

(Also, if you are using CmdLetBinding, remember to remove any [switch]$verbose parameters (or any others that match the ‘common’ cmdlet parameters), or you’ll get another error about duplicate properties when splatting, since your script now has a –Verbose switch automatically. The duplication only becomes an issue when you splat)

What Did We Learn?

Either: Don’t try this at home.

Or:

  • Capture $PSBoundParameters, put the defaults back in, and splat it to child scripts that aren’t using [CmdLetBinding] or ‘advanced function’ features
  • Type your parameters, and put your guard throws back, just in case you end up splatting positionally
  • Have a lie down

Viewing MDX Data with WPF (redux)

Spent most of the day today grappling with binding a WPF DataGrid to a DataSet loaded from a parameterized MDX query.

The first gotcha was that SSAS expects its parameterized queries to be passed using the ICommandWithParameters interface; however, the OleDb provider for .Net doesn’t support named parameters (except for sprocs). This is a ‘fixed’ Connect issue – fixed as in ‘still broken in .Net 4, but marked as fixed because we can’t be bothered’.

Ahem.

So rather than use ado.net parameters, I’m now using string replacement on my source query text. Just great:

    // So have to do manual parameterization :-(
    query = query
        .Replace("@date", dateKey)
        .Replace("@time", timeKey)
        ;

Then of course the WPF data grid wouldn’t show the data (despite the DataSet visualizer working just fine). It bound and showed columns just fine using AutoGenerateColumns:

    dataGrid1.ItemsSource = dataSet.Tables[0].DefaultView;

 

image

…but all the rows showed blank!

Eventually I noticed a spew of debug output, listing the binding failures:

System.Windows.Data Error: 17 : Cannot get 'Item[]' value (type 'Object') from '' (type 'DataRowView'). BindingExpression:Path=[Blah1].[Blah2].[Blah3].[MEMBER_CAPTION]; DataItem='DataRowView' (HashCode=66744534); target element is 'TextBlock' (Name=''); target property is 'Text' (type 'String') TargetInvocationException:'System.Reflection.TargetInvocationException: Exception has been thrown by the target of an invocation. ---> System.ArgumentException: Blah1 is neither a DataColumn nor a DataRelation for table TheTableName.

at System.Data.DataRowView.get_Item(String property)

--- End of inner exception stack trace ---

This all seemed awfully familiar, and fortunately I happened across a helpful blog article (which I wrote!) explaining the problem. This time it is AutoGenerateColumns that’s generated the wrong binding path, causing WPF to try and find ‘deep’ members (attempting to walk multiple indexers) rather than just bind to a column with that name.

The fix is something like this:

    // This works
    var table = dataSet.Tables[0];
    dataGrid1.Columns.Clear();
    dataGrid1.AutoGenerateColumns = false;
    foreach (DataColumn dataColumn in table.Columns)
    {
        dataGrid1.Columns.Add(new DataGridTextColumn
        {
            Header = dataColumn.ColumnName,
            Binding = new Binding("[" + dataColumn.ColumnName + "]")
        });
    }
    dataGrid1.ItemsSource = table.DefaultView;

Grr.
