Friday, July 25, 2008

MyClass in VB.Net

It's always a bit of a shock when you find something you've missed in a language you've used for years. I'm mostly a C# person, but I thought I knew pretty much all of VB.Net's quirks by now. But I totally missed 'MyClass'.

'MyClass' allows a class to access methods and properties as declared on itself, irrespective of whether they've been overridden further down the inheritance hierarchy. It's like using 'Me' with all the 'Overridable's removed.

Since there's no C# equivalent this was a big surprise to me, but it shouldn't have been - it's only doing the same as 'MyBase' does (against a type's ancestor): dispatching to a specific type's implementation directly, rather than via virtual dispatch. As the IL for this sample shows:


Public Class Class1
    Public Overridable ReadOnly Property Name()
        Get
            Return "Class1"
        End Get
    End Property
End Class

Public Class Class2
    Inherits Class1

    Public Overrides ReadOnly Property Name()
        Get
            Return "Class2"
        End Get
    End Property

    Public Function GetNames() As String
        Dim lines(3) As String
        lines(0) = MyBase.Name
        lines(1) = MyClass.Name
        lines(2) = Me.Name
        Return String.Join(",", lines)
    End Function
End Class

Public Class Class3
    Inherits Class2

    Public Overrides ReadOnly Property Name()
        Get
            Return "Class3"
        End Get
    End Property
End Class

Calling New Class3().GetNames() produces the following IL (edited for brevity):


// mybase - explicit dispatch to Class1
L_000b: call instance object ConsoleApplication1.Class1::get_Name()

// myclass - explicit dispatch to Class2
L_001a: call instance object ConsoleApplication1.Class2::get_Name()

// me - virtual dispatch, will resolve to Class3's implementation
L_0029: callvirt instance object ConsoleApplication1.Class2::get_Name()

So the output eventually is 'Class1, Class2, Class3'. Nifty. That being said, I can't honestly say I've ever really needed this, so it might go back into the 'curios' collection. Useful in a pinch maybe, but surely it's a smell? As if designing-for-subclassing wasn't hard enough as it is...


PS: Interestingly the Reflector disassembler doesn't understand this either, so it wasn't just me that missed it: Reflector thinks the VB was:

Public Function GetNames() As String
    Dim lines As String() = New String(4 - 1) {}
    lines(0) = Conversions.ToString(MyBase.Name)
    lines(1) = Conversions.ToString(Me.Name) ' got this wrong
    lines(2) = Conversions.ToString(Me.Name)
    Return String.Join(",", lines)
End Function

Thursday, July 17, 2008

Using Extension Methods in .Net 2.0 from VB.Net

So despite what ScottGu originally said, Extension Methods don't 'just work' for VS 2008 projects targeting .Net 2.0.

There's no end of blog posts describing the workaround - add your own ExtensionAttribute class to get it working - but all the samples are in C# (which is interesting in and of itself). So here's the VB.Net version:

Namespace System.Runtime.CompilerServices
    <AttributeUsage(AttributeTargets.Method Or AttributeTargets.Assembly Or AttributeTargets.Class)> _
    Public Class ExtensionAttribute
        Inherits Attribute
    End Class
End Namespace

...and why am I bothering to blog about this rather trivial conversion? Because of the key gotcha: make sure you put this in a project with no root namespace set:



That had me banging my head on the table for too long.

As did the next one: extension methods only show up under the 'All' tab in IntelliSense - obviously too advanced for mere Morts. I gotta remember to turn that off: using VB is bad enough without the IDE patronising you as well.

Interestingly, if you get the AttributeUsage declaration wrong on the attribute, you get this error:



"The custom-designed version of System.Runtime.CompilerServices.ExtensionAttribute ... is not valid"

Fascinating. So this hackery works by design, it's just not really supported as such.

More reading: MSDN documentation on Extension Methods in VB

Tuesday, July 15, 2008

Continuous Integration in TFS 2008: Part 1

Many people now accept the benefits of a regular / continuous integration cycle (even if they don't actually practice it themselves). Picking up the pieces after someone's broken the checked-in source code, especially if the breakage goes unnoticed for a few days, can be a real time waster.

Like many agile practices, however, the cost / benefit is hard to analyse quantitatively. It's therefore far easier to justify if it's really easy to set up: as the costs tend to zero, the benefits become essentially 'free'. And you could argue that tools like CruiseControl.Net have made it pretty easy.

Personally, having spent significant chunks of the last 3 years getting CCNet / NAnt build cycles going on various projects, I'd beg to differ. Sure, it's really easy to set up CCNet / NAnt (or CCNet / MSBuild) to build your solution, but that's only the first step in the process. Typically you also want to do things like:
  • Import the latest built version of external dependencies (ie components maintained outside of the solution being built)
  • Execute unit tests
  • Execute integration tests (so config files pointing at databases etc... have to be in the right place)
  • Package the build outputs nicely ('xcopy ready')
  • Deploy and install into test environments
CCNet and NAnt don't really give you this stuff 'out of the box'. You spend time gluing bits together, inventing your own build process and so on, and maintaining this stuff seems to get out of control very easily. Deploy and install is a particular minefield, because somewhere in there you have to start doing configuration file substitution (put your test server settings in the web.config etc...). And doing all this in XML just rubs salt into the wound.

You can handle most of this by hand on small projects, but the last app I worked on had five or six deployable parts to it (webservices, Windows services, WinForms with ClickOnce manifests and the like), each of which had 20 or so settings to change for each of 7 different environments and the differing systems it integrated with. That's hundreds of settings to keep track of, without even getting into the BizTalk artefacts, and that was only one of several projects of similar complexity. Automation's a no-brainer at that point.

My attempt to scale back the per-project cost of managing all this was my own open source project SNAK. This tried to commoditize a standard build / test / package / deploy process that you could adopt by pretty much setting a single variable at the top of a build script. And I think it works reasonably well: but it's clearly not the answer, not least because it took a fair amount of my (and others') time, of which I have very little.

So I was very, very hopeful when I started looking at the CI support in TFS 2008. Microsoft were really bashed over the lack of CI in 2005, but this time round it looks like they've delivered:



You pretty much pick your solution file:



...your output directory...



...and your build frequency, and off you go:



Given how hard it was to deal with VSTS tests under CI in 2005 (because the test file was always in the wrong place), this screen will be a real pleasure to some:



And if you've tried to implement a build output retention policy in NAnt, you'll really appreciate this:



So up until now, absolutely fantastic. But then I had a few issues, which I'll deal with in Part 2 (so as not to take the gloss off the good bits above).


[I was due to present on this topic at the Perth .Net user group the other week, but a failing laptop saw to that (not the way I was expecting the demo to fail!). Since there are now no slots till Xmas, I've recycled some of the content into this post. The laptop was lobotomized and is recovering well...]

[Readify are doing a Dev Day in Perth on the 29th, with TFS as one of the tracks, so I'd be surprised if they didn't cover this there]

Monday, July 07, 2008

Recycling old posts?

Sorry about that. I re-tagged a few articles over the weekend, and I think Blogger has got confused and bounced them into my feed as if they were new posts. Unfortunately some of them were, so it's all a bit of a mess.

New posts were actually:
* Finally: PowerShell as build language
* Using PowerShell to export the Windows Feeds list


Normal service will resume shortly...

Friday, July 04, 2008

Using PowerShell to export the Windows Feeds list

Moved computers recently, and one of the things I realised I lost was my RSS feeds list. It was probably a blessing (I just tend to accumulate subscriptions otherwise), and maybe I should be using a reading service of some nature, but there you are.

Anyway, given I'm all Mesh'd up, I thought I'd copy my feeds list into my Mesh folder (like my bookmarks), so I'd have a backup and this wouldn't happen again. Only I couldn't find where the feeds list actually lives. Instead there's a whole API for dealing with it...

...which is surprisingly easy to use, and works a treat in PowerShell (I'm always amazed at its ability to 'just work' with things like COM objects). So I just exported the list instead:

# Dump the contents of the Windows Feeds store as XML
# (redirect the output to a file to keep it)

$erroractionpreference = "stop"
$feedManager = new-object -com "Microsoft.FeedsManager"

@"
<feeds>
$(
    # Note: this only walks the root folder's feeds, not any subfolders
    $feedManager.RootFolder.Feeds | % {
        $feed = $_
        $feedXml = $feed.Xml(-1, 0, 0, 0, 0)
        '<feed Name="{0}">{1}</feed>' -f $feed.Name, $feedXml
    }
)
</feeds>
"@

Easy as. The XML it spits out is overly large (since it includes all the article contents from the cache), but for the MB involved it barely seems worth refining it.
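To actually keep a copy (in the Mesh folder, say), just redirect the script's output - something like this, assuming you've saved the above as Export-Feeds.ps1 (a name I've just made up):

.\Export-Feeds.ps1 > Feeds.xml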

Update 2008-07-17: So like the very next day I realised I could have just sync'd the feed list into Outlook, and asked it to export it as OPML. But syncing into Outlook blew my tiny mailbox quota (these feeds are surprisingly large) so I ended up back doing this again anyway. Then it turned out that IE can export the feed list as OPML too (File \ Import and Export - you'd think I'd have noticed originally) - but I still like having a script because I can schedule it.

Note to self: It is definitely time to find a blog that can cope with XML a bit better

Finally: PowerShell as build language

I've never really got into MSBuild, which surprised some people given how much time I've spent in the last four years mucking about with CCNet / NAnt. It was partly that we did a bit of investigation when MSBuild came out, saw a couple of things we didn't really like about it, and decided to wait for v2 (ie Team Build in TFS 2008). Partly.

More fundamentally however the problem is that MSBuild is just too similar to NAnt, and my considered opinion after years of usage is that NAnt sucks, or to be more specific, XML is a terrible 'language' for writing executable code. Fowler puts it pretty well:
"After all until we tried it I thought XML would be a good syntax for build files"
http://www.martinfowler.com/articles/rake.html
Sure, it's fine for the IDE to write that stuff out (though even then you have to look at it and wince, right?), but for humans who want to customise their build process? Jesus wept. Variable scope: gone. Explicit parameters for subroutines (targets): gone. It's fine when it's simple, but once you start looping and branching and using temporary variables it's just a great big mess of angle brackets that even its mother pities. And debugging? Now there's a story...

There's a time and a place for the angle bracket tax, and this isn't it. Square peg, round hole.

So given how amenable to DSLs PowerShell has proven to be, I've been holding my breath for some kinda PowerShell rake-alike.

And here it is: Introducing PSake

(Also Microsoft themselves are thinking about it, and canvassing for opinions about whether it's a good idea or not.)

Sadly (actually quite the opposite) I'm not actually having to deal with the build process on my current project, so I don't really have much excuse to play with it. But I dream of a future in which the TFS Team Build project wizard kicks out a PS1 file instead. It'd certainly make fixing some of its shortcomings a whole heap easier (but that's a subject for a future post).


Edit [14/7/08]: Most requested feature for MSBuild? Debugging. Obviously this'll be interpreted by the MSBuild team as a need for a debugger, but maybe they should have used a language that already had one.

Thursday, June 26, 2008

Beyond Compare 3 supports 3 way merge, is totally awesome

Beyond Compare 3 is out in beta. It supports 3 way merges!

I found this out literally minutes before I started what turned into a 2 day mergeathon between two large and divergent branches in TFS, with *lots* of merge conflicts to manually resolve, and I can honestly say I'd probably still be merging if I hadn't downloaded it. It's just fantastic.

I'll probably post some screenshots etc... soon, but if you're struggling merging with BC2 and/or the built-in diff/merge support then you really should check it out.

Monday, June 23, 2008

MSDN Downloads and the fly-out menus trauma

Raymond's just posted about the rationale behind the Windows menu show delay, and goes on to point out various web properties that blatantly ignore the underlying usability requirement.

Sadly, finding examples is like shooting fish in a barrel. I remember Jakob Nielsen whinging about this last millennium, but as the technology moved forwards - Director, DHTML, then Flash - the ease with which anyone can design their own UI and distribute it widely over the internet has led to a flood of bad UI. Even as Vista attempts to move forwards, the new Silverlight version of MSDN Downloads re-re-implements the fly-out-menus concept, with almost unusable results.

Maybe this is a necessary pain we have to move through, but it kinda sucks that we can't explore novel and interesting UI concepts without making them totally unusable. I'm no UI designer, but at least I don't pretend to be, or work as one.

The templating within WPF is a great example of an enabling technology here, where the usability can be codified into a control by 'experts', while still delegating most of the 'funky look' to the end-designer. In this case, if WPF / Silverlight had shipped with a decent fly-out menus control, maybe the MSDN Downloads team wouldn't have got it so horribly wrong, and I wouldn't have had to uninstall Silverlight in frustration.

I guess there is hope then that this isn't just another enabling technology that enables people to make a real arse of things.


PS: Check out this bizarro comment on Raymond's blog:
"Let's not get into the "gynaecologist's interface" that is Vista's Start Menu, shall we?"

WTF?

Friday, May 30, 2008

Don't be Stupid

Years ago I was working on a project and I came up with a fantastic idea to help limit the level of regressions in the codebase I was working on. Rather than write unit tests as little throwaway test harnesses, I moved them into the codebase, and created a little app to execute them. It even did this via reflection, so as we added more tests, they got run too.

I thought I was being pretty clever.

I was being very stupid. I'd just re-invented xUnit, and didn't even know. [1]

It's a particular type of stupidity that manifests itself only in those who'd otherwise regard themselves as anything but: we get so wrapped up in our great idea that we never stop to consider that someone else might have done it already. Programmers are particularly badly afflicted by this, mostly because it suits our vanity to create it ourselves.

There was already an automated testing community that had, over time, evolved what worked and what didn't - the practical upshot of which, for a .Net developer at the time, was that NUnit already existed. I could have spent the time writing more tests instead. Or better still, more screens, which is what I was actually being paid for.

The last three applications I've worked on have all involved considerable custom frameworks (stupid), including a custom databinding scheme (very stupid). They were written by clever people, most of whom I respect, but they did some stupid things that less able programmers wouldn't have been able to do. Clever isn't always a compliment in the agile camp, and this is why.

Of course 5 years of hindsight is a wonderful thing, and I've written my share of head-slappingly dumb code too. And it's all too easy to succumb to the 'quick fix' fallacy when the boss is breathing down your neck: after all, it's so much easier to get started writing your own framework than to learn to use someone else's.

But once you start down the dark path, forever will it haunt your destiny[2]. Which is why I make this plea to you now:
Please, before you put finger to keyboard again, consider whether what you're about to write has already been written.
Don't be stupid.

[1] To be fair to my erstwhile self, at least I was actually doing some testing, which was more than had been done before on that project
[2] Or that particular project at any rate

Tuesday, May 13, 2008

Enabling multiple RDP sessions in Vista

After many days of frigging around I realised that those thegreenbutton.com Vista multiple-remote-desktop hacks (the ones you find via Google) are all broken by SP1. The page on missingremote.com that is supposed to draw all this together still hasn't been updated with this new info.


However add SP1 to your search and you find this other thread, which works: http://thegreenbutton.com/forums/permalink/242509/255166/ShowThread.aspx#255166

Ah, the joys of subsequent-threads-with-lower-page-rank-than-the-original-now-outdated-info.

Thursday, May 08, 2008

Running ASP.Net webservices under a service account

Most of the time I run websites and webservices in an app pool running as Network Service. It just saves a whole truckload of time and hassle:
* no passwords to worry about
* already trusted for kerberos delegation
* can still use it to talk to a database under integrated security (you just grant access to the machinename$ account in the domain).

Hey - this is what this account was *invented* for.

However, sometimes a specific service account is a must. Reasons include:
* Needing to differentiate access rights between applications running on the same host
* Needing to authenticate back across a one-way domain trust
* Specific policy mandates

Unfortunately you can't just add any account to IIS_WPG and use it, because the ACL on windows\temp is wrong: it grants access to Network Service directly, rather than to the group. Miss this one, and you'll just get serialization errors left, right and center.

So I do this:


net localgroup IIS_WPG mydomain\myserviceaccount /add
cacls %systemroot%\temp /E /G IIS_WPG:C


...then when you change the identity of the app pool you won't get 'Service Unavailable'.

Sunday, March 16, 2008

Don't override Equals

A colleague had a problem the other day which turned out to be due to an overridden Equals method. In this case it was a straightforward bug in the implementation, but after he saw my horror-struck face I had to introduce him to the whole 'don't override Equals' philosophy[1]. On the off-chance you've not come across it, here's the argument in full:

  • You have two objects that came from different places, and need to know if they represent essentially the same data.
  • You can't override Equals unless you also override GetHashCode. If two objects are equal, they must have the same hashcode, or collections are screwed.
  • GetHashCode must return the same value for an instance throughout its lifetime, or Hashtables are screwed
  • Your object isn't readonly, so you need an immutable field in the instance to base the hashcode on.
  • But if you modify one instance's data to equal another, that field can't change, so the hashcodes are still different.
  • You're screwed
And that's without getting into the problems associated with a correct implementation of Equals in the first place (getting the reflexive, symmetric and transitive bits right). Generally speaking some kind of IsEquivalent method is a whole heap less trouble, but it depends what you're up to. You might think about loading your objects through some kind of registry, so references to the 'same' data actually end up pointing to the same instance. Then everything just works...


UPDATE 10/04/08: Some clarifications: I'm talking about not overriding Equals/GetHashCode for reference types here. It's not such a problem for value types [as IDisposable points out in the comments]. And I've further clarified some of my assertions about GetHashCode in the comments.

[1] PS: Like all advice, this has exceptions. But the chances are they don't apply in your case. No, really.

Thursday, January 31, 2008

Care required passing arrays to .Net methods in Powershell

In Powershell, argument lists for .Net methods are treated as arrays:
$instance.MyMethod($arg1,$arg2);
...which can be confusing if you want to pass an array as a single argument:
$instance.MyMethod($myArray);

New-Object : Cannot find an overload for "MyMethod" and the argument count: ""
Instead, force the array-argument to be contained within a single-member array:
# Note the extra comma below
$instance.MyMethod(,$myArray);
Makes sense when you think about it, but definitely a gotcha.

[In my case, I was caught out with the byte[] overload constructor for a MemoryStream]
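To make that concrete, here's a minimal sketch of the MemoryStream case (the byte values are just for illustration):

$bytes = [byte[]](0..9)
$broken = New-Object IO.MemoryStream($bytes)  # fails: each byte is unrolled into a separate constructor argument
$stream = New-Object IO.MemoryStream(,$bytes) # works: the extra comma wraps the array in a one-element array
$stream.Length                                # 10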

Wednesday, January 30, 2008

Blobs out with SQL 2008

Recently I re-visited the blobs in / blobs out argument with a colleague. You know the one: one of you says blobs shouldn't be stored in the database (principally because the last time he tried 'blobs in', back in VB6, access to the blob data was a pain in the arse); then the other one says no, they should be in the database (because the last time they tried 'blobs out' all the files got mixed up / out of sync / weren't backed up). Etc...

Anyway, not only has Paul Randal posted a good summary of the pros and cons, but he did so as an intro to a new SQL 2008 data type 'FileStream' that attempts to bridge the two approaches (the 'have your cake and eat it' approach).

I'm cautious. Transactions at the filesystem level are a real mess (as some of the OneNote blogs make clear, especially with non-MS implementations of SMB like SAMBA). Your database backup is presumably still huge and unwieldy (or missing the blob data, which is worse?).

The main advantage of this approach seems to be that SQL can access the blob data faster through NTFS than via its own internal MDF format. But you've apparently still got to go via SQL to get the data: you can't (for example) just serve up images-stored-as-blobs directly via IIS. Or maybe I've missed something. Either way, the upside all seems to be focused on blob streaming performance, which may or may not be the most relevant factor for your app.

So it's possible that next year's arguments will be blobs in vs blobs out vs filestream, and still no one-size-fits-all. Ah well.

Thursday, January 03, 2008

Path already mapped in workspace error with CCNet and TFS

Had a problem with CCNet that kept me here till midnight: try as I might, I just couldn't get a build to stop failing with the dreaded "Path ... is already mapped in workspace ..." error:
Microsoft.TeamFoundation.VersionControl.Client.MappingConflictException: The path C:\Builds\etc\Working is already mapped in workspace someworkspace
We use a different workspace for every CCNet project to avoid collisions, and to maintain uniqueness we keep the workspace name the same as the CCNet project name. I couldn't find the workspace in question, and was pretty sure I'd already deleted it. In fact I'd used TF Sidekicks to delete all the build user's workspaces, and it still didn't work. So what was up?

Fortunately in a post 'How to handle "The path X is already mapped in workspace Y"' I learnt of the mappings cache file on the client PC, in the user's Local Settings\Application Data\Microsoft\Team Foundation\1.0\Cache\VersionControl.config file. Just nuking workspaces on the server isn't enough!

So to be sure I blew away the build server's local profile entirely, and that finally fixed it.
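In retrospect, nuking the whole profile was overkill: deleting just the cache file (logged in as the build user) should do the trick. Something like this, assuming the default profile location:

Remove-Item "$env:USERPROFILE\Local Settings\Application Data\Microsoft\Team Foundation\1.0\Cache\VersionControl.config"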

Wednesday, November 07, 2007

The new starter experience

I normally avoid 'link-posts', but again Hacknot is right on the money with 'If They Come, How Will they Build It?', an eminently familiar analysis of the plight of a new developer on a project with an oral documentation culture.

In fact I'd go slightly further than Hacknot, and state that the initial experience of a new developer on the project is one of the most important things to get right. First impressions do matter, and if your first impression of a project is the frustration of:
  • Not having a login
  • Not having internet access
  • Not being able to get latest
  • Not being able to build
  • Not being able to locate any documentation
  • Not having clear lines of escalation
  • Not having clear rules of engagement
  • Not knowing what's expected of you
  • Not having a mentor
...you're going to be hard pushed not to let it prejudice your opinion of the professionalism of the rest of the project. You'll start disheartened, but on the other end of this unfortunate indoctrination you're going to be just like them. You won't regard the absence of the list above as anything other than normal. You'll accept that that's just not how things are done around here. You will love Big Brother.

[ahem. got carried away there]

I regard the absence of guides and documentation as more than a major time-waster: it's a self-perpetuating morale hole for all future team members to climb into and die.

New staff play a vital part in ensuring a project's approach doesn't atrophy. If you waste their 'fresh' time frustrating them with missing documentation and runaround, you won't get the benefit of seeing things from their eyes. They'll have clammed up and learnt to live with how it is, and by the time you ask them they'll have forgotten that they used to care.

FixBrokenWindows - they're not all in your code

Tuesday, October 30, 2007

Great Powershell One-liners

Or, 'highly useful things you can do in Powershell':

Watching the last 20 lines of the last (log?) file in a directory:
(gci)[-1] | gc | select-object -last 20
Finding where .Net is installed:
(get-itemproperty HKLM:\software\microsoft\.netframework).InstallRoot
Pinging a URL [1]:
$temp = (new-object net.webclient).DownloadString($url)
if ($?) { write-host Passed -foregroundcolor "green" }
Getting the DNS suffix for a machine:
(Get-WmiObject -class "Win32_ComputerSystem").Domain

[1] Ok, that one's not a one-liner.

Why Powershell Rocks

I try and remain reasonably sanguine / sceptical / cynical about most new technologies, but there are two about at the moment that I just can't find anything to fault: Vista's Media Center, and Windows Powershell.

So (apart from the fawning) what's so good about Powershell?

The key point is that Powershell is a shell that thinks it's a scripting language. Or is it the other way round? Well it's both anyway. So things you can do in a BAT file, you can do in Powershell:
xcopy /I /F /Y Somefile.abc ..\SomeFolder
[In many cases, like the above, you can just cut and paste the same line into Powershell and it will work]

Ever tried doing that in VBScript/JScript? Either you wrote lines of Scripting.FileSystemObject code, or you shelled out (in which case you've got the line above plus the 'shelling out' code).

But BAT files can only take you so far. Further if you're stubborn or clever, but even then there's a wall. Creating files with today's date in the name is a real swine. Setting IIS properties involves an external VBScript. HTTP-pinging a URL and checking it's up... no chance.

For these you have to write a VBScript or a (.Net?) EXE, where you can take advantage of a host of supporting libraries and benefit from 'real' programmer concepts like variable scope, functions and conditionals. But integrating them with your BAT files (ie getting return values back) is quite a challenge. And you end up with your foot in both camps.

Powershell does both.
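To pick on the first of those walls: in PowerShell a file named with today's date is a one-liner (the file name here is just an example):

dir > "FileList-$(get-date -format yyyy-MM-dd).txt"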

I'll give you an example of the kind of hybrid approach this engenders: setting ACLs for the anonymous user on a website. Now you can assume that it's IUSR_%computername%, but in some scenarios[1] (renamed for security, ghosted image / machine renamed) it's not. So you've got to look up the user first, which is pretty tricky in a BAT file (even with adsutil.vbs), but then set some file permissions, which is nigh-on impossible in VBScript. In PowerShell this is easy:
$iisobj = [wmi]"root/MicrosoftIIsV2:IIsWebVirtualDirSetting.Name='W3SVC/1/Root/MyWebSite'"
$userName = $iisobj.AnonymousUserName
$path = $iisobj.Path

# braces needed so PowerShell doesn't parse the ':R' as part of the variable name
CACLS $path /E /G "${userName}:R"
Note how I did something very 'script' - dealing with objects and properties, and followed it up with something very 'batch' - just calling a shell command (CACLS). And it just worked.

Now in my real script I didn't use CACLS, I used the .Net System.Security.AccessControl classes, because I'm a .Net developer and that's what I thought of first. I wrapped them up into a neat reusable function, so the code above actually didn't look so different. But I ended up writing 20+ lines of code, where CACLS would have done the job just as well. I'm learning too.

There's lots of other great things about Powershell too:
  • Some great syntax improvements, like range operators, and reverse-indexing arrays (quick sketch below)
  • Fantastic support for script parameters, and default values
  • Here-strings
...but they're all secondary to this one, which is that Powershell is all you need[2]. That's pretty compelling to me.
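For the curious, here's a quick sketch of a couple of those goodies (Get-Last is a made-up name):

(gci)[-3..-1]                                        # range operator + reverse-indexing: the last three items
function Get-Last($path = $pwd) { (gci $path)[-1] }  # function parameter with a default value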


[1] I.E. My scenario.
[2] Ok, unless you're doing something really wacky/Win32 [3] where the .Net BCL doesn't have the support yet.
[3] [4] Is there a difference?
[4] Recursing footnotes? Can I get away with that?

Tuesday, October 16, 2007

IE Advanced Security Configuration affects Powershell execution policy

Like a few other MS apps, but hardly obvious to us mortals, Powershell determines whether something is 'local' based on the Internet Explorer Security Zones. This means that (by default) scripts run from a UNC are considered part of the Local Intranet zone, and so can run unsigned under the RemoteSigned execution policy:



However, if your Win 2003 server is running the Internet Explorer Enhanced Security Configuration, UNC paths are not included in the zone unless explicitly added:



So your scripts will refuse to run unless you reduce the execution policy down to 'unrestricted', leaving you with lots of nasty messages:
Run only scripts that you trust. While scripts from the Internet can be useful, this script can potentially harm your computer. Do you want to run etc... [D] Do not run [R] Run once [S] Suspend [?] Help (default is "D"):
Yuck (and very frustrating till you work it out).

If IE Enhanced Security Configuration is mandated, the only way round this is to explicitly add the UNCs where your scripts are located into the Local Intranet zone (nb: don't add an http: prefix!)

You can do this manually, or you can just import a set of registry keys if you need to do a whole load in one go (or the same across multiple servers). If you do it through the UI the settings are user-scoped; if you do it via the registry you can add settings either for the current user, or at a machine-wide level. See Adding Sites to the Enhanced Security Configuration Zones in MSDN and Description of Internet Explorer security zones registry entries in the knowledge base for more details.
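As a sketch of the registry route for the current user (the server name below is made up, and note it's the EscDomains key that applies when Enhanced Security Configuration is on - check the articles above for the exact layout before rolling this out):

# Map UNC (file://) access to \\myfileserver into the Local Intranet zone
$key = 'HKCU:\Software\Microsoft\Windows\CurrentVersion\Internet Settings\ZoneMap\EscDomains\myfileserver'
New-Item -Path $key -Force | Out-Null
New-ItemProperty -Path $key -Name file -PropertyType DWord -Value 1 | Out-Null  # 1 = Local Intranet zone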

Monday, July 16, 2007

Log4net in Asp.Net redux: Implement IFixingRequired on your Active Property Values

A while back I noted that the built-in contexts in log4net were broken in the face of ASP.Net's thread agility (and pointed this out to the log4net community).

The workaround I suggested, which was also suggested on the log4net developers list, was to leverage log4net's support for deferred evaluation of logging properties. These are known as Active Property Values, but that's just a fancy/short way of saying 'any object you like, whose ToString() method returns the value that you actually want logged'. This was a pretty neat workaround, eg:
log4net.GlobalContext.Properties["requestUrl"] = new HttpContextRequestUrlProvider();

private class HttpContextRequestUrlProvider
{
    public override string ToString()
    {
        HttpContext context = HttpContext.Current;
        if (context == null) return null;
        return context.Request.RawUrl;
    }
}
However, in one of my logging databases I recently noticed logging entries attributed to me that had clearly come from someone else, which made me worry for a few hours that this pattern was broken - or at least broken when using an appender that buffers (like the AdoNetAppender).

Digging through the source code showed that BufferingAppenderSkeleton does indeed attempt to 'fix' logging entries as they go into the buffer, to guard against exactly this kind of multithreaded logging mishap:
// Because we are caching the LoggingEvent beyond the
// lifetime of the Append() method we must fix any
// volatile data in the event.
loggingEvent.Fix = this.Fix;
This causes all kinds of intrinsic log4net values (like thread ID, UserName), and the message itself to be 'fixed': i.e. fully evaluated (and rendered via the layout) now, rather than later. Otherwise all the logging events in the buffer would end up being written out with the values in-play at the time the appender flushed (ie potentially another thread/users's context).

This fixing also 'fixes' properties from the log4net contexts (ThreadContext, LogicalThreadContext and GlobalContext) if they implement IFixingRequired. And that's what I missed - one obscure interface:
// Fix any IFixingRequired objects
IFixingRequired fixingRequired = val as IFixingRequired;
if (fixingRequired != null)
{
    val = fixingRequired.GetFixedObject();
}
But I've missed that for the last couple of years. So… actually I'd rather not think about the implications of that. Meanwhile, if anyone who does any log4net dev would actually like to update the Active Property Value doco, that'd be good, thanks.

Better still, 18 months later, can we have our Adaptive Context now please?
