So, Farewell Then Google Reader
Your feed
Has expired
Like Bloglines,
Which you killed
Off
I guess Google Reader was 7½
Now
We'll just have to use
Something else
Instead
(With apologies to E.J. Thribb)
As any .Net UI developer will tell you, INotifyPropertyChanged is a fundamental part of 'binding' an object to a UI control. Without it binding is essentially one-way: changes in the control change the object, but if this has a ripple effect on other properties, or properties are changed by other 'below the UI' processes, the UI can't know to repaint. This is essentially an implementation of the Observer pattern[1].
Unfortunately it's not free - you have to implement it yourself - and that's where the problems start. So much has been written on the pain of implementing INotifyPropertyChanged (INPC for short) that I need not repeat it all here. It's generated so many questions on StackOverflow you'd think it was due its own StackExchange site by now.
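For context, the hand-rolled version that generates all those questions looks something like this. This is a sketch only: the Person class is illustrative, and the CallerMemberName trick (which needs .NET 4.5) is one of the later mitigations, not part of the original interface:

```csharp
using System.ComponentModel;
using System.Runtime.CompilerServices;

// Illustrative only: a minimal hand-rolled INotifyPropertyChanged implementation
public class Person : INotifyPropertyChanged
{
    private string _name;

    public event PropertyChangedEventHandler PropertyChanged;

    public string Name
    {
        get { return _name; }
        set
        {
            if (_name == value) return;
            _name = value;
            OnPropertyChanged(); // compiler supplies "Name" via CallerMemberName
        }
    }

    protected void OnPropertyChanged([CallerMemberName] string propertyName = null)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Every bindable property needs that same backing-field / equality-check / raise dance, which is exactly the boilerplate complained about below.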
The principal complaints are around all the boilerplate code and magic strings required to implement, so for the sake of completeness I'll summarize some of the solutions available:
An object that implements INotifyPropertyChanged must raise the PropertyChanged event only on the thread that was originally used to construct any registered subscribers for that event
Now there's a problem[2].
Clearly this is something that's just not possible to check for at runtime, so your design has to cater for this. Passing objects that might have been bound to business logic that might mutate them? UI thread please. Adding an item into a collection that might be an ObservableCollection? UI thread please. Doing some calculations in the background to pass back to an object that may have been bound? Marshal via UI thread please. And so on. And don't even get me started on what you do if you have two (or more) 'UI' threads[3].
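The usual mitigation can be sketched as follows: capture the UI thread's SynchronizationContext when the object is constructed, and marshal every notification through it. The class and property names here are mine, and it assumes a WinForms/WPF-style environment where SynchronizationContext.Current on the construction thread is the UI context:

```csharp
using System.ComponentModel;
using System.Threading;

// Sketch only: illustrates marshalling PropertyChanged back to the UI thread
public class BoundViewModel : INotifyPropertyChanged
{
    // Capture the constructing thread's context (falling back to a default
    // context so the class still functions without a message loop)
    private readonly SynchronizationContext _uiContext =
        SynchronizationContext.Current ?? new SynchronizationContext();

    private int _progress;

    public event PropertyChangedEventHandler PropertyChanged;

    public int Progress
    {
        get { return _progress; }
        set
        {
            _progress = value;
            // Marshal the notification onto the thread the subscribers live on.
            // Send blocks until delivered; Post is the fire-and-forget variant.
            _uiContext.Send(_ => RaisePropertyChanged("Progress"), null);
        }
    }

    private void RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
            handler(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Note this only helps for objects that know they'll be bound, which is precisely the encapsulation problem described next.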
This is a horrible, horrible creeping plague of uncertainty that spreads through your UI, where the validity of an operation can't be determined at the callsite, but must also take into account the underlying type of an object (violating polymorphism), where that object came from (violating encapsulation), and what thread is being used to process the call (violating all that is sacred). These are aspects that we just can't model or visualize well with current tooling, at least not at design time, and none of the solutions above will save you here.
So there you go. INotifyPropertyChanged: far, far worse than you imagined.
[1] ok, any use of .net events could be argued is Observer, but the intent here is the relevant bit: the object is explicitly signalling that it's changed state.
[2] Actually I've over-simplified, because you can have whole chains of objects listening to each other, and if any one of them is listened to by an object with some type of thread-affinity, that's the constraint you have to consider.
[3] Don’t try this at home. There are any number of lessons you’ll learn the hard way.
public static class NotUsed
{
    public static void DefinatelyNotUsed<TContext>(this TContext context, Action thing)
        where TContext : DataContext
    {
    }
}
... then you'll also have to pull in System.Data.Linq to get the 'importing' assembly to compile.

$blah.Catalogs | Get-Member

Get-Member : No object has been specified to the get-member cmdlet.

Get-Member -inputObject:$blah.Catalogs

   TypeName: BlahNamespace.BlahCollection

Name  MemberType Definition
----  ---------- ----------
Add   Method     System.Void Add(Microsoft.SqlServer.Management.IntegrationSer...
Clear Method     System.Void Clear()
"A DAC is a database lifecycle management and productivity tool that enables declarative database development to simplify deployment and management. A developer can author a database in SQL Server Data Tool database project and then build the database into a DACPAC for handoff to a DBA"
http://msdn.microsoft.com/en-us/library/ee210546.aspx
var localTime = DateTime.Now;
var utcTime = localTime.ToUniversalTime();
localTime.Dump("Local Time");
utcTime.Dump("UTC time");
utcTime.Kind.Dump("UTC Kind");
utcTime.ToString("yyyy-MM-ddTHH:mm:ss K").Dump("Expected XML");
XmlConvert.ToString(utcTime).Dump("Actual XML from XML Convert");
| Local Time | 23/10/2012 11:11:11 AM |
|---|---|
| UTC time | 23/10/2012 3:11:11 AM |
| UTC Kind | UTC |
| Expected XML | 2012-10-23T03:11:11 Z |
| Actual XML from XML Convert | 2012-10-23T03:11:11.0773940+08:00 |
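For what it's worth, the fix I'd reach for (a sketch, not from the original post) is the XmlConvert.ToString overload that takes an XmlDateTimeSerializationMode: RoundtripKind honours the DateTime's Kind, so a UTC value keeps its 'Z' suffix instead of being rebased to the local offset:

```csharp
using System;
using System.Xml;

class Program
{
    static void Main()
    {
        var utcTime = new DateTime(2012, 10, 23, 3, 11, 11, DateTimeKind.Utc);

        // RoundtripKind preserves DateTimeKind.Utc, emitting the 'Z' suffix
        // rather than converting to local time and appending the local offset
        Console.WriteLine(XmlConvert.ToString(utcTime, XmlDateTimeSerializationMode.RoundtripKind));
    }
}
```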
MSBuild mysolution /p:PackageSources=\\server\fileshare

...now finally I have a green light on TeamCity and can go and do something actually productive instead.
It’s fairly normal in production environments to find the SQL Server configured to disallow use of the SQL Agent account for the execution of certain types of job steps: SSIS packages and CmdExec for example. Instead you have to configure an explicit SQL Agent proxy, which requires first storing credentials within SQL’s credential store.
For domain accounts this is fairly straightforward, but if you attempt to add credentials from one of the ‘virtual accounts’ (such as Network Service), you’ll get the following error: “The secret stored in the password field is blank”
The solution is (eventually) obvious: add the credential using TSQL (or SMO), and avoid the UI validation:
USE [master]
GO
CREATE CREDENTIAL [Network Service] WITH IDENTITY = N'NT AUTHORITY\NETWORK SERVICE'
GO
et voilà:
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'vsdbcmd.exe' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. File name: 'vsdbcmd.exe'

Oh god, I thought. Yet more VSDBCMD weirdness. But this box had SQL Server installed, so the normal 'needs SMO / Batch Parser' caveats didn't apply. Eventually I ILSpy'd the assemblies to check the bitness, and guess what! The error message was completely accurate. I'd accidentally picked up VSDBCMD not from the VS 2008 folder (9.0) but from the VS 2010 folder (10.0). Which is .NET 4. Which really is a more recent version of the runtime than was installed on the Windows 2008 R2 server. Embarrassing to be caught out by a completely accurate error message (though if it had listed the versions involved I might have paid attention).
write-host "Generating XMLA"
$asDeploy = "$programfiles32\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe"
& $asDeploy "$pwd\..\bin\MyCube\MyCube.asdatabase" /d /o:"$pwd\MyCube.xmla"

Which works just nicely. Except when we migrated that project to SQL 2008 R2, when it stopped working.
C:\>dumpbin /headers "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" | find /i "subsystem"
            4.00 subsystem version
               3 subsystem (Windows CUI)

C:\>dumpbin /headers "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" | find /i "subsystem"
            4.00 subsystem version
               2 subsystem (Windows GUI)
start-process -FilePath:$asDeploy -ArgumentList:$asdatabase,"/d","/o:$xmla" -Wait;

Maybe I should just do that all the time to be safe, but just being able to use other command line tools within a script without a whole lot of ceremony is one of the really nice bits about PowerShell, so I tend not to. In this case the launch semantics of an existing utility changing between versions seems like a really nasty thing to be caught out by.
computername\SQLServer2005MSSQLUser$computername$MSSQLSERVER

This group is configured by the installer to contain the service account (eg Network Service), and a corresponding SQL login is created (for the Windows group) granting sysadmin rights:
Playing with the Kinect SDK for Windows, and having a ball, but the doco is (understandably) a bit rubbish in places, or to be more specific – lacks critical details around the form that a parameter takes, where that detail is important.
Anyway, this is my list of gotchas so far:
Bizarrely, whether you initialize and open your depth image stream with ImageType.Depth or ImageType.DepthAndPlayerIndex makes the difference between whether what you get is ‘right way round’ or horizontally inverted.
Inverted is generally more useful, because it matches with the ‘mirror image’ video stream. So why isn’t the stream like that always? Seems like an unnecessary inconsistency to me, and one you might want to spell out in the doco.
When you do turn player index tracking on, the depth stream ‘pixels’ are lshifted 3 positions, leaving the lower 3 bits for the player index. This is documented, and I understand you’ve got to put the player index somewhere, but why not make the format consistent in both cases, and just leave the lower bits zero if tracking not enabled? Better still, why not put the (optional) player index in the high bits?
This is especially irritating because...
The nuiCamera.GetColorPixelCoordinatesFromDepthPixel() mapping method expects the 'depthValue' parameter to be in the format it would have been if you had player tracking enabled. If you don't, you'll have to shift it 3 places to the left yourself, just to make it work. So depending on how you set up the runtime, the pixels from one part of the API can or can't be passed to another part of the API. That's poor form, if you ask me.
Not that you’ll find that in the doco of course, least of all the parameter doco.
Ok, so I understand that the depth to video coordinate space translation is a lossy one, but I still don’t see why this method doesn’t exist.
I picked up the Kinect SDK and the first thing I wanted to do was depth-clipping background removal. And the easy way to do this is to loop through the video pixels, and for each find the corresponding depth pixel and see what its depth was. And you can’t do that.
Instead you have to loop through the depth pixels and call the API method to translate to video pixels, but because there are fewer of them than video pixels, you have to paint each out as a 2x2 block, and even then there'll be lots of video pixels you don't process; so many that you have to run the loop twice: once to set all the video pixels to some kind of default state, and once for those that map to depth pixels to put the depth 'on'.
Just didn’t feel right.
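The bit layout described above comes down to a couple of shifts. This is a sketch under the assumption (from the gotchas above) that the SDK packs the distance in the top 13 bits of each 16-bit depth 'pixel' and the player index in the bottom 3; the helper names are mine, not the SDK's:

```csharp
// Sketch: unpacking a depth 'pixel' when player index tracking is enabled
public static class DepthBits
{
    // Top 13 bits: the depth reading itself
    public static int Depth(short raw)
    {
        return (raw >> 3) & 0x1FFF;   // strip off the player index bits
    }

    // Bottom 3 bits: which tracked player this pixel belongs to (0 = none)
    public static int PlayerIndex(short raw)
    {
        return raw & 0x07;
    }

    // Going the other way: a plain ImageType.Depth value has to be shifted up
    // 3 bits before GetColorPixelCoordinatesFromDepthPixel will accept it
    public static short AsApiDepthValue(short plainDepth)
    {
        return (short)(plainDepth << 3);
    }
}
```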
function showMap(position) {
// Show a map centered at (position.coords.latitude, position.coords.longitude).
}
// One-shot position request.
navigator.geolocation.getCurrentPosition(showMap);
The condensed version
Updated: links now point to the Channel 9 site, where the videos will end up
Get-Website : Retrieving the COM class factory for component with CLSID {688EEEE5-6A7E-422F-B2E1-6AF00DC944A6} failed due to the following error: 80040154.