Cup(Of T): Software, Data and Analytics (piers7)

FP = Functional PowerShell (2018-05-23)

I have been writing a lot of F# the last few years, and this has made me look critically at some of my PowerShell scripts. While we may use pipelines all the time in PowerShell, often we still treat it as yet-another-curly-bracket-language when it comes to flow control.<br />
<br />
But this is our preconceptions letting us down, because PowerShell is already much more expression-orientated than that. So, for example, 'if' and 'switch' in PowerShell are (since v2 or something) <i>expressions</i>, and not just pure imperative branching.<br />
<br />
This means that - rather than write this:<br />
<pre><code>
# imperative version
if ($inputValue % 2 -eq 0){
    $result_a = 'even'
}else{
    $result_a = 'odd'
}
Write-Host "Result_A is now $result_A"
</code></pre>
... you could instead write the far superior:<br />
<pre><code>
# expression-orientated version
$result_b = `
    if ($inputValue % 2 -eq 0){
        'even'
    }else{
        'odd'
    }
Write-Host "Result_B is now $result_b"
</code></pre>
This works with 'switch' also:<br />
<pre><code>
# expression-orientated switch version
$c = `
    switch($inputValue){
        1       { 'One' }
        2       { 'Two' }
        default { 'Other' }
    }
Write-Host "C is now $c"
</code></pre>
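For what it's worth, the same expression-orientated style also works for loops - 'foreach' can be assigned straight to a variable, collecting the output of every iteration (this example is mine, not from the original post):

```powershell
# foreach as an expression: each iteration's output is collected into the result
$labels = foreach($i in 1..4){
    if ($i % 2 -eq 0){ 'even' } else { 'odd' }
}
Write-Host "Labels are: $labels"
```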
Sure, PowerShell isn't going to give you any nice warnings about not providing output values on all branches and so forth, but the latter version is just plain <i>easier to follow</i>.

Still no migration path from PowerPivot + PowerQuery to SSAS Tabular in SQL 2016 (RC0) (2016-03-09)

<p>PowerPivot is a great product, and PowerQuery can be fairly awesome as well, but when you run out of steam running your spreadsheet in Excel/SharePoint you are a bit screwed, because if you convert a PowerPivot model to SSAS Tabular (using the SSDT ‘Import from PowerPivot’ project) then all your PowerQuery tables are converted into <strong>pasted data</strong>, because SSAS can’t talk PowerQuery.</p> <p>So in order to ‘upsize’ from PowerPivot to SSAS Tabular you have to:</p> <ul> <li>Re-author all your PowerQueries as traditional SSIS ETLs (or similar) </li> <li>Land the data in a relational database </li> <li>Change your PowerPivot model to source from those tables, rather than from PowerQuery - which actually involves recreating the tables, since for tables sourced from PowerQuery the connection type can’t be changed </li> <li>And recreate the calculated columns and measures </li> <li>And the formatting and sort orders </li> <li>etc… </li> </ul> <p>That looks a lot like ‘rewrite from scratch’ to me, which is a pretty poor option (and a major gotcha with the PowerPivot/PowerQuery approach). So I was pleased to read (somewhere I can’t find now) that this would be addressed in the SQL 2016 timeframe, with PowerQuery supported as a data source for SSAS[1], SSRS[2] and SSIS[3].</p> <p>Only… it seems like it’s actually <strong>not</strong>.</p> <p>I’ve been doing a trial of SQL 2016 using CTP3.3 and RC0, to determine if it fixes an issue we had with PowerPivot KPIs, and it seems like it does. 
However, if SSIS or SSAS can source from PowerQuery I’m blowed if I can see where that functionality is, and the <a href="https://msdn.microsoft.com/en-US/library/bb522628.aspx">release notes</a> have been very quiet on this front.</p> <p>The only concrete thing I’ve found is this tantalizing (and presumably unintentional) bit in the <a href="https://www.microsoft.com/en-us/server-cloud/products/sql-server-2016/">SQL 2016 Preview</a> site’s <a href="http://download.microsoft.com/download/F/C/2/FC21C981-4351-4434-A78A-3384CA7515BF/SQL_Server_2016_Deeper_Insights_Across_Data_White_Paper.pdf">Deeper Insights Across Data</a> white paper:</p> <p><img title="image" style="border-left-width: 0px; border-right-width: 0px; background-image: none; border-bottom-width: 0px; padding-top: 0px; padding-left: 0px; display: inline; padding-right: 0px; border-top-width: 0px" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgrtlcxH2JQwDYX9NVELML6wvMSBVE0k6_vEmBIiooJXqcTKSLjMPnqDi4NaQUKwWJ-FJ_ZD_3rnThSD5Tr9vohherLs_8sZe66jWSIMqMX720ib4WevxsmOEIRspEFO0ZkH8IA/?imgmax=800" width="644" height="125" /></p> <p>I stress that’s not my highlighting :-(</p> <p><font size="1">[1] Can’t find where I originally got this idea from. May have just got the wrong end of the wrong stick <br />[2] See </font><a title="https://gqbi.wordpress.com/2015/05/07/bi-nsight-sql-server-2016-power-bi-updates-microsoft-azure-stack/" href="https://gqbi.wordpress.com/2015/05/07/bi-nsight-sql-server-2016-power-bi-updates-microsoft-azure-stack/"><font size="1">https://gqbi.wordpress.com/2015/05/07/bi-nsight-sql-server-2016-power-bi-updates-microsoft-azure-stack/</font></a><font size="1">. 
That being said, SSRS can use SSIS as a source, so if <em>that</em> was to be implemented… <br />[3] See </font><a title="https://social.msdn.microsoft.com/Forums/en-US/68a06982-4166-4ac9-93c0-3c247a9c64a7/powerquery-within-ssis-in-sql-2016?forum=sqlintegrationservices" href="https://social.msdn.microsoft.com/Forums/en-US/68a06982-4166-4ac9-93c0-3c247a9c64a7/powerquery-within-ssis-in-sql-2016?forum=sqlintegrationservices"><font size="1">https://social.msdn.microsoft.com/Forums/en-US/68a06982-4166-4ac9-93c0-3c247a9c64a7/powerquery-within-ssis-in-sql-2016?forum=sqlintegrationservices</font></a><font size="1"> and </font><a title="http://sqlmag.com/blog/what-coming-sql-server-2016-business-intelligence" href="http://sqlmag.com/blog/what-coming-sql-server-2016-business-intelligence"><font size="1">http://sqlmag.com/blog/what-coming-sql-server-2016-business-intelligence</font></a><font size="1"> or just vote for </font><a title="https://connect.microsoft.com/SQLServer/Feedback/Details/1046883" href="https://connect.microsoft.com/SQLServer/Feedback/Details/1046883"><font size="1">https://connect.microsoft.com/SQLServer/Feedback/Details/1046883</font></a></p>

Driving RGB LED strips from the Raspberry Pi - A Brief Overview (2015-05-19)

In 2013 I put my first set of programmable Christmas lights together, a very simple project which involved hanging a <a href="http://www.adafruit.com/product/306">5m addressable RGB LED strip (based on the LPD8806)</a> outside my house, driven by a Raspberry Pi and my first ever experiments in Python. It was awesome, and because the RPi was on WiFi, I could SSH in from my bed (where I had a clear view of the lights outside the house) and incrementally tweak the light sequence, or add extra patterns I'd dreamt up during the day.<br />
<br />
The LPD8806 strips were a simple way to get started, because the protocol they speak is basically SPI. Since the RPi has hardware SPI support, talking to them is pretty much as simple as writing bytes to <span style="font-family: "Courier New", Courier, monospace;">/dev/spidev0.0</span>. You have to write the <em>right</em> bytes of course, but even that's pretty simple - write as many G,R,B bytes as you have pixels, then a trailing zero. And again, Adafruit have <a href="https://learn.adafruit.com/light-painting-with-raspberry-pi/software">great</a> <a href="https://learn.adafruit.com/digital-led-strip">tutorials</a> with ready-to-roll sample code (including gamma correction).<br />
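As a rough Python sketch of that framing (pixel counts and the device path are illustrative; the Adafruit sample code linked above additionally applies gamma correction, which this skips):

```python
def build_frame(pixels):
    """Build the byte sequence for an LPD8806-style strip, using the
    simplified framing described above: one G,R,B triple per pixel,
    followed by a trailing zero byte to latch the data."""
    out = bytearray()
    for r, g, b in pixels:
        out += bytes([g, r, b])  # the strip expects G,R,B order
    out.append(0)                # latch byte
    return bytes(out)

# Talking to the strip is then just a write to the SPI device:
# with open('/dev/spidev0.0', 'wb', buffering=0) as spi:
#     spi.write(build_frame([(255, 0, 0)] * 160))  # 160 pixels of red
```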
<br />
I was hooked. But what became immediately apparent was that 5m wasn't going to cut it. Wrapped round the 'Christmas tree' in a spiral it suddenly looked a lot smaller. I needed <strong>moar pixels</strong>.<br />
<br />
So last year (2014) I started playing with <a href="https://www.sparkfun.com/products/11821">WS2812</a>-based lights (aka <a href="https://learn.adafruit.com/adafruit-neopixel-uberguide">Neopixels</a>). These have the advantage over the LPD8806s of being significantly cheaper per LED; however, because of the tight timing requirements of their data signal, they are traditionally driven using a microcontroller like an Arduino (which isn't going to get preempted or multitask).<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://cdn.sparkfun.com//assets/parts/8/0/3/7/11821-01.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="200" src="https://cdn.sparkfun.com//assets/parts/8/0/3/7/11821-01.jpg" width="200" /></a></div>
<br />
(The WS2812 LED, with integrated controller chip visible)<br />
<br />
So the question arises: how to control the WS2812s from the Raspberry Pi? That will be the topic of the next post.<br />
<br />
<em>(Of course just when you thought you were on top of it all everything all changes again. Adafruit is now selling a range of LED strips under the brand '</em><a href="https://learn.adafruit.com/adafruit-dotstar-leds/overview"><em>DotStar</em></a><em>' which claim to be a fair bit easier to talk to than WS2812, and as best I can tell from the </em><a href="https://github.com/adafruit/Adafruit_DotStar_Pi"><em>RPi Python demo code</em></a><em> they talk SPI).</em><br />
SMO issues with SQL 2012 SP2 (2014-08-14)

After rolling out SQL 2012 SP2 (CU1) across all our environments, we noticed all our TeamCity builds were going a <b>lot</b> slower, or never completing. I didn't make the connection at first, but I eventually tracked it down to an SMO call in our install script that was taking a <i>long</i> time to return (actually I think it either deadlocks or never returns on our busy build server).<br />
<br />
Firing up profiler I was able to confirm that the SMO call in question (querying a database's DatabaseSnapshotBaseName property) was resulting in multiple SQL queries hitting the server, one of which in particular was fairly massive. We were doing this in an inner loop, so the performance was fairly rubbish - we had a classic ORM-lazy-load issue.<br />
<br />
Clearly we should have looked at <a href="http://msdn.microsoft.com/en-us/library/ms210395.aspx">SetDefaultInitFields</a> before, but the performance was always good enough that we didn't have to bother. Which makes me suspect <b>strongly</b> that either the defaults have changed with SP2 (CU1), or the 'get all properties for the database' query has changed to include additional data (probably something cloud related), which reduces the performance substantially in the 'get everything' case.<br />
<br />
What's a bit nasty here is that you also get this query executed if you reference the SMO JobServer object, since (as seen through ILSpy) one of the first things that class's constructor does is access the StringComparer property for the MSDB database object:<br />
<blockquote class="tr_bq">
<span style="font-weight: bold;">this</span>.m_comparer = parentsrv.Databases[<span style="color: blue;">"msdb"</span>].StringComparer;</blockquote>
This object depends on a couple of server properties, but I think 'Collation' is the key one here - if that's not loaded then the newer 'all properties' query kicks in, which is again much slower than it was before.<br />
<br />
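In PowerShell, the fix looks something like the below (assembly name shown for SQL 2012 SMO; the server name is illustrative):

```powershell
Add-Type -AssemblyName 'Microsoft.SqlServer.Smo, Version=11.0.0.0, Culture=neutral, PublicKeyToken=89845dcd8080cc91'
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'localhost'

# Pre-fetch Collation (plus whatever else you actually use), so touching it
# later doesn't trigger the expensive 'get all properties' query per database
$server.SetDefaultInitFields(
    [Microsoft.SqlServer.Management.Smo.Database],
    'Name', 'Collation')
```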
<b>Moral</b>: be sure to add 'Collation' to your SetDefaultInitFields() list for Databases.

'Specified wildcard pattern is invalid' with Octopus Deploy and PSake (2014-07-22)

So we upgraded to Octopus Deploy v2, and all our deployments are now failing with this error:<br />
<br />
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
22/07/2014 11:51:08 AM: An Error Occurred: </div>
</div>
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-date" style="position: absolute; right: 7px; text-align: right; white-space: pre-wrap; width: 140px;">
Info <span title="Tuesday, July 22 2014 11:51 AM">11:51:08</span></div>
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
Test-Path : Cannot retrieve the dynamic parameters for the cmdlet. The </div>
</div>
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-date" style="position: absolute; right: 7px; text-align: right; white-space: pre-wrap; width: 140px;">
Info <span title="Tuesday, July 22 2014 11:51 AM">11:51:08</span></div>
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
specified wildcard pattern is not valid: </div>
</div>
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-date" style="position: absolute; right: 7px; text-align: right; white-space: pre-wrap; width: 140px;">
Info <span title="Tuesday, July 22 2014 11:51 AM">11:51:08</span></div>
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
Octopus.Environment.MachinesInRole[myproject-sql-node]</div>
</div>
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
At D:\Octopus\Applications\Test\ORDW.Staging.TPPS\7.0.310-trunk_1\Tools\Install</div>
</div>
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-date" style="position: absolute; right: 7px; text-align: right; white-space: pre-wrap; width: 140px;">
Info <span title="Tuesday, July 22 2014 11:51 AM">11:51:08</span></div>
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
\psake.psm1:357 char:17</div>
</div>
<div class="log-line Info" style="background-color: white; clear: both; color: #444444; font-family: Monaco, Menlo, Consolas, "Lucida Console", "Courier New", monospace; font-size: 12px; line-height: 20px; padding-left: 10px; padding-right: 7px;">
<div class="log-date" style="position: absolute; right: 7px; text-align: right; white-space: pre-wrap; width: 140px;">
Info <span title="Tuesday, July 22 2014 11:51 AM">11:51:08</span></div>
<div class="log-message" style="margin-right: 150px; white-space: pre-wrap;">
+ if (test-path "variable:\$key") {</div>
</div>
<br />
Our install process is quite complex, so we use Psake to wrangle it. Integration between the two is relatively straightforward (in essence we just bind $octopusParameters straight onto psake's -properties), and I could see from the stack trace that the failure was actually happening within the PSake module itself. And given the error spat out the variable that caused the issue, I figured it was to do with the variable name.<br />
<br />
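For context, the handoff between the two amounts to something like this (a simplified sketch of our install script, not the actual code):

```powershell
Import-Module .\Tools\psake.psm1
# Bind the Octopus variables dictionary straight onto psake's -properties,
# so each deployment variable surfaces as a PowerShell variable in the build
Invoke-psake .\install.ps1 -properties $OctopusParameters
```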
Most of the variable names are the same as per Octopus Deploy v1, but we do now get some extra ones, in particular the 'Octopus.Environment.MachinesInRole[role]' one. But that's not so different from the type of variables we've always got from Octopus, eg: 'Octopus.Step[0].Name', so what's different?<br />
<br />
Where psake is failing is where it pushes properties into variable scope for each of the tasks it executes as part of the 'build', and apparently it's choking because <b>test-path</b> doesn't like it. So I put together some tests to exercise test-path with different variable names, and find out when it squealed. This all works against the same code as runs in psake, ie:<br />
<br />
<span style="background-color: white; color: #444444; font-family: Monaco, Menlo, Consolas, 'Lucida Console', 'Courier New', monospace; font-size: 12px; line-height: 20px; white-space: pre-wrap;">test-path "variable:\$key"</span><br />
<br />
<table>
<colgroup><col></col><col></col></colgroup>
<tbody>
<tr><th>$key</th><th>Result</th></tr>
<tr><td>a</td><td>Ok</td></tr>
<tr><td>a.b</td><td>Ok</td></tr>
<tr><td>a-b</td><td>Ok</td></tr>
<tr><td>b.a</td><td>Ok</td></tr>
<tr><td>b-a</td><td>Ok</td></tr>
<tr><td>a.b.c</td><td>Ok</td></tr>
<tr><td>a-b-c</td><td>Ok</td></tr>
<tr><td>c.b.a</td><td>Ok</td></tr>
<tr><td>c-b-a</td><td>Ok</td></tr>
<tr><td>a[a]</td><td>Ok</td></tr>
<tr><td>a[a.b]</td><td>Ok</td></tr>
<tr><td>a[a-b]</td><td>Ok</td></tr>
<tr><td>a[b.a]</td><td>Ok</td></tr>
<tr><td><span style="color: red;">a[b-a]</span></td><td><span style="color: red;">Cannot retrieve the dynamic parameters for the cmdlet. </span><br />
<span style="color: red;">The specified wildcard pattern is not valid: a[b-a]</span></td></tr>
<tr><td>a[a.b.c]</td><td>Ok</td></tr>
<tr><td>a[a-b-c]</td><td>Ok</td></tr>
<tr><td>a[c.b.a]</td><td>Ok</td></tr>
<tr><td><span style="color: red;">a[c-b-b]</span></td><td><span style="color: red;">Cannot retrieve the dynamic parameters for the cmdlet. </span><br />
<span style="color: red;">The specified wildcard pattern is not valid: a[c-b-b]</span></td>
</tr>
<tr><td>a[b-b-a]</td><td>Ok</td></tr>
</tbody></table>
<br />
I've highlighted the failure cases, but what's just as interesting is which cases <i>pass</i>. This gives a clue as to the underlying implementation, and why the failure happens.<br />
<br />
To cut a long story short, it appears that any time you use <b>square brackets</b> in your variable name, PowerShell uses <i>wildcard matching</i> to parse the content within the brackets. <b>If</b> that content contains a hyphen, the letters before and after the <b>first hyphen</b> are used as a range for matching, and if the range end is prior to the range start (ie: alphabetically earlier), you get an error.<br />
<br />
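You can reproduce this at a plain PowerShell prompt, with no Octopus or PSake involved at all:

```powershell
Test-Path "variable:a[a-b]"   # fine: 'a-b' is a valid (ascending) character range
Test-Path "variable:a[b-a]"   # throws: the range end sorts before the range start
```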
Nasty.<br />
<br />
It's hard to know who to blame here. Octopus makes those variables based on what roles you define in your environment, though I'd argue the square brackets are potentially a bad choice. PSake is probably partly culpable, though you'd be forgiven for thinking that what they were doing was <i>just fine</i>, and there's no obvious way of suppressing the wildcard detection. Ultimately I think this is probably a PowerShell bug. Whichever way you look at it, the chances of me getting it fixed soon are fairly slight.<br />
<br />
In this case I can just change all my Octopus role names to use dots not hyphens, and I <i>think</i> I'll be out of the woods, but this could be a right royal mess to fix otherwise. I'd probably have to forcefully remove variables from scope just to keep PSake happy, which would be ugly.<br />
<br />
Interestingly, the documentation for <b>Test-Path</b> is a bit confused as to whether wildcard matching is or isn't allowed here - the description says wildcards are permitted, but 'Accept wildcard characters' claims otherwise:<br />
<br />
<pre>PS C:\> help Test-Path -parameter:Path

-Path &lt;string&gt;
    Specifies a path to be tested. <b>Wildcards are permitted</b>. If the path includes
    spaces, enclose it in quotation marks. The parameter name ("Path") is optional.

    Required?                    true
    Position?                    1
    Default value
    Accept pipeline input?       True (ByValue, ByPropertyName)
    <b>Accept wildcard characters?  false</b>
</pre>
<br />
Also interesting is that <b>Set-Variable </b>suffers from the same issue, for exactly the same cases (and wildcarding definitely doesn't make any sense there). Which means you can do this:<br />
<blockquote class="tr_bq">
${a[b-a]} = 1</blockquote>
but not this<br />
<blockquote class="tr_bq">
Set-Variable 'a[b-a]' 1</blockquote>
Go figure.<br />
<br />
<br />
<strong>Update 23/7:</strong><br />
<ul>
<li>You can work around this by escaping the square brackets with backticks, eg Set-Variable 'a`[b-a`]'</li>
<li>I raised a <a href="https://connect.microsoft.com/PowerShell/feedback/details/926973/set-variable-throws-the-specified-wildcard-pattern-is-not-valid-if-variable-name-contains-or-chars-in-some-cases">Connect issue</a> for this, because I think this is a bug</li>
<li>I've raised an <a href="https://github.com/psake/psake/issues/116">issue with Psake</a> on this, because I think they should go the escaping route to work around it.</li>
</ul>
It's time to bring Data Tainting to the CLR (2014-04-15)

Last week's <a href="http://heartbleed.com/">Heartbleed</a> bug once again exposes the shaky foundations of current software development processes. Decades-old issues such as inadvertent use of user-input values, and unintended access to memory (whether read or write), continue to be critical risks to the infrastructure on which our economies are increasingly reliant.<br />
<br />
How do we deal with these risks? We implore developers to be more careful. This is clearly not working.<br />
<br />
Risk-exposed industries (like resources) have mature models to categorize risk, and to rate the effectiveness of mitigation strategies. Procedural fixes (telling people not to do something, putting up signs) rate at the bottom, ranked below physical guards and the like. At the top are approaches that address the risk by <strong>doing something fundamentally less risky</strong> - engineering the risk away. As an industry, we could learn a thing or two here.<br />
<br />
At times we have. The introduction of managed languages all but eliminated direct memory access bugs from those environments - no more buffer overruns, or read-after-free - but did little or nothing to address the issue of user input. And yet this is arguably the more important of the two - you still need untrustworthy data to turn direct memory access into a security hole. We just moved the problem elsewhere, and had a decade of <a href="http://en.wikipedia.org/wiki/Sql_injection">SQLi</a> and <a href="http://en.wikipedia.org/wiki/Cross-site_scripting">XSS</a> attacks instead.<br />
<br />
I think it's time to fix this. I think it's time we made the <em>trustworthiness </em>of data a first-class citizen in our modern languages. I think it's time to <a href="http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/5754648-implement-data-tainting-in-the-clr"><strong>bring Data Tainting to the CLR</strong></a>.<br />
<br />
Data Tainting is a language feature where all objects derived[1] from user input are flagged as 'tainted' unless explicitly cleared, a bit like that infamous <a href="http://www.howtogeek.com/70012/what-causes-the-file-downloaded-from-the-internet-warning-and-how-can-i-easily-remove-it/">'downloaded from the internet' flag</a> that blocks you from running your downloads. Combined with other code paths asserting that their arguments are <u>un</u>tainted, this can <strong>largely eliminate the problem of unsanitized user input being inadvertently trusted.</strong><br />
<br />
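To make the idea concrete, here is an illustrative sketch in Python (nothing to do with any real CLR API - all names are mine): values derived from user input carry a taint flag, combining them taints the result, and sensitive sinks refuse tainted input until it is explicitly sanitized.

```python
class Tainted:
    """Wrapper marking a value as derived from (untrusted) user input."""
    def __init__(self, value):
        self.value = value

def raw(v):
    return v.value if isinstance(v, Tainted) else v

def combine(a, b):
    # Taint spreads through operations: if either input is tainted, so is the result
    result = raw(a) + raw(b)
    return Tainted(result) if isinstance(a, Tainted) or isinstance(b, Tainted) else result

def run_query(sql):
    # A sensitive sink asserts its input is untainted
    if isinstance(sql, Tainted):
        raise ValueError('refusing to use unsanitized user input')
    return 'executed: ' + sql

def sanitize(v):
    # Real code would validate/escape here; clearing the taint is an
    # explicit, auditable act
    return raw(v)

user_input = Tainted("bob'; drop table users;--")
query = combine("select * from users where name = '", user_input)
# run_query(query) raises; run_query(sanitize(query)) is allowed
```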
<a href="http://en.wikipedia.org/wiki/Taint_checking">Taint Checking</a> is not a new idea, it's just not very common outside of academia[2] (Perl and Ruby are the only extant languages I know of that support it, having strangely failed to take hold in JavaScript after Netscape introduced it <em>last century</em>). But it's exactly what we need if we are to stop making the same security mistakes, over and over again.<br />
<br />
It has bugged me for over a decade that this never took off, but the last straw for me was questions like this on Security StackExchange: <a href="https://security.stackexchange.com/questions/55163/would-the-heartbleed-bug-have-been-prevented-if-openssl-was-written-in-go-d-vala">Would Heartbleed have been prevented if OpenSSL was written in (blah)...</a> Why? Because the depressing answer is <strong>no</strong>. Whilst a managed implementation of TLS could not have a direct memory scraping vulnerability, the <em>real</em> bug here - that the output buffer sizing was based on what the client wrote in the input header - is not prevented. So the flaw could still be misused, perhaps allowing an attacker to DOS the SSL endpoint somehow.<br />
<br />
Raw memory vulnerabilities are actually quite hard to exploit: you need to know a bit about the target memory model, and have a bit of luck too. Unsanitized input vulnerabilities, once you know about the flaw, are like shooting fish in a barrel: this input string here is <em>directly</em> passed to your database / shown to other users / used as your balance / written to the filesystem etc... The myriad ways we can find to exploit these holes should not be a testament to our ingenuity, but should highlight an elephant in the room: <strong>it's still too hard to write secure code</strong>. Can we do more to fall into the pit of success? I think so.<br />
<br />
Implementing taint checking in the CLR will not be a trivial task by any means, so Microsoft are going to take a fair bit of persuading that it matters enough to commit to. And that's where I need your help:<br />
<ul>
<li>If any of this resonates with you, please vote for my user voice suggestion: <a href="http://visualstudio.uservoice.com/forums/121579-visual-studio/suggestions/5754648-implement-data-tainting-in-the-clr"><strong>bring Data Tainting to the CLR</strong></a></li>
<li>If you think it sucks, comment on it (or this post) and tell me why</li>
</ul>
Imagine a world where Heartbleed could not happen. How do we get there?<br />
<br />
Next time: what taint checking might look like on the CLR.<br />
<br />
<span style="font-size: x-small;">[1] Tainting spreads through operators, so combining tainted data with other data results in data with the taint bit set.</span><br />
<span style="font-size: x-small;">[2] Microsoft Research did a good review of the landscape here, if you can wade through the overly-theoretical bits: </span><a href="http://research.microsoft.com/pubs/176596/tr.pdf"><span style="font-size: x-small;">http://research.microsoft.com/pubs/176596/tr.pdf</span></a>piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-73352712875596731082014-04-10T16:28:00.000+08:002014-04-10T16:28:36.230+08:00Could not load file or assembly ApprovalTestsIf you get the following error when running <a href="http://approvaltests.sourceforge.net/">ApprovalTests</a>...
<blockquote>Could not load file or assembly 'ApprovalTests, Version=3.0.0.0, Culture=neutral, PublicKeyToken=11bd7d124fc62e0f' or one of its dependencies. The located assembly's manifest definition does not match the assembly reference. (Exception from HRESULT: 0x80131040)</blockquote>
...make sure the assembly that your <b>tests</b> are in is not <i>itself</i> called ApprovalTests. As perhaps it might be if you were just demoing it to someone quickly. Doh!

Another PowerShell Gotcha - Order of precedence and type inference (2014-03-15)

There was a fun moment at last week's Perth Dot Net User Group when the presenter (demonstrating Octopus Deploy's new scripting console) put in the following PowerShell:<br />
<br />
<code>
> Write-Host [Environment]::MachineName<br />
</code>
<br />
And got back (literally)<br />
<br />
<code>[Environment]::MachineName<br />
</code>
<br />
...which wasn't what he expected <em>at all. </em>It's worth pausing to consider why this happens.<br />
<br />
Remember of course that when passing strings as arguments in PowerShell, the quotes are optional - it would make a fairly poor shell replacement otherwise. So for example the following is completely valid:<br />
<br />
<code>> Write-Host Hello!</code><br />
<br />
This is the reason<sup>[1]</sup> why variables in PowerShell have the dollar prefix - it makes their use in binding fairly unambiguous (just as the @ symbol in Razor achieves the same thing).<br />
<br />
<code>
> $a = 'Moo'<br />
> Write-Host $a</code><br />
<code>Moo<br />
</code>
<br />
If you <em>really </em>wanted to write '$a' you'd have to enclose it in single quotes (as I just did) or escape the dollar symbol.<br />
<br />
Anyway back to the original problem, you can see that PowerShell has two possible ways of interpreting<br />
<br />
<code>
> Write-Host [Environment]::MachineName<br />
</code>
<br />
...and since it doesn't start with a $, you get the 'bind it as a literal' behavior, which - in this case - gives you a string (since it's clearly not a number).<br />
<br />
What you really wanted was one of the following:<br />
<br />
<code>
> Write-Host ([Environment]::MachineName)<br />
> Write-Host $([Environment]::MachineName)</code><br />
<code>SOMECOMPUTERNAME<br />
</code>
<br />
They both give the intended result, by forcing the expression within the brackets to be evaluated <em>first </em>(which on its own is unambiguous to the parser), and then passing the <em>result</em> of that as an argument to the bind for Write-Host.<br />
<br />
This is a really important trick to know, because it will otherwise bite you again and again when you try and call a .Net method, and attempt to supply a parameter via an expression, for example:<br />
<br />
<code>$someClass.SomeMethod($a.length -1)</code>
<br />
...when what you need to say is<br />
<br />
<code>$someClass.SomeMethod(($a.length -1))</code>
<br />
Key take-home: <strong>When in doubt, add more brackets<sup>[2]</sup></strong><br />
<br />
<span style="font-size: x-small;">[1] presumably</span><br />
<span style="font-size: x-small;">[2] parenthesis</span>piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-55761349441254760242013-05-25T00:04:00.001+08:002013-05-25T00:04:49.662+08:00Sql trivia: BINARY_CHECKSUMYou know how in SQL Books online the doco tells you to be wary of using <a href="http://msdn.microsoft.com/en-us/library/ms189788.aspx">CHECKSUM</a> and <a href="http://msdn.microsoft.com/en-us/library/ms173784.aspx">BINARY_CHECKSUM</a> functions because they will miss some updates. Here's a trivial example:<br />
<br />
<pre>select
    BINARY_CHECKSUM(A),
    BiggestInt,
    BINARY_CHECKSUM(BiggestInt)
from (
    select
        cast(null as int) as A,
        Power(cast(2 as bigint), 31) - 1 as BiggestInt
) x
</pre>
<br />
Which returns 2147483647 in all 3 cases. So essentially BINARY_CHECKSUM is blind to the difference between a null and the maximum int value. Which is fair enough... but it does illustrate the point that even if you are only doing a checksum on a single nullable 4-byte field, you can't stuff it into 32 bits without getting at least one collision.

Google Reader In Memoriam (2013-03-14)

<blockquote class="tr_bq">
So, Farewell Then Google Reader<br />
Your feed<br />
Has expired<br />
Like Bloglines,<br />
Which you killed<br />
Off</blockquote>
<blockquote class="tr_bq">
I guess<br />
Now<br />
We'll just have to use<br />
Something else<br />
Instead</blockquote>
Google Reader was 7 1/2<br />
(With apologies to <a href="http://en.wikipedia.org/wiki/E._J._Thribb">E.J.Thribb</a>)piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-50761138982356220392013-03-14T12:03:00.002+08:002013-03-14T12:03:55.882+08:00Installing the Analysis Services Deployment UtilityThe <a href="http://msdn.microsoft.com/en-us/library/ms162758.aspx">Analysis Services Deployment Utility</a> is a utility that can be used to deploy Analysis Services build outputs (.asdatabase files) to a server, or to generate the XMLA for offline deployment.<br />
<br />
I've often used this as part of an automated installation process to push releases out into an integration environment, but on this project I wanted to perform this installation as part of a nightly build. It failed - because the utility (and its dependencies) weren't installed on the build server.<br />
<br />
I wasn't entirely sure what I needed to get it installed (and I was attempting to install the minimum amount of stuff). It turns out this tool is distributed as part of <b>Sql Management Studio</b>, not the SSDT/BIDS projects (as I'd previously assumed). I'm not sure whether the Basic or Complete option is required, because I picked 'Complete' and that fixed it.<br />
<br />
Also, for 2012 the path has changed, and the utility is now at:<br />
<br />
<span style="font-family: Courier New, Courier, monospace;">C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\<b>ManagementStudio</b>\Microsoft.AnalysisServices.Deployment.exe</span><br />
<br />piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-52734516008332680422013-03-07T17:42:00.001+08:002013-03-09T22:14:44.291+08:00Installing BIDS project templates for SQL 2012You can <i>absolutely </i>be forgiven for being confused about how to install the BIDS project templates for SQL 2012. They've moved to the new <a href="http://msdn.microsoft.com/en-us/data/hh297027">Sql Server Data Tools</a> (SSDT), but Microsoft are themselves inconsistent about what SSDT <i>is</i>. To make matters worse, you've got Visual Studio 2010 and 2012 versions of each package, depending on what IDE you want to work in.<br />
<br />
I'll attempt to clarify:<br />
<br />
<ul>
<li>SSDT is only the updated <b>SQL database</b> project template, that replaces the Database Edition GDR / datadude stuff (and subsumes the SQL-CLR project type)</li>
<li>SSDT-BI is the other, ex-BIDS projects: SSIS, SSRS, SSAS</li>
</ul>
<div>
Unfortunately the SQL 2012 install media uses the term 'Sql Server Data Tools' to refer to both at the same time, and up-until-last-week the SSDT-BI project didn't exist outside of the SQL install media. Much confusion and delay. Hopefully the following guidance clears it up a bit:</div>
<div>
<br /></div>
<h3>
If you only care about the SQL Server Database project</h3>
<br />
(eg: you are a C# developer, and you keep your SQL database schema as a project in your solution)<br />
<br />
Install the appropriate version of SSDT that matches the version of Visual Studio (2010 or 2012) you're using right now (or both if necessary):<br />
<ul>
<li><a href="http://msdn.microsoft.com/en-us/jj650014">SSDT for Visual Studio 2010</a></li>
<li><a href="http://msdn.microsoft.com/en-us/jj650015">SSDT for Visual Studio 2012</a></li>
</ul>
<div>
Note these <b>do not</b> include the BIDS project templates (SSRS, SSIS, SSAS); they <b>only</b> include the new SQL Server Database project template.<br />
<br /></div>
<h3>
If you are a BI developer, and want the lot</h3>
<h4>
...in a Visual Studio 2010 Shell</h4>
<div>
Install the 'Sql Server Data Tools' component from the SQL 2012 install media. This gets you everything you need.<br />
Optionally, <i>also </i>install the updated version of the SQL project template (only) by installing <a href="http://msdn.microsoft.com/en-us/jj650014">SSDT for Visual Studio 2010</a><br />
<h4>
...in a Visual Studio 2012 Shell</h4>
<div>
</div>
</div>
Install the standalone version of <a href="http://msdn.microsoft.com/en-us/jj650015">SSDT for Visual Studio 2012</a> (for the database project) and <a href="http://www.microsoft.com/en-us/download/details.aspx?id=36843">SSDT-BI for Visual Studio 2012</a> (for the SSRS, SSIS, SSAS templates)<br />
<h4>
...but don't know which shell to use</h4>
<div>
If you plan to create a single Visual Studio Solution (.sln) combining BIDS artifacts, database projects and other project types (e.g. C# or VB projects), then that will determine your choice here. It's certainly easier working in just one IDE than having to have two open.</div>
<div>
<br /></div>
<div>
Otherwise just pick one. You might be swayed by some of the new VS 2012 features; then again, the 2010 version is already on the install media, so that option means less downloading. Given Microsoft shipped against one version but now support the next, as far as I can see they'll have to support both versions going forwards, at least for a couple of years.</div>
<div>
<br /></div>
<div>
<i><br /></i></div>
<div>
<i>Editors Note: This rewrite replaces the sarcastic rant I had here previously, which was quite cathartic, but not desperately helpful in navigating the landscape here.</i></div>
piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-87617069257428972212013-02-21T10:49:00.001+08:002013-02-27T21:31:09.330+08:00INotifyPropertyChanged: Worst. Interface. Ever<p>As any .Net UI developer will tell you, <a href="http://msdn.microsoft.com/en-us/library/system.componentmodel.inotifypropertychanged.aspx">INotifyPropertyChanged</a> is a fundamental part of 'binding' an object to a UI control. Without it binding is essentially one-way: changes in the control change the object, but if this has a ripple effect on other properties, or properties are changed by other 'below the UI' processes, the UI can't know to repaint. This is essentially an implementation of the Observer pattern[1]. <br /> <br />Unfortunately it's not for free - you have to implement it yourself - and that's where the problems start. <i>So much</i> has been written on the pain of implementing INotifyPropertyChanged (INPC for short) that I need not repeat it all here. 
It's generated <a href="http://stackoverflow.com/search?q=inotifypropertychanged">so many questions on StackOverflow</a> you'd think it's due its own StackExchange site by now.</p> <p>The principal complaints are around all the boilerplate code and magic strings required to implement, so for the sake of completeness I'll summarize some of the solutions available:</p> <ul> <li>Design-time assistance to crank out the boilerplate through Snippets, or tools like <a href="http://blogs.jetbrains.com/dotnet/2012/07/inotifypropertychanged-support-in-resharper-7/">ReSharper</a> that also facilitate refactoring (without breaking magic strings)</li> <li>Compile-time IL re-writing approaches such as <strike>NotifyPropertyWeaver</strike> <a href="https://github.com/Fody/Fody#readme">Fody</a> and <a href="http://www.sharpcrafters.com/solutions/notifypropertychanged">PostSharp</a></li> <li>Run-time implementations using reflection, expression trees (such as in <a href="https://caliburnmicro.codeplex.com/wikipage?title=Basic%20Configuration%2c%20Actions%20and%20Conventions&referringTitle=Documentation">Caliburn.Micro's PropertyChangedBase</a>), call interception using ContextBoundObject or Dynamic Proxies (e.g. 
<a href="http://serialseb.blogspot.com.au/2008/05/implementing-inotifypropertychanged.html">Castle</a> or <a href="http://www.deanchalk.me.uk/post/WPF-e28093-Easy-INotifyPropertyChanged-Via-DynamicObject-Proxy.aspx">roll-your-own</a>), or the use of <a href="http://jesseliberty.com/2012/06/28/c-5making-inotifypropertychanged-easier/">[CallerMemberName] in .Net 4.5</a></li> <li>Employing the <a href="http://www.martinfowler.com/bliki/ValueObject.html">Value Object</a> pattern and turning all your your properties into some kind of Notifiable<T> (<a href="http://ayende.com/blog/4107/an-easier-way-to-manage-inotifypropertychanged">Ayende has a good example</a>, <a href="http://jeffhandley.com/archive/2008/10/07/inotifypropertychanged----searching-for-a-better-way.aspx">here’s another</a>). This does mean changing all your binding however (x.FirstName becomes x.FirstName.Value)</li> </ul> Pick one of these, stick with it and you're done. Ok, there's still a bit of griping about separation of concerns, and whether <a href="http://neilmosafi.blogspot.com.au/2008/07/is-inotifypropertychanged-anti-pattern.html">this is an anti-pattern</a>, but you're done right? <br /> <br />No. <br /> <br />It's <i>so much</i> worse than that. <br /> <br />Do you remember the first time you implemented GetHashCode(), and later when you <a href="http://stackoverflow.com/questions/263400/what-is-the-best-algorithm-for-an-overridden-system-object-gethashcode">realized you'd done it wrong</a>? And later when you <i>really</i> realized you'd done it wrong, that there was <em>no good way</em> you could override Equals() for a mutable object, and the sneaking realization that <a href="http://blogs.msdn.com/b/ericlippert/archive/2011/02/28/guidelines-and-rules-for-gethashcode.aspx">this whole problem existed only for the benefit of Hashtables</a>? It's a bit like that. 
<br /> <br />What we have with INotifyPropertyChanged is an <i>implicit contract</i>, that is to say that a large part of the contract can't be formally defined in code. Which means you have to validate your implementation manually. In this case the implicit bit is about <b>threading</b>. INotifyPropertyChanged exists to support UI frameworks and (bizarrely, in this day-and-age) they are still single threaded, and can only execute on the thread that constructed them – including event handlers. Think about this a bit, and you will eventually conclude: <br /> <blockquote class="tr_bq"><b>An object that implements INotifyPropertyChanged must raise the PropertyChanged event only on the thread that was originally used to construct any registered subscribers for that event</b></blockquote> <p>Now <i>there's</i> a problem[2]. <br /> <br />Clearly this is something that's just not possible to check for at runtime, so your design has to cater for this. Passing objects that <em>might </em>have been bound to business logic that might mutate them? UI thread please. Adding an item into a collection that <em>might </em>be ObservableCollection? UI thread please. Doing some calculations in the background to pass back to an object that <em>may </em>have been bound? Marshal via UI thread please. And so on. And don't even get me started on what you do if you have two (or more) 'UI' threads[3]. <br /> <br />This is a horrible, horrible creeping plague of uncertainty that spreads through your UI, where the validity of an operation can't be determined at the callsite, but must also take into account the underlying type of an object (violating polymorphism), where that object came from (violating encapsulation), and what thread is being used to process the call (violating all that is sacred). These are aspects that we just can't model or visualize well with current tooling, at least not at design time, and <b>none</b> of the solutions above will save you here. 
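<br /> <br />The marshalling discipline this contract forces is simple enough to sketch. Here's a language-neutral illustration in Python, with a plain queue standing in for the UI dispatcher — all the names here are hypothetical, not any real framework's API: the worker never invokes change handlers directly, it posts them for the 'UI' thread to run.

```python
import queue
import threading

ui_queue = queue.Queue()  # stands in for the single-threaded UI dispatcher

def raise_property_changed(handlers, sender, name):
    # A worker must never invoke bound handlers directly; instead it
    # posts the notification for the 'UI' thread to execute
    ui_queue.put(lambda: [h(sender, name) for h in handlers])

changes = []
handlers = [lambda sender, name: changes.append(name)]

# background logic mutates a 'bound' object and signals the change...
worker = threading.Thread(
    target=raise_property_changed, args=(handlers, object(), "FirstName"))
worker.start()
worker.join()

# ...and only the 'UI thread' (here, the main thread) runs the handlers
while not ui_queue.empty():
    ui_queue.get()()

print(changes)  # ['FirstName']
```

The hard part, as described above, isn't the mechanism — it's knowing, at every callsite, whether the object you're mutating <em>might</em> have a subscriber with thread affinity.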
<br /> <br />So there you go. INotifyPropertyChanged: far, <i>far</i> worse than you imagined. <br /> <br /> <br /><font size="1">[1] ok, <i>any</i> use of .net events could be argued is Observer, but the intent here is the relevant bit: the object is explicitly signalling that it's changed state. <br />[2] Actually I've over-simplified, because you can have whole chains of objects listening to each other, and if <i>any one of them</i> is listened to by an object with some type of thread-affinity, that's the constraint you have to consider. <br />[3] Don’t try this at home. There are any number of lessons you’ll learn the hard way.</font></p> piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com7tag:blogger.com,1999:blog-17332816.post-66739386149431130412013-02-20T15:39:00.002+08:002013-02-20T15:39:43.107+08:00Crazy reference leakage using extension methods and genericsIt appears that if you have an extension method, that is in-scope (i.e. namespace included), <strong>even if you don't use it </strong>you have to reference all the assemblies that are part of the generic type constraint. <br />
<br />
This kinda sucks.<br />
<br />
Normally if you use a type from another assembly, and that type has as part of its interface another type in another assembly, you have to reference both assemblies. Fine. <strong>But only if you use it.</strong><br />
<br />
eg: if you reference assembly 'Animal' and use a class 'Cow' that has a property 'Color', and the type of Color is defined somewhere else (System.Windows.Forms) you have to reference that too. But if you get the cow via the IBovine interface, and that doesn't expose Color, you don't need the reference (at least not statically).<br />
<br />
If, however, in the same namespace, there's an extension method that you're not using, and that extension method has some type constraints, you have to reference all the assemblies for all the type constraint parameters.<br />
<br />
For example, if you put this in one assembly, in a namespace <i>you merely import</i>:<br />
<br />
<code></code><br />
<pre><code>public static class NotUsed{
    public static void DefinitelyNotUsed&lt;TContext&gt;(this TContext context, Action&lt;DataContext&gt; thing)
        where TContext : DataContext
    {
    }
}
</code></pre>
<code>
</code>
... then you'll also have to pull in System.Data.Linq to get the 'importing' assembly to compile.<br />
<br />
Doesn't this strike you as odd? I'm sure I can hear a gestalt Eric Lippert in my head explaining why, but I was certainly surprised.piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-17695513147266291062013-02-20T15:30:00.002+08:002013-02-20T15:30:47.364+08:00Get-Member on empty collectionsPowerShell's pipeline just <i>loves</i> to unravel collections, with the result that sometimes, when you want to do something on the collection itself, you can't. Like with Get-Member:<br />
<pre><blockquote>
$blah.Catalogs | Get-Member
<span style="color: red;">Get-Member : No object has been specified to the get-member cmdlet.</span>
</blockquote>
</pre>
<div>
What happened? Did $blah.Catalogs return null, or did it return an empty IEnumerable? This has bitten me a few times, especially when poking around in an API for the first time (ie: at this point I have no idea what 'Catalogs' is, whether it's ICollection or whatever).</div>
<div>
<br /></div>
<div>
The answer, I realize, is to avoid the pipeline:</div>
<pre><blockquote>
Get-Member -inputObject:$blah.Catalogs
TypeName: BlahNamespace.BlahCollection
Name MemberType Definition
---- ---------- ----------
Add Method System.Void Add(Microsoft.SqlServer.Management.IntegrationSer...
Clear Method System.Void Clear()
</blockquote>
</pre>
<div>
Much better.</div>
piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com1tag:blogger.com,1999:blog-17332816.post-77508900613023581972013-02-04T15:52:00.001+08:002013-02-04T15:52:37.935+08:00SQL 2012 Data Tier Applications explained<b>You:</b> So what are these DACPAC things then?<br />
<b>Microsoft:</b><br />
<blockquote class="tr_bq">
"A DAC is a database lifecycle management and productivity tool that enables declarative database development to simplify deployment and management. A developer can author a database in SQL Server Data Tool database project and then build the database into a DACPAC for handoff to a DBA"</blockquote>
<blockquote class="tr_bq">
<a href="http://msdn.microsoft.com/en-us/library/ee210546.aspx">http://msdn.microsoft.com/en-us/library/ee210546.aspx</a> </blockquote>
<br />
<b>You:</b> Hey, that sounds familiar. How is that different from Visual Studio 2010 Database Edition aka DataDude aka GDR aka VSDBCMD.exe?<br />
<b>Microsoft:</b> They're, like, totally different ok.<br />
<br />
<b>You:</b> How so?<br />
<b>Microsoft:</b> Well they just are. DACPACs <i>replace</i> all that GDR stuff. That was just crazy stuff the Visual Studio guys came up with you know. This is the real deal from the SQL product team. And we can package data too, in BACPACs.<br />
<br />
<b>You:</b> Awesome. So this'll solve that problem about also upgrading reference data when I push out a new version of the schema?<br />
<b>Microsoft:</b> Oh no. BACPACs can't be used to upgrade an existing database instance. Just to load data into new databases. They're for moving databases between servers.<br />
<br />
<b>You:</b> Like a backup<br />
<b>Microsoft:</b> Exactly like a backup, yes.<br />
<br />
<b>You:</b> ...so... can't you just use a backup?<br />
<b>Microsoft:</b> No. DACPACs and BACPACs don't just contain the database. They encapsulate the whole data tier application, so they include other items you'd need to deploy, like logins.<br />
<br />
<b>You:</b> Cool. And agent jobs as well I guess?<br />
<b>Microsoft:</b> Oh no. Just logins and ... well logins anyway. Try doing that with GDR. And you wouldn't be using sql_variant anyway would you? No.<br />
<br />
<b>You:</b> Come again?<br />
<b>Microsoft:</b> Oh nothing.<br />
<br />
<br />
<span style="font-size: x-small;">The author notes this conversation was imaginary, and any resemblance to reality is entirely coincidental</span>piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-41905419337101971952013-01-08T15:29:00.000+08:002013-01-08T15:29:39.878+08:00The Windows RT desktop. For why?I still don't understand Windows 8 RT (aka Microsoft Surface, aka Windows on Arm), or more specifically its desktop mode.<br />
<br />
On Windows 8 Pro (the x86 version) it all makes sense: desktop mode opens the door to all the 'legacy' apps you know and love, whilst the device itself weans you onto the world of RT/store apps. As an enterprise developer you can target either, most likely running existing corporate apps on the desktop whilst mulling the trade-offs (and $) involved in rewriting the front end to target Metro.<br />
<br />
But on Arm it's crippled: running only Microsoft-sanctioned apps (Office, Notepad and explorer) <a href="http://surfsec.wordpress.com/2013/01/06/circumventing-windows-rts-code-integrity-mechanism/">without jailbreaking</a>. Why?<br />
<br />
Clearly you can't expect it to run existing x86 apps, but for enterprise developers working in .Net this is a slap in the face. You're barred from running existing .net 4 desktop apps on the Surface RT desktop, and you can't build desktop apps using the Win RT runtime <i>either</i>.<br />
<br />
This is a crazy situation that smacks of half-baked.<br />
<br />
If the desktop mode on Arm is useful - and I'd argue it is - it should be possible for more than just Microsoft to write for it. Ideally enterprises could run existing .Net apps unmodified, but there's clearly advantages (re: capability, performance and battery life) in encouraging them to embrace the Win RT APIs.<br />
<br />
Conversely if the desktop mode is redundant, Microsoft need to seriously pull their finger out replicating all that functionality in the Metro interface, including a Metro version of Office.<br />
<br />
I would like for the former to be the case. I suspect Microsoft's roadmap is the latter, that the desktop's just there till Office gets ported proper. Whichever way, we have a ridiculous situation where if an enterprise developer wants to target Windows 7 and both Windows 8's they have to ... write a web app. Way-to-go Microsoft!<sup>[1]</sup><br />
<br />
I keenly await Xamarin's Mono-Surface<sup>[2]</sup>, which will let you run .Net apps <i>on the Microsoft Platform</i>. Now that would be progress.<br />
<br />
<br />
<span style="font-size: xx-small;">[1] Sure they gave the enterprise the finger with the phone too. I guess we shouldn't be surprised. But perhaps this is just <a href="http://en.wikipedia.org/wiki/Steven_Sinofsky">Sinofsky</a>'s <a href="http://www.nytimes.com/2012/11/14/technology/at-microsoft-sinofsky-seen-as-smart-but-abrasive.html">'my way or the high way'</a> showing through</span><br />
<span style="font-size: xx-small;">[2] This is a fictitious product, and any resemblance to actual products planned or otherwise is entirely coincidental</span>piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com1tag:blogger.com,1999:blog-17332816.post-29697652751647315632012-10-23T11:36:00.001+08:002012-10-23T11:36:53.375+08:00Avoid XmlConvert.ToString(DateTime)XmlConvert is the utility class that controls fundamental .Net data types being converted to/from their XML representations. For dates, the DateTimeOffset overloads are less ambiguous than the DateTime ones, and as a result the latter have been deprecated. But you'd still expect it to basically work for the fundamental scenarios. <br />
<br />
Nope. For DateTimeKind=UTC dates, it's totally broken.<br />
<br />
Try this (in LINQPad):<br />
<code></code><br />
<code></code><br />
<code></code><br />
<code><pre> var localTime = DateTime.Now;
var utcTime = localTime.ToUniversalTime();
localTime.Dump("Local Time");
utcTime.Dump("UTC time");
utcTime.Kind.Dump("UTC Kind");
utcTime.ToString("yyyy-MM-ddTHH:mm:ss K").Dump("Expected XML");
XmlConvert.ToString(utcTime).Dump("Actual XML from XML Convert");
</pre>
</code>
<br />
Gives:<br />
<br />
<div>
<table style="text-align: left;">
<tbody>
<tr>
<th>Local Time</th>
<td>23/10/2012 11:11:11
AM</td></tr>
<tr>
<th>UTC time</th>
<td>23/10/2012 3:11:11 AM</td></tr>
<tr>
<th>UTC Kind</th>
<td>UTC</td></tr>
<tr>
<th>Expected XML</th>
<td>2012-10-23T03:11:11 Z</td>
</tr>
<tr>
<th>Actual XML from XML Convert</th>
<td>2012-10-23T03:11:11.0773940<strong><span style="background-color: yellow;">+08:00</span></strong></td></tr>
</tbody></table>
<br />
<br />
You'll notice immediately (because of the highlighting) that it's done something really, really dumb. It's made out that the UTC time is actually a local time at my local timezone offset (+8). <strong>It's screwed up the XML representation, and as a result changed the actual time value being passed, </strong>even though it <em>knows</em> (based on DateTime.Kind==DateTimeKind.Utc, which the ToUniversalTime() method always gives you) that it's a UTC date.<br />
<br />
This is because internally that method does this:<br />
<br />
[<span style="color: midnightblue; font-weight: bold;">Obsolete</span>(<span style="color: blue;">"Use XmlConvert.ToString() that takes in XmlDateTimeSerializationMode"</span>)]<br />
<span style="color: blue; font-weight: bold;">public</span> <span style="color: brown;">static</span> <span style="color: red;">string</span> <span style="color: midnightblue; font-weight: bold;">ToString</span>(DateTime <span style="font-weight: bold;">value</span>)<br />
{<br />
<span style="color: navy;">return</span> XmlConvert.<span style="color: midnightblue; font-weight: bold;">ToString</span>(<span style="font-weight: bold;">value</span>, <span style="color: blue;">"yyyy-MM-ddTHH:mm:ss.fffffff<span style="background-color: yellow;">zzzzzz</span>"</span>);<br />
}<br />
<br />
...and those z's add the local timezone offset <strong>from your PC</strong>. This is <strong>wrong </strong>for a DateTime with Kind=UTC. A better implementation would be as per my example above, ie:<br />
<br />
<span style="color: navy;">return</span> XmlConvert.<span style="color: midnightblue; font-weight: bold;">ToString</span>(<span style="font-weight: bold;">value</span>, <span style="color: blue;">"yyyy-MM-ddTHH:mm:ss.fffffff<span style="background-color: yellow;">K</span>"</span>);<br />
<br />
The K specifier is not so dumb, and adds the time zone to <strong>local time </strong>datetime instances, and adds the <strong>Z</strong> to UTC instances. Or alternatively an if{}else{} based on value.Kind==DateTimeKind.UTC.<br />
<br />
Anything but what it does right now.<br />
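The same trap exists in any language that lets you stamp a UTC instant with the machine's local offset. For comparison, here's a minimal sketch of the desired behaviour in Python — a loose analogue of the 'K' specifier, not .Net's actual formatter; the function name is hypothetical:

```python
from datetime import datetime, timezone

def to_xml_datetime(value: datetime) -> str:
    # The 'K'-style rule: UTC values get a trailing 'Z', never the
    # machine's local offset
    if value.tzinfo == timezone.utc:
        return value.strftime("%Y-%m-%dT%H:%M:%S") + "Z"
    # aware non-UTC values keep their real offset; naive values get none
    return value.isoformat()

utc = datetime(2012, 10, 23, 3, 11, 11, tzinfo=timezone.utc)
print(to_xml_datetime(utc))  # 2012-10-23T03:11:11Z
```

The point being that the serializer branches on what the value itself claims to be, rather than assuming everything lives in the local timezone.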
<br />
Why do I care? Well WCF generates web service proxies using DateTime, not DateTimeOffset, so I want to make damn sure it's not making the same mistake.</div>
piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com1tag:blogger.com,1999:blog-17332816.post-57626282983000596752012-09-17T16:23:00.001+08:002012-09-17T16:23:59.153+08:00Nuget Package Restore with internet-restricted build serversThe first time I enabled nuget package restore I had a nasty shock: the TeamCity build server didn't have internet access (and wasn't getting any).<br />
<br />
I initially went back to storing-packages-in-source-control, but when I created a spike ASP.Net MVC 4 project, which wanted to add some <b>30</b> packages, I decided I needed to address the root problem.<br />
<br />
A few people suggested I should set up a local nuget repository, the simplest version of which is just to dump all the packages you need into a file share. But how to point the build server at the share, rather than the standard nuget feed URL?
Short of logging in as the build server user and changing the settings through Visual Studio (lame), there appear to be a couple of ways:<br />
<br />
<h3>
Editing %appdata%\nuget\nuget.config (for the build server user)</h3>
This achieves the same as the above, but you don't have to log in as that user. It's reliable, but because it doesn't 'carry' with the solution in source control it's brittle (needs doing again if the build server user changes etc...) and you can't vary it on a project by project basis, which is a bit limiting.<br />
<h3>
Editing the generated nuget.targets file that package restore adds to your solution</h3>
This looks like the go, and even has comments in there to show you what to change. But I didn't find this reliable. On my local machine (under a no-internet user) this worked just fine, but on my build server I kept having package resolution errors (from a Silverlight 4 project).<br />
<h3>
Setting the PackageSources build parameter</h3>
What the above did show is what MSBuild properties are involved, so I was able to specify that property on the MSBuild command line and that seems to work consistently:<br />
<blockquote class="tr_bq">
MSBuild mysolution <b>/p:PackageSources=\\server\fileshare</b></blockquote>
...now <i>finally</i> I have a green light on TeamCity and can go and do something actually productive instead.piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com1tag:blogger.com,1999:blog-17332816.post-88809797114579282032012-07-17T09:06:00.001+08:002012-07-17T09:06:11.608+08:00Creating SQL Credentials for Network Service account<p>It’s fairly normal in production environments to find the SQL Server configured to disallow use of the SQL Agent account for the execution of certain types of job steps: SSIS packages and CmdExec for example. Instead you have to configure an explicit SQL Agent proxy, which requires first storing credentials within SQL’s credential store.</p> <p>For domain accounts this is fairly straightforward, but if you attempt to add credentials from one of the ‘virtual accounts’ (such as Network Service), you’ll get the following error: “The secret stored in the password field is blank”</p> <p><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 0px" title="image" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhBCW7f-B0vaAgSXPBg7VNqJCLeYqthajZhCs-ioVAwcnj8e2MuM3NTso7HEMWOXUdkRbnBsnL9jaQZy8SjSrhdk2XsQhMf7wO_FYmHQ2WjUSW99BDAUiC8HnzNa8VR4HQSXzm1/?imgmax=800" width="534" height="236" /></p> <p>The solution is (eventually) obvious: add the credential using TSQL (or SMO), and avoid the UI validation:</p> <blockquote> <p><font face="Consolas">USE [master] <br /></font><font face="Consolas">GO</font></p> <p><font face="Consolas">CREATE CREDENTIAL [Network Service] WITH IDENTITY = N'NT AUTHORITY\NETWORK SERVICE' <br /></font><font face="Consolas">GO</font></p> </blockquote> <p>et, voila:</p> <p><img style="background-image: none; border-bottom: 0px; border-left: 0px; padding-left: 0px; padding-right: 0px; display: inline; border-top: 0px; border-right: 0px; padding-top: 
0px" title="image" border="0" alt="image" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhENzleLgCRTeonxuWVmSIBs_u-TiYyG6lvmClhn52DErNoNNNLen_VZl7Kp2e2n-YBNPIpHD_9o9SiLsO1oORzTCt09XtPbyJtCGGVNr4fkHF8Ie4ui9KsvPfZTj8KQTnGXk6X/?imgmax=800" width="197" height="131" /></p> piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com1tag:blogger.com,1999:blog-17332816.post-17745749294226424882012-05-26T10:06:00.004+08:002012-05-26T10:13:06.672+08:00vsdbcmd 'built by a runtime newer than the currently loaded runtime and cannot be loaded'When deploying the latest version of our application to the Production server we got this error:
<blockquote>Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'vsdbcmd.exe' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded.
File name: 'vsdbcmd.exe'</blockquote>
Oh god, I thought. Yet <i>more</i> VSDBCMD weirdness. But this box had SQL Server installed, so the normal <a href="http://blogs.msdn.com/b/bahill/archive/2009/02/21/deploying-your-database-project-without-vstsdb-installed.aspx">'needs SMO / Batch Parser'</a> caveats didn't apply.
Eventually I ILSpy'd the assemblies to <a href="http://david.gardiner.net.au/2010/04/vsdbcmdexe-fails-with-error-0x8007000b.html">check the bitness</a>, and guess what! The error message was completely accurate. I'd accidentally picked up the VSDBCMD not from the VS 2008 folder (9.0) but instead from the VS 2010 folder (10.0). Which is .Net 4. Which really is a more recent version of the runtime than was installed on the Windows 2008 R2 server.
Embarrassing to be caught out by a completely accurate error message (though if it listed the versions involved I might have paid attention).piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-33816166370087993602012-05-17T12:38:00.000+08:002012-05-17T13:03:24.047+08:00Analysis Services 2008 R2 breaking change when deploying from the command lineAs colleagues of mine will attest, I will script anything that has to be deployed. Some things are easier than others.<br />
<br />
In the case of Analysis Services, the .asdatabase file that comes out of the build needs to be further transformed to create the XMLA that you need to run on the server to deploy your (updated) cube definition. Rather than attempt to replicate this transformation, I have previously chosen to get the <a href="http://msdn.microsoft.com/en-us/library/ms162758(v=sql.105).aspx">Analysis Services deployment utility</a> to do this for me, since this can be supplied with command line arguments:<br />
<pre>write-host "Generating XMLA"
$asDeploy = "$programfiles32\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe"
& $asDeploy "$pwd\..\bin\MyCube\MyCube.asdatabase" /d /o:"$pwd\MyCube.xmla"
</pre>
Which works just nicely. Except when we migrated that project to SQL 2008 R2, when it stopped working.<br />
<br />
Well, actually that's not true. We'd been deploying to a 2008 R2 server for ages, it was when we changed the deployment script to use the 2008 version <i>of the deployment tool</i> that it all broke.<br />
<br />
Basically the next line in the script kept complaining that 'MyCube.xmla' didn't exist, but I'd look in the folder after the script had run and the file was there. So it seemed like maybe there was a race condition.<br />
<br />
Which there was.<br />
<br />
If you examine the PE headers for the Sql 2005 version of the deployment util (using a tool like <a href="http://support.microsoft.com/kb/177429">dumpbin</a>) you'll see it's marked as a <strong>command line application</strong>:<br />
<br />
<pre>C:\>dumpbin /headers "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" | find /i "subsystem"
4.00 subsystem version
3 subsystem (Windows CUI)
</pre>
<br />
<br />
...but the 2008 R2 version is marked as a <strong>gui</strong> application:<br />
<pre>C:\>dumpbin /headers "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" | find /i "subsystem"
4.00 subsystem version
2 subsystem (Windows GUI)
</pre>
<br />
What? Can't see the difference? One is marked CUI with a 'C', the other is GUI with a 'G'. An unfortunately high degree of visual similarity given what a fundamental difference it makes: launch the first from the command line and you wait for it, launch the second and you don't. When scripting it's pretty important to know which one you've got, or you're going to get race conditions.<br />
<br />
In this case the answer was to control the process launching, so we can explicitly decide to wait:<br />
<pre>
start-process -FilePath:$asDeploy -ArgumentList:$asdatabase,"/d","/o:$xmla" -Wait;
</pre>
Maybe I should just do that all the time to be safe, but just being able to use other command line tools within a script without a whole lot of ceremony is one of the really nice bits about PowerShell, so I tend not to. In this case the launch semantics of an existing utility changing between versions seems like a really nasty thing to be caught out by.
<br />
Good reference sources:<br />
<a href="http://stackoverflow.com/questions/493536/can-one-executable-be-both-a-console-and-gui-app">Stack Overflow: Can one executable be both a console and GUI app?</a><br />
<a href="http://msdn.microsoft.com/en-us/library/ms809762.aspx">MSDN: A Tour of the Win32 Portable Executable File Format</a><br />piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-38042921975346690422012-05-07T12:24:00.001+08:002012-05-07T12:24:51.996+08:00Sql 2008, virtual accounts and a breaking security change from 2005Interesting gotcha today regarding the different ways Sql 2005 and Sql 2008 grant permissions to the service account they are running under. Interesting, because the differences broke my app, and exposed my complete lack of understanding of a key Windows 2008 R2 security concept - <em>virtual accounts.</em><br />
In Sql 2005, to simplify management of the service account's permissions against SQL itself (specifically with regard to changing which account SQL is running under) the product team started creating local Windows security groups, of the form:<br />
<blockquote class="tr_bq">
computername\SQLServer2005MSSQLUser$computername$MSSQLSERVER</blockquote>
This group is configured by the installer to contain the service account (eg Network Service), and a corresponding SQL login is created (for the windows group) granting <strong>sysadmin</strong> rights:<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJl6glumieCfQqrOhydiPnJWJKAkramKZKd59ZkUAt4myqCZ8Vzhk_6TA5Pvcajx9UVvBNOA9_-CpFBJynNWZE7SHNYYoqdH9F-QcmokHfTGv17Vw64O9KMagtMxZrkFrHkyy5/s1600/Sql2005AdminGroup.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjJl6glumieCfQqrOhydiPnJWJKAkramKZKd59ZkUAt4myqCZ8Vzhk_6TA5Pvcajx9UVvBNOA9_-CpFBJynNWZE7SHNYYoqdH9F-QcmokHfTGv17Vw64O9KMagtMxZrkFrHkyy5/s1600/Sql2005AdminGroup.png" /></a>
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjc0-AkQz9s3lKLYWVlhS_HZcaKd6NriEgj1j-RD4DXE0oF67p0ZY9qkM62yoT-RkWCpCEpru5WnHQwcFWS3M8fThfOanlhMUMDftMcVf5uYo17YDzHY3CNrtnvw1c58LI_G0Eh/s1600/Sql2005SqlLogin.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjc0-AkQz9s3lKLYWVlhS_HZcaKd6NriEgj1j-RD4DXE0oF67p0ZY9qkM62yoT-RkWCpCEpru5WnHQwcFWS3M8fThfOanlhMUMDftMcVf5uYo17YDzHY3CNrtnvw1c58LI_G0Eh/s1600/Sql2005SqlLogin.png" /></a></div>
<br />
I'm a big fan of running services as Network Service. Not having to create explicit service accounts means less admin overhead (both creation and password expiry maintenance) and a lower overall attack footprint for your enterprise. But there is a downside - a lack of permissions isolation between services also running as Network Service on the same box. In this case, because of the above, <em>anything else on that box that runs as Network Service is automatically sysadmin on your SQL instance</em>.<br />
<br />
In Sql 2008 on Windows 2008 R2 the situation is a bit different, because Windows 2008 R2 introduces so-called virtual accounts. I'm still a bit hazy on these, but one of the things this enables you to do is grant permissions to a service <em>without knowing which account it's running under</em>. The actual permissions the service has at runtime are then the union of permissions explicitly granted to the service account as well as the permissions granted <em>to the service itself</em>.<br />
Which is cool. If a bit freaky at first.<br />
So whilst Sql 2008 still has one of those local Windows groups created for its service accounts, the contents of this are now, somewhat tautologically:<br />
<br />
NT SERVICE\MSSQL$SQL2008<br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlXqM2iviHa4Z-1VcCVT33zr_5S7gz-_WRBwX-tSniXCPtpCjBzRx2r-eG3_VjUFzfFF3VkjR_EgzT4Pxf3-crECpTnO5xMQPPPghHQPs0qYGCKLkOAFTm8pmlDufACia3NOtz/s1600/Sql2008AdminGroup.png" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEjlXqM2iviHa4Z-1VcCVT33zr_5S7gz-_WRBwX-tSniXCPtpCjBzRx2r-eG3_VjUFzfFF3VkjR_EgzT4Pxf3-crECpTnO5xMQPPPghHQPs0qYGCKLkOAFTm8pmlDufACia3NOtz/s1600/Sql2008AdminGroup.png" /></a></div>
<div class="separator" style="clear: both; text-align: left;">
...and at the database level, the group is actually ignored, and the login (and SA grant) is given directly to the service's virtual account, not the group:</div>
<div class="separator" style="clear: both; text-align: left;">
<br /></div>
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijwJAQy821zR728YP_oIOSdshrQ5aphcSLW4q_hFIrlUA7oCBnZy2Ruc5Khrhw-XQhzn0pB_XM2y5-F6PaZWRiaawSqwkt7VIcCXBBaSDdy9YxIGr1e5-3ouiabLKyks8txM70/s1600/Sql2008SqlLogin.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEijwJAQy821zR728YP_oIOSdshrQ5aphcSLW4q_hFIrlUA7oCBnZy2Ruc5Khrhw-XQhzn0pB_XM2y5-F6PaZWRiaawSqwkt7VIcCXBBaSDdy9YxIGr1e5-3ouiabLKyks8txM70/s1600/Sql2008SqlLogin.png" /></a></div>
<br />
(Note: I've got SQL 2008 as a non-default instance, hence the specific naming. But you get the idea.)<br />
<br />
What does all this mean? Put simply (and somewhat recursively):<br />
<ul>
<li><strong>Only the SQL 2008 service <em>itself </em>is set up as an administrator on the SQL 2008 service. </strong>The principal (service account) that runs it is not - by itself - an administrator on that instance.</li>
<li><strong>It is no longer the case that other applications running under the SQL Server service account are sysadmins on any SQL instances running under those same credentials.</strong></li>
</ul>
It was the second bullet that broke my app. This is illustrative of poor original design, for sure, but giving Analysis Services carte blanche over the SQL instance on that same box seemed like a fairly safe call originally. But it exposed a really cool security improvement in Windows 2008 R2.<br />
<br />
In this case the problem <em>is</em> the solution: I can just go and add a grant for the virtual service account for Analysis Services, give it enough SQL permissions to do what it needs and the problem goes away.<br />
<br />
More on virtual accounts from the <a href="http://msdn.microsoft.com/en-us/library/ms143504(SQL.110).aspx#New_Accounts">Sql 2012 doco</a>, and from Technet articles <a href="http://social.technet.microsoft.com/wiki/contents/articles/391.aspx">Managed Service Accounts (MSAs) Versus Virtual Accounts in Windows Server 2008 R2</a> and <a href="http://technet.microsoft.com/en-us/library/dd367859.aspx">What's New in Service Accounts</a>piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0tag:blogger.com,1999:blog-17332816.post-1449684454233189972011-11-20T22:26:00.001+08:002011-11-20T22:26:29.698+08:00Gotchas with the Kinect SDK for Windows<p>Playing with the Kinect SDK for Windows, and having a ball, but the doco is (understandably) a bit rubbish in places, or to be more specific – lacks critical details around the form that a parameter takes, where that detail is important.</p> <p>Anyway, this is my list of gotchas so far:</p> <h4>Depth Data Inverted when Player Index tracking enabled</h4> <p>Bizarrely, whether you initialize and open your depth image stream with ImageType.Depth or ImageType.DepthAndPlayerIndex makes the difference between whether what you get is ‘right way round’ or horizontally inverted.</p> <p>Inverted is generally more useful, because it matches with the ‘mirror image’ video stream. So why isn’t the stream like that always? Seems like an unnecessary inconsistency to me, and one you might want to <em>spell out in the doco</em>.</p> <h4>Different Depth Data Pixel Values when Player Index Tracking Enabled</h4> <p>When you do turn player index tracking on, the depth stream ‘pixels’ are lshifted 3 positions, leaving the lower 3 bits for the player index. This <em>is</em> documented, and I understand you’ve got to put the player index somewhere, but why not make the format consistent in both cases, and just leave the lower bits zero if tracking not enabled? 
Better still, why not put the (optional) player index in the high bits?</p> <p>This is especially irritating because...</p> <h4>GetColorPixelCoordinatesFromDepthPixel() Requires Bit-Shifted Input</h4> <p>The nuiCamera.GetColorPixelCoordinatesFromDepthPixel() mapping method expects the ‘depthValue’ parameter to be in the format it <em>would have been</em> if you had player tracking enabled. If you don’t, you’ll have to lshift 3 places to the left yourself, just to make it work. So depending on how you setup the runtime, the pixels from one part of the API can or can’t be passed to another part of the API. That’s poor form, if you ask me.</p> <p>Not that you’ll find that in the doco of course, least of all the parameter doco.</p> <h4>No GetDepthPixelFromColorPixelCoordinates Method</h4> <p>Ok, so I <em>understand</em> that the depth to video coordinate space translation is a lossy one, but I still don’t see why this method doesn’t exist.</p> <p>I picked up the Kinect SDK and the first thing I wanted to do was depth-clipping background removal. And the easy way to do this is to loop through the <em>video </em>pixels, and for each find the corresponding <em>depth pixel </em>and see what its depth was. And you can’t do that. 
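For what it's worth, the packing and unpacking involved looks something like this (sketched in JavaScript rather than C#, purely to illustrate the 13/3 bit layout described above):

```javascript
// With player-index tracking enabled, each 16-bit depth 'pixel' packs the
// depth reading into the upper 13 bits and the player index into the lower 3
function decodeDepthPixel(pixel) {
  return { depth: pixel >> 3, playerIndex: pixel & 0x7 };
}

// With tracking disabled the stream hands you raw depth values, so before
// calling GetColorPixelCoordinatesFromDepthPixel() you have to shift them
// into the 'tracked' format yourself
function toApiFormat(rawDepth) {
  return rawDepth << 3;
}
```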
</p> <p>Instead you have to loop through the <em>depth</em> pixels and call the API method to translate to video pixels, but because there are fewer of them than video pixels, you have to paint them out as a 2x2 block, and even then there’ll be lots of video pixels you don’t process, so many that you have to run the loop twice: once to set all the video pixels to some kind of default state, and once for those that map to depth pixels to put the depth ‘on’.</p> <p>Just didn’t feel right.</p> piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com1tag:blogger.com,1999:blog-17332816.post-31925286120616582092011-09-22T20:51:00.005+08:002011-09-22T22:38:02.613+08:00Geolocation in HTML 5Ok, so it’s not actually part of HTML 5 (the spec), but conceptually at least it’s definitely part of HTML 5 (the brand).<br />
<br />
So what’s actually involved. Hmm. OH MY GOD IS IT THAT EASY !?<br />
<code> function showMap(position) {<br />
// Show a map centered at (position.coords.latitude, position.coords.longitude).<br />
}<br />
<br />
// One-shot position request.<br />
navigator.geolocation.getCurrentPosition(showMap);</code><br />
<br />
[from the <a href="http://dev.w3.org/geo/api/spec-source.html">W3 geolocation spec</a>]<br />
<br />
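For anything real you’d also wire up the error callback and the options object (both from that same W3 spec). A sketch — the wrapper takes the geolocation object as a parameter purely so it can be exercised outside a browser:

```javascript
// Wrap getCurrentPosition with an error callback and a timeout.
// 'geo' would be navigator.geolocation in a browser; it's injected here
// only so the sketch can run (and be tested) outside one.
function locate(geo, onFound, onFailed) {
  geo.getCurrentPosition(
    pos => onFound(pos.coords.latitude, pos.coords.longitude),
    err => onFailed(err.code), // 1=PERMISSION_DENIED, 2=POSITION_UNAVAILABLE, 3=TIMEOUT
    { enableHighAccuracy: false, timeout: 10000, maximumAge: 60000 }
  );
}

// In a browser: locate(navigator.geolocation, showMap, console.error);
```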
So you just rock up to <a href="http://html5demos.com/geo" title="http://html5demos.com/geo">html5demos.com/geo</a> and ...<br />
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhEdpVpwdYQ8sb_qf5O6Ik6U6lNihteLHaQ1T4gajkJe4o-Qj5Y6hl8rmkwlL57zl2EEuDKkuls0xxnmx6lEQkeoPuCAHDglMVj0u2AUs5Ucb_fXWJkJ_9_fvDjYjVBJlb5boiC/s1600-h/image%25255B3%25255D.png"><img alt="image" border="0" height="68" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhJyQkXwIq68EVqo8mTVyNwBjLQSGWIjQym3DTTtmhyphenhyphennQH3vKz0VZ4E3X-14n0J0D5cbdyT8D-xCk_eOBGJsbiG0RPPQ4XeztV1TN0Gu6YFVXvN8DJmxgAgJOqfLJKyAgFehfd-/?imgmax=800" style="background-image: none; border-width: 0px; display: inline; padding-left: 0px; padding-right: 0px; padding-top: 0px;" title="image" width="570" /></a> <br />
<br />
Holy crap. I won’t show you the resulting map because it shows where I live. What’s really freaky about that is <strong>this netbook doesn’t have a GPS. </strong>So either Windows 7 or IE 9 has fallen back to IP-based location inference, and somehow still got me <em>only one house out</em>.<br />
<br />
I’m totally freaked out.<br />
<br />
Anyway, the point of all this is that IE 9 is the browser for Windows Phone 7.5 (Mango), which – if it actually supports this API (and Wikipedia says <a href="http://en.wikipedia.org/wiki/Windows_Phone_7.5#Internet_Explorer_9_Mobile">yes it does</a>) - means you can write location-aware mobile apps targeting Mango without having to ‘go native’. And for the demo I want to put together, this can only be a good thing...piers7http://www.blogger.com/profile/11186470645521299750noreply@blogger.com0