Showing posts with label Deployment.

Tuesday, July 22, 2014

'Specified wildcard pattern is invalid' with Octopus Deploy and PSake

So we upgraded to Octopus Deploy v2, and all our deployments are now failing with this error:

22/07/2014 11:51:08 AM: An Error Occurred:
Test-Path : Cannot retrieve the dynamic parameters for the cmdlet. The
specified wildcard pattern is not valid:
Octopus.Environment.MachinesInRole[myproject-sql-node]
At D:\Octopus\Applications\Test\ORDW.Staging.TPPS\7.0.310-trunk_1\Tools\Install
\psake.psm1:357 char:17
+ if (test-path "variable:\$key") {

Our install process is quite complex, so we use PSake to wrangle it. Integration between the two is relatively straightforward (in essence we just bind $octopusParameters straight onto PSake's -properties), and I could see from the stack trace that the failure was actually happening within the PSake module itself. And given the error spat out the variable that caused the issue, I figured it was something to do with the variable name.
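For context, the glue between the two looks roughly like this (a sketch, not our actual script; Invoke-psake takes a hashtable via -properties, and $octopusParameters is the variable dictionary Octopus provides):

# Hand the whole Octopus variable set to PSake as properties, so each
# deployment task sees them as PowerShell variables in scope.
Import-Module .\Tools\Install\psake.psm1
Invoke-psake -buildFile .\deploy.ps1 -taskList Deploy -properties $octopusParameters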

Most of the variable names are the same as per Octopus Deploy v1, but we do now get some extra ones, in particular the 'Octopus.Environment.MachinesInRole[role]' one. But that's not so different from the type of variables we've always got from Octopus, eg: 'Octopus.Step[0].Name', so what's different?

Where PSake fails is where it pushes properties into variable scope for each of the tasks it executes as part of the 'build': apparently Test-Path doesn't like some of those variable names. So I put together some tests exercising Test-Path with different variable names to find out when it squealed. These all run the same code as PSake does, ie:

test-path "variable:\$key"

$key        Result
a           Ok
a.b         Ok
a-b         Ok
b.a         Ok
b-a         Ok
a.b.c       Ok
a-b-c       Ok
c.b.a       Ok
c-b-a       Ok
a[a]        Ok
a[a.b]      Ok
a[a-b]      Ok
a[b.a]      Ok
a[b-a]      Cannot retrieve the dynamic parameters for the cmdlet. The specified wildcard pattern is not valid: a[b-a]
a[a.b.c]    Ok
a[a-b-c]    Ok
a[c.b.a]    Ok
a[c-b-b]    Cannot retrieve the dynamic parameters for the cmdlet. The specified wildcard pattern is not valid: a[c-b-b]
a[b-b-a]    Ok

The failure cases are the two that spat out errors, but what's just as interesting is which cases pass. This gives a clue as to the underlying implementation, and why the failure happens.

To cut a long story short, it appears that any time you use square brackets in your variable name, PowerShell uses wildcard matching to parse the content within the brackets. If that content contains a hyphen, the characters either side of the first hyphen are treated as a range for matching, and if the range end sorts before the range start (ie: is alphabetically earlier), you get an error.
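You can see the same parsing at work with the -like operator, which (as far as I can tell) uses the same wildcard engine:

# Square brackets in a wildcard pattern define a character class/range:
'aa' -like 'a[a-b]'    # True: 'a' followed by one character in the range a..b
'aa' -like 'a[b-a]'    # throws: the specified wildcard pattern is not valid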

Nasty.

It's hard to know who to blame here. Octopus makes those variables based on what roles you define in your environment, though I'd argue the square brackets are potentially a bad choice. PSake is probably partly culpable, though you'd be forgiven for thinking that what they were doing was just fine, and there's no obvious way of suppressing the wildcard detection. Ultimately I think this is probably a PowerShell bug. Whichever way you look at it, the chances of me getting it fixed soon are fairly slight.

In this case I can just change all my Octopus role names to use dots, not hyphens, and I think I'll be out of the woods, but this could be a right royal mess to fix otherwise. I'd probably have to forcefully remove variables from scope just to keep PSake happy, which would be ugly.

Interestingly the documentation for Test-Path is a bit confused as to whether wildcard matching is or isn't allowed here - the description says wildcards are permitted, but 'Accept wildcard characters?' claims otherwise:

PS C:\> help Test-Path -parameter:Path

-Path 
    Specifies a path to be tested. Wildcards are permitted. If the path includes spaces, enclose it in quotation marks. The parameter 
    name ("Path") is optional.
    
    Required?                    true
    Position?                    1
    Default value                
    Accept pipeline input?       True (ByValue, ByPropertyName)
    Accept wildcard characters?  false

Also interesting is that Set-Variable suffers from the same issue, for exactly the same cases (and wildcarding definitely doesn't make any sense there). Which means you can do this:
${a[b-a]} = 1
but not this:
Set-Variable 'a[b-a]' 1
Go figure.


Update 23/7:
  • You can work around this by escaping the square brackets with backticks, eg Set-Variable 'a`[b-a`]' (see the sketch below)
  • I raised a Connect issue for this, because I think this is a bug
  • I've raised an issue with PSake on this, because I think they should go the escaping route to work around it.
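To illustrate the escaping workaround (a sketch; the backtick is PowerShell's wildcard escape character, and note that Test-Path also has a -LiteralPath parameter that bypasses wildcard parsing entirely):

# These both throw, because [b-a] is parsed as a reversed wildcard range:
#   Test-Path 'variable:a[b-a]'
#   Set-Variable 'a[b-a]' 1

# Escaping the brackets suppresses the wildcard parsing:
Test-Path 'variable:a`[b-a`]'
Set-Variable 'a`[b-a`]' 1

# Or tell Test-Path to treat the whole path literally:
Test-Path -LiteralPath 'variable:a[b-a]'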

Thursday, March 14, 2013

Installing the Analysis Services Deployment Utility

The Analysis Services Deployment Utility deploys Analysis Services build outputs (.asdatabase files) to a server, or generates the XMLA for offline deployment.

I've often used this as part of an automated installation process to push releases out into an integration environment, but on this project I wanted to perform this installation as part of a nightly build. It failed - because the utility (and its dependencies) weren't installed on the build server.

I wasn't entirely sure what I needed to get it installed (and I was attempting to install the minimum amount of stuff). It turns out this tool is distributed as part of SQL Server Management Studio, not the SSDT/BIDS projects (as I'd previously assumed). I'm not sure whether the Basic or Complete option is required: I picked 'Complete' and that fixed it.

Also, for 2012 the path has changed, and the utility is now at:

C:\Program Files (x86)\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe
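If you're scripting against more than one SQL version, something like this will locate whichever copy is installed (a sketch, probing the 2012 path above and the 2008 R2 path that appears in a later post):

# Probe known install locations for the deployment utility, newest first.
$candidates = @(
    "${env:ProgramFiles(x86)}\Microsoft SQL Server\110\Tools\Binn\ManagementStudio\Microsoft.AnalysisServices.Deployment.exe",
    "${env:ProgramFiles(x86)}\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe"
)
$asDeploy = $candidates | Where-Object { Test-Path $_ } | Select-Object -First 1
if (-not $asDeploy) { throw 'Analysis Services Deployment Utility not found' }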

Monday, February 04, 2013

SQL 2012 Data Tier Applications explained

You: So what are these DACPAC things then?
Microsoft:
"A DAC is a database lifecycle management and productivity tool that enables declarative database development to simplify deployment and management. A developer can author a database in SQL Server Data Tool database project and then build the database into a DACPAC for handoff to a DBA"
http://msdn.microsoft.com/en-us/library/ee210546.aspx 

You: Hey, that sounds familiar. How is that different from Visual Studio 2010 Database Edition aka DataDude aka GDR aka VSDBCMD.exe?
Microsoft: They're, like, totally different ok.

You: How so?
Microsoft: Well they just are. DACPACs replace all that GDR stuff. That was just crazy stuff the Visual Studio guys came up with you know. This is the real deal from the SQL product team. And we can package data too, in BACPACs.

You: Awesome. So this'll solve that problem about also upgrading reference data when I push out a new version of the schema?
Microsoft: Oh no. BACPACs can't be used to upgrade an existing database instance. Just to load data into new databases. They're for moving databases between servers.

You: Like a backup?
Microsoft: Exactly like a backup, yes.

You: ...so... can't you just use a backup?
Microsoft: No. DACPACs and BACPACs don't just contain the database. They encapsulate the whole data tier application, so they include other items you'd need to deploy, like logins.

You: Cool. And agent jobs as well I guess?
Microsoft: Oh no. Just logins and ... well logins anyway. Try doing that with GDR. And you wouldn't be using sql_variant anyway would you? No.

You: Come again?
Microsoft: Oh nothing.


The author notes this conversation was imaginary, and any resemblance to reality is entirely coincidental

Saturday, May 26, 2012

vsdbcmd 'built by a runtime newer than the currently loaded runtime and cannot be loaded'

When deploying the latest version of our application to the Production server we got this error:
Unhandled Exception: System.BadImageFormatException: Could not load file or assembly 'vsdbcmd.exe' or one of its dependencies. This assembly is built by a runtime newer than the currently loaded runtime and cannot be loaded. File name: 'vsdbcmd.exe'
Oh god, I thought. Yet more VSDBCMD weirdness. But this box had SQL Server installed, so the normal 'needs SMO / Batch Parser' caveats didn't apply. Eventually I ILSpy'd the assemblies to check the bitness, and guess what: the error message was completely accurate. I'd accidentally picked up VSDBCMD not from the VS 2008 folder (9.0) but from the VS 2010 folder (10.0). Which is .Net 4. Which really is a more recent version of the runtime than was installed on the Windows 2008 R2 server. Embarrassing to be caught out by a completely accurate error message (though if it listed the versions involved I might have paid attention).
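Incidentally, you can check which runtime an assembly was built against without firing up ILSpy, provided your PowerShell is running on at least that runtime (a sketch; the path is illustrative):

# ImageRuntimeVersion reports the CLR the assembly was compiled against,
# eg 'v2.0.50727' for the .Net 2/3.5 era tools, 'v4.0.30319' for .Net 4.
[System.Reflection.Assembly]::ReflectionOnlyLoadFrom('C:\path\to\vsdbcmd.exe').ImageRuntimeVersion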

Thursday, May 17, 2012

Analysis Services 2008 R2 breaking change when deploying from the command line

As colleagues of mine will attest, I will script anything that has to be deployed. Some things are easier than others.

In the case of Analysis Services, the .asdatabase file that comes out of the build needs to be further transformed to create the XMLA that you actually run on the server to deploy your (updated) cube definition. Rather than attempt to replicate this transformation, I have previously chosen to get the Analysis Services deployment utility to do it for me, since it can be supplied with command line arguments:
write-host "Generating XMLA"
$asDeploy = "$programfiles32\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe"
& $asDeploy "$pwd\..\bin\MyCube\MyCube.asdatabase" /d /o:"$pwd\MyCube.xmla"
Which works just nicely. Except that when we migrated that project to SQL 2008 R2, it stopped working.

Well, actually that's not true. We'd been deploying to a 2008 R2 server for ages; it was when we changed the deployment script to use the 2008 version of the deployment tool that it all broke.

Basically the next line in the script kept complaining that 'MyCube.xmla' didn't exist, but I'd look in the folder after the script had run and the file was there. So it seemed like maybe there was a race condition.

Which there was.

If you examine the PE headers for the SQL 2005 version of the deployment util (using a tool like dumpbin) you'll see it's marked as a command line application:

C:\>dumpbin /headers "C:\Program Files (x86)\Microsoft SQL Server\90\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" | find /i "subsystem"
            4.00 subsystem version
               3 subsystem (Windows CUI)


...but the 2008 R2 version is marked as a GUI application:
C:\>dumpbin /headers "C:\Program Files (x86)\Microsoft SQL Server\100\Tools\Binn\VSShell\Common7\IDE\Microsoft.AnalysisServices.Deployment.exe" | find /i "subsystem"
            4.00 subsystem version
               2 subsystem (Windows GUI)

What? Can't see the difference? One is marked CUI with a 'C', the other GUI with a 'G'. That's an unfortunately high degree of visual similarity given what a fundamental difference it makes: launch the first from the command line and you wait for it; launch the second and you don't. When scripting it's pretty important to know which one you've got, or you're going to get race conditions.
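If you want your script to check for itself, the subsystem flag can be read straight out of the PE header; a sketch (offsets per the PE format article linked below, using the $asDeploy path from earlier):

# e_lfanew (at 0x3C) points at the 'PE\0\0' signature; the Subsystem field sits
# 92 bytes on (4-byte signature + 20-byte COFF header + offset 68). 2 = GUI, 3 = console.
$bytes  = [System.IO.File]::ReadAllBytes($asDeploy)
$peOff  = [System.BitConverter]::ToInt32($bytes, 0x3C)
$subsys = [System.BitConverter]::ToUInt16($bytes, $peOff + 92)
if ($subsys -eq 2) { 'GUI: wrap in Start-Process -Wait' } else { 'Console: safe to call inline' }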

In this case the answer was to control the process launching, so we can explicitly decide to wait:
     start-process -FilePath:$asDeploy -ArgumentList:$asdatabase,"/d","/o:$xmla" -Wait;
Maybe I should just do that all the time to be safe, but being able to use other command line tools within a script without a whole lot of ceremony is one of the really nice bits about PowerShell, so I tend not to. In this case the launch semantics of an existing utility changing between versions seems like a really nasty thing to be caught out by.
Good reference sources:
Stack Overflow: Can one executable be both a console and GUI app?
MSDN: A Tour of the Win32 Portable Executable File Format

Thursday, February 26, 2009

Dang! SSAS deployment configurations are held in the .user file

I really wish Microsoft would make up their mind where deployment target locations should live (ie setting the destinations for Publish / Deploy)

For SSRS this data is stored in the project file (.rptproj), per configuration, so anyone can get latest on a project, change their solution configuration to (say) 'Test', hit deploy and away they go, happily deploying to your Test environment. This is how it should be.

For SSAS (2005 / 2008) this data is stored in the .dwproj.user file, so for any new developer / PC / workspace that comes along, all those deployment configuration settings have to be set up again (or copied from another workspace), or all their deployments go to localhost. This is just really irritating. As I discovered today.

For an XBAP (or just a WinForms ClickOnce project) a single deployment target is stored in the project file. Other, previously-used deployment locations are stored per-user. None are correlated with a project configuration in any way. This 'deploy to prod' design mentality drives me nuts.

For an ASP.Net website the (single) deployment location isn't even stored in the project at all, but squirreled away in %userprofile%\Local Settings\Application Data\Microsoft\WebsiteCache\Websites.xml. Of course! Web application projects put it solely in the .user file, which is at least less obscure, if no more useful.

Now I'd be the first to say that deployment to test environments etc... is best done from a separate scripted process, but I accept that not everyone has my affinity for long PowerShell deployment scripts. Plus sometimes I just can't be arsed. So am I completely insane to wish that the in-IDE Publish / Deploy mechanisms would work in a manner that was actually usable within a team, and not scatter the deployment target information quite so liberally?
