Friday, February 27, 2009

GDR Grrr...

I've totally had it with Visual Studio Database Edition 2008 GDR.
CREATE VIEW [dbo].[DateRangeExpanded] WITH SCHEMABINDING
AS
select n.Number, n.Date
from dbo.Numbers n
where n.Date >= CONVERT(DATETIME,'2007-12-01') and
n.Date <= datediff(day, 0, DATEADD(MONTH, 6, GETDATE()))
Results in
Error TSD03127: Cannot schema bind view '[dbo].[DateRangeExpanded]'
because name 'DATETIME' is invalid for schema binding. Names must be in two-part format and an object cannot reference itself.
WTF? SQL Server is quite happy with the above; it's just GDR getting its knickers in a twist yet again: DATETIME is a type, not a column. I have found no way of making it happy with this one[1], other than taking the schemabinding off, and I'm pretty pissed about limiting my database design just to accommodate GDR's inadequacies.
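For what it's worth, if it's specifically the DATETIME token the model parser trips over, one dodge I haven't tried would be to avoid naming a type at all and lean on implicit string-to-date conversion (the yyyymmdd literal format is safe regardless of DATEFORMAT):

```sql
-- Untested: no explicit type name for the GDR parser to choke on.
-- 'yyyymmdd' literals convert implicitly and unambiguously.
CREATE VIEW [dbo].[DateRangeExpanded] WITH SCHEMABINDING
AS
select n.Number, n.Date
from dbo.Numbers n
where n.Date >= '20071201' and
n.Date <= datediff(day, 0, DATEADD(MONTH, 6, GETDATE()))
```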

If this project wasn't finishing up in a few weeks, GDR would be in the bin by now.


[1] CAST vs CONVERT doesn't make any difference, for one thing.

Thursday, February 26, 2009

Dang! SSAS deployment configurations are held in the .user file

I really wish Microsoft would make up their mind where deployment target locations should live (ie the destinations for Publish / Deploy).

For SSRS this data is stored in the project file (.rptproj), per configuration, so anyone can get latest on a project, change their solution configuration to (say) 'Test', hit deploy and away they go, happily deploying to your Test environment. This is how it should be.

For SSAS (2005 / 2008) this data is stored in the .dwproj.user file, so for any new developer / PC / workspace that comes along, all those deployment configuration settings have to be set up again (or copied from another workspace), or all their deployments go to localhost. This is just really irritating. As I discovered today.

For an XBAP (or just a WinForms ClickOnce project) a single deployment target is stored in the project file. Other previously-used deployment locations are stored per-user. None are correlated with a project configuration in any way. This 'deploy to prod' design mentality drives me nuts.

For an ASP.Net website the (single) deployment location isn't stored in the project at all, but squirreled away in %userprofile%\Local Settings\Application Data\Microsoft\WebsiteCache\Websites.xml. Of course! Web application projects put it solely in the .user file, which is at least less obscure, if no more useful.

Now I'd be the first to say that deployment to test environments etc... is best done from a separate scripted process, but I accept that not everyone has my affinity for long PowerShell deployment scripts. Plus sometimes I just can't be arsed. So am I completely insane to wish that the in-IDE Publish / Deploy mechanisms would work in a manner that's actually usable within a team, and not scatter the deployment target information quite so liberally?

Wednesday, February 18, 2009

Mini USB Convergence Finally Coming

In a rare victory for common sense, it seems that the world’s mobile phone makers have at last agreed to converge on using mini-USB as a universal mobile phone charging standard. Not only that, but chargers will be optional at point of sale, so you won’t end up with a house full of them.

The plan is to do this by 2012, which is a bit slow in my book, but it’s all progress, right?

Thursday, February 12, 2009

Substituting view/synonym causes naming collisions in Database Edition GDR

The other day we discovered a duplicated table in the database. We're doing some refactoring, and a certain amount of duplication during this process is inevitable, but to cut a long story short, the two instances of this table needed collapsing into one.

So I dropped one instance, and created a synonym in its place (pointing to the other). This saves fixing all the views and external references (in the SSAS cube) to that table. Those references will go away over time.

But when I tried to sync the database using GDR it just didn't get it. It's quite happy to create the new synonym, just too dumb to drop the original table first - so there's a naming collision I had to fix by hand. It basically does this:

CREATE SYNONYM [dbo].[Table_2] FOR [dbo].[Table_1];
GO
DROP TABLE [dbo].[Table_2];

Dur! And it does the same thing if you substitute a view for a table too:

CREATE VIEW [dbo].[Table_2]
AS
-- Create view instead of original table
SELECT One FROM dbo.Table_1;

GO
-- Drop original table
DROP TABLE [dbo].[Table_2];

Anyone see the problem here? This is not an edge case, guys.
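For the record, the hand-fix is trivial: the same statements in the opposite order, which is what GDR should have generated in the first place:

```sql
-- Drop the original table first, then create its replacement under the same name
DROP TABLE [dbo].[Table_2];
GO
CREATE SYNONYM [dbo].[Table_2] FOR [dbo].[Table_1];
```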

I should point out that this only happens when sync'ing a database with the database model. If you sync two live databases, GDR gets the execution order right.

Not impressed.

(By way of comparison, Sql Compare 7 is quite happy with both these scenarios, whether comparing against a live database or a snapshot. Drop table; create synonym - it's not that hard. And Sql Compare 7's been out a while, and 8 is nearly out.)


[Update: 2009-02-20] And another thing: GDR's idea of dropping an object (eg: if you sync to a dev database and the object isn't there any more) is to comment out the object definition in the database project. All well and good, but it's too stupid to also comment out any extended properties the object might have (say the ones the view designer put in to store the view layout), so obviously those extended properties cause the build to crap out, as the object they reference doesn't exist.

Thursday, February 05, 2009

Not sold on Database Edition 2008 GDR

Just in time for the end of 2008 Microsoft finally got Database Edition 2008 GDR out the door. This replaces the version that came with VS 2008, primarily bringing with it Sql 2008 compatibility.

So GDR is now Microsoft's 3rd bite at integrating the database development workflow into Visual Studio. Let's start with what it does right:

Obviously the main benefit of bringing the database into Visual Studio is you get source control for free. Sure - you've been able to script out your database and add it to source control before, but this way you can check in database changes and the code that depends on it as one atomic checkin, against a work item. That's actually pretty neat from a tracking / branching point of view.

And because it's a fully model-driven system, all the SQL in the project is tested for consistency and correctness at build time. You could do this before of course (script out and build a structure only database to check it builds), but this way you know any problems you find are local to your workspace, so it really was you that broke them.

The output from the build process is a model that completely describes your database schema, so this can be used to sync up test and production databases, either via the IDE, MSBuild or a redistributable command line utility that can be incorporated into your deployment / install process. Not that you couldn't have done all this with RedGate's SqlComparison SDK of course, but if you struggle to get licences other than MSDN purchased, this does make your life a bit easier, and at least it's been constructed with a build-once-deploy-many philosophy, unlike the clowns behind ClickOnce.

The model also (allegedly) provides a safer platform for database refactoring: it's supposed to 'remember' things like column renames as renames and treat them as such the next time a database is sync'd to the model. I guess only time will tell how well that works in the real world given, for example, multiple databases at different schema versions (ie Prod database 5 versions behind Test etc… ). Are the refactorings going to be journalled away and somehow replayed in the right order? If so, how does it know at which version to start? It's not a simple matter.

But Microsoft has still totally, totally missed the point, because there is still no design support. Not one. Nada. Not even a schema browser.

Hey I can see where this is going, and I can see they're laying foundations here, but in my mind the fundamentals of a database development tool are that they assist in database development. You know - the core stuff: creating objects and sprocs. Source control and a deployment workflow are secondary to that. But with GDR you'd better be hankering after the Sql 6.5 days, because you're going to be writing lots and lots of TSQL. The best I can say is it's colour coded, but didn't we only just get IntelliSense in Management Studio? It just doesn't work for me.

But the alternative - do your dev / design in Management Studio and sync back to the model - is equally flawed, if only because you've got to keep syncing back again (and the sync UI is still a bit ragged). That's a whole lot of friction for relatively nebulous benefits (especially if you already have your database schema under some level of source control), and in my mind questions the whole utility of the product. What's the point of 'integrating' into the Visual Studio IDE if you have to keep tabbing out to do anything even remotely useful (think: execution plans)?

Similarly, whilst there's a Data Compare feature, it's not actually part of your database project, just something you do on an ad-hoc basis. So there's no easy way of designating a certain subset of tables as 'reference data' that should essentially be considered part of the schema, rather than part of the contents, and sync'd up when you redeploy. I'll accept that's probably a vNext feature, but its absence calls into question the viability of using GDR as an end-to-end database deployment manager in the first place, compared to tools like SqlPackager (or scripting using SqlCompare \ SqlDataCompare APIs).
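In the meantime the workaround is a hand-rolled post-deployment script: one MERGE per reference table, kept under source control next to the project. The table and values below are made up for illustration:

```sql
-- Hypothetical reference-data sync, run as a post-deployment step.
-- MERGE (Sql 2008) makes the live table's contents match the declared values.
MERGE dbo.OrderStatus AS target
USING (VALUES
    (1, 'Open'),
    (2, 'Shipped'),
    (3, 'Closed')
) AS source (StatusId, StatusName)
ON target.StatusId = source.StatusId
WHEN MATCHED AND target.StatusName <> source.StatusName THEN
    UPDATE SET StatusName = source.StatusName
WHEN NOT MATCHED BY TARGET THEN
    INSERT (StatusId, StatusName) VALUES (source.StatusId, source.StatusName)
WHEN NOT MATCHED BY SOURCE THEN
    DELETE;
```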

And of course the model parser isn't perfect, so occasionally a valid bit of SQL creates build errors because the model can't fathom what it's doing. In my case it can't seem to resolve the columns in a CTE that queries another (properly referenced) database, and barfs about ambiguous references where there are none. Unfortunately fidelity is always the problem with Model Driven Architectures, even ones that use a DSL as the model, because they have to - at some point - cater for a lowest common denominator, and at any rate they are in an arms race with whatever it is they're modelling. Microsoft is touting its extensible architecture, and IBM are apparently writing a DB2 provider, but what do you think the chances of 100% functionality coverage are? And for Oracle? I'll wager pretty low. So be prepared to take an 80/20 view, and put some bits into pre/post build scripts and / or re-structure them to keep the parser happy.

Then there are the little things. It creates a schema and object hierarchy when your database is initially imported, but it doesn't enforce it, and the Add \ New (whatever) templates don't pick up on it, so all your new objects end up in the wrong schema unless you're concentrating. If you do anything with any system views you'll need to reference the master database ( %ProgramFiles%\Microsoft Visual Studio 9.0\VSTSDB\Extensions\SqlServer\2008\DBSchemas), so why is it not referenced to start with? And the doco's not been updated for RTM so contains all kinds of outdated information around the command line interfaces, and absolutely no content in the API section other than listing the namespaces.

http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/3f0f0888-9b37-4279-81d4-b924595630a8/
http://social.msdn.microsoft.com/Forums/en-US/vstsdb/thread/a966ac3c-c039-4c3f-9b6d-2882d9874282

So this still strikes me as neither fish nor fowl. Given the designers already exist in Management Studio it's a bit pisspoor that the DataDude team haven't borrowed them yet. I imagine I'll still be doing most of my work in Management Studio, and at best attempting to keep the database model in sync. And I suspect that for me, and many others, this will be just too much friction to bother with and so the whole effort will be more or less wasted.

Which is all a bit sad, given it's 2009, I've written a decade's worth of database-centric apps, and we've still not got this right. At least no-one's actually paying for this any more.
