Monday, August 14, 2017

Taking the Availability Group Database Level Health Detection option for a spin

With SQL Server 2016, Availability Groups include the option to enable Database Level Health Detection. While this option is turned off by default, Microsoft strongly recommends enabling it on all Availability Groups; it is off by default only for backwards compatibility, so you opt in by choice.

In this post I will take a deep look into just what sort of database issues will cause the AG to fail over when this option is enabled.

For this walk-through I am using the SQL Server AlwaysOn deployment from the Azure Marketplace, and I have provided a link to all the scripts below so you can try out this functionality and show off your skills in your own demonstrations. If you don't have an Azure subscription, all you need is an environment with two replicas and multiple disks mounted in the virtual machines (so you can separate the transaction logs of different databases).

To follow along with the scripts and screenshots, here are the details of the resources within my test lab.

Primary Replica: SQLSERVER-0
Secondary Replica: SQLSERVER-1
File Share Witness: CLUSTER-FSW

Existing Availability Group: Contoso-ag
AG Database: AutoHa-sample



To make this demo easier, I have created a file share on the File Share Witness to store backups. It is best practice when using Availability Groups to use a central location for storing backups from all replicas. Obviously a File Share Witness is not that place, but for this demo lab it is fine.

Central Backup Share: \\cluster-fsw\sqlbackups


All the scripts used in this post are located here. This link is also contained within my final thoughts at the end of this post.

OK, let's get started.


1. Create the demo databases
To enhance this demo, we will set up new databases and an availability group to show how this setting only impacts one AG and not another.

Run the following TSQL to set up the demo databases. NOTE: I have explicitly put the transaction log for the SuspectDb on a different volume from the CorruptDb.

[01.CreateDbs.sql]
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:Connect SQLSERVER-0

USE master
GO

-- CorruptDb
CREATE DATABASE [CorruptDb] CONTAINMENT = NONE
ON PRIMARY (NAME = N'CorruptDb_data', FILENAME = N'F:\DATA\CorruptDb_data.mdf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
LOG ON ( NAME = N'CorruptDb_log', FILENAME = N'F:\LOG\CorruptDb_log.ldf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
GO
ALTER DATABASE [CorruptDb] SET COMPATIBILITY_LEVEL = 130
GO
ALTER DATABASE [CorruptDb] SET PAGE_VERIFY CHECKSUM  
GO
USE [CorruptDb]
GO
CREATE TABLE [dbo].[DemoData] (id INT IDENTITY PRIMARY KEY, demoData VARCHAR(200));
GO
INSERT INTO [dbo].[DemoData] (demoData) VALUES ('Test data prior to simulating page level corruption');
GO 200

-- SuspectDb (NOTE: G:\ for TransLog)
CREATE DATABASE [SuspectDb] CONTAINMENT = NONE
ON PRIMARY (NAME = N'SuspectDb_data', FILENAME = N'F:\DATA\SuspectDb_data.mdf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
LOG ON ( NAME = N'SuspectDb_log', FILENAME = N'G:\LOG\SuspectDb_log.ldf' , SIZE = 8192KB , FILEGROWTH = 65536KB )
GO
ALTER DATABASE [SuspectDb] SET COMPATIBILITY_LEVEL = 130
GO
ALTER DATABASE [SuspectDb] SET PAGE_VERIFY CHECKSUM  
GO
USE [SuspectDb]
GO
CREATE TABLE [dbo].[DemoData] (id INT IDENTITY PRIMARY KEY, demoData VARCHAR(200));
GO
INSERT INTO [dbo].[DemoData] (demoData) VALUES ('Test data prior to making the database suspect');
GO 200

2. Create the demo Availability Group
We won't actually add our corrupt database into this Availability Group yet; that will come in the next few steps.

[02.CreateAG.sql]
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.

-- Values you may need to change for your lab
-- Primary replica: SQLSERVER-0
-- Secondary replica: SQLSERVER-1
-- Backup Central Share: \\CLUSTER-FSW\SQLBACKUPS

---------------------------------------------------------
-- Prereqs for Availability Group
---------------------------------------------------------
:Connect SQLSERVER-0
-- Backup demo databases to share \\CLUSTER-FSW\SQLBACKUPS
BACKUP DATABASE [CorruptDb] TO DISK = N'\\CLUSTER-FSW\SQLBACKUPS\CorruptDb.bak' WITH FORMAT;  
BACKUP DATABASE [SuspectDb] TO DISK = N'\\CLUSTER-FSW\SQLBACKUPS\SuspectDb.bak' WITH FORMAT;  
GO  

---------------------------------------------------------
-- AG Endpoints
---------------------------------------------------------
-- If you are using your own lab then you need to create a Database Mirroring end-point
-- The Azure Market Place lab already has the end-point created for you, check these with
:Connect SQLSERVER-0
SELECT * FROM sys.database_mirroring_endpoints;
GO
:Connect SQLSERVER-1
SELECT * FROM sys.database_mirroring_endpoints;
GO
-- If you need to create an endpoint use the following syntax
--CREATE ENDPOINT hadr_endpoint  
--    STATE=STARTED   
--    AS TCP (LISTENER_PORT=5022)   
--    FOR DATABASE_MIRRORING (ROLE=ALL);  
--GO  
  
 
---------------------------------------------------------
-- Start the AlwaysOn Extended Events Session
---------------------------------------------------------
-- On the primary replica
:Connect SQLSERVER-0
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='AlwaysOn_health')
BEGIN
  ALTER EVENT SESSION [AlwaysOn_health] ON SERVER WITH (STARTUP_STATE=ON);
END
IF NOT EXISTS(SELECT * FROM sys.dm_xe_sessions WHERE name='AlwaysOn_health')
BEGIN
  ALTER EVENT SESSION [AlwaysOn_health] ON SERVER STATE=START;
END
GO
-- On the secondary replica
:Connect SQLSERVER-1
IF EXISTS(SELECT * FROM sys.server_event_sessions WHERE name='AlwaysOn_health')
BEGIN
  ALTER EVENT SESSION [AlwaysOn_health] ON SERVER WITH (STARTUP_STATE=ON);
END
IF NOT EXISTS(SELECT * FROM sys.dm_xe_sessions WHERE name='AlwaysOn_health')
BEGIN
  ALTER EVENT SESSION [AlwaysOn_health] ON SERVER STATE=START;
END
GO


 ---------------------------------------------------------
 -- Create the AG on the primary
 ---------------------------------------------------------
:Connect SQLSERVER-0
USE [master];
CREATE AVAILABILITY GROUP DbHealthOptDemoAg   
 WITH (DB_FAILOVER = ON)
 FOR DATABASE SuspectDb
 REPLICA ON
  'SQLSERVER-0' WITH   
   (  
   ENDPOINT_URL = 'TCP://SQLSERVER-0.Contoso.com:5022',  
   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,  
   FAILOVER_MODE = AUTOMATIC,
   SEEDING_MODE = AUTOMATIC
   ),  
  'SQLSERVER-1' WITH   
   (  
   ENDPOINT_URL = 'TCP://SQLSERVER-1.Contoso.com:5022',  
   AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,  
   FAILOVER_MODE = AUTOMATIC,
   SEEDING_MODE = AUTOMATIC
   );   
GO  

 ---------------------------------------------------------
 -- Join the secondary replica to the AG (allow for seeding)
 ---------------------------------------------------------
:Connect SQLSERVER-1
ALTER AVAILABILITY GROUP DbHealthOptDemoAg JOIN;  
GO  
ALTER AVAILABILITY GROUP DbHealthOptDemoAg GRANT CREATE ANY DATABASE;
GO

After that step you should have an Availability Group that looks like this:


On your primary replica you should have the following databases


On the secondary replica you should only have the SuspectDb for now. We still need to do some work to setup the CorruptDb.


If you do not have the SuspectDb on the secondary replica after joining the replica to the AG, then the Automatic Seeding option may not have succeeded. The script provided contains the required steps to restore and join the db on the secondary replica.



3. Verify Database Level Health Detection option is enabled
Open the properties of the AG and make sure the option is checked.



Alternatively, check that the value of db_failover is 1 in sys.availability_groups [04.VerifyAg.EnableDBHealthLevelDetection.sql]
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:Connect SQLSERVER-0
USE [master]
GO
SELECT name, db_failover FROM sys.availability_groups
GO

If you need to enable the setting, either use the GUI or run 03.AlterAG.EnableDBHealthLevelDetection.sql
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.
:Connect SQLSERVER-0
USE [master]
GO
ALTER AVAILABILITY GROUP [DbHealthOptDemoAg] SET(DB_FAILOVER = ON);
GO


4. Corrupt the demo database
The first thing we are going to look at is what happens when you have page-level corruption, and whether corruption alone really counts as unhealthy. Full credit to Paul Randal for an existing blog on how to do this: see https://www.sqlskills.com/blogs/paul/dbcc-writepage/

Because we are going to use DBCC WRITEPAGE, which needs the database to be in single-user mode, we have not yet been able to add it to the Availability Group. This won't impact the demo though, as the integrity of data pages is not validated when a database is brought online or added to an availability group.

Run the following statements to corrupt the database. WARNING!!! Use this at your own risk and never, ever do this in a production environment.

[05.CorruptData.sql]
-- get the page information of our demo database and table
:Connect SQLSERVER-0
DBCC IND (N'CorruptDb', N'DemoData', -1);
GO

-- Corrupt the page data (remember to change the page number)
:Connect SQLSERVER-0
ALTER DATABASE [CorruptDb] SET SINGLE_USER;
GO
DBCC WRITEPAGE (N'CorruptDb', 1, [PageId], 4000, 1, 0x45, 1);
GO
ALTER DATABASE [CorruptDb] SET MULTI_USER;
GO

-- Check that we have caused page level corruption by forcing the page to be read from disk
:Connect SQLSERVER-0
DBCC DROPCLEANBUFFERS
GO
USE [CorruptDb]
GO
SELECT * FROM [dbo].[DemoData];
GO


If this goes to plan then you should receive this error:

Msg 824, Level 24, State 2, Line 27
SQL Server detected a logical consistency-based I/O error: incorrect checksum (expected: 0x3ea8609e; actual: 0x3ea8259e). It occurred during a read of page (1:320) in database ID 6 at offset 0x00000000280000 in file 'F:\DATA\CorruptDb_data.mdf'.  Additional messages in the SQL Server error log or system event log may provide more detail. This is a severe error condition that threatens database integrity and must be corrected immediately. Complete a full database consistency check (DBCC CHECKDB). This error can be caused by many factors; for more information, see SQL Server Books Online.
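
As a side note, SQL Server also records the damaged page in msdb, so you can confirm what was detected. This is just a quick check I find useful rather than one of the provided scripts (event_type 1 covers 823/824 errors):

:Connect SQLSERVER-0
SELECT database_id, file_id, page_id, event_type, error_count, last_update_date
FROM msdb.dbo.suspect_pages
WHERE database_id = DB_ID(N'CorruptDb');
GO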




5. Add the database into the Availability Group
Now that we have a corrupted database, let's add it into the availability group using the following statements.

[06.AddCorruptDbToAG.sql]
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.

-- Add the database to the AG on the Primary
:Connect SQLSERVER-0
USE [master]
GO
ALTER AVAILABILITY GROUP [DbHealthOptDemoAg]
MODIFY REPLICA ON N'SQLSERVER-1' WITH (SEEDING_MODE = AUTOMATIC)
GO
USE [master]
GO
ALTER AVAILABILITY GROUP [DbHealthOptDemoAg]
ADD DATABASE [CorruptDb];
GO

-- Make sure the secondary is set to Auto Seed with CREATE DB permissions
:Connect SQLSERVER-1
ALTER AVAILABILITY GROUP [DbHealthOptDemoAg] GRANT CREATE ANY DATABASE;
GO


-- If auto seeding doesn't work, check the logs; if you haven't cleaned up the
-- data and log files from previous demos, they may prevent auto seeding.

-- The following statements can be used if AUTO Seeding doesn't run (once the issue is resolved)
:Connect SQLSERVER-0
BACKUP DATABASE [CorruptDb] TO DISK = N'\\CLUSTER-FSW\SQLBACKUPS\CorruptDb_addtoAG.bak' WITH FORMAT;  
GO
 :Connect SQLSERVER-1
RESTORE DATABASE [CorruptDb] FROM DISK = N'\\CLUSTER-FSW\SQLBACKUPS\CorruptDb_addtoAG.bak' WITH NORECOVERY;
GO 
:Connect SQLSERVER-0 
BACKUP LOG [CorruptDb] TO DISK = N'\\CLUSTER-FSW\SQLBACKUPS\CorruptDb_addtoAG.trn' WITH FORMAT;
GO  
:Connect SQLSERVER-1
RESTORE LOG [CorruptDb] FROM DISK = N'\\CLUSTER-FSW\SQLBACKUPS\CorruptDb_addtoAG.trn' WITH NORECOVERY;  
GO
ALTER DATABASE [CorruptDb] SET HADR AVAILABILITY GROUP = [DbHealthOptDemoAg];  
GO  


6. Reviewing the behavior of a corrupted database in Availability Groups
Now you should have both databases in the Availability Group. Open the Availability Group Dashboard and take note that everything is in a healthy state.


.... but wait. Didn't we turn on database level health detection for the AG and corrupt the database? Run a statement to force SQL Server to generate the logical consistency I/O error like before.

[07.GenerateLogicalConsistencyError.sql]
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.

-- Generate the Logical Consistency check error
:Connect SQLSERVER-0
DBCC DROPCLEANBUFFERS
GO
USE [CorruptDb]
GO
SELECT * FROM [dbo].[DemoData];
GO

But did this cause a failover? Check the state of the Availability Group either within Object Explorer or within the dashboard.



Notice that this didn't cause a failover. That is because the error is not enough to affect the database state. If this database was not in an Availability Group it would still remain in an online state (as it did when we corrupted it). Database Level Health Detection only triggers when an event is significant enough to affect the database state (e.g. Offline, Suspect, etc.). To quote the official documentation: "The database level health detection notices when a database is no longer in the online status".
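
You can confirm that for yourself with a quick query against sys.databases (a hedged example, not one of the provided scripts); CorruptDb should still report ONLINE on the primary:

:Connect SQLSERVER-0
SELECT name, state_desc, user_access_desc
FROM sys.databases
WHERE name IN (N'CorruptDb', N'SuspectDb');
GO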

So the lesson at this point is that database corruption, at least page-level corruption, is not enough to trigger a failover. Even if you run a DBCC CHECKDB it won't trigger the failover (HINT: give that a go for yourself). So the recommendation here is that you should be running regular DBCC CHECKDB on secondaries and failing over frequently (thus running consistency checks on all replicas regularly).
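
If you want to prove that to yourself, something along these lines will surface the consistency errors without changing the database state or triggering a failover (a sketch only; schedule proper integrity checks to suit your environment):

:Connect SQLSERVER-0
-- Reports the consistency errors for the corrupted page,
-- but the database stays ONLINE and the AG role does not change
DBCC CHECKDB (N'CorruptDb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
GO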

7. Simulate Suspect database state
So now let's look at what happens when something significant happens to affect the database status. The easiest way to simulate this would be to use the ALTER DATABASE statement; however, when a database is added to an Availability Group there are many statements that the database engine will not allow you to execute, to protect the state of the database.

So the way we can replicate this is to simulate an I/O failure.

Remember that when we created the SuspectDb on SQLSERVER-0, I set the transaction log's path to a different disk/partition from the other databases.

Open Disk Management on the primary node SQLSERVER-0


Locate the disk hosting the G:\ volume, or whichever volume you have stored the SuspectDb transaction log file on. Right-click the disk and select Offline.


In SQL Management Studio, refresh the Availability Groups branch of Object Explorer for SQLSERVER-0. Notice that it is still the primary replica.



At this stage we haven't caused an I/O operation (in our lab there are no maintenance tasks to back up the transaction logs). So let's insert some data to cause the transaction log to be accessed.

[08.WriteToSuspectDbTransLog.sql]
--- YOU MUST EXECUTE THE FOLLOWING SCRIPT IN SQLCMD MODE.

:Connect SQLSERVER-0
USE [SuspectDb]
GO
INSERT INTO [dbo].[DemoData] (demoData) VALUES ('Data to push through the TransLog');
GO

Bingo, we generated an I/O error which is significant enough to affect the database status.

Msg 945, Level 14, State 2, Line 3
Database 'SuspectDb' cannot be opened due to inaccessible files or insufficient memory or disk space.  See the SQL Server errorlog for details.


But what happened to the Availability Group? Refresh Object Explorer for SQLSERVER-0, and notice the status of the database and the Availability Group role.


So it looks like we failed over, but only for DbHealthOptDemoAg, because that was the only AG where a database status changed. Let's check the state of our Availability Group post failover. Switch to SQLSERVER-1 in Object Explorer and expand the Availability Groups. Open the Dashboard for the Availability Group.


This confirms we have failed over, with SQLSERVER-1 now the primary replica, due to the I/O error experienced on SQLSERVER-0 from the storage sub-system failure (loss of disk). The Database Level Health Detection setting worked. Without that setting enabled this would not have caused a failover.
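
If you prefer TSQL to the dashboard, a query along these lines against the replica state DMVs (a hedged sketch, not one of the provided scripts) will show SQLSERVER-1 now holding the PRIMARY role for DbHealthOptDemoAg:

:Connect SQLSERVER-1
SELECT ag.name AS ag_name,
       ar.replica_server_name,
       ars.role_desc,
       ars.operational_state_desc,
       ars.synchronization_health_desc
FROM sys.dm_hadr_availability_replica_states AS ars
JOIN sys.availability_replicas AS ar ON ars.replica_id = ar.replica_id
JOIN sys.availability_groups AS ag ON ars.group_id = ag.group_id;
GO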


My final thoughts
The Database Level Health Detection setting is a great new capability for increasing the availability of the databases contained in an Availability Group when one of those databases experiences an issue significant enough to affect its status, like a loss of disk. It still will not protect you from certain data issues like corruption, so monitoring and maintenance are still critical!

Finally, you should also consider the Flexible Automatic Failover Policy. While I do not recommend changing this blindly, you should ensure you understand what role that setting plays in Automatic Failover.
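
For reference, that policy is controlled by the FAILURE_CONDITION_LEVEL and HEALTH_CHECK_TIMEOUT options on the AG. Here is a hedged example of how you could inspect them and what a change would look like (the values shown are simply the defaults, not a recommendation):

:Connect SQLSERVER-0
SELECT name, failure_condition_level, health_check_timeout, db_failover
FROM sys.availability_groups;
GO
-- Example only: level 3 and a 30 second health check timeout are the defaults
ALTER AVAILABILITY GROUP [DbHealthOptDemoAg] SET (FAILURE_CONDITION_LEVEL = 3);
GO
ALTER AVAILABILITY GROUP [DbHealthOptDemoAg] SET (HEALTH_CHECK_TIMEOUT = 30000);
GO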

All the scripts used in this post are located in my GitHub https://github.com/Matticusau/SQLDemos/tree/master/SQL2016AGDBHealthLevelDetection



References relevant to this post

Availability group database level health detection failover option

Credit to Paul Randal for how to use DBCC WRITEPAGE - https://www.sqlskills.com/blogs/paul/dbcc-writepage/


Configure Flexible Automatic Failover Policy

Flexible Automatic Failover Policy - Availability Group


Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Thursday, April 27, 2017

Installing SqlServer PowerShell module from PowerShellGallery.com

With the release of SQL Server Management Studio 17.0 (SSMS), the SqlServer PowerShell module has finally been decoupled and can now be installed independently. To take full advantage of this change I recommend using PowerShell 5.0 (if you're on Windows 10 or Server 2016 then this is your default).

Full details of the SSMS 17.0 updates https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-management-studio-ssms

The official installation details for the SqlServer module are at
https://docs.microsoft.com/en-us/sql/ssms/download-sql-server-ps-module but they are a bit light on detail currently, so this post will walk through the process in more detail. You can also find the public listing of the SqlServer module on PowerShellGallery.com.

Firstly check what modules you have installed.

Get-Module -Name SQL* -ListAvailable


If you have installed the SQL Server database engine then you should have the SQLPS module for that particular version. On my server I upgraded SSMS 2016 to SSMS 17.0, so I have both the 130 and 140 versions installed.

What I don't currently have installed is the SqlServer PowerShell module, and my SQLPS module hasn't been updated by the upgrade of SSMS to version 17.0.

So let's download and install the latest SqlServer module.

First, check that your PowerShellGet is configured for the public gallery (the default setup).

Get-PSRepository



You should have the PSGallery repository set up. By default it will be untrusted, which just means you get prompted before installing any modules from it. It is recommended that public repositories are not marked as trusted, but internal private repositories can be, to make installations easier.

If you don't have any other repositories configured then you won't have to supply the repository name to any future commands, but if you do, you can simplify module searches by specifying the repository name. This is very useful if you want to host private custom copies of publicly available modules on a private repository.
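
For example, here is roughly what that looks like. Both cmdlets are part of PowerShellGet; the internal repository name below is just a hypothetical placeholder:

# Scope a search to the public gallery explicitly
Find-Module -Name SqlServer -Repository PSGallery

# Example only: mark a private internal repository as trusted ('MyInternalRepo' is a made-up name)
Set-PSRepository -Name 'MyInternalRepo' -InstallationPolicy Trusted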

Now you can search for the module in the gallery to check what the most recent available version is.

Find-Module -Name SqlServer

NOTE: If prompted to continue enter 'y'



Add the -AllVersions parameter to list all the available versions. Currently there is only one version in the gallery, but this will change over time.

To install the module from the gallery run Install-Module. Use the Scope parameter to install for either CurrentUser or AllUsers based on your needs. With PowerShell 5.0 it is also possible to install versions side-by-side. In my case I am installing the module for all users and requiring a specific version (the current one).

To avoid an error message like "A command with name '' is already available on this system", I will also use the -AllowClobber parameter.



Install-Module -Name SqlServer -Scope AllUsers -RequiredVersion 21.0.17099 -AllowClobber





Now check that the module is available for use on the server with

Get-Module -Name SqlServer -ListAvailable



Notice that the path is now one of the common $PSModulePath paths. This is also part of the improvements that these changes bring.

Going forward, to update the module you can use the Find-Module and Update-Module cmdlets. Look out for my post on automating that process next month.
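
As a quick preview, the basic pattern looks something like this (a sketch only; the fuller automation will be in that later post):

# See what is installed locally and what the gallery currently has
Get-Module -Name SqlServer -ListAvailable | Select-Object Name, Version
Find-Module -Name SqlServer -Repository PSGallery | Select-Object Name, Version

# If the gallery version is newer, pull it down (installs side-by-side on PowerShell 5.0)
Update-Module -Name SqlServer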


Happy PowerShell'ing


Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.


Tuesday, March 28, 2017

Cmder, enhance your Windows Terminal Console experience

One of the best new tools I have been using lately is Cmder, a terminal emulator for Windows which is great for a number of reasons:

  • Tabbed terminals 
  • Multiple languages can be emulated
  • Fully customisable
  • and best of all Quake mode
If you haven't seen it yet check it out http://cmder.net



I do a lot of work with PowerShell and Git, and usually have many things on the go at once. What I really like about Cmder is that you can enable Quake mode, which hides the terminal window at the top of your monitor screen and allows a keyboard shortcut (Ctrl+`) to show/hide it. This makes it very quick to open it over the top of other applications you're working in (e.g. VS Code) and run commands. For example, when I am working on development projects I usually have VS Code open to edit the source code files, and then I will have a PowerShell or Bash terminal emulated in Cmder for all my Git commands on the repo. Of course you could just use the Git feature of VS Code in this case, but I usually find there is some conflict I have to resolve at the Git console, so I just prefer to work in that. There are plenty of other reasons I use Cmder too.

To get you started, here are my quick setup steps, which will set it up with at least Quake mode and your favorite terminal as the default.

  1. Download and install Cmder from http://cmder.net
  2. Open Cmder.exe from the location you extracted it to.
  3. Click on the 3 bar menu item in the bottom right

  4. Select Settings

  5. I have a Surface Book as my laptop which comes with a very large resolution (3000x2000) so I found I needed to change the "Size & Pos" settings, but obviously this is personal preference.

  6. Click the Main > Quake style option in the setting menu. Ensure "Quake style slide down" is enabled and any other options as you require.

  7. Select the Startup option from the settings menu and select the desired default task in the "Specified named task" drop down.



    This will give you the option of setting one of the default tasks as your default startup terminal. Such as Cmd.exe, PowerShell, Bash, etc. If you require anything more specific then use the Tasks section to define your own custom task. This is handy too for extending the functionality of Cmder, like setting up a SQLCMD terminal or specific PowerShell terminals for different technologies (e.g. Azure, SQL, Exchange, etc).
  8. If you want to have Cmder start at logon (something I have found increasingly important as I grew to depend on Cmder for my terminal use), select the Integration > Default term option in the settings menu, then check the "Register on OS Startup" option. NOTE: For this setting to work you also need to set "Force ConEmu as default terminal for console applications" (you will be prompted to do so if you don't check it first); the startup setting has no effect without it.

  9. Once you have configured the required settings, such as Task Bar minimisation and other desired behaviors, click Save Settings.
  10. Now return to your other applications; when you need a terminal press Ctrl+` (that is Ctrl plus the backtick/tilde key) and the Cmder terminal will load.
For documentation of any of the setting see: https://conemu.github.io/en/TableOfContents.html#settings

Some handy keyboard shortcuts, for the full list see http://cmder.net

Ctrl+` : Show/Hide the Cmder terminal
Ctrl+Number : Switch to that active terminal tab
Shift+Alt+Number : Fast new tab (e.g. Shift+Alt+1 for Cmd.exe or Shift+Alt+2 for PowerShell)

For the official documentation see https://conemu.github.io/en/TableOfContents.html

Happy terminal emulating


Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Why VSCode has replaced Management Studio as my default SQL Database and Query editor

Firstly let me start by stating that when I originally set out in an IT career I was heading down a developer path, and certainly had a number of developer type roles over the years, or found ways of continuing development projects while working in infrastructure roles..... probably why I have an interest in DevOps. So taking that into account it's no surprise that for my entire career I have always been comfortable working in code and not relying on GUIs. Even for all the years as a SQL DBA armed with SQL Management Studio (SSMS), yet I was always most comfortable working in TSQL rather than the wizards. Probably comes from the days of Enterprise Manager and Query Analyser (ahhhhh nostalgia). Now the MS Product Team has done a great job at improving the wizards in SSMS and making tasks as easy as they can be in the tools. I will also state that this post is by no means saying SSMS is dead because there are just some things where it is better positioned.

What I will cover in this blog post is why my go to TSQL editor and tool for general database work is now VSCode with the MSSQL extension.

Please don't take this as a statement that I have now uninstalled SSMS or Visual Studio with SQL Data Tools (SSDT) from my laptop (I wish), but I have always found those tools a bit bloated with memory consumption when all I want to do is connect to a database, run some queries, or make some basic changes. What I will show is why/how I now perform those tasks with VS Code, but for anything more in depth, like designing SSIS packages or performance troubleshooting, I still rely on the existing tools (for now).

Another factor driving this adoption of a text based editor is that a large amount of my work is now with Azure and other cloud solutions, and the majority of that work is largely console or script based.

Now that you know why I have arrived at this place, let's get into how I set up and use VS Code for this purpose. I look forward to healthy discussions with people around this because I am not a believer in the "one size fits all" approach to a tool set either, so it is always great to hear what others use.


Setup and configure your environment
Here are the steps I use to setup my VSCode environment:

  1. Download and install VSCode https://code.visualstudio.com/download
  2. Open VS Code
  3. Press Ctrl+Shift+X (on windows)
    1. Alternatively use the View > Extensions menu item
  4. Locate and install the following extensions
    1. vscode-icons
    2. mssql
    3. powershell
    4. c#
  5. Configure the extensions
    1. From File > Preferences > File Icon Theme select "VSCode Icons"
      This will ensure that any files you open and access have nicely displayed icons to make your experience easier.
  6. Configure the environment settings
    1. From File > Preferences > Settings
      1. VS Code works in two setting modes, User and Workspace. User should be personal preferences and Workspace should be used for project specific settings that will ship with the repo.
        User settings are stored in the file C:\Users\\AppData\Roaming\Code\User\settings.json but you shouldn't have to edit that manually as the VS Code window provides the best method for working with these files.
    2. I don't change too many settings at this time from the default, but some to consider depending on your needs are:
      mssql.splitPaneSelection = "current|next|end"

      IntelliSense will help you complete the values if you need to see what is available.
  7. Now you should be ready to start working inside VS Code. However, I recommend reading the release notes when new updates are made, as the developer community is extremely active in improving VS Code and there are always new and useful features being added.

While VSCode has a built-in integrated terminal, I like the Cmder tool for my terminal use. If you aren't familiar with Cmder, check it out: it is very versatile, can run multiple terminals and languages, and best of all has a Quake mode.


Connecting to a database and executing SQL queries
There are many tricks and ways to work within VS Code, but here is a simple walk-through of the basics to get you started.
  1. Open VS Code if you haven't already
  2. You do not need to open a folder or save files just to run queries, but it could be beneficial. Think of a folder like a Project/Solution, but in a simpler (faster) format. This works great with Git and cross platform collaboration.
    For the case of this walk through just create a new file (click New File on the welcome page)
  3. Without saving the file, let's make sure we are in the right language mode.

    Click the current language in the tray menu (e.g. Plain Text)



    This will open the command palette with prompts to select the language. Either browse or type to find your language and select it.



    Now the correct SQL language is shown in the tray menu

     

    Now the color coding and formatting, along with IntelliSense, will be suitable for SQL Server development.

    TIP: When you save a file the language mode is automatically detected based on the file extension.
  4. Press Ctrl+Shift+P to open the Command Palette
  5. Type "mssql" and select the mssql: Connect option or press Ctrl+Shift+C



    TIP: Make sure your focus is in a file with the SQL language set, and not in any other area of VSCode, when you press Ctrl+Shift+C, as otherwise it will open a console as per that keyboard shortcut's default.
  6. Select an existing connection profile or select Create Connection Profile to create a new one. So let's create one.
  7. Follow the wizard filling out your server/instance, database (optional), authentication etc.







    Once you start to connect the status is shown in the tray menu



    Any errors connecting will be shown with an overlay



    Once connected, VS Code will update the IntelliSense dictionary and perform other operations set by the extension.
  8. Now write your query in the file
  9. When ready you can execute the query in a few methods

    Use the Command Palette and the MSSQL: Execute Query command.



    Right click in the editor and select



    or my favorite just simply press Ctrl+Shift+E
  10. The query results tab will open. By default this opens in a new split window column, or the next one if you have multiples. The idea here is so you can see the query and result all in one window.



    You can put the query results at the bottom of the screen, which might be a more familiar view to those used to SSMS. To do this select Toggle Editor Group Layout from the View menu, or press Alt+Shift+1.



    Now the results are below the query you executed.



    Alternatively you can also set the query results to display in the current split window column (e.g. new tab)



    So as you can see you can customise where the results are displayed just like in SSMS.

    Something to keep in mind is that a new result tab will open for every file you execute a query from, but if you re-run a query or a new query from the same file then it will use the existing results tab for that file.
  11. Now just like the query editor in SSMS, it will either execute the entire file contents or what you have selected. So like in this example it will just execute the selected query and not the entire file contents.



    This is why I like the keyboard shortcut Ctrl+Shift+E to execute queries because it becomes really quick to work from a file and execute different selected queries as desired.
Obviously some people will really miss Object Explorer for understanding the schemas of databases they are not familiar with, but keep in mind VSCode is designed for developers, so typically you would have a folder that contains all the scripts for creating the database (and therefore your schema to refer to), or you would be familiar with the schema. However, as we all know, there are plenty of catalog views you can easily query to get that data (because after all, that is all Object Explorer does).
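
For example, a couple of catalog view queries along these lines (a rough sketch only; the table name is just a placeholder) cover most of what I personally used Object Explorer for:

-- List tables and their schemas in the current database
SELECT s.name AS schema_name, t.name AS table_name
FROM sys.tables AS t
JOIN sys.schemas AS s ON t.schema_id = s.schema_id
ORDER BY s.name, t.name;

-- List columns and data types for a given table (replace dbo.DemoData with your table)
SELECT c.name AS column_name, ty.name AS data_type, c.max_length, c.is_nullable
FROM sys.columns AS c
JOIN sys.types AS ty ON c.user_type_id = ty.user_type_id
WHERE c.object_id = OBJECT_ID(N'dbo.DemoData')
ORDER BY c.column_id;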


Happy SQL Scripting.


Registering your SQL Server connections
VS Code has a JSON-based configuration system. SQL connections can be saved in the User Settings file; think along the lines of "Registered Servers" in SSMS. I have already briefly touched on how to create a new profile when you connect. However, here is how to register them ahead of time and manage existing connection profiles.

Keep in mind though, these connections are not unique to a project/solution/folder; they are unique to your user settings. So make sure you give them meaningful names to easily identify which databases/projects they belong to.
  1. Press Ctrl+Shift+P to open the Command Palette
  2. Type "mssql" and select the MSSQL: Manage Connection Profiles option

  3. The Command Palette will then prompt you with some more options.



    Create: This will walk you through creating a new profile via the Command Palette prompts
    Edit: This will open the User Settings JSON file and allow you to manually edit the connection profiles. NOTE: Passwords can be saved in an encrypted form but are not stored in this file for security.
    Remove: This will walk you through removing an existing profile via the Command Palette prompts

    This is an example of the JSON configuration provided with the Edit option (a hedged sample is sketched just after this list).

  4. Once you have configured the profile you can then simply select it from the list provided under the MSSQL: Connect command.
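
If it helps to visualise it, the saved profiles end up under the mssql.connections setting in your User Settings, roughly like the hedged sample below (property names per the mssql extension; the profile values are made up for illustration):

"mssql.connections": [
    {
        "profileName": "Contoso-Dev",
        "server": "SQLSERVER-0",
        "database": "AutoHa-sample",
        "authenticationType": "Integrated",
        "savePassword": false
    }
]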

References
VS Code official site https://code.visualstudio.com/
VS Code opensource repo  https://github.com/Microsoft/vscode

Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.


Thursday, February 2, 2017

Fixing Merge conflicts in VS Code projects and VSTS (Visual Studio Online) repos

I use a combination of GitHub and Visual Studio Team Services (Visual Studio Online, VSTS, etc.) repositories. GitHub is pretty straightforward for resolving merge conflicts in pull requests through its online portal. Typically when I have come across merge conflicts in my VSTS repositories it has been with Visual Studio projects, so I could resolve them with the VS GUI. However, I recently experienced a merge conflict with a pull request for one of my VSTS repositories where the code was written in Visual Studio Code. For working with the VSTS Git repo I use posh-git, which comes with the GitHub Desktop install, but the logic here should transfer.

The situation I faced was that I had been working in a branch and editing a file which had also been edited by a colleague in the master branch (I know, I know...). So when I pushed my changes from my local repo to VSTS and then used the web portal to create a pull request to merge the changes back into the master branch, I received an error that there was a merge conflict blocking the pull request.


As I mentioned, normally in this situation I am used to being given options to select which file/change you want to keep. In VSTS I was not able to find any such option, and all articles online seemed to indicate how to fix this in the Visual Studio GUI. While I do have VS installed, I didn't want to use it as I am trying to move to using just VS Code and the Git command line where possible. I never did think to try the Git menus in VS Code, or the Git GUI... and to be honest there is probably a better way to resolve this, but this is how I resolved the conflict.

First I abandoned the Pull Request using the button in the screen shot above.

Next I ran git status to make sure that the "working tree is clean"

Then, by looking at the history of the file in the VSTS portal within my branch, I tracked down the original commit id that the file was initialised with. Taking that identifier I ran git reset InitialCommitId PathToFile (e.g. git reset e99a################################c60 XML\9.xml).

Now when you run git status it should show the file as unstaged. It will also show that there are changes for the same file to be committed. We need to discard the changes in the working directory (resetting it to the initial version the branch was created from), so run git checkout -- XML\9.xml

Now git status should show a change to be committed. Commit the change with git commit -m "description"

Your working directory should now show it is ahead of the origin branch, so push the changes upstream with git push.
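
Putting those steps together, the whole sequence looked roughly like this (the commit id placeholder stands in for the one I tracked down in the portal, and the file path is just from my example):

git status                              # confirm the working tree is clean
git reset <InitialCommitId> XML\9.xml   # reset the index entry for the file to the initial commit
git status                              # the file shows as unstaged, with a change to be committed
git checkout -- XML\9.xml               # discard the working directory changes for the file
git commit -m "Reset 9.xml to resolve merge conflict"
git push                                # push the branch back up to VSTS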

Now your branch in VSTS does not contain any conflicts. I wasn't able to reactivate the existing pull request and have it succeed (it still showed the conflict), so I created a fresh pull request for the branch and it was no longer blocked by a merge conflict.

Hopefully this helps others. If you have other suggestions on working around this please comment below.

Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.