Friday, March 20, 2015

PowerShell: Identifying Strongly or Weakly Typed Variables

PowerShell is a loosely typed language, which means we can run something like this and PowerShell will automatically select the data type for us.

$var = 123;
$var.GetType();

Results in: System.Int32

And if we then change the value to a string, PowerShell naturally changes the data type for us.

$var = "string";
$var.GetType();

Results in: System.String 

Now if you are like me and grew up with strictly typed languages, then you may prefer to strongly type your variables. There are very good reasons for doing this, as it guarantees the data type you will be working with, particularly for user input (although there are input validation methods you can use in addition to this too).

To strongly type a variable we prefix the variable name (at creation) with the desired type.
[string]$var = "string";
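A quick sketch of what strong typing buys you: once the type is declared, compatible values are converted to that type and incompatible values are rejected outright.

```powershell
# Strongly type the variable as an integer
[int]$count = 10

# A convertible string is silently coerced to Int32
$count = "25"
$count.GetType().Name    # Int32

# A non-convertible value throws instead of changing the type
try {
    $count = "abc"
}
catch {
    Write-Host "Rejected: $($_.Exception.Message)"
}
```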

Now this is what I would consider good coding practice. So how do we determine if variables have been strongly typed? The following script creates an array to hold all known system variables, because we don't want to mess around with them. It then uses Get-Variable and looks at the Attributes property of each variable to determine if a type conversion was set. This method could also be used to check for some of the other attribute options (e.g. range validation).


<#
This script can be used to check for Strongly and Weakly typed variables defined within the session
Things to note:
- Variables created in child scopes which have ended will not be visible
- Some System Variables may appear in the list
#>

#create an array to hold all the known system variables (variables from other modules are not included here)
$SystemVars = @('$'
    , '?'
    ,'^'
    ,'true'
    ,'false'
    ,'args'
    ,'Error'
    ,'ErrorView'
    ,'ExecutionContext'
    ,'FormatEnumerationLimit'
    ,'HOME'
    ,'Host'
    ,'input'
    ,'ConfirmPreference'
    ,'ConsoleFileName'
    ,'DebugPreference'
    ,'ErrorActionPreference'
    ,'MaximumAliasCount'
    ,'MaximumDriveCount'
    ,'MaximumErrorCount'
    ,'MaximumFunctionCount'
    ,'MaximumHistoryCount'
    ,'MaximumVariableCount'
    ,'MyInvocation'
    ,'NestedPromptLevel'
    ,'null'
    ,'OutputEncoding'
    ,'PID'
    ,'profile'
    ,'ProgressPreference'
    ,'PSBoundParameters'
    ,'PSCommandPath'
    ,'PSCulture'
    ,'PSDefaultParameterValues'
    ,'PSEmailServer'
    ,'PSHOME'
    ,'psISE'
    ,'PSScriptRoot'
    ,'PSSessionApplicationName'
    ,'PSSessionConfigurationName'
    ,'PSSessionOption'
    ,'PSUICulture'
    ,'psUnsupportedConsoleApplications'
    ,'PSVersionTable'
    ,'PWD'
    ,'ShellId'
    ,'StackTrace'
    ,'SystemVars'
    ,'VerbosePreference'
    ,'WarningPreference'
    ,'WhatIfPreference'
)

#view all variables and report if Weakly or Strongly typed
Get-Variable -Exclude $SystemVars | Select Name, @{Name='TypedState';Expression={
    If ($_.Attributes -match 'System.Management.Automation.ArgumentTypeConverterAttribute') {'Strong'} else {'Weak'}
}};


Legal Stuff: As always the contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Friday, March 6, 2015

Two new PowerShell v5 features worth waiting with bated breath or becoming an early adopter for

Recently there were two new PowerShell v5 Preview features which IMHO are well worth waiting with bated breath for... or definitely becoming a WMF 5.0 early adopter for :)

The first is the ISE extension for finding and installing modules. This feature takes advantage of the new PowerShellGet module. Best of all, the PowerShellGet functionality helps our community overcome the module management problem, and this feature provides an easy-to-use GUI for discovering modules in community or company repositories.

ISE Module Browser - A new way to manage your PowerShell modules
http://blogs.msdn.com/b/powershell/archive/2015/02/23/ise-module-browser-a-new-way-to-manage-your-powershell-modules.aspx


The second is a static code analyser for PowerShell scripts and modules. Taking a plugin written by a colleague, the PowerShell team have now included the functionality in the WMF 5.0 framework (Feb release). It analyses your scripts against known patterns and practices. It is well worth running against your scripts, even just to satisfy curiosity; I know I will be.

PowerShell Script Analyzer: Static Code analysis for Windows PowerShell scripts & modules
http://blogs.msdn.com/b/powershell/archive/2015/02/24/powershell-script-analyzer-static-code-analysis-for-windows-powershell-scripts-amp-modules.aspx


Azure: How to utilise the Primary and Secondary storage access keys

One topic that comes up a bit when people first start to deal with Microsoft Azure storage is "what is the purpose of the Primary and Secondary storage access keys?". The good news is that the guidance/documentation provided for Azure services and features is very thorough, and even includes screenshots. I've linked to the best documentation below.

The main purpose of the Primary and Secondary keys (and this doesn't just apply to storage resources) is to minimize application impact when the keys need to be regenerated. In fact the quote from the Azure documentation is: “By providing two storage access keys, Azure enables you to regenerate the keys with no interruption to your storage service or access to that service.”

Let's take the scenario where a trusted employee has left the organization. This employee was either the Azure Account Administrator, a developer, or for whatever reason had access to one or both of the Storage Access Keys for a critical storage blob in your Azure account. You also have a critical online application hanging off this blob storage, where downtime costs you $$$$.

With only a single Access Key, the process would have to be the following, which, depending on the time between steps 1 and 4, could mean a considerable outage for the application.
  1. Stop the online application
  2. Regenerate the Access Key
  3. Update the application configuration
  4. Start the application
Maybe in the above scenario you don't even have the ability to stop the application; then the impact becomes even more severe, because your users will receive errors as the application code fails with the out-of-date key.

Now with the Primary and Secondary Access Keys the scenario has a few more steps, but far less, if not zero, outage to the application.
  1. Regenerate the Secondary Access Key.
    This is important, as the employee may have had access to this key as well, and if they left under "bad terms" this becomes even more critical. Additionally, you should regenerate keys routinely; by doing this first you can test that the Secondary key works, and should you have any issues, immediately stop this process and revert to the existing Primary key while investigating what failed in your application.
  2. Update the Application Code to use the new Secondary Access Key
  3. Pause, take a breath….
    Give yourself some time, depending on your application, to ensure all active processes have finished and new processes have switched over to the new key.
  4. Check your application is working off the Secondary Key and is successful. If not then you might want to revert to the existing key while you find the other places you need to update the Storage Account details.
  5. Regenerate the Primary Access Key.
  6. Update the Application Code to use the new Primary Access Key
  7. Repeat steps 3 and 4
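If you manage your storage through the Azure PowerShell module, the rotation above can be scripted. This is a rough sketch using the classic Service Management cmdlets of the era (New-AzureStorageKey / Get-AzureStorageKey); the account name is hypothetical, and you should verify the cmdlets against your installed module version.

```powershell
# Assumes Add-AzureAccount has been run and the right subscription is selected.
$accountName = "mystorageaccount"   # hypothetical account name

# Step 1: regenerate the Secondary key
New-AzureStorageKey -StorageAccountName $accountName -KeyType Secondary

# Step 2: fetch the new key so the application configuration can be updated
$keys = Get-AzureStorageKey -StorageAccountName $accountName
$keys.Secondary   # push this value into your application configuration

# Steps 3-4: pause, then verify the application is healthy on the Secondary key

# Step 5: regenerate the Primary key, then repeat the update/verify cycle
New-AzureStorageKey -StorageAccountName $accountName -KeyType Primary
```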


The best document to review regarding this topic is: http://azure.microsoft.com/en-us/documentation/articles/storage-create-storage-account/


There are some caveats when dealing with storage keys for Virtual Machines, Media Services, and applications (the application side is covered above). For the full details see the above link, but these are the extracts for reference:

Virtual Machines - If your storage account contains any virtual machines that are running, you will have to redeploy all virtual machines after you regenerate the access keys. To avoid redeployment, shut down the virtual machines before you regenerate the access keys.

Media services - If you have media services dependent on your storage account, you must re-sync the access keys with your media service after you regenerate the keys.



My suggested best practices
  1. Routinely regenerate your keys so you can more easily track who has access to the environment
    1. Do NOT regenerate keys on storage holding virtual machines, unless this can be factored into your monthly/quarterly patch maintenance
  2. Use separate Storage Accounts for Virtual Machines and other application storage (e.g. Tables, etc).
  3. Run the application off the Primary key, and use the Secondary key only during key regeneration.
  4. If you must provide access to the storage to an employee/developer/etc., use a Shared Access Signature instead.
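On point 4, a Shared Access Signature can also be produced from PowerShell. A hedged sketch, assuming the Azure storage cmdlets of the era and hypothetical account/container names:

```powershell
# Build a storage context from the account name and key (values are placeholders)
$ctx = New-AzureStorageContext -StorageAccountName "mystorageaccount" `
    -StorageAccountKey "<account key>"

# Issue a time-limited, read-only SAS token for a single container,
# rather than handing out the account key itself
New-AzureStorageContainerSASToken -Name "appdata" `
    -Permission r `
    -ExpiryTime (Get-Date).AddHours(8) `
    -Context $ctx
```

Because the token expires on its own and is scoped to one container, revoking access no longer requires regenerating the account keys.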


Thursday, January 22, 2015

SCOM: Managing Data Aggregation in the OpsMgrDW database


One of the common discussions I have when working with a customer on their Operations Manager (SCOM) environment is how data aggregation works and how it impacts the database size. There are plenty of blogs on how to review the data retention periods; my favourite is:

http://blogs.technet.com/b/kevinholman/archive/2010/01/05/understanding-and-modifying-data-warehouse-retention-and-grooming.aspx

So now that you have read that blog, you know that the defaults are a bad idea. How do we actually determine how much space each aggregate dataset takes up?

The typical answer is to use the dwdatarp.exe tool (available from http://blogs.technet.com/b/momteam/archive/2008/05/14/data-warehouse-data-retention-policy-dwdatarp-exe.aspx), however some organizations have change and security restrictions which prevent the use of such tools. Alternatively, people may just be more comfortable seeing exactly what data is being accessed... That is the scenario I was faced with this week, so I have come up with the SQL query below, which outputs each aggregate dataset, its total size, and its percentage of the total database size.

--Which DataSet is using the most database space (alternative to using dwdatarp.exe)
USE OperationsManagerDW;
WITH TableSizes AS
(
 SELECT so.name,
  Coalesce(8 * Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END),0) AS data_kb,
  Coalesce(8 * Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN si.reserved END), 0) AS index_kb,
  Coalesce(8 * Sum(CASE WHEN si.indid IN (255) THEN si.reserved END), 0) AS blob_kb
 FROM dbo.sysobjects AS so JOIN dbo.sysindexes AS si ON (si.id = so.id)
 WHERE 'U' = so.type 
 GROUP BY so.name 
)
SELECT (CASE WHEN CHARINDEX('_',so.name) > 0 THEN LEFT(so.name,CHARINDEX('_',so.name)-1)
 ELSE so.name
 END) as dataset,
 Coalesce(8 * Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END),0) +
 Coalesce(8 * Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN si.reserved END), 0) +
 Coalesce(8 * Sum(CASE WHEN si.indid IN (255) THEN si.reserved END), 0) AS totalsize_kb,
 CAST
   (1. * (Coalesce(8 * Sum(CASE WHEN si.indid IN (0, 1) THEN si.reserved END),0) +
   Coalesce(8 * Sum(CASE WHEN si.indid NOT IN (0, 1, 255) THEN si.reserved END), 0) +
   Coalesce(8 * Sum(CASE WHEN si.indid IN (255) THEN si.reserved END), 0))
   / (SELECT SUM(data_kb + index_kb + blob_kb) FROM TableSizes)
   * 100
    AS DECIMAL(5,2) ) AS PercentOfTotal
FROM dbo.sysobjects AS so JOIN dbo.sysindexes AS si ON (si.id = so.id)
WHERE 'U' = so.type 
AND (so.name like '%daily%'
OR so.name like '%hourly%'
OR so.name like '%raw%')
GROUP BY (CASE WHEN CHARINDEX('_',so.name) > 0 THEN LEFT(so.name,CHARINDEX('_',so.name)-1)
 ELSE so.name
 END)
ORDER BY totalsize_kb DESC

In my lab environment, which is not doing anything, I get the following output. Typically, though, you should see PerfHourly as the highest consumer when the defaults are in effect.



With this information it becomes much easier to quantify the value of adjusting the aggregate table grooming settings described in the blog above.




Friday, January 16, 2015

GoPro Studio Importer causing High CPU

Ping back to http://blog.notmyfault.ch/?p=427

Lately I have noticed my laptop experiencing the following symptoms:
- Running Hot
- CPU constantly at >50%
- Slow to load things

Using Process Explorer (Sysinternals) I could see this was related to a WMI Provider Host (WmiPrvSE.exe) running as NETWORK SERVICE. The strange thing is the performance profile: it spikes for a second, drops, then spikes back for a second, drops... repeat. This clearly indicates that a query is being run at very frequent intervals, as evidenced by this chart.



I found it difficult to track down the culprit causing the WMI queries, but I knew the problem was still evident after a reboot, so I started checking my tray icons and startup applications. By process of elimination I found that when I closed the GoPro Studio Importer, the problem went away.

The importer is run from the executable: C:\Program Files (x86)\GoPro\Tools\Importer\GoPro Importer.exe
GoPro Studio version: 2.5.4.404

To validate this I launched the executable and the problem returned.

I haven't found any settings in GoPro Studio to prevent the importer launching at startup, so I simply went to the Startup tab in taskmgr and disabled it from there.


Props to the blog http://blog.notmyfault.ch/?p=427 which helped to clarify my findings.

Tuesday, November 11, 2014

Importing SQLPS module results in Warnings

Update (June 2015): After originally posting this back in Nov, and debugging the issue earlier in the year I finally got around to recording my findings and updating this blog post. Hopefully this post also provides some insight into methods for Debugging PowerShell when faced with Provider issues.

I've been meaning to blog about this for a while now. If you do any work with PowerShell, SQL Server and Azure then chances are you have noticed that when you import the SQL module it reports the errors:

WARNING: Could not obtain SQL Server Service information. An attempt to connect to WMI on
'Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml' failed with the following error: The RPC server is
unavailable. (Exception from HRESULT: 0x800706BA)
WARNING: Could not obtain SQL Server Service information. An attempt to connect to WMI on
'Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml' failed with the following error: The RPC server is
unavailable. (Exception from HRESULT: 0x800706BA)
...
WARNING: Could not obtain SQL Server Service information. An attempt to connect to WMI on
'Microsoft.WindowsAzure.Commands.SqlDatabase.dll' failed with the following error: The RPC server is unavailable.
(Exception from HRESULT: 0x800706BA)
WARNING: The names of some imported commands from the module 'sqlps' include unapproved verbs that might make them less
 discoverable. To find the commands with unapproved verbs, run the Import-Module command again with the Verbose
parameter. For a list of approved verbs, type Get-Verb.

The following Modules are affected by the issues described within this blog.

Module        Version  Notes
------        -------  -----
SQLPS         1.0      SQL 2014 release
SQLASCMDLETS  1.0      SQL 2014 release
Azure         0.8.8    Azure SDK June 2014 release

Firstly, a warning: do not modify any of the files mentioned in this blog. I will be exploring the code which runs when you import a module, but these files are signed and any modifications may break the module functionality.

People generally notice this error when they run:

Import-Module SQLPS

What happens when you run this command is PowerShell loads up the Module Manifest C:\Program Files (x86)\Microsoft SQL Server\120\Tools\PowerShell\Modules\SQLPS\SQLPS.PSD1


This references a number of other code bases.

Firstly, a number of nested modules; most significantly both "Microsoft.SqlServer.Management.PSSnapins.dll" and "Microsoft.SqlServer.Management.PSProvider.dll", which load the SQLPS provider and set the current location to the SQLSERVER:\ drive.

It also loads the script file SqlPsPostScript.PS1 from the same path... It is the code in this script that generates the warning (but it is not the cause).

This script has a step designed to import the SQLASCmdlets module, and it is this step which throws the warnings. However, this isn't really caused by the loading of the SQL module, but by the Get-Module command which is executed.
#Load the SQLAS Module
$m = Get-Module -ListAvailable | where {$_.Name -eq "SQLASCmdlets"}
if($m -ne $null) { Import-Module $m -Global }

After a bit of troubleshooting I found that the problem even happens if you just run Get-Module -ListAvailable from within the SQLPS provider.



However if you run Get-Module -ListAvailable from another Provider (e.g. FileSystem) this does not return the error.



So the culprit is the way the SQLPS Provider (via the PSDrive) implements standard cmdlets. You can see this even with the Import-Module cmdlet when it tries to work with the Azure module while the location is set to SQLSERVER:\.



Now typically this is where most would get stuck, as the SqlPs and Azure modules are binary modules (that is, they load .dlls), so the general public cannot dig into the code.

However, here we can leverage PowerShell debugging to go deep into the code stack. So the actions that I took next were:

1) Use the -Debug switch parameter on the Import-Module
Import-Module -Name SqlPs -Debug

2) Use Trace-Command
Trace-Command -Expression {Import-Module SQLPS} -FilePath C:\ImportModuleTraceCommand.txt -Name * -Option All

Using the output captured by these commands we can actually trace the problem down further in the stack.

Firstly by looking at the Debug output we can see the following function calls just before the WMI errors.

---------------------------------------------------------------------------
DEBUG: ItemExists: SQLSERVER:\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml
DEBUG: ItemExists: SQLSERVER:\Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
WARNING: Could not obtain SQL Server Service information. An attempt to connect to WMI on 'Microsoft.WindowsAzure.Commands.S
qlDatabase.Types.ps1xml' failed with the following error: The RPC server is unavailable. (Exception from HRESULT: 0x800706BA
)
---------------------------------------------------------------------------

This tells us that the module loading is calling the ItemExists function to test a path for a file it is told to load (and we already know that the path is coming from the definition of the Azure module).

Next we can look at the Trace-Command data.

If you compare the logical code variations between the two extracts you can clearly see that the first resolves the path to the FILESYSTEM provider but the second resolves the path to the SQLSERVER provider. If we look even more closely we see a function call output by "Path is DRIVE-QUALIFIED" which returns true on the first and false on the second, and because it returns false on the second it then sets the SQLPROVIDER to go checking for the paths off the base drive which in this case is SQLSERVER:, and eventually fails with "ERROR: Item does not exist".


Successful loading of types file:

PathResolution Information: 0 :                                                     Resolving MSH path "C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\.\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml" to MSH path
...
PathResolution Information: 0 :                                                         Path is DRIVE-QUALIFIED
LocationGlobber Information: 0 :  WriteLine                                                           result = True
LocationGlobber Information: 0 :  WriteLine                                                           Drive Name: C
SessionState Information: 0 :  WriteLine                                                           Drive found in scope 1
LocationGlobber Information: 0 :  WriteLine                                                           path = Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\.\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml
CmdletProviderClasses Information: 0 :  WriteLine                                                           result=Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\.\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml
CmdletProviderClasses Information: 0 :  WriteLine                                                           result=C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\.\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml
SessionState Information: 0 :  WriteLine                                                           result = C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\.\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml
CmdletProviderClasses Information: 0 :  WriteLine                                                           basePath = C:\
...
PathResolution Information: 0 :                                                         DRIVE-RELATIVE path: Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml
PathResolution Information: 0 :                                                         Drive: C
PathResolution Information: 0 :                                                         Provider: Microsoft.PowerShell.Core\FileSystem
...
PathResolution Information: 0 :                                                         RESOLVED PATH: C:\Program Files (x86)\Microsoft SDKs\Azure\PowerShell\ServiceManagement\Azure\Services\Microsoft.WindowsAzure.Commands.Websites.Types.ps1xml




Unsuccessful loading of types file:
PathResolution Information: 0 :                                                 Resolving MSH path ".\Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml" to PROVIDER-INTERNAL path
...
PathResolution Information: 0 :                                                     Path is DRIVE-QUALIFIED
LocationGlobber Information: 0 :  WriteLine                                                       result = False
LocationGlobber Information: 0 :  WriteLine                                                       path = Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
CmdletProviderClasses Information: 0 :  WriteLine                                                       result=Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
CmdletProviderClasses Information: 0 :  WriteLine                                                       result=SQLSERVER:\Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
SessionState Information: 0 :  WriteLine                                                       result = SQLSERVER:\Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
CmdletProviderClasses Information: 0 :  WriteLine                                                       basePath = SQLSERVER:\
...
PathResolution Information: 0 :                                                     DRIVE-RELATIVE path: Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
PathResolution Information: 0 :                                                     Drive: SQLSERVER
PathResolution Information: 0 :                                                     Provider: Microsoft.SqlServer.Management.PSProvider\SqlServer
...
PathResolution Information: 0 :                                                     RESOLVED PATH: SQLSERVER:\Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml
...
PathResolution Information: 0 :                                                             ERROR: Item does not exist: Sql\Microsoft.WindowsAzure.Commands.SqlDatabase.Types.ps1xml




At this point I knew something different was happening in the path resolution process.

Taking in everything I had investigated, I could conclude that the SQLPROVIDER implements the ItemExists function differently to the FileSystem provider. In particular, the SQLPROVIDER checks the given file path and tries to match it to the SQLSERVER: drive. In this case, because the next part of the path starts with "\Sql\...", the provider thinks it is being given the path to a SQL Server, so it attempts to connect to the name (in our case the file name) as a server name to retrieve the instance details... hence the WMI errors.

To the best of my knowledge this issue has been around since an update to the Azure module in early 2014, when they changed the folder structure they were using. At the time of writing this post, both initially and with subsequent updates, the problem still exists.
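Until the issue is resolved, the mitigation I use is to start the import from a FileSystem location and suppress the unapproved-verb warnings. A sketch (the WMI warnings themselves are benign and may still appear, since the root cause is in the provider):

```powershell
# Preserve the current location, import SQLPS from a FileSystem path, then return.
Push-Location -Path "$env:SystemDrive\"
Import-Module -Name SQLPS -DisableNameChecking   # suppresses the unapproved-verb warning
Pop-Location
```

This also avoids the surprise of your session being left sitting on the SQLSERVER: drive after the import.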

If you want to help get this resolved please vote on the MS Connect item: https://connect.microsoft.com/SQLServer/feedback/details/1420992/import-module-sqlps-may-take-longer-to-load-but-always-returns-warnings-when-the-azure-powershell-module-is-also-installed#





Troubleshooting blocking locks in SQL Azure

I was faced with an interesting situation yesterday where a customer had created a blocking chain on their SQL Azure database. The scenario they described was:

A query took out and held a lock on an object, but the client connection closed and orphaned the session on the SQL Azure database. This resulted in other queries trying to access the object being blocked and timing out (even a SELECT *).

Given we do not have any graphical tools in SSMS to see current activity when connected to a SQL Azure database, this poses a challenge for those who are used to going to Activity Monitor or the reports to view this sort of information. Thankfully, most of the DMVs and DMFs are now supported on SQL Azure.

To troubleshoot the above scenario we used the following queries to identify the culprit and eventually terminate it with the only option available... KILL [spid]. To demonstrate this I have recreated the scenario on my Azure database through the use of PowerShell (but I am not sharing that, as it is bad practice). The good news is that these queries will also work in your on-premises environments.

TSQL to identify the Blockers and Victims

WITH Blockers AS
    (select DISTINCT blocking_session_id as session_id
 from sys.dm_exec_requests
 where blocking_session_id > 0
)
SELECT 'Blocker' as type_desc
 , sys.dm_exec_sessions.session_id
 , sys.dm_exec_requests.start_time
 , sys.dm_exec_requests.status
 , sys.dm_exec_requests.command
 , sys.dm_exec_requests.wait_type
 , sys.dm_exec_requests.wait_time
 , sys.dm_exec_requests.blocking_session_id
 , '' AS stmt_text
FROM sys.dm_exec_sessions
LEFT JOIN sys.dm_exec_requests ON sys.dm_exec_requests.session_id = sys.dm_exec_sessions.session_id
INNER JOIN Blockers ON Blockers.session_id = sys.dm_exec_sessions.session_id
UNION
SELECT 'Victim' as type_desc
 , sys.dm_exec_sessions.session_id
 , sys.dm_exec_requests.start_time
 , sys.dm_exec_requests.status
 , sys.dm_exec_requests.command
 , sys.dm_exec_requests.wait_type
 , sys.dm_exec_requests.wait_time
 , sys.dm_exec_requests.blocking_session_id
 , ST.text AS stmt_text
FROM sys.dm_exec_sessions
INNER JOIN sys.dm_exec_requests ON sys.dm_exec_requests.session_id = sys.dm_exec_sessions.session_id
CROSS APPLY SYS.DM_EXEC_SQL_TEXT(sys.dm_exec_requests.sql_handle) AS ST
WHERE blocking_session_id > 0

The output of this query then looks like the following, where you can clearly see the blocker and its victims (in fact this example has a second layer of blocking).





TSQL to view the established locks within the current database

SELECT
 (CASE sys.dm_tran_locks.request_session_id
  WHEN -2 THEN 'ORPHANED DISTRIBUTED TRANSACTION'
  WHEN -3 THEN 'DEFERRED RECOVERY TRANSACTION'
  ELSE sys.dm_tran_locks.request_session_id
 END) AS session_id
 , DB_NAME(sys.dm_tran_locks.resource_database_id) AS database_name
 , sys.objects.name AS locked_obj_name
 , sys.dm_tran_locks.resource_type AS locked_resource
 , sys.dm_tran_locks.request_mode AS lock_type
 , ST.text AS stmt_text
 , sys.dm_exec_sessions.login_name AS login_name
 , sys.dm_exec_sessions.host_name AS host_name
 , sys.dm_tran_locks.request_status as request_status
FROM sys.dm_tran_locks
JOIN sys.objects ON sys.objects.object_id = sys.dm_tran_locks.resource_associated_entity_id
JOIN sys.dm_exec_sessions ON sys.dm_exec_sessions.session_id = sys.dm_tran_locks.request_session_id
JOIN sys.dm_exec_connections ON sys.dm_exec_connections.session_id = sys.dm_exec_sessions.session_id
CROSS APPLY SYS.DM_EXEC_SQL_TEXT(sys.dm_exec_connections.most_recent_sql_handle) AS st
WHERE sys.dm_tran_locks.resource_database_id = DB_ID()
ORDER BY sys.dm_tran_locks.request_session_id

The output of this query shows the various locks which have been established by each session (one-to-many in sessions-to-locks). From here we can see which locks will have taken priority and potentially blocking more queries in the future.



So while you may not have a graphical view of this information, we can definitely use our trusted DMVs and DMFs to gain access to the relevant data. With the current rate of change, it can surely only be a matter of time until Activity Monitor and the standard reports are available to us in SSMS for SQL Azure.
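And for those driving SQL Azure purely from PowerShell, the same DMV queries can be executed with Invoke-Sqlcmd from the SQLPS module. A hedged sketch with hypothetical server, database, and credential values:

```powershell
# Connection details are placeholders; assumes the SQLPS module (Invoke-Sqlcmd) is loaded.
$conn = @{
    ServerInstance = "myserver.database.windows.net"
    Database       = "MyDatabase"
    Username       = "admin_user"
    Password       = "<password>"
}

# Identify the head blocker(s) using the same DMV logic as the TSQL above
$blockers = Invoke-Sqlcmd @conn -Query @"
SELECT DISTINCT blocking_session_id AS session_id
FROM sys.dm_exec_requests
WHERE blocking_session_id > 0;
"@

# Terminate each one - the only remediation option available on SQL Azure
foreach ($b in $blockers) {
    Invoke-Sqlcmd @conn -Query ("KILL {0};" -f $b.session_id)
}
```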
