Monday, December 30, 2013

Infrastructure Saturday 2013 - PowerShell for Azure..... a perfect partnership

The last time I presented at Infrastructure Saturday was 2010, and it is certainly great to see how the event has continued and grown. A huge thanks to everyone, from contributors and sponsors to speakers and organizers; the event was clearly appreciated by all.

So this year my original session on Automating SQL with PowerShell had to be squashed to allow Heidi (WardyIT) to present on SQL. After shedding a tear for only a short moment, I regrouped and decided to talk about PowerShell and Windows Azure, another passion of mine. So a massive thanks goes to everyone that chose my PowerShell session over the SQL session, because naturally we had to be scheduled at the same time, which meant I was competing against my own primary skill set... nah, let's be honest, PowerShell is overtaking SQL in that way for me ;)

Ok ok enough banter.

A number of people have expressed interest in the scripts demonstrated during the session, so I thought I would provide a walk-through recap of the session. The links to download the "cleansed" versions of the scripts are at the end of this post.

Session Key Takeaways

In my experience, the Azure Portal is a great tool for exploring new features or one-off tasks, but due to some annoyances, like the lag before the portal refreshes when back-end tasks complete, it can be time consuming for very large tasks or, more specifically, regularly repeated tasks.

PowerShell overcomes this :)

Oh and one more takeaway: be careful what is in the news headlines at the time of your presentation, otherwise you may have a docked picture of our PM and another guy kissing shown on your Windows 8 Start Screen (it was quite comical though).

Links

http://www.windowsazure.com

SDKs: http://www.windowsazure.com/en-us/downloads/
Azure PowerShell: http://go.microsoft.com/?linkid=9811175&clcid=0x409
Scott Guthrie (ScottGu) Blog: http://weblogs.asp.net/scottgu



Session Demos

This is a walk through the PowerShell scripts that were demonstrated during the session.

For the first timers, you need to configure your PowerShell console session to be authorised to access your Azure subscription.

  1. Download and install the Azure PowerShell module. Reboot your computer (or at least start a new console session) to make it easier from here on out to import the module.
  2. Import the Azure module with
    Import-Module Azure;
    
  3. Next call the cmdlet to get the publish settings file. When you call this cmdlet it will launch your browser and ask you to log into your Azure subscription with your Live ID. It will then generate a new subscription management certificate and prompt you to download the publishsettings file. Take note of the path where this file is saved.
    Get-AzurePublishSettingsFile;

  4. Open the Publish Settings file and check that the Subscription ID matches the details of your management certificate as shown in the portal (see the snippet after these steps for a scripted way to do this).
  5. Next import the publish settings file, which will import the required certificate into your user's certificate store.
    Import-AzurePublishSettingsFile -PublishSettingsFile "C:\Subscription-credentials.publishsettings"
  6. You can verify that the certificate was imported correctly by traversing the certificate store:
    Get-ChildItem -Path Cert:\CurrentUser\my
    Or by retrieving a list of Azure Services available:
    Get-AzureService | Select ServiceName
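
As a quick sanity check for step 4, here is a minimal sketch (not part of the original demos) that reads the subscription details straight out of the downloaded file, assuming the same path used in step 5:

#peek inside the .publishsettings file (it is just XML) and list the subscription name and ID
[xml]$publishSettings = Get-Content -Path "C:\Subscription-credentials.publishsettings"
$publishSettings.PublishData.PublishProfile.Subscription |
    Select-Object Name, Id   #compare Id with the management certificate details shown in the portal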

Now that the management certificate has been imported, you need to bind to the Azure subscription so that subsequent cmdlets can authenticate without having to stipulate which subscription they should be executed against each time. This is very useful when you have more than one subscription within your organization.

  1. Set the values in the variable block as required and then execute the script/command
    <#######################################################
    Variable Block
    #######################################################>
    
    # Subscription Information...
    $subscriptionName = "<<<<< Insert your Subscription Name here >>>>>"
    $subscriptionID = "<<<<< Insert your Subscription ID here >>>>>"
    $thumbPrint = "<<<<< Insert the Certificate Thumb Print here >>>>>"
    
    # Storage Account Information...
    $storageAccountName = "azurestorage" # Must be globally unique and all lowercase...
    $storageAccountLabel = "azurestorage"
    
    <#######################################################
    Select the Subscription
    #######################################################>
    
    #get the certificate definition from the certificate store via the PowerShell drive
    $certificate = Get-Item cert:\currentuser\my\$thumbPrint
    
    #set the context to the required subscription
    Set-AzureSubscription `
        -SubscriptionName $subscriptionName `
        -SubscriptionId $subscriptionID `
        -Certificate $certificate `
        -CurrentStorageAccount $storageAccountName
    
    #next mark the required subscription as active
    Select-AzureSubscription `
        -SubscriptionName $subscriptionName
    

TIP: The above could be configured as part of your PowerShell profile script to ensure it is set each time you start the console, or added to a script file you can easily execute, if you work regularly with Azure through PowerShell.
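
For example, a minimal sketch of wiring this into your profile (the saved script path and name here are purely illustrative) might look like:

#create the profile script if it doesn't exist yet
if (-not (Test-Path -Path $PROFILE)) { New-Item -ItemType File -Path $PROFILE -Force | Out-Null }

#dot-source the subscription selection script (the block above saved to a file) every time the console starts
Add-Content -Path $PROFILE -Value '. "C:\Scripts\Select-MyAzureSubscription.ps1"'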

Next we can look at the cmdlets available to us and other useful information available from within Azure:

  1. Get all the cmdlets available from the Azure module
    Get-Command -Module Azure
    
  2. Let's look further into the various commands available with
    #get all the "Get" commands
    Get-Command -Module Azure -Verb Get
    
    #get all the "Set" commands
    Get-Command -Module Azure -Verb Set
    
    #get all the "Restart" commands
    Get-Command -Module Azure -Verb Restart
    
  3. When it comes time to deploy a VM we have the option of using some of the ready-made images. The following is an example of how to get a list of these (a sketch of selecting a single image follows this list):
    #Get the list of all windows images
    Get-AzureVMImage | where -property "OS" -EQ -Value "Windows" | select label,imagename | Sort -Property "label" | ft -AutoSize
    
    #what about all the other (non-Windows) images
    Get-AzureVMImage | where -property "OS" -NE -Value "Windows" | select label,imagename | Sort -Property "label" | ft -AutoSize
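
Building on that, here is a small sketch (not from the session) of how you might capture a single image name for a later VM deployment, assuming the image label contains "Windows Server 2012":

#grab the most recently published Windows Server 2012 image name and keep it for later
$imageName = Get-AzureVMImage |
    Where-Object { $_.OS -eq "Windows" -and $_.Label -like "*Windows Server 2012*" } |
    Sort-Object -Property PublishedDate -Descending |
    Select-Object -First 1 -ExpandProperty ImageName
$imageName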
    


Platform as a Service


The first deployment demo during the session was to deploy a Web Site with a SQL Database. This demo used the Platform as a Service features of Azure. The script used for this deployment, New-AzureWebsitewithDB.ps1, is included in the "cleansed" scripts download at the end of this post.

  1. Deploy the Web Site and SQL Database with the script by specifying the required parameters:
    #create the web site with a SQL Database (we have to use the location East Asia as Southeast Asia is not valid for web sites)
    .\New-AzureWebsitewithDB.ps1 `
        -WebSiteName "InfraSatWebSite" `
        -Location "East Asia" `
        -StorageAccountName $storageAccountName `
        -ClientIPAddress "***.***.***.***"; #validate this Client IP with WhatIsMyIP.com or the Azure Portal.
    
  2. Once the deployment has completed we can see what was deployed. Firstly the new SQL Azure Server and Database
    #get the database server
    Get-AzureSqlDatabaseServer;
    
    #get the database(s) for the servers
    Get-AzureSqlDatabaseServer `
        | Get-AzureSqlDatabase;
    
    #get the database with extra info
    Get-AzureSqlDatabaseServer `
        | Get-AzureSqlDatabase -DatabaseName infrasatwebsite_db `
        | Select Name,CollationName,Edition,MaxSizeGB,Status,CreationDate `
        | Format-List;
    
  3. We can also retrieve the information about the web sites created:
    #get the website
    Get-AzureWebsite 
  4. Thanks to the Azure PowerShell module we can perform all sorts of operations on the services we created.
    #stop the web site
    Stop-AzureWebsite -Name InfraSatWebSite
    
    #start the web site
    Start-AzureWebsite -Name InfraSatWebSite
    
    #Default documents
    Get-AzureWebsite -Name "InfraSatWebSite" | Select DefaultDocuments;
    
    #Update the default document list for what we will publish
    Set-AzureWebsite -Name "InfraSatWebSite" -DefaultDocuments "default.aspx";
    
  5. In my demo the next step was to show how to deploy the web site code from Visual Studio to the newly provisioned Web Site.

If you are testing this and wish to do some clean up of your Azure account following a deployment, the following commands are a good starting point.
#Remove the database servers (WARNING: This removes ALL SQL Database servers, and the databases on them, provisioned in your subscription.... don't say I didn't warn you as there is no UNDO!)
Get-AzureSqlDatabaseServer | Remove-AzureSqlDatabaseServer;

#Remove the database servers (with a filter to exclude servers with a particular admin account)
Get-AzureSqlDatabaseServer | Where-Object -Property AdministratorLogin -NE -Value "<adminaccountofserveryouwishtokeep>" | Remove-AzureSqlDatabaseServer;

#remove the web site
Get-AzureWebsite -Name InfraSatWebSite | Remove-AzureWebsite;

#or
Remove-AzureWebsite -Name InfraSatWebSite;

#Check that it was removed
Get-AzureWebsite;

Infrastructure as a Service


During my session I demonstrated the automation of Infrastructure as a Service. Unfortunately, at the request of some colleagues, I cannot release the exact script that was demonstrated until it is out of BETA. However, there are plenty of scripts which perform some of the components of our complete solution.

The general concept of IaaS Automation is:
  1. Deploy infrastructure components (e.g. Network, Storage, Virtual Machines, etc)
  2. Simplify configuration and common tasks
  3. Repeatable with minimal differences
  4. Minimal user input (I recommend XML input files to achieve this; a sketch follows this list)
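
To illustrate the XML-driven approach, here is a simplified sketch only (not the BETA script from the session; the file name, element names and values are all hypothetical):

#example input file saved as .\Deployment.xml:
#  <Deployment>
#    <VM Name="web01" Size="Small" ImageName="<an image name from Get-AzureVMImage>" ServiceName="InfraSatSvc" />
#    <VM Name="web02" Size="Small" ImageName="<an image name from Get-AzureVMImage>" ServiceName="InfraSatSvc" />
#  </Deployment>

[xml]$config = Get-Content -Path ".\Deployment.xml"

foreach ($vm in $config.Deployment.VM) {
    #build the VM configuration from the XML attributes
    $vmConfig = New-AzureVMConfig -Name $vm.Name -InstanceSize $vm.Size -ImageName $vm.ImageName |
        Add-AzureProvisioningConfig -Windows -AdminUsername "azureadmin" -Password (Read-Host "Admin password for $($vm.Name)")

    #deploy the VM, creating the cloud service on first use (omit -Location if the service already exists)
    New-AzureVM -ServiceName $vm.ServiceName -VMs $vmConfig -Location "Southeast Asia"
}
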
Here are some existing example scripts that perform "some" of these tasks:

Deploy a Windows Azure Virtual Machine with Two Data Disks
http://gallery.technet.microsoft.com/scriptcenter/Create-a-Windows-Azure-ec901cbe

Deploy Windows Azure VMs to an Availability Set and Load Balanced on an Endpoint
http://gallery.technet.microsoft.com/scriptcenter/Create-Windows-Azure-VMs-244bd3cb

Deploy Multiple Windows Azure VMs in the Same Windows Azure Virtual Network
http://gallery.technet.microsoft.com/scriptcenter/Create-Multiple-Windows-df9e95b7

Deploy a SQL Server VM with Striped Disks in Windows Azure
http://gallery.technet.microsoft.com/scriptcenter/Create-a-SQL-Server-VM-in-0b8c6eed


When the script demonstrated can be released, I will update this post with a reference to it.



PPT and Scripts Download

Get the PPT and the "cleansed" Scripts used during the Demos here.




Legal Stuff: As always the contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.
 

Wednesday, November 13, 2013

PowerShell: Clearing the values within a HashTable without removing construct of Keys

I had an interesting question posed to me today:

"Is it possible to clear the contents of all the values within a HashTable in PowerShell but retaining the Key names (i.e. the construct of the hashtable)."

This sounds perfectly reasonable, but unfortunately there is no native way of doing this and you only have the following options:
1) Set the HashTable variable to $null
2) Rebuild the HashTable from a definition with empty values

Neither of these two options is very dynamic or suited to the application.

My first thought was to loop through the HashTable keys collection and modify the value of each entry, however that results in:

Collection was modified; enumeration operation may not execute.

The solution, though, is to use the .Clone() method of the HashTable to build a copy of it in memory just for the sake of looping through its entries. Here is my example code:

$hash = @{FirstName="Bob";LastName="Smith";Age="23"}
$hash

#create a clone of the hash table, this avoids the 
# "Collection was modified" enumeration error
$hash2 = $hash.Clone();

#loop through each key in the cloned hashtable and update the 
# value in the original table
foreach($key in $hash2.keys){$hash[$key] = '';}

#clear the cloned hashtable to free memory
$hash2 = $null;

#just output the hashtable for verification in this demo
$hash
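
As an aside, a similar effect can be achieved without a full clone by snapshotting just the keys into an array before looping (a sketch, not part of the original answer):

#wrapping .Keys in @() copies the key names into a new array, so enumerating it
# is not affected when the original hashtable's values are modified
foreach ($key in @($hash.Keys)) { $hash[$key] = '' }
$hash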





Monday, November 11, 2013

Migration Automation Toolkit (MAT) - Best video I have seen in a while

This would have to be the best video I have seen in a while. It speaks for itself.

View Video

Obviously you need a NetApp storage device for this toolkit to work, but it is built on PowerShell, so what I see is an opportunity to take the scripts and modify them for your environment. Even if you don't have a NetApp device, the scripts would still be of some use to help you get started in automating your migrations.

Read about it at http://migrate.azurewebsites.net/

Office Online feature enhancements

Some new features in Office Online make #getitdone much easier regardless of where you and your colleagues are, but most of all regardless of what device you are using.

http://blogs.office.com/b/office365tech/archive/2013/11/06/collaboration-just-got-easier-real-time-co-authoring-now-available-in-microsoft-office-web-apps.aspx


Tuesday, November 5, 2013

RML now supports SQL 2012 (and SQL 2014)

If you use the Replay Markup Language (RML) tool set then, like me, you will have been frustrated by the fact that it hadn't been updated to work with trace files captured from SQL 2012. You could probably assume that this was because the SQL Profiler Trace feature for the purpose of trace capture (database engine only) is flagged for deprecation in future releases (see http://technet.microsoft.com/en-us/library/ms181091.aspx).

Thankfully, last week a new version was released which fully supports SQL 2012 and even SQL 2014 CTP2 trace file definitions. I have just gone through the process of uninstalling my RML tools and installing the new version, and straight away I can process trace files captured on a SQL 2012 instance. Hurray!

Be warned though: you cannot use the new version of the Reporter tool to access databases of previously processed trace data. I typically use this tool for performance analysis, so I might keep the analysis databases around for a few weeks; this just means I have to reprocess the trace files to generate reports compatible with the new version. Still worth it to get the benefit of the new supportability.


Description of the Replay Markup Language (RML) Utilities for SQL Server
http://support.microsoft.com/kb/944837

If you find that you cannot click on the navigational hyperlinks on the Main Performance Overview report (e.g. "Unique Batches", "Unique Statements", "Interesting Events", etc.) then you will need to install the hotfix for Report Viewer.

Update fixes several Report Viewer issues after you install Visual Studio 2010 Service Pack 1
http://support.microsoft.com/kb/2549864
.....Or go directly to the download location at https://connect.microsoft.com/VisualStudio/Downloads/DownloadDetails.aspx?DownloadID=37409


Parsing SQL ERRORLOG's for errors with PowerShell

I've been finding the need to parse/grep the SQL Server ERRORLOG of late to look for any errors which have occurred, and for that I have been turning to my trusted friend PowerShell.

The cmdlet that gives us all the power here is Select-String

I will admit the hardest part about using the Select-String cmdlet for me was learning RegEx, but once you have the basics it is very powerful, and this approach could be used for any number of applications.

So at the very basic level, the following command can be used to report the lines with errors from the latest SQL ERRORLOG:

Select-String -Path .\ERRORLOG -Pattern "(error:)\s\d{1,6}"
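
For reference, the pattern breaks down like this (the sample line below is purely illustrative):

# (error:)  - the literal text "error:" (Select-String is case-insensitive by default)
# \s        - a single whitespace character
# \d{1,6}   - one to six digits, i.e. the error number
# so it will match lines such as:  Error: 18456, Severity: 14, State: 38.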

You could even use a wildcard in the path parameter to look for historical files, but for my need I was only interested in the current file.

Now that is all good and well, but SQL Server doesn't record the description of the error on the same line as the error number, so I needed a way to pull the next line from the file as well. For this I used the PowerShell pipeline to loop through each of the matched objects returned by Select-String with the Foreach-Object cmdlet. Within the scriptblock of that cmdlet I first output the line we retrieved with Select-String, and then use Get-Content to pull in the details from the file and retrieve only the line number I am interested in (remember Get-Content returns a zero-based array, so the one-based line number from the Select-String match already points at the next line... think about it and you will understand).

Select-String -Path .\ERRORLOG.1 -Pattern "(error:)\s\d{1,6}" | Foreach-object {Write-Host "$($_.filename):$($_.linenumber):$($_.line)"; Write-Host "$($_.filename):$($_.linenumber+1):$((Get-Content "$($_.filename)")[$_.linenumber])"}

The performance of this one-line approach is not the greatest because it has to run the Get-Content cmdlet for each matched line. So if you are looking for speed, this two-line approach has you covered. The only downside is that if you change the file name you are looking at then you have to update it twice, whereas the one-line approach passes that value along the pipeline.

$filedata = (Get-Content .\ERRORLOG)
Select-String -Path .\ERRORLOG -Pattern "(error:)\s\d{1,6}" | Foreach-object {Write-Host "$($_.filename):$($_.linenumber):$($_.line)"; Write-Host "$($_.filename):$($_.linenumber+1):$($filedata[$_.linenumber])"}


Now if you were processing multiple files then you could do something like this multiline or scripted version.

$filelist = Get-ChildItem .\ERRORLOG*;
Foreach ($file in $filelist)
{
    $filedata = (Get-Content $file.Name);
    Select-String -Path $file.Name -Pattern "(error:)\s\d{1,6}" | Foreach-object {Write-Host "$($_.filename):$($_.linenumber):$($_.line)"; Write-Host "$($_.filename):$($_.linenumber+1):$($filedata[$_.linenumber])"}
} 

Or if you want to execute that from the PowerShell console, then the two-line approach would be:
$filelist = Get-ChildItem .\ERRORLOG*;
Foreach ($file in $filelist){$filedata = (Get-Content $file.Name);Select-String -Path $file.Name -Pattern "(error:)\s\d{1,6}" | Foreach-object {Write-Host "$($_.filename):$($_.linenumber):$($_.line)"; Write-Host "$($_.filename):$($_.linenumber+1):$($filedata[$_.linenumber])"}}


Or if you are looking to summarize the number of errors occurring across all of the ERRORLOG files, then the following provides an insight into that:

Select-String -Path ERRORLOG* -Pattern "(error:)\s\d{1,6}" | Group-Object -Property {$_.Line.Substring(0,10)},{$_.Matches.Value} | Sort-Object Name


Happy investigations :)

If you are looking for a RegEx reference here are the ones I use:
http://www.regular-expressions.info/


Legal Stuff: As always the contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Thursday, October 24, 2013

My Windows 8.1 Post Install Tips

After updating my Surface RT device and laptop to Windows 8.1 I thought I would share my experience. I will note that with both I chose to do a clean install.

Overall Experience
Firstly, my overall experience has been exceptional. For the Surface RT install I simply accessed the update from the Store using the direct link. It took a while for the update to start downloading, but I believe that was caused by the Store app in the 8.1 Preview; one to two hours later I was running Windows 8.1. I signed in with my Microsoft Account and all my settings were synced, then installed my apps from the Store, which was easy because of the new view which can filter to show only "your apps not installed on this pc".

For my laptop it was another painless install, probably enhanced because of the deployment method adopted by Microsoft IT, but it was still exceptional to see what can be achieved when deployment is managed in an efficient manner. Anyway, an hour later I was running Windows 8.1 and restoring my documents from my backup (for all the fun that went with that, see my other post).

My Post Install Recommendations
These are my tips to complete once you have upgraded/installed Windows 8.1
  1. Link your Microsoft Account with your domain/local account if you didn't sign in with that account. This will give you the full experience with synchronization across your devices.
  2. Set PowerShell as the default for the Win+X menu over Command Prompt. If you are, like me, a frequent user of PowerShell then this will save plenty of time. And if you aren't using PowerShell yet, well, get with the times; you can still use native Windows commands the same as you would in the command prompt.
    The following is where this setting is controlled in the TaskBar Properties

    Then this is the result: on the left is the default view, on the right is the view with PowerShell. This menu is accessed either by right-clicking the Windows logo in the bottom left corner or by pressing Windows Key (Win) + X.
  3. Adjust the Power Settings for what the power buttons do. I like to take advantage of the fast hibernate feature in Windows 8 as I am often on the road, so I pick Hibernate over Sleep as the power button action.

I will continue to add to this blog as I find more recommendations and tips.

Legal Stuff: As always the contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Windows 7 File Recovery feature removed from Windows 8.1

As you "should" be aware, keeping a backup of your important documents and files is critical to managing any PC, server or device. For me, I have achieved this through the Windows 7 File Recovery feature in both Windows 7 and Windows 8, with an external hard drive or network storage as the backup destination.

In Windows 8 the File History feature was brought in to replace the Windows 7 File Recovery (aka Backup and Restore) feature. Well, in Windows 8.1 the Windows 7 File Recovery feature has now been removed.

So if, like me, you have used that feature to perform your pre-upgrade backup with the plan to then restore the files to your new Windows 8.1 system, then sadly it is not as simple as it used to be. However, it is possible, and here are the steps that I performed.

  1. Create a VM on your computer using the Hyper-V feature and install the Windows 7 or Windows 8 OS (but not 8.1)
    Alternatively if you have a Windows 7 or Windows 8 computer just use that. I was fortunate that I had a Windows 7 VM already built and just imported that into my laptop.
  2. If the backup is on an external HDD then attach the external drive containing the backup to the computer. If using a Windows 7 VM then you will need to set the drive to offline before you can attach it to the VM (see the note after these steps).
  3. Start the VM (if required)
  4. Open "Backup and Recovery" on Windows 7 or "Windows 7 File Recovery" on Windows 8.
  5. Use the wizard to restore files from an existing backup and locate the backup on your external drive or network location.
  6. Select the files/folders that you wish to recover
  7. Now the important part: set the restore location to a sub folder on the same external HDD (ensure you have enough space). If you aren't using a VM you might be able to restore directly to your new Windows 8.1 computer over the network, but I was using a VM without a network adapter attached.
  8. Once the restore completes, detach the external drive from the VM or other computer. Shut down the VM as it is no longer required (if appropriate)
  9. Attach the External HDD to the Windows 8.1 computer and manually copy all the files to the required locations.
  10. Finally setup Windows File History so you have backups of your documents again in a format that is supported by Windows 8.1 and easily recover files in the future.
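
For step 2, if you want to script taking the external disk offline on the Windows 8/8.1 host, here is a sketch using the in-box Storage cmdlets (the disk number shown is illustrative):

#identify the external disk first
Get-Disk | Select-Object Number, FriendlyName, Size, OperationalStatus

#take it offline so Hyper-V can attach it to the VM as a physical (pass-through) disk
Set-Disk -Number 2 -IsOffline $true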


Of course, if you are using the Windows 8 File History feature then ignore all of this, as that feature is still current in Windows 8.1, and I will be updating all of my backups to use that from here on out :)

Monday, October 21, 2013

Tips for securing xp_cmdshell (when you really really have to use it)

If you have ever read anything about securing SQL Server, one of the most common threats called out is the use of xp_cmdshell. While it is generally accepted that this is a bad thing to enable within a production environment, there are some legitimate cases where business, application, or, probably more accurately, legacy processes require the use of xp_cmdshell. When we do enable this feature it is important to ensure that security is strictly locked down to prevent unwanted access to sensitive areas on the server or, worse still, malicious actions on local and network components.

I was recently asked what my recommendations would be for securely implementing xp_cmdshell, and in my experience here are the steps to perform this:

Step 1
By default xp_cmdshell will execute under the context of the SQL Server service account, therefore the first step in reducing any risk is to ensure that the account used for the SQL Server service is aligned to best practices. Such recommendations include the use of an account which:
    - Is not a local or domain administrator
    - Is a domain account where network resources may be required by aspects of the SQL environment (e.g. copying backup files to a network path)
    - Has minimal local and domain privileges
    - Has been configured using the SQL Server Configuration Manager

To completely secure the environment a separate account should also be used for the SQL Agent service with the same recommendations.

Step 2
Ensure that only the required users are members of the SQL Server sysadmin server-level role. Any member of this role will be able to execute xp_cmdshell and will therefore be allowed access to all aspects of the server that the SQL Server service account can access.
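
If you want to script that check, here is a minimal sketch using Invoke-Sqlcmd (it assumes the SQLPS module is installed; the instance name is illustrative):

Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -Query @"
SELECT p.name AS [sysadmin_member]
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS r ON r.principal_id = rm.role_principal_id
JOIN sys.server_principals AS p ON p.principal_id = rm.member_principal_id
WHERE r.name = N'sysadmin';
"@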

Step 3
Create an xp_cmdshell proxy account following the instructions at http://technet.microsoft.com/en-us/library/ms175046.aspx
This proxy account should be a unique domain user, separate from the SQL Server service account. It will be used when non-sysadmin SQL Logins execute xp_cmdshell, and therefore an even more restricted Access Control List (ACL) can be configured on the SQL Server and network resources for that specific account. This will significantly reduce the attack surface exposed by xp_cmdshell.

Step 4
Grant the required permissions to the specific non-sysadmin SQL Logins who require the ability to execute xp_cmdshell, using the syntax: GRANT EXEC ON xp_cmdshell TO [<login_name>]
This will ensure that only the SQL Logins which you have configured as either members of the sysadmin built-in role, or explicitly granted execute permissions, will be able to access the system via xp_cmdshell.
To view which SQL Logins have been granted permissions for xp_cmdshell, run the following TSQL:

USE master;
GO
SELECT sys.schemas.name AS [schema_name]
, AllObjects.name AS [object_name]
, sys.database_permissions.permission_name
, sys.database_permissions.state_desc
, sys.database_principals.name AS [granted_to_principal_name]
FROM sys.database_permissions
INNER JOIN sys.database_principals ON sys.database_principals.principal_id = sys.database_permissions.grantee_principal_id
INNER JOIN (
        SELECT name, object_id, principal_id, schema_id, parent_object_id, type, type_desc, create_date, modify_date, is_ms_shipped, is_published, is_schema_published
        FROM sys.objects
        UNION
        SELECT name, object_id, principal_id, schema_id, parent_object_id, type, type_desc, create_date, modify_date, is_ms_shipped, is_published, is_schema_published
        FROM sys.system_objects
) AllObjects ON AllObjects.object_id = sys.database_permissions.major_id
LEFT JOIN sys.schemas ON sys.schemas.schema_id = AllObjects.schema_id
WHERE sys.schemas.name = 'sys'
AND AllObjects.name = 'xp_cmdshell'
ORDER BY sys.schemas.name
, AllObjects.name
, sys.database_principals.name
, sys.database_permissions.class;

 
Final Step
Start reviewing the processes, code, and reasons for using xp_cmdshell to determine if a more secure method could be used (e.g. PowerShell) to achieve the same outcome. Some might argue this should be the first step but, let's be realistic, you cannot always change an application's behavior, especially if it is provided by a 3rd party, so while this step definitely needs to be performed it may be the most difficult and longest of them all.



Legal Stuff: As always the contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Friday, August 30, 2013

Analytics for Twitter 2013 in the lead up to MS TechEd Australia

TechEd Australia is just around the corner and it's always an interesting time to dive into the BI features of Excel, especially with the out-of-the-box Analytics for Twitter 2013. I'm definitely going to be watching this over the course of next week and using it to get some interesting insights into my own session (but more on that later).

If you haven't already seen the analytics tool, here is a current view of the analysis for the query "auteched".


You can already get some interesting info, like the fact that most activity is happening around lunch time, which is probably expected, but it will definitely be interesting to trend this over the course of next week.



My favorite is the geospatial data as it makes for some very interesting and easy to digest visualization.


And an interesting race at the moment is between the iOS and Windows devices (I know which one I hope wins), but still they are both outdone by the Web (and whatever "other" is).



Ahhhh BI, we can only make guesses without you


Wednesday, July 10, 2013

TSQL to find Top Queries by Avg CPU, IO, Time

One of my favourite performance tools has certainly become the RML reports, as you can quickly identify queries with common hot spots across performance metrics. However, this requires a bit of work: you first have to configure your performance trace (usually with Diag Manager), capture the SQL Profiler trace, and then analyse it with RML/SQLNexus. Of course the UI with XEvents makes this a bit easier if you use that as your capture. However, what if you want to look at the general overall performance of a server, or over a longer duration?
We know that SQL Server is always capturing data for us in the background and exposes that to us with DMVs and DMFs. So I came up with the following query which uses the DMVs/DMFs to report the top queries.
You can change the sort order at the end of the query; however, as I have included the rank number per performance metric (CPU, IO, Time), you can quickly see queries which are in the top of each group without needing to re-sort the results or run multiple queries.
If you do run multiple queries with different sort columns, the Query Number should remain the same as it is based on CPU; the reason I included this is to allow you to easily compare the queries across multiple executions (depending on the time between executions).
;WITH CTE([Query Num], [Total Executions], [Total CPU], [Avg CPU], [Avg CPU Rank], [Total IO], [Avg IO], [Avg IO Rank], [Total Elapsed Time], [Avg Elapsed Time], [Avg Elapsed Time Rank], [Sample Statement Text], [Query Plan Handle], [Statement Hash], [Query Plan Hash])
AS
(
    SELECT TOP 50
        ROW_NUMBER() OVER (ORDER BY (SUM(total_worker_time) / SUM(execution_count)) DESC) AS [Query Num]
        , SUM(execution_count) AS [Total Executions]
        , SUM(total_worker_time) AS [Total CPU]
        , SUM(total_worker_time) / SUM(execution_count) AS [Avg CPU]
        , RANK() OVER (ORDER BY (SUM(total_worker_time) / SUM(execution_count)) DESC) AS [Avg CPU Rank]
        , SUM(total_physical_reads + total_logical_reads + total_logical_writes) AS [Total IO]
        , SUM(total_physical_reads + total_logical_reads + total_logical_writes) / SUM(execution_count) AS [Avg IO]
        , RANK() OVER (ORDER BY (SUM(total_physical_reads + total_logical_reads + total_logical_writes) / SUM(execution_count)) DESC) AS [Avg IO Rank]
        , SUM(total_elapsed_time) AS [Total Elapsed Time]
        , SUM(total_elapsed_time) / SUM(execution_count) AS [Avg Elapsed Time]
        , RANK() OVER (ORDER BY (SUM(total_elapsed_time) / SUM(execution_count)) DESC) AS [Avg Elapsed Time Rank]
        , MIN(query_text) AS [Sample Statement Text]
        , MIN(plan_handle) AS [Query Plan Handle]
        , query_hash AS [Statement Hash]
        , query_plan_hash AS [Query Plan Hash]
    FROM
    (
        SELECT
            qs.*
            , SUBSTRING(st.[text], qs.statement_start_offset/2, (
                CASE
                    WHEN qs.statement_end_offset = -1 THEN LEN(CONVERT(NVARCHAR(MAX), st.[text])) * 2
                    ELSE qs.statement_end_offset
                END - qs.statement_start_offset)/2
            ) AS query_text
        FROM sys.dm_exec_query_stats AS qs
        CROSS APPLY sys.dm_exec_sql_text(qs.[sql_handle]) AS st
        WHERE st.[text] NOT LIKE '%sys.dm_%'
        --AND DATEDIFF(hour, last_execution_time, GETDATE()) < 1 --change hour time frame
    ) AS query_stats
    GROUP BY query_hash, query_plan_hash
)
SELECT
    [Query Num]
    , [Total Executions]
    , [Total CPU]
    , [Avg CPU]
    , [Avg CPU Rank]
    , [Total IO]
    , [Avg IO]
    , [Avg IO Rank]
    , [Total Elapsed Time]
    , [Avg Elapsed Time]
    , [Avg Elapsed Time Rank]
    , DB_NAME(qp.dbid) AS [DB Name]
    , [Sample Statement Text]
    , qp.query_plan AS [Estimated Query Plan]
FROM CTE
OUTER APPLY sys.dm_exec_query_plan([Query Plan Handle]) AS qp
--ORDER BY [Avg CPU] DESC
ORDER BY [Avg IO] DESC
--ORDER BY [Avg Elapsed Time] DESC


NOTE: This query includes a TOP 50 which improves performance but only looks at very recent data. I would recommend using the predicate on last_execution_time to restrict the data back to a valid time frame. This would still then include the total history for those requests but ensure that you are only looking at active queries.
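
If you want to capture the output for later comparison, here is a small sketch (it assumes the query above is saved as .\Get-TopQueries.sql and that the SQLPS module is available; the names are illustrative):

Import-Module SQLPS -DisableNameChecking
Invoke-Sqlcmd -ServerInstance "MyServer\MyInstance" -InputFile ".\Get-TopQueries.sql" -QueryTimeout 120 |
    Export-Csv -Path ".\TopQueries_$(Get-Date -Format 'yyyyMMddHHmm').csv" -NoTypeInformation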


Here is an example output. In this example I had cleared the cache and only had a small set of test data, so the Avg IO Rank is the same for each query, but this gives you an idea.
(Screenshot: Top N Queries by Hash and Rank)

So far I’ve tested this on SQL 2008 R2 and above.
I’m also closing in on finalising the BETA release of my DBA Admin and Performance SSMS reports which will include this query and take this to a whole new level of visualisation.

Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.

Thursday, June 13, 2013

Free Pages removed from Buffer Manager counterset in SQL 2012 Performance Counters

While creating some PowerShell to collect performance counters from servers for some performance analysis I am doing (more on that to come), I noticed that the script was throwing an error in my lab when trying to retrieve the SQLServer:Buffer Manager\Free Pages counter against my SQL 2012 instance, yet it was working on site today against some earlier releases of SQL.
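
For context, the collection itself is nothing exotic; here is a minimal sketch of the sort of call that breaks on SQL 2012 (the counter paths assume a default instance):

#works against earlier releases, fails on SQL 2012 because Free Pages no longer exists
Get-Counter -Counter "\SQLServer:Buffer Manager\Free pages" -SampleInterval 5 -MaxSamples 3

#Page Life Expectancy is still available and is the better indicator anyway
Get-Counter -Counter "\SQLServer:Buffer Manager\Page life expectancy" -SampleInterval 5 -MaxSamples 3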

While there is no official guidance on this, the counter is certainly no longer listed in the Buffer Manager object definition: http://technet.microsoft.com/en-us/library/ms189628.aspx.

So, a word of warning if you are migrating any performance collector sets to SQL 2012, or if you are using PSSDiag/SQLDiag and PAL and wonder why that counter is no longer reporting data.

I'd suggest focusing on Page Life Expectancy as your indication of buffer pool pressure. Honestly, I never really worried about Free Pages other than to make sure that if it dropped, PLE also reported changes, so that I knew it was normal behavior.

Tuesday, April 23, 2013

TSQL to identify databases with high number of VLFs

If you aren't aware, there can be a significant impact on performance when a database has a high number of VLFs within its transaction log. SQL Server divides each physical transaction log file internally into a number of virtual log files (VLFs) and it is these that contain the log records. There is no fixed size for a VLF, and there is no fixed number which would be created within a physical log file; these values are all determined dynamically by SQL Server when creating or extending physical log files. While SQL Server tries to maintain a small number of VLFs for the best performance, file growth can result in a database having a large number of VLFs, because the following rules are used to determine the number of VLFs to create within the newly added space:

- Less than 64MB = 4 VLFs
- Greater than 64MB and less than 1GB = 8 VLFs
- Greater than or equal to 1GB = 16 VLFs

So, for example, if I initially create a 1GB transaction log file it will be divided into 16 x 64MB VLFs. If it then grew a few times by 10 MB before the growth increment was switched to 10%, the following trend would be experienced, resulting in a quickly increasing number of VLFs when compared to manually growing the file:

Action        Space Added   VLFs Created   Total Size   Total VLFs
Initial Size  1024 MB       16             1024 MB      16
Auto Grow     10 MB         4              1034 MB      20
Auto Grow     10 MB         4              1044 MB      24
Auto Grow     10 MB         4              1054 MB      28
Auto Grow     10 MB         4              1064 MB      32
Auto Grow     106 MB        8              1170 MB      40
Auto Grow     117 MB        8              1287 MB      48
Auto Grow     128 MB        8              1415 MB      56
Auto Grow     141 MB        8              1556 MB      64
Auto Grow     155 MB        8              1711 MB      72
Auto Grow     171 MB        8              1882 MB      80
Auto Grow     188 MB        8              2070 MB      88

The only way to reduce the number of VLFs is to shrink the file and then manually grow it by a set size, such as:

Action        Space Added   VLFs Created   Total Size   Total VLFs
Initial Size  1024 MB       16             1024 MB      16
Manual Grow   1024 MB       16             2048 MB      32
Auto Grow     204 MB        8              2252 MB      40

Ideally you should keep the number of VLFs as small as possible. In SQL 2012 a warning is now raised when a database has more than 10,000 VLFs, although there is no warning available in earlier versions of SQL Server. You can use the following TSQL to report on the number of VLFs per database within your SQL instance. If you then compare this with the auto-grow settings for the database, you can determine the reason why the count is the way it is.

SET NOCOUNT ON;
/* declare variables required */
DECLARE @DatabaseId INT;
DECLARE @TSQL varchar(MAX);
DECLARE cur_DBs CURSOR FOR
SELECT database_id FROM sys.databases;
OPEN cur_DBs;
FETCH NEXT FROM cur_DBs INTO @DatabaseId

--These table variables will be used to store the data
DECLARE @tblAllDBs Table (DBName sysname
   , FileId INT
   , FileSize BIGINT
   , StartOffset BIGINT
   , FSeqNo INT
   , Status TinyInt
   , Parity INT
   , CreateLSN NUMERIC(25,0)
)
IF '11' = substring(convert(char(12),serverproperty('productversion')), 1, 2)
BEGIN
   DECLARE @tblVLFs2012 Table (RecoveryUnitId BIGINT
      , FileId INT
      , FileSize BIGINT
      , StartOffset BIGINT
      , FSeqNo INT
      , Status TinyInt
      , Parity INT
      , CreateLSN NUMERIC(25,0)
   );
END
ELSE
BEGIN
   DECLARE @tblVLFs Table (
      FileId INT
      , FileSize BIGINT
      , StartOffset BIGINT
      , FSeqNo INT
      , Status TinyInt
      , Parity INT
      , CreateLSN NUMERIC(25,0)
   );
END
--loop through each database and get the info
WHILE @@FETCH_STATUS = 0
BEGIN
   PRINT 'DB: ' + CONVERT(varchar(200), DB_NAME(@DatabaseId));
   SET @TSQL = 'dbcc loginfo('+CONVERT(varchar(12), @DatabaseId)+');';
   IF '11' = substring(convert(char(12),serverproperty('productversion')), 1, 2)
   BEGIN
      DELETE FROM @tblVLFs2012;
      INSERT INTO @tblVLFs2012
      EXEC(@TSQL);
      INSERT INTO @tblAllDBs
      SELECT DB_NAME(@DatabaseId)
         , FileId
         , FileSize
         , StartOffset
         , FSeqNo
         , Status
         , Parity
         , CreateLSN
      FROM @tblVLFs2012;
   END
   ELSE
   BEGIN
      DELETE FROM @tblVLFs;
      INSERT INTO @tblVLFs
      EXEC(@TSQL);
      INSERT INTO @tblAllDBs
      SELECT DB_NAME(@DatabaseId)
         , FileId
         , FileSize
         , StartOffset
         , FSeqNo
         , Status
         , Parity
         , CreateLSN
      FROM @tblVLFs;
   END
   FETCH NEXT FROM cur_DBs INTO @DatabaseId
END
CLOSE cur_DBs;
DEALLOCATE cur_DBs;

--just for formatting if output to Text
PRINT '';
PRINT '';
PRINT '';

--Return the data based on what we have found
SELECT a.DBName
    , COUNT(a.FileId) AS [TotalVLFs]
    , MAX(b.[ActiveVLFs]) AS [ActiveVLFs]
    , (SUM(a.FileSize) / COUNT(a.FileId) / 1024) AS [AvgFileSizeKb]
FROM @tblAllDBs a
INNER JOIN (
    SELECT DBName
        , COUNT(FileId) [ActiveVLFs]
    FROM @tblAllDBs
    WHERE Status = 2
    GROUP BY DBName
) b
ON b.DBName = a.DBName
GROUP BY a.DBName
ORDER BY TotalVLFs DESC;

SET NOCOUNT OFF;
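
And if you want to run this across several instances from PowerShell, here is a small sketch (it assumes the script above is saved as .\Get-VLFCounts.sql and that Invoke-Sqlcmd is available; the instance names are illustrative):

Import-Module SQLPS -DisableNameChecking
foreach ($instance in "SQL01", "SQL02\INST1") {
    Invoke-Sqlcmd -ServerInstance $instance -InputFile ".\Get-VLFCounts.sql" |
        Select-Object @{Name = "Instance"; Expression = { $instance }}, DBName, TotalVLFs, ActiveVLFs, AvgFileSizeKb
}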


Legal Stuff: The contents of this blog is provided “as-is”. The information, opinions and views expressed are those of the author and do not necessarily state or reflect those of any other company with affiliation to the products discussed. This includes any URLs or Tools. The author does not accept any responsibility from the use of the information or tools mentioned within this blog, and recommends adequate evaluation against your own requirements to measure suitability.