DAX: partitioned conditional aggregations

We want to find how often customers are behind in their payments, or to calculate their average debt when they are in debt.  The questions might be:

  1. How many customer-months of debt am I experiencing (counting 1 for each month that each customer is in debt)?
  2. What is the average monthly debt of customers (aggregating only months where a customer is in debt)?

And similar questions, such as profit in profitable-months.

The aggregations (i.e. COUNT or AVERAGE) should be partitioned by

  1. customer
  2. time interval (e.g. month)

and there will need to be some form of conditional clause to return a value only when the customer’s balance is above zero (debt).

Debt in indebted-months = SUMX(
	CROSSJOIN( 
		VALUES( 'Date'[Month] ), 
		VALUES( 'Customer'[CustomerId] ) 
	),
	VAR customer_balance = [Customer balance]
	RETURN
	IF (
		customer_balance > 0,
		customer_balance
	)
)
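
The same pattern answers the second question (average monthly debt) by swapping SUMX for AVERAGEX. This is only a sketch against the same model, but it shows why the IF matters: with no else branch it returns BLANK(), and AVERAGEX ignores blank rows, so months where a customer is not in debt do not drag the average down (a maths expression returning zero would).

Average debt in indebted-months = AVERAGEX(
	CROSSJOIN( 
		VALUES( 'Date'[Month] ), 
		VALUES( 'Customer'[CustomerId] ) 
	),
	VAR customer_balance = [Customer balance]
	RETURN
	IF (
		customer_balance > 0,
		customer_balance	-- BLANK() otherwise, so AVERAGEX ignores the row
	)
)

Question 1 (customer-months of debt) can be answered the same way with COUNTX over the same expression, since COUNTX also ignores blank results.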

Performance

How can performance be improved? 

  1. The most obvious optimisation is already in place:  [Customer balance] is assigned to a variable, which avoids calculating it twice for indebted customers.
  2. The number of iterations will be the product of Customers and Months, so if these dimensions are small (a restricted date range is in context) then performance might be OK.
  3. If there is no filter on date in context then it might help to filter the date table to include only months where there is at least one customer balance.
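
Point 3 could look something like the sketch below. It assumes the balances live in a fact table called 'Balance' (a placeholder name, not taken from the model above); wrapping the month list in CALCULATETABLE with that table as a filter keeps only months that actually contain balance rows, so the CROSSJOIN generates fewer combinations.

Debt in indebted-months v2 = SUMX(
	CROSSJOIN( 
		CALCULATETABLE(
			VALUES( 'Date'[Month] ),
			'Balance'	-- placeholder fact table: restricts months to those with balance rows
		),
		VALUES( 'Customer'[CustomerId] ) 
	),
	VAR customer_balance = [Customer balance]
	RETURN
	IF (
		customer_balance > 0,
		customer_balance
	)
)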

Anything else?

  • Can we avoid using IF? Is there an expression using only mathematical functions that will return only positive values?  (A limitation I can see is that a maths expression will return zero where the balance is <= 0, which will be OK for some uses but not when calculating an average.)
  • Is the CROSSJOIN optimal?  And the use of VALUES to return tables? For Date, VALUES returns a table containing the set of months; but for Customer, VALUES returns a column that is already the grain of the table, so the table itself could have been used instead: do the extra columns have a cost when they are not referred to? Something to investigate.

Referential integrity checks

This script is aimed at data warehouse relational databases which don’t have foreign key constraints.  It uses column naming conventions to match key column values between fact and dimension tables and so identify orphans.

/*
	Referential integrity checker

	Aimed at data warehouses where dimension tables are prefixed 'dim', fact tables 'fact'
	and where the dimension primary key column name is part or all of the fact table foreign key column name.
	And there are no foreign key constraints:(

	@Template holds a dynamic sql template for counting number of rows in fact table, the number of
	orphaned foreign keys in the fact table, and the number of distinct orphaned values, etc.  It 
	contains placeholders ('{DimensionTable}', '{PrimaryKey}', '{FactTable}', '{ForeignKey}')  
	which are all that are required to make it run a test on a pair of tables.

	The CTE "Dimensions" is a list of tables beginning "dim" together with the name of the 
	primary key column.

	The CTE "FactColumns" is all tables beginning "fact" that have a column name that contains the
	primary key column name.  ** These are identified by their names rather than by foreign key constraint.

	These two tables are joined to produce a set of pairs of dimension and fact tables that is
	used to populate a cursor.

	The cursor loop substitutes the four placeholders in the sql template, and the resulting 
	sql statement is executed.

	OUTPUT
		DimensionTable	:	the dimension table e.g. [dbo].[dimCustomer]
		PrimaryKey		:   the primary key of the dimension table  e.g. [CustomerKey]
		FactTable		:	a fact table that has a foreign key to a dimension e.g. [dbo].[factSales]
		ForeignKey		:	the foreign key of the fact table e.g. [CustomerKey] or [DeliveryCustomerKey]
		FactRows		:   total number of rows in the fact table
		NbrOfOrphans	:	number of rows in fact table where the foreign key does not match the dimension primary key
		NbrOfOrphanedValues : number of distinct values that do not match a dimension primary key value
		MaxOrphanedValue	: the highest orphaned foreign key value (including Null).  'n/a' if there are no orphaned rows.
		NbrOfSpecialRows	: number of rows in fact table where foreign key is < 0. "Unknown" and suchlike.

	Change log
		18-08-2020 AI version 1.

*/



CREATE TABLE
	#ReferentialIntegrityTests(
		RunTime				datetime NOT NULL,
		DimensionTable			nvarchar(300) NOT NULL,
		PrimaryKey				nvarchar(300) NOT NULL,
		FactTable				nvarchar(300) NOT NULL,
		ForeignKey				nvarchar(300) NOT NULL,
		FactRows				int,
		NbrOfOrphans			int,
		NbrOfOrphanedValues		int,
		MaxOrphanedValue		varchar(20),
		NbrOfSpecialRows		int
	)


DECLARE @sql nvarchar(max),
		@Template nvarchar(max),
	    @DimensionTable nvarchar(300),
		@FactTable nvarchar(300),
		@PrimaryKey nvarchar(300),
		@ForeignKey nvarchar(300),
		@cur CURSOR;


SET @Template = N';WITH RI AS
(
	SELECT 
		d.{PrimaryKey} AS PrimaryKey,
		f.{ForeignKey} AS ForeignKey,
		Count(*) AS NbrOfRows
	FROM 
		{FactTable} f
		LEFT JOIN {DimensionTable} d ON f.{ForeignKey} = d.{PrimaryKey} 
	GROUP BY
		d.{PrimaryKey},
		f.{ForeignKey}
)
INSERT #ReferentialIntegrityTests (
	RunTime				,
	DimensionTable		,
	PrimaryKey			,
	FactTable			,
	ForeignKey			,
	FactRows			,
	NbrOfOrphans		,
	NbrOfOrphanedValues	,
	MaxOrphanedValue	,
	NbrOfSpecialRows
)
SELECT 
	RunTime = GETDATE(),
	DimensionTable	= ''{DimensionTable}'',
	PrimaryKey		= ''{PrimaryKey}'',
	FactTable		= ''{FactTable}'',
	ForeignKey		= ''{ForeignKey}'',
	FactRows			= ISNULL(( SELECT SUM(NbrOfRows) FROM RI ), 0),
	NbrOfOrphans		= ISNULL(( SELECT SUM(NbrOfRows) FROM RI WHERE PrimaryKey IS NULL), 0),
	NbrOfOrphanedValues = ISNULL(( SELECT COUNT(*) FROM RI WHERE PrimaryKey IS NULL GROUP BY PrimaryKey), 0),
	MaxOrphanedValue	= ISNULL((SELECT MAX( ISNULL( CAST(ForeignKey AS varchar(25)), ''Null'')) FROM RI WHERE PrimaryKey IS NULL), ''n/a''),
	NbrOfSpecialRows	= ISNULL( (SELECT SUM(NbrOfRows) FROM RI WHERE ForeignKey < 0), 0)';
	
SET @cur = CURSOR STATIC FOR
WITH Dimensions AS
(
	SELECT 
		 QUOTENAME(S.NAME) + '.' + QUOTENAME(t.[name]) AS TableName
		,QUOTENAME(c.name) AS PrimaryKey

	FROM sys.objects t
		INNER JOIN sys.schemas S ON S.schema_id =t.schema_id
		INNER JOIN sys.columns c ON c.object_id = t.object_id
		INNER JOIN sys.index_columns  ic ON ic.object_id = t.object_id
				AND ic.column_id = c.column_id
		INNER JOIN sys.key_constraints kc ON kc.parent_object_id = ic.object_id
				AND ic.index_id = kc.unique_index_id
	WHERE 
		t.[type] = 'U'
		AND kc.[type] = 'PK'
		AND t.[name] LIKE 'dim%'

), 
FactColumns AS
(
	SELECT 
		QUOTENAME(S.NAME) + '.' + QUOTENAME(t.[name]) AS TableName
		,QUOTENAME(c.name) AS ColumnName

	FROM sys.objects t
		INNER JOIN sys.schemas S ON S.schema_id =t.schema_id
		INNER JOIN sys.columns c ON c.object_id = t.object_id
	WHERE 
		t.[type] = 'U'
		AND t.[name] LIKE 'fact%'
)

	SELECT
		d.TableName AS DimensionTable,
		d.PrimaryKey AS PrimaryKey,
		f.TableName AS FactTable,
		f.ColumnName As ForeignKey
	FROM
		Dimensions d
		CROSS JOIN FactColumns f 
	WHERE
		CHARINDEX(REPLACE(d.PrimaryKey, '[', ''), f.ColumnName) > 0;
OPEN @cur

WHILE 1 = 1
BEGIN
	FETCH @cur INTO @DimensionTable,
					@PrimaryKey,
					@FactTable,
					@ForeignKey
	IF @@fetch_status <> 0 BREAK
	SET @sql = REPLACE(@Template, '{DimensionTable}', @DimensionTable)
	SET @sql = REPLACE(@sql, '{FactTable}', @FactTable)
	SET @sql = REPLACE(@sql, '{PrimaryKey}', @PrimaryKey)
	SET @sql = REPLACE(@sql, '{ForeignKey}', @ForeignKey)

	EXEC sp_executesql @sql
END
		
SELECT * FROM #ReferentialIntegrityTests ORDER BY NbrOfOrphans desc, FactRows desc
DROP TABLE #ReferentialIntegrityTests
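
To make the mechanics concrete, this is roughly what the dynamic SQL looks like once the four placeholders have been substituted, using the example pair from the OUTPUT notes above ([dbo].[dimCustomer] / [dbo].[factSales] with key [CustomerKey]).  It is trimmed to the CTE and two of the aggregates; the real statement also inserts the full set of columns into #ReferentialIntegrityTests.

;WITH RI AS
(
	SELECT 
		d.[CustomerKey] AS PrimaryKey,
		f.[CustomerKey] AS ForeignKey,
		COUNT(*) AS NbrOfRows
	FROM 
		[dbo].[factSales] f
		LEFT JOIN [dbo].[dimCustomer] d ON f.[CustomerKey] = d.[CustomerKey]
	GROUP BY
		d.[CustomerKey],
		f.[CustomerKey]
)
SELECT 
	FactRows     = ISNULL(( SELECT SUM(NbrOfRows) FROM RI ), 0),
	NbrOfOrphans = ISNULL(( SELECT SUM(NbrOfRows) FROM RI WHERE PrimaryKey IS NULL ), 0)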

Error flows in Azure Data Factory

Azure Data Factory pipelines consist of activities whose execution precedence and flow are controlled by constructs similar to SSIS precedence constraints.

 

 

However, unlike SSIS, ADF has no error event with pipeline-wide scope that is raised on any and all errors. Adding an activity to respond to an error clears the error and the rest of the pipeline continues to run, so the pipeline can report Success on completion even if individual activities within it fail.

One solution is to use a “crowbar” activity (an activity that is guaranteed to fail) to expose an error condition. 

The following screenshots illustrate the problem and the crowbar solution, and I try to formulate a rule to explain the error flow behaviour in pipelines.

Practical examples

Child activity pipeline Error output to calling process

(a) If all activities succeed…

… the pipeline succeeds

(b) If any one fails…

… the pipeline fails

(c) The outcome is determined by the final activity of each branch

… pipeline fails

(d) However, if a logging activity is added (executes on completion – blue arrow) it will mask preceding errors.

 

Although Activity 1 failed

the pipeline as a whole returns success

 

(e)

pipeline returns success

In the final examples above, a final logging or clean-up activity that is set to run on completion (not success) of the preceding activities will effectively mask an error from a preceding activity. Although the failure appears in the logs, any subsequent action runs as if no error had occurred.

Crowbar activity

A solution is to “re-throw” the error by capturing the Failure output of each activity and terminating it on a dummy activity. The dummy activity, even if not executed (which it isn’t in the first example below), creates a second path which transitively causes the failed activity to fail the pipeline.

(f) The Crowbar activity (e.g. divide 1 by zero) does not execute but “exposes” the error vector.

Pipeline reports failure

(g) The crowbar activity only executes, in this configuration, if all the precedent activities fail

The rule

A pipeline can contain multiple paths from a start activity to a final activity. There are three paths in example (a).   The error reporting rule (I propose) is:

A pipeline fails if the last activity to be executed on any path fails.

In example (d) all three paths terminate on the “log outcome” activity – which runs on completion (success or failure) of the previous activities and which itself succeeds.   Therefore all three paths end in success, and the pipeline reports success.

In example (f) the additional crowbar activity creates a further three potential execution paths.  The error in Activity 1 causes the (red) error path to be followed.   There are now four paths being followed in (f).  In three of them the last activity to execute is “log outcome”, which succeeds.   The fourth path is from Activity 1 to the Crowbar activity.  The crowbar activity does not execute, so the last activity to execute in this path is Activity 1 – which failed.  Thus the pipeline returns failure.

Why doesn’t the Crowbar activity execute in (f)?  Because its precedent constraints restrict it to executing only on failure of all three Activities 1, 2 and 3 (which happens in (g)).   But that doesn’t stop the pathway being followed and effectively exposing the failed Activity 1 as a path-final error.

The effect of RAID cache backup battery failure

RAID write times are improved by writing to a cache.   This is a quick operation, the calling program can get on with its business, and the RAID controller can solidify the cached changes to disk(s) in its own time.  The cache power supply has a backup battery.

When the cache backup battery condition measures below a safe limit it is marked as FAILED! (which raises alerts), and the RAID controller stops using the cache and writes directly to disk instead.

Performance Effect?

How much does that affect performance?  Those in the know (some network engineers) suggested it might add 20ms to write times.  They weren’t far wrong: the Disk avg. write time metric on the Redgate monitor showed an increase of just under 30ms.

It had no effect on average disk read time, as would be expected:

 

 

Maintenance – Record Deletion

A simple framework for implementing a time-framed record deletion policy: deleting records that are older than a certain threshold.

It consists of a logging table and a stored procedure which deletes the records.  The procedure deletes rows in batches, and an overall maximum can be specified for a single execution to limit the impact on resources when there is a huge set of records to delete.

Logging Table

CREATE TABLE [dbo].[RecordDeletions]
(
	[RecordDeletionId] INT NOT NULL IDENTITY, 
    [TableName] VARCHAR(50) NULL, 
	[DeletionThresholdDate] smalldatetime NULL,
    [OperationStatus] VARCHAR(50) NOT NULL CONSTRAINT DF_RecordDeletions_OperationStatus DEFAULT 'Running', 
    [StartTime] DATETIME NOT NULL CONSTRAINT DF_RecordDeletions_StartTime DEFAULT GetDate(), 
    [EndTime] DATETIME NULL, 
    [NbrRecordsDeleted] INT NULL , 
    [DurationSeconds] as DATEDIFF(SECOND, StartTime, EndTime), 
    CONSTRAINT [PK_RecordDeletions] PRIMARY KEY ([RecordDeletionId]),
	CONSTRAINT [CK_RecordDeletions_OperationStatus_Valid_Values] 
        CHECK (OperationStatus = 'Running' OR OperationStatus = 'Success' OR OperationStatus = 'Failure')
)

GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Keeps a log of records that have been deleted from tables for housekeeping',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = NULL,
    @level2name = NULL
GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Time that this operation started',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'StartTime'
GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Time that this operation ended',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'EndTime'
GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Number of records deleted',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'NbrRecordsDeleted'
GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Name of table',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'TableName'
GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Status of operation:  Running, Success, Failure',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'OperationStatus'
GO
EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'(Computed column) Seconds duration of completed events.',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'DurationSeconds'
GO

EXEC sp_addextendedproperty @name = N'MS_Description',
    @value = N'Rows older than the DeletionThresholdDate are targeted for deletion in this event.',
    @level0type = N'SCHEMA',
    @level0name = N'dbo',
    @level1type = N'TABLE',
    @level1name = N'RecordDeletions',
    @level2type = N'COLUMN',
    @level2name = N'DeletionThresholdDate'

Record Deletion Procedure

This procedure is fairly basic and requires duplication and customisation for each table (or cluster of related tables).  Search and replace TARGET_TABLE_NAME, but the appropriate primary key and date columns will also have to be edited in the “Deletion Candidates Selection Block”.

CREATE PROCEDURE dbo.DeleteOldRows_TARGET_TABLE_NAME

/*
	Delete rows older than the retention period 
	
	@DeleteBatchSize : number of records deleted in each loop.
	@OverallMaxRowsPerExecution : absolute max number of rows that will be deleted 
								in a single execution of the procedure.

*/
AS
SET NOCOUNT ON;
SET XACT_ABORT ON
BEGIN TRY
	DECLARE @DeleteBatchSize int = 5000
	DECLARE @OverallMaxRowsPerExecution int = 400000

	DECLARE @DebugInfo varchar(2000)
	DECLARE @MaxRowNumber int = 0
	DECLARE @RangeStart int
	DECLARE @RangeEnd int
	DECLARE @RecordDeletionId int
	DECLARE @NbrRecordsDeleted int = 0
	DECLARE @OperationStatus varchar(50) = 'Failure'


	CREATE TABLE #RecordsToDelete
		(RowNumber int not null PRIMARY KEY IDENTITY,
		 RecordId  int not null
		 )

	-- Insert logging row, calculate time threshold.
	INSERT dbo.RecordDeletions 
		(TableName,
		 DeletionThresholdDate)
		VALUES
		('TARGET_TABLE_NAME',
		  DATEADD(MONTH, -60, CAST(GETDATE() AS date)));	-- 60-month retention

	SET @RecordDeletionId = SCOPE_IDENTITY()

	-- Deletion Candidates Selection Block
	INSERT INTO #RecordsToDelete (RecordId)
		SELECT TOP (@OverallMaxRowsPerExecution)
			ID
		FROM
			dbo.TARGET_TABLE_NAME L
			INNER JOIN dbo.RecordDeletions RD
			ON L.RowCreatedDate <= RD.DeletionThresholdDate
		WHERE
			RD.RecordDeletionId = @RecordDeletionId

	SELECT
		@RangeStart = MIN(RowNumber),
		@MaxRowNumber = MAX(RowNumber)
	FROM
		#RecordsToDelete


	WHILE (@RangeStart <= @MaxRowNumber)	-- <= so a final (or single) partial batch is still deleted
	BEGIN
		SET @RangeEnd = (@RangeStart + @DeleteBatchSize -1)

		DELETE TargetTable
		FROM 
			dbo.TARGET_TABLE_NAME TargetTable
			INNER JOIN #RecordsToDelete R ON TargetTable.ID = R.RecordId
		WHERE	
			R.RowNumber BETWEEN @RangeStart and @RangeEnd
	
		SET @NbrRecordsDeleted += @@ROWCOUNT
		SET @RangeStart += @DeleteBatchSize

		WAITFOR DELAY '000:00:00.500' -- pause for 500ms to allow other processes to gain locks.
	END

	SET @OperationStatus = 'Success'


END TRY
BEGIN CATCH
	EXECUTE dbo.ERROR_HandleCaughtError @DebugInfo
	SET @OperationStatus = 'Failure';
END CATCH	

BEGIN TRY
	SET @DebugInfo = 'Logging and Cleanup section'
	UPDATE
		dbo.RecordDeletions
	SET
		OperationStatus = @OperationStatus,
		EndTime = GETDATE(),
		NbrRecordsDeleted = @NbrRecordsDeleted
	WHERE
		RecordDeletionId = @RecordDeletionId

	DROP TABLE #RecordsToDelete
END TRY
BEGIN CATCH
	EXECUTE dbo.ERROR_HandleCaughtError @DebugInfo
	RETURN 5555
END CATCH

Uses the Error Handler for T-SQL.
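
A minimal usage sketch, once the search-and-replace has produced a real procedure (the name below is still the placeholder): run the per-table procedure, typically from a SQL Agent job step, then review the log table afterwards.

-- Run one deletion pass for the table (deletes at most @OverallMaxRowsPerExecution rows).
EXEC dbo.DeleteOldRows_TARGET_TABLE_NAME;

-- Review recent runs, including row counts and durations.
SELECT TOP (20) *
FROM dbo.RecordDeletions
ORDER BY StartTime DESC;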

HTTP Access to Analysis Services (multidimensional)

In retrospect this was a simple task: the steps set out in the invaluable video Configuring HTTP for SSAS all work.  However, in the middle of a long process of trying to get it to work for the first time, it seems anything but simple.  So here are some observations to reassure you that it really is straightforward, even after a full day of battling without success.

Environment

These observations are from configuring HTTP SSAS access in this environment:

  • Windows Server 2016 Datacentre (EC2 instance)
  • SQL Server Analysis Services 2017 Evaluation (developer) edition
  • IIS 10

with both SSAS and IIS running on the same server.

Install Cumulative Update 9

The most important lesson is to install SQL Server CU9 or later.  The fault in msmdpump.dll is described in this MS article: FIX: Access to SSAS by using HTTP fails in SQL Server.  The article is dated 18 July 2018.

Here are some of the responses you will see if you are using the pre-CU9 version of the file – assuming the configuration is otherwise correct.

SSMS

The response is immediate.

Cannot connect to http://...url/.

File system error: The following error occurred while writing to the file 'MSOLAP ISAPI 
Extension: \\?\C:\inetpub\OLAP\msmdpump.dll': Access is denied. .
Internal error: An unexpected exception occurred. (Unknown)

Excel

Response is immediate.

XML parsing failed at line 1, column 1: Incorrect document syntax.
.
File system error: The following error occurred while reading from the file 'Compressed stream'.

Browser

The response to a browser http request is the same as from the correct DLL, so this cannot be used to determine whether the msmdpump.dll version is correct.

Successful Installation

Connecting to the HTTP address on the same server as IIS returns this:

This is the right response.  It’s a 501 error code, so this won’t be seen from browsers on remote hosts.

Other Errors

Excel will report an error message from the host quickly, although some are replaced by a generic “check typing of username” friendly message from Excel.  SSMS might allow server-level nodes and some cube structure to be viewed in the explorer.  If a cube is visible, attempting to browse it can provide more error information (although hidden by default).

  • Either the user, ‘NT AUTHORITY\IUSR’, does not have access to the ‘<Cube Name>’ database, or the database does not exist.  The solution is to make IUSR a member of the Server Administrators of the SSAS instance.  IUSR does not (and, for security, should not) need to be a member of any Role within the cube.

Other Observations

SSL Offloading

SSL offloading at a load balancer works against the HTTP endpoint without problems.  No additional configuration is required: IIS is configured as an unsecured HTTP site.

Resources

Script to Configure SSRS 2017 Reporting Services

This is an attempt to create a script that will configure an instance of SSRS that has been left at the “installed but not configured” stage by the installer.  The script needs to run during the spin-up of a virtual SQL Server machine (e.g. on AWS) and result in a fully functioning SSRS site with no further configuration to be done.

(Work in progress)

Versions

  • SSRS Native Mode 2017 (v14) Standard edition
  • Windows Server 2016 Standard

Actions to be Scripted

  • Reserve /Reports and /ReportServer URLs  [done]
  • Create SSRS databases and assign rights [done]
  • SMTP Settings
  • Specify Execution Account
  • Create and Backup Encryption Keys
  • Set File Share Account for Subscriptions

Resources

The Script

Main changes to update from SSRS v12 to v14:

  • WMI object namespace is now “root\Microsoft\SqlServer\ReportServer\RS_SSRS\v14\Admin”  (v12 was “…\RS_MSSQLSERVER\v12\Admin”), reflecting the fact that SSRS as an application is now separate from SQL Server.
  • The default service account is now “NT Service\SQL Reporting Services” (instead of “NT Service\ReportServer” for v12).
  • The Report Manager (/Reports) web application name is “ReportServerWebApp” (formerly “ReportManager”).  This means there are two very similarly (and logically) named applications:
    • ReportServerWebService  the service that emits the actual reports – /ReportServer
    • ReportServerWebApp (formerly ReportManager) the full SSRS user and admin interface – /Reports
## This script configures a Native mode report server without HTTPS
## SSRS Version :  2017 - v14
## Script source: https://docs.microsoft.com/en-us/azure/virtual-machines/windows/sqlclassic/virtual-machines-windows-classic-ps-sql-report
## Amended by Andrew Ing for v14
## Script Version 25-11-2018
 
 
$ErrorActionPreference = "Stop"
 
$server = $env:COMPUTERNAME
$HTTPport = 80 # change the value if you used a different port for the private HTTP endpoint when the VM was created.
 
## Set PowerShell execution policy to be able to run scripts
Set-ExecutionPolicy -Scope CurrentUser RemoteSigned -Force
 
## Utility method for verifying an operation's result
function CheckResult
{
     param($wmi_result, $actionname)
     if ($wmi_result.HRESULT -ne 0) {
         write-error "$actionname failed. Error from WMI: $($wmi_result.Error)"
     }
}
 
$starttime=Get-Date
write-host -foregroundcolor DarkGray $starttime StartTime
 
## ReportServer Database name - this can be changed if needed
$dbName='ReportServer'
 
## Register for MSReportServer_ConfigurationSetting
$RSObject = Get-WmiObject -class "MSReportServer_ConfigurationSetting" -namespace "root\Microsoft\SqlServer\ReportServer\RS_SSRS\v14\Admin"
 
## Report Server Configuration Steps

## Setting the web service URL ##
write-host -foregroundcolor green "Setting the web service URL"
write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
$time=Get-Date
write-host -foregroundcolor DarkGray $time
 
## SetVirtualDirectory for ReportServer site
     write-host 'Calling SetVirtualDirectory'
     $r = $RSObject.SetVirtualDirectory('ReportServerWebService','ReportServer',1033)
     CheckResult $r "SetVirtualDirectory for ReportServer"
 
## ReserveURL for ReportServerWebService - port $HTTPport (for local usage)
     write-host "Calling ReserveURL port $HTTPport"
     $r = $RSObject.ReserveURL('ReportServerWebService',"http://+:$HTTPport",1033)
     CheckResult $r "ReserveURL for ReportServer port $HTTPport" 
 
## Setting the Database ##
write-host -foregroundcolor green "Setting the Database"
write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
$time=Get-Date
write-host -foregroundcolor DarkGray $time
 
## GenerateDatabaseScript - for creating the database
     write-host "Calling GenerateDatabaseCreationScript for database $dbName"
     $r = $RSObject.GenerateDatabaseCreationScript($dbName,1033,$false)
     CheckResult $r "GenerateDatabaseCreationScript"
     $script = $r.Script
 
## Execute sql script to create the database
     write-host 'Executing Database Creation Script'
     $savedcvd = Get-Location
     Import-Module SQLPS              ## this automatically changes to sqlserver provider
     Invoke-SqlCmd -Query $script
     Set-Location $savedcvd
 
## GenerateGrantRightsScript 
     $DBUser = "NT SERVICE\SQLServerReportingServices"
     write-host "Calling GenerateDatabaseRightsScript with user $DBUser"
     $r = $RSObject.GenerateDatabaseRightsScript($DBUser,$dbName,$false,$true)
     CheckResult $r "GenerateDatabaseRightsScript"
     $script = $r.Script
 
## Execute grant rights script
     write-host 'Executing Database Rights Script'
     $savedcvd = Get-Location
     cd sqlserver:\
     Invoke-SqlCmd -Query $script
     Set-Location $savedcvd
 
## SetDBConnection - uses Windows Service (type 2), username is ignored
     write-host "Calling SetDatabaseConnection server $server, DB $dbName"
     $r = $RSObject.SetDatabaseConnection($server,$dbName,2,'','')
     CheckResult $r "SetDatabaseConnection"  

## Setting the Report Manager URL ##
 
write-host -foregroundcolor green "Setting the Report Manager URL"
write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
$time=Get-Date
write-host -foregroundcolor DarkGray $time
 
## SetVirtualDirectory for Reports (Report Manager) site
     write-host 'Calling SetVirtualDirectory'
     $r = $RSObject.SetVirtualDirectory('ReportServerWebApp','Reports',1033)
     CheckResult $r "SetVirtualDirectory"
 
## ReserveURL for ReportManager  - port $HTTPport
     write-host "Calling ReserveURL for ReportServerWebApp, port $HTTPport"
     $r = $RSObject.ReserveURL('ReportServerWebApp',"http://+:$HTTPport",1033)
     CheckResult $r "ReserveURL for ReportManager port $HTTPport"
 
write-host -foregroundcolor green "Open Firewall port for $HTTPport"
write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
$time=Get-Date
write-host -foregroundcolor DarkGray $time
 
## Open Firewall port for $HTTPport
     New-NetFirewallRule -DisplayName "Report Server (TCP on port $HTTPport)" -Direction Inbound -Protocol TCP -LocalPort $HTTPport
     write-host "Added rule Report Server (TCP on port $HTTPport) in Windows Firewall"
 
## Restart SSRS service ##
 write-host -foregroundcolor green "Restart SQLServerReportingServices"
 write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
 $time=Get-Date
 write-host -foregroundcolor DarkGray $time

Restart-Service SQLServerReportingServices

write-host 'Operations completed, Report Server is ready'
write-host -foregroundcolor DarkGray $starttime StartTime
$time=Get-Date
write-host -foregroundcolor DarkGray $time
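
Of the actions still to be scripted, the encryption key backup can probably be driven from the same WMI object used throughout the main script.  The sketch below is untested; the password and output path are placeholders, and it relies on the BackupEncryptionKey method of MSReportServer_ConfigurationSetting.

## Sketch only (untested): back up the SSRS encryption key via the same $RSObject.
## The password and file path below are placeholders, not values from this environment.
$KeyPassword = 'ReplaceWithAStrongPassword!'
$r = $RSObject.BackupEncryptionKey($KeyPassword)
CheckResult $r "BackupEncryptionKey"
[System.IO.File]::WriteAllBytes("C:\Install\ssrs-encryption-key.snk", $r.KeyFile)
write-host "Encryption key backed up"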

Restoring the Unconfigured state

During iterative testing it’s necessary to restore the SSRS instance to the unconfigured state.   This is not possible from the SSRS Configuration Tool.  There are some WMI methods to remove URLs, which I haven’t tried yet.  Here are some manual methods.

  • Remove URL Reservations .
    • Open C:\Program Files\Microsoft SQL Server Reporting Services\SSRS\rsreportserver.config in Notepad++, find <URLReservations> and remove all <URL> nodes (there will be at least two once configured: for the ReportServerWebService and ReportServerWebApp applications).
    • Run these commands, assuming default names (use netsh http show urlacl to show all current reservations):
      • netsh http delete urlacl URL=http://+:80/Reports
      • netsh http delete urlacl URL=http://+:80/ReportServer
    • Restart SQLServerReportingServices service.
    • Alternatively use this script, which calls the WMI RemoveURL methods:
      ## UN-CONFIGURE a native instance of SQL Server Reporting Services.
      ##
      ## This is intended for test iterations where SSRS needs to be returned to its freshly-installed state.
      ## SSRS Version :  2017 - v14
      ## Script Version 25-11-2018
      
      
       $ErrorActionPreference = "Stop"
      
       $server = $env:COMPUTERNAME
       $HTTPport = 80 # change to match what was configured.
      
       ## Set PowerShell execution policy to be able to run scripts
       Set-ExecutionPolicy -Scope CurrentUser RemoteSigned -Force
      
       ## Utility method for verifying an operation's result
       function CheckResult
       {
           param($wmi_result, $actionname)
           if ($wmi_result.HRESULT -ne 0) {
               write-error "$actionname failed. Error from WMI: $($wmi_result.Error)"
           }
       }
      
       $starttime=Get-Date
       write-host -foregroundcolor DarkGray $starttime StartTime
      
       ## ReportServer Database name - this can be changed if needed
       $dbName='ReportServer'
      
       ## Register for MSReportServer_ConfigurationSetting
       $RSObject = Get-WmiObject -class "MSReportServer_ConfigurationSetting" -namespace "root\Microsoft\SqlServer\ReportServer\RS_SSRS\v14\Admin"
      
       
       ## Unset Web Service (/ReportServer) URL ##
       write-host -foregroundcolor green "Unset Web Service (/ReportServer) URL"
       write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
       $time=Get-Date
       write-host -foregroundcolor DarkGray $time
      
      
           write-host "Calling RemoveURL port $HTTPport"
           $r = $RSObject.RemoveURL('ReportServerWebService',"http://+:$HTTPport", 1033)
           CheckResult $r "RemoveURL for ReportServer port $HTTPport" 
      
       ## Unset Web App (/Reports) URL ##
       write-host -foregroundcolor green "Unset Web App (/Reports) URL"
       write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
      
      
          write-host "Calling RemoveURL for ReportServerWebApp, port $HTTPport"
          $r = $RSObject.RemoveURL('ReportServerWebApp',"http://+:$HTTPport",1033)
          CheckResult $r "RemoveURL for ReportManager port $HTTPport"
      
      
       
       ## Restart SSRS service ##
       write-host -foregroundcolor green "Restart SQLServerReportingServices"
       write-host -foregroundcolor green ">>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>"
       $time=Get-Date
       write-host -foregroundcolor DarkGray $time
      
       Restart-Service SQLServerReportingServices
      
       $time=Get-Date
       write-host -foregroundcolor DarkGray $time

       

  • Drop the SSRS databases
    ALTER DATABASE ReportServer SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    ALTER DATABASE ReportServerTempDB SET SINGLE_USER WITH ROLLBACK IMMEDIATE
    USE master
    DROP DATABASE ReportServer
    DROP DATABASE ReportServerTempDB

     

Selected Exception Messages

…you might encounter during testing

  • Error from WMI: The Url has already been set for this application. Remove existing URL reservations (see above) first.  
  • Error from WMI: The Url has already been reserved.   (as above)
  • GenerateDatabaseRightsScript failed. Error from WMI: [blank!]   WMI returned no error information, but at this point the script has already created the databases, so it is able to execute SQL commands.  This step assigns rights, so it is likely that the database user specified in the script ($DBUser) is not valid, which turned out to be the case in this instance.
  • Error from WMI: A virtual directory must first be created before performing this operation.   This occurs if the whole <Application> node is deleted from rsreportserver.config. Only delete the <URL> nodes.  Restore the config file and restart SQLServerReportingServices.

Further Reading

  • Powershell ReportingServicesTools (repository: https://github.com/Microsoft/ReportingServicesTools) contains commands for configuring as well as administering SSRS.  See Claudio Silva’s SQL Bits presentation: https://sqlbits.com/Sessions/Event17/Administrating_SSRS_without_boring_web_based_clicks.

Colours

Colour Choices for Graphs and Tables

Heatmap Colouring for SSRS

Sample colours for a range of 10 levels with mid-point roughly yellow.

Code for BackgroundColor property of cell:

= IIF(Fields!FieldName.Value >= .98, "#63be7b", 
  IIF(Fields!FieldName.Value >= .95, "#86c87d", 
  IIF(Fields!FieldName.Value >= .92, "#b9d780", 
  IIF(Fields!FieldName.Value >= .88, "#e1e383", 
  IIF(Fields!FieldName.Value >= .84, "#feea83", 
  IIF(Fields!FieldName.Value >= .78, "#fedf81", 
  IIF(Fields!FieldName.Value >= .72, "#fdd780", 
  IIF(Fields!FieldName.Value >= .68, "#fba977", 
  IIF(Fields!FieldName.Value >= .65, "#fa9573", 
									 "#f8696b" )
))))))))

 

Standard Colours

Sample   Code      Note
 123     #DDEBF7   Excel Pivot Header Blue
 123     #FFF2CC   Excel Light Yellow Background

 

Mscomctl32.ocx ActiveX Control

Phenomenally useful. Not supported in 64-bit.

Location

Normally:  C:\Windows\SysWOW64.

Versions

I’m using version 6.01.9846 of 9/12/2015

Registering

  • C:\Windows\system32>Regsvr32  c:\windows\syswow64\mscomctl.ocx
  • Unregister C:\Windows\system32>Regsvr32 /u  c:\windows\syswow64\mscomctl.ocx

Troubleshooting

First check References.  If it shows as MISSING, untick it and close the References dialog box.  Compile the code (which will fail, naturally), then re-add the reference.

Problem: the library is correctly (re)registered but you get a “The expression MouseDown you entered …” error, or a “The OLE server may not be registered” error when double-clicking a control in design view:

The operation on the ListViewCtrl object failed.  The OLE server may not be registered.  To register the OLE server, reinstall it.

The OLE server may not be registered

Solution:  open the reference dialog and increase the Priority of the Microsoft Windows Common Controls 6.0 (SP6) entry  as far as it will go (it won’t go higher than the Access Object Library reference). This has immediate effect.

 

 

 

 

Backup Compression

Sample backup compression figures – SQL Server 2012.  The relative sizes and timings are interesting. This was a database with data that compressed well.

  • Uncompressed backup file:  73.4GB.  18 minutes to create
  • Compressed backup:  17.5GB.  About 20 minutes to create
  • Compressing the 73.4GB backup file using 7-zip:  9.7GB.  About 2 hours to compress.
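
For reference, the compressed backup is just the standard BACKUP DATABASE command with the COMPRESSION option (the database name and path below are placeholders):

-- Database name and backup path are placeholders.
BACKUP DATABASE [MyDatabase]
TO DISK = N'D:\Backups\MyDatabase.bak'
WITH COMPRESSION, STATS = 10;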