As part of my journey to get better at PowerShell, I gave myself a task: create a project with PowerShell at its core that solves a complicated problem as elegantly as possible.
A complicated problem indeed presented itself: how do you remotely control multiple computers without having to log into them? Make them perform any task that could possibly be needed? Control them from one central location? And keep the solution as easily scalable as possible?
I present to you, the Framework for Remote Execution using Powershell and SQL Server. Or in short: FREPS. Hmm… Actually, I may have to work on that name. 🤔
Let’s say that you have a PowerShell script that you want executed on a bunch of machines. However, it could be one script one day and a different script another day, and you don’t know in advance which script needs to run when, or on which machines. Even worse, you may not even have direct access to most of these remote machines! But all of them do have access to at least one central SQL Server instance.
What can you do?
Well, here’s how this works:
- You take your PowerShell script, and shove its contents into a table on this central SQL Server instance.
- You install one special PowerShell script on each of the remote machines. It is a very lean script. All it does is connect to the central SQL Server instance and basically ask: what do I need to execute now? And whatever it is, execute it.
- Set up a Scheduled Task to run this special script periodically on each of the machines.
- You only need to set this up once on each of the remote machines and then you never need to directly touch them ever again.
- Well, that’s it, actually. It’s super simple and elegant, and yet – extremely powerful!
The Secret Sauce
The special ingredient necessary for this whole thing to work is one very versatile PowerShell cmdlet called “Invoke-Command”. This cmdlet can receive a parameter called “ScriptBlock”, whose contents can be an entire PowerShell script to be executed! For example:
Invoke-Command -ScriptBlock {
Write-Host "Hey, I'm inside!"
}
This cmdlet is most commonly used to execute script blocks remotely, by specifying one or more other computers in the “ComputerName” parameter. But in our scenario, we’ll leave it at its default, which is the “local” computer running the script.
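Just to make the distinction clear, here’s the same call in both flavors (the computer names below are placeholders for the example):

# Runs the script block on remote machines via PowerShell Remoting (WinRM):
Invoke-Command -ComputerName "Server01","Server02" -ScriptBlock {
    Write-Host "Hey, I'm running on $env:COMPUTERNAME!"
}

# Without -ComputerName, the same script block simply runs on the local machine:
Invoke-Command -ScriptBlock {
    Write-Host "Hey, I'm running on $env:COMPUTERNAME!"
}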
The other special ingredient is the fact that such a script block can be created from a text string! This can be done using the static method [ScriptBlock]::Create(), and the resulting script block can, in turn, be stored inside a variable!
For example:
$CmdText = "Write-Host 'woah I am so dynamic!'"
$CmdObject = $([scriptblock]::Create($CmdText))
Invoke-Command -ScriptBlock $CmdObject
Output:
woah I am so dynamic!
Yes, that’s right, it’s like Dynamic SQL for PowerShell!
Once we have these two ingredients, it means that we can get this dynamic script text from anywhere. For instance, it could just as easily be read from a plain text file:
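(A minimal sketch; the file path here is made up for the example.)

# Read the script text from a file and execute it dynamically:
$CmdText = Get-Content -Path "C:\Scripts\SomeScript.ps1" -Raw
Invoke-Command -ScriptBlock ([scriptblock]::Create($CmdText))

And if it can come from a file, it can just as well come from a database table! Our next step becomes clear.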
Make it Data-Driven
Listen up, folks, this is the plan:
We’ll create a table in our database that will hold a “queue” of commands to execute for each remote computer (identified by the “TargetComputer” column):
CREATE TABLE [dbo].[CommandsQueue](
    [ID] [uniqueidentifier] NOT NULL PRIMARY KEY DEFAULT (NEWSEQUENTIALID()),
    [TargetComputer] [nvarchar](256) NOT NULL,
    [ScriptBlock] [nvarchar](MAX) NOT NULL,
    [QueuedDateUTC] [datetime] NOT NULL DEFAULT (GETUTCDATE()),
    [StartTimeUTC] [datetime] NULL,
    [EndTimeUTC] [datetime] NULL,
    [Output] [nvarchar](MAX) NULL
)
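Just to illustrate how the queue will be used, adding a command for a particular machine is a plain INSERT (the computer name here is a placeholder):

INSERT INTO [dbo].[CommandsQueue] ([TargetComputer], [ScriptBlock])
VALUES (N'REMOTE-PC-01', N'Write-Output "Hello from the queue!"');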
Next, we’ll create a stored procedure that each remote computer will execute to retrieve the next command it needs to run:
CREATE PROCEDURE [dbo].[GetNextCommandToRun]
    @Computer sysname = NULL
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE Q SET StartTimeUTC = GETUTCDATE()
    OUTPUT inserted.*
    FROM (
        SELECT TOP 1 *
        FROM [dbo].[CommandsQueue] WITH(READPAST)
        WHERE TargetComputer = ISNULL(@Computer, HOST_NAME())
        AND StartTimeUTC IS NULL
        ORDER BY QueuedDateUTC ASC
    ) AS Q
END
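Each remote machine will simply call it like so (when @Computer is omitted or NULL, the procedure falls back on HOST_NAME() to identify the caller):

EXEC [dbo].[GetNextCommandToRun] @Computer = N'REMOTE-PC-01';
-- Or, relying on HOST_NAME() to detect the calling machine:
EXEC [dbo].[GetNextCommandToRun];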
It’s critical that this is implemented as a stored procedure. This abstraction layer is what lets us change the logic later without having to update the PowerShell script on each remote machine. Remember: our scenario requires minimal direct interaction with the remote machines.
Similarly, we should also have a stored procedure to be executed when a command completes, which will preferably also save the command’s output in the table:
CREATE PROCEDURE [dbo].[FinishCommandExecution]
    @Computer sysname = NULL,
    @CommandId uniqueidentifier = NULL,
    @CommandOutput nvarchar(MAX) = NULL
AS
BEGIN
    SET NOCOUNT ON;

    UPDATE Q SET EndTimeUTC = GETUTCDATE(), [Output] = @CommandOutput
    FROM [dbo].[CommandsQueue] AS Q
    WHERE TargetComputer = ISNULL(@Computer, HOST_NAME())
    AND ID = @CommandId
END
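Reporting back once a command finishes is just as simple (the ID below is a made-up sample value):

EXEC [dbo].[FinishCommandExecution]
    @Computer = N'REMOTE-PC-01',
    @CommandId = 'A0E7C7D2-0000-4000-9000-0123456789AB',
    @CommandOutput = N'Hello from the queue!';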
Also, PowerShell-Driven
Most importantly, we need a PowerShell script that will put all of the above to good use. This is the script that will be executed periodically by each remote machine. Something like this, let’s say:
# Connect to the central SQL Server instance:
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Data Source=the_sql_server;Database=RemoteExecution;Integrated Security=True"
$conn.Open()

# Ask for the next command(s) queued for this machine:
$cmd = New-Object System.Data.SqlClient.SqlCommand
$cmd.Connection = $conn
$cmd.CommandText = "EXEC [dbo].[GetNextCommandToRun] @Computer"
$cmd.Parameters.Add("@Computer", [System.Data.SqlDbType]::NVarChar, 256).Value = $env:COMPUTERNAME # This takes the local computer's name from the environment variable

$ds = New-Object System.Data.DataSet
$da = New-Object System.Data.SqlClient.SqlDataAdapter($cmd)
$da.Fill($ds) | Out-Null

foreach ($r in $ds.Tables[0].Rows) {
    Write-Output "Running command: $($r.ID)"

    if ($r.ScriptBlock) {
        # Turn the text retrieved from the database into a script block and execute it,
        # capturing its output into $CommandOutput:
        Invoke-Command -ScriptBlock $([scriptblock]::Create($r.ScriptBlock)) -OutVariable CommandOutput

        # Report the execution and its output back to the central instance:
        $Query = "EXEC [dbo].[FinishCommandExecution] @Computer, @CommandID, @CommandOutput"
        $cmd = New-Object System.Data.SqlClient.SqlCommand($Query, $conn)
        $cmd.Parameters.Add("@Computer", [System.Data.SqlDbType]::NVarChar, 256).Value = $env:COMPUTERNAME
        $cmd.Parameters.Add("@CommandID", [System.Data.SqlDbType]::UniqueIdentifier).Value = $r.ID
        # This concatenates all output messages from the execution and delimits them with an end-line:
        $cmd.Parameters.Add("@CommandOutput", [System.Data.SqlDbType]::NVarChar, -1).Value = $CommandOutput -join "`n"
        $cmd.ExecuteNonQuery() | Out-Null
    } else {
        Write-Host "Nothing to run."
    }
}

$cmd.Dispose()
$conn.Dispose()
It’s a rather simplified version, but it has all of the essentials.
And last but not least, we should also have an easy way to add commands that we want to be executed, right?
After all, I’d rather not have to manually type a whole PowerShell script into SSMS. But what if I already have a PowerShell script lying around somewhere? Why not “import” it into that database table?
Hey, that’s a good idea! Let’s do it… With the power of PowerShell! 😃
param(
    [string]$TargetComputer = $env:COMPUTERNAME,
    [string]$ScriptInputFile = "inputScript.ps1"
)

# Read the entire contents of the script file into a string:
$filestream = (Get-Item -Path $ScriptInputFile).OpenText()
$ScriptBlock = $filestream.ReadToEnd()
$filestream.Close()

# Insert the script contents into the commands queue for the specified target computer:
$scon = New-Object System.Data.SqlClient.SqlConnection
$scon.ConnectionString = "Data Source=the_sql_server;Database=RemoteExecution;Integrated Security=True"

$cmd = New-Object System.Data.SqlClient.SqlCommand
$cmd.Connection = $scon
$cmd.CommandTimeout = 40
$cmd.CommandText = "INSERT INTO [dbo].[CommandsQueue]([TargetComputer],[ScriptBlock]) VALUES(@TargetComputer,@ScriptBlock)"
$cmd.Parameters.Add("@TargetComputer", [System.Data.SqlDbType]::NVarChar, 256).Value = $TargetComputer
$cmd.Parameters.Add("@ScriptBlock", [System.Data.SqlDbType]::NVarChar, -1).Value = $ScriptBlock

$scon.Open()
$cmd.ExecuteNonQuery() | Out-Null
$cmd.Dispose()
$scon.Dispose()
Yup, that should do it.
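By the way, if you save the above as, say, Add-CommandToQueue.ps1 (the file name is just my own suggestion), then queuing any existing script becomes a one-liner:

.\Add-CommandToQueue.ps1 -TargetComputer "REMOTE-PC-01" -ScriptInputFile "C:\Scripts\CollectDiskSpace.ps1"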
Now the only thing left to do is to create a Scheduled Task to run the PowerShell script on each remote machine, and then we can control all of them from one central location.
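For reference, registering such a Scheduled Task could look roughly like this (the script path, task name, interval, and account are all assumptions you’d adjust to your environment; more on the gMSA account in the security section below):

# Run once (as administrator) on each remote machine to register the polling script:
$action = New-ScheduledTaskAction -Execute "powershell.exe" -Argument "-NoProfile -ExecutionPolicy Bypass -File C:\Scripts\RunNextCommand.ps1"
$trigger = New-ScheduledTaskTrigger -Once -At (Get-Date) -RepetitionInterval (New-TimeSpan -Minutes 5) -RepetitionDuration (New-TimeSpan -Days 3650)
$principal = New-ScheduledTaskPrincipal -UserId 'MYDOMAIN\gMSA-RemoteExec$' -LogonType Password -RunLevel Highest
Register-ScheduledTask -TaskName "RemoteExecutionAgent" -Action $action -Trigger $trigger -Principal $principal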
But wait, there’s more!
The above is obviously just the bare minimum. We can expand this into a very flexible and robust framework that could do all sorts of things.
Arguments and Parameters
For example, what if we want to add parameters to the remote script?
We can use the “ArgumentList” parameter of the “Invoke-Command” cmdlet which accepts a simple array. The script block can then process these parameters normally.
For example, this works (note the declaration of the parameters $paramfirst and $paramsecond in the dynamic script block):
$CmdText = 'param($paramfirst,$paramsecond) Write-Output "arg1: $paramfirst , arg2: $paramsecond"'
$CmdObject = [scriptblock]::Create($CmdText)
$CmdParameters = "First parameter value","Second parameter value"
Invoke-Command -ScriptBlock $CmdObject -ArgumentList $CmdParameters
Output:
arg1: First parameter value , arg2: Second parameter value
Pretty cool, right? I’m sure you can think of a few ways to make this data-driven as well. 😉
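As a rough sketch of one such data-driven approach (the Arguments column and the delimiter here are just my own simplistic assumptions for illustration, not part of the table defined above), you could add an extra NVARCHAR(MAX) column to CommandsQueue and split it back into an array inside the polling loop:

# Assuming an [Arguments] column holding values delimited by "||":
$CmdObject = [scriptblock]::Create($r.ScriptBlock)
$CmdParameters = $r.Arguments -split '\|\|'
Invoke-Command -ScriptBlock $CmdObject -ArgumentList $CmdParameters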
What if I need to update the remote script?
That’s kinda like asking “who’s watching the watchers”, amiright?
Seriously, though: what if you need to update the script responsible for remote execution, without having to connect to each remote machine and replace it manually?
Well, why not do it… with the exact same remote execution framework that we already have???
“But Eitan, you magnificent virtuoso of a man!” I hear you ask, “How can a PowerShell script update its own contents?”
Simple! All we gotta do is use the “Set-Content” cmdlet and the built-in $MyInvocation.MyCommand object to cause the PowerShell script… to update itself! Kinda like this:
$NewScriptBody = 'Write-Output "This is the new script. I live here now."'
Set-Content -Path $MyInvocation.MyCommand.Source -Value $NewScriptBody
Give it a try! Save the above in a PowerShell script file and then run it. The script file will update itself! The only difference in our actual scenario is that $NewScriptBody would get its value from, let’s say, a database table holding the latest version of the script.
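To tie it back to our framework: if, for instance, the polling script checked for a newer version of itself on every run, it could look something like this (the AgentScriptVersions table and its columns are hypothetical, just for the sake of the example):

# A minimal sketch: pull the latest script body from a version table and overwrite this very script file.
$conn = New-Object System.Data.SqlClient.SqlConnection
$conn.ConnectionString = "Data Source=the_sql_server;Database=RemoteExecution;Integrated Security=True"
$conn.Open()
$cmd = New-Object System.Data.SqlClient.SqlCommand("SELECT TOP (1) [ScriptBody] FROM [dbo].[AgentScriptVersions] ORDER BY [VersionDateUTC] DESC", $conn)
$NewScriptBody = $cmd.ExecuteScalar()
$cmd.Dispose()
$conn.Dispose()

if ($NewScriptBody) {
    Set-Content -Path $MyInvocation.MyCommand.Source -Value $NewScriptBody
}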
WARNING: Obviously, if you accidentally send some kind of faulty scriptblock to replace the remote execution script, it could break the script, make it unable to connect to the central SQL Server instance, and thus render it unusable. Testing is super-duper important here.
But what about Security?
I hope I don’t really need to tell you how goddamn dangerous all of this is. We’re consolidating a whole lot of power into one central location, which could potentially impact many remote machines.
It’s enough for a malicious actor to intervene at any one point along the chain, and the damage could be immense.
So, here are a few guidelines for you to minimize the risks as much as possible:
- The Scheduled Task on the remote machines should be run by a gMSA (Group Managed Service Account), which is a type of Windows account that can be given permissions and can run services and scheduled tasks, but it doesn’t have a usable password and cannot be used to log into a machine interactively. You should then create a Windows login for it on the central SQL Server instance, which it will use to retrieve its next commands to execute.
- The permissions given to the Windows account running the scheduled tasks should be limited to only what it was meant to do. That includes both the permissions on the relevant remote Windows machines, as well as the permissions inside the central SQL Server database (see the T-SQL sketch after this list).
- The permissions on the central SQL Server database should be given to as few users as possible. It’s enough for someone to manually insert a record into the table in the database for it to affect all the remote machines that connect to it. So, naturally, if you have a bunch of logins on that instance with sysadmin or db_owner permissions… Then it’s gonna be a problem.
- TEST, TEST, TEST! You really don’t want to be sending off a script to a bunch of remote machines only to find out later that you had a very dangerous, very embarrassing mistake in it. When queuing a script to be run, you can use the TargetComputer column to limit it to one specific machine, so make good use of that for testing.
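To make the gMSA and least-privilege points concrete, granting just enough access on the central instance could look roughly like this (the domain, account, and database names are placeholders):

USE [RemoteExecution];
-- Create a login for the gMSA (note the trailing $ in the account name) and a matching database user:
CREATE LOGIN [MYDOMAIN\gMSA-RemoteExec$] FROM WINDOWS;
CREATE USER [MYDOMAIN\gMSA-RemoteExec$] FOR LOGIN [MYDOMAIN\gMSA-RemoteExec$];
-- Grant only what the polling script actually needs: executing the two stored procedures.
GRANT EXECUTE ON [dbo].[GetNextCommandToRun] TO [MYDOMAIN\gMSA-RemoteExec$];
GRANT EXECUTE ON [dbo].[FinishCommandExecution] TO [MYDOMAIN\gMSA-RemoteExec$];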
As you may have noticed, most of the above guidelines, in essence, are simply implementations of the Principle of Least Privilege. If you can think of even more ideas on how to further reduce the potential harm this could cause, then all the more Power to you (see what I did there?).
Conclusion
I sure learned a lot while working on this and I hope you learned something new as well from this blog post.
Moving forward, I’ll be developing an open-source project on GitHub to further expand this thing. There, I’ll be sure to add more documentation wherever possible.
You can head over to the repository on GitHub to get started:
https://github.com/MadeiraData/powershell-remote-execution
You can also submit issues to assist with suggestions and bug reports, and even fork it and create pull requests if you feel like helping out and getting your hands dirty!
I know this has little to do with SQL Server, but it’s a good way to get to know PowerShell a little better, which is becoming a very important tool in the arsenal of a Microsoft Data Platform engineer.
Additional resources
A while after originally writing this blog post, I learned about several publicly available tools that provide a similar solution for remotely managing machines and executing scripts using PowerShell. Some of them have limited free editions, and some come at a significant cost.
Understandably, all of them are much better polished than the solution I provide in this post, and have the added benefit of having a nice graphical interface to go along with them.
For more details, check out this article that provides a list of such tools:
Comments
This is a very interesting idea. I would make one suggestion concerning the structure of the CommandArguments table. Since the arguments/parameters will likely get concatenated together with the script block from the CommandsQueue table, you should consider changing the ArgumentValue column to NVARCHAR(MAX) to avoid the possibility of truncation, as described in this article: https://brentonrace.com/sql-server-string-concatenation-and-truncation/
Hi Mac, thanks for your comment.
Actually, the command arguments are not getting concatenated with the script block. They’re sent over as a string array to a separate parameter called ArgumentList.
It’s looking good, thanks for sharing. I have a question because I am slightly confused due to my novice knowledge in PS. How can I use the above to get info from remote machines, to check, let’s say, whether the SSRS service was stopped or started, and put this data into a central repo within a table? The thing is, we have a wide scale-out SSRS deployment, and since there is no SQL Server installed on those machines, a remote SQL query does not work. I thought of your approach but am not sure if it’s feasible.
Hi Rohan,
That is indeed feasible using my approach. You can check the status of a service using the Get-Service cmdlet in PowerShell. Some more info can be found here:
https://docs.microsoft.com/en-us/powershell/scripting/samples/managing-services
Thank you! Also, maybe it’s just me, pardon me for that, but can you mention in the blog post which script needs to be created where? From what I am understanding, the table and procs need to be created on the central SQL instance, but which of those PowerShell scripts needs to be on the target servers? Do I have to create and save it as a ps1 and put it in the target servers’ scheduled tasks? A little confused there. Also, regarding my target servers where I need my script to collect data: how will it write to my central repository? Does any login or permission need to be granted, as my central is a staging server just to dump data from those prod target servers? Thank you so much for the help!
Hi Rohan,
Looks to me like you understood the architecture very well, actually!
Yes, the SQL procedures are to be created on the central SQL Server. Yes, the ps1 script is to be run by a scheduled task on the target machines.
As the solution states, you can run whatever you want. So you could also run a PowerShell script that connects to wherever you want and does whatever you want.
Thank you! I am seeing issues when testing this. On the target server I saved the script write-output "$(get-service -displayname 'sql server reporting services' | select-object status)" and when executed, I got the first entry in the CommandsQueue table with the Output as @(status = running). But when I executed it a second time, it did not write anything to the table. I did that after 5 minutes, hoping a new UTC timestamp of the service status would be written. Before I schedule it via Task Scheduler, I need to understand if there is an issue here or if I am understanding this incorrectly, thanks.
Hi Rohan,
Please make sure that the remote machine is able to find an execution command to run. Perhaps after the first execution, the command is marked as done and is not executed again. You’ll need to add some changes to the code if you want the same command to be executed multiple times, as it’s not currently built in.
Great work.
Can probably be used as a mechanism to install SQL Server updates (CUs) on multiple machines.