This article and the accompanying resources are a good starting point for learning how to design and debug a .NET Framework PowerShell Provider.
In a prior article, Constructing Your First PowerShell Provider, I explained how to build a basic PowerShell Provider. If you've experimented with writing Provider code, you've probably realized that building a Provider is unlike building, for example, a Windows Presentation Foundation (WPF) application. There are no buttons to wire up and no existing pattern for partitioning your code. Most likely, questions abound: How should I think about the user interface? What should I return to PowerShell? How do I unit test my Provider code?
My goal is to demystify some PowerShell Provider design concepts and offer some debugging techniques. First I'll provide tips for configuring your development environment and move into design concepts later in the article.
Prepping for 2.0
If you're still on PowerShell version 1.0, move to version 2.0. A complete review of all the new features in 2.0 is beyond the scope of this article, but suffice it to say everything is better and easier in 2.0. The accompanying sample code is built on 2.0 and Microsoft Visual Studio 2008. If you're on Windows 7 or Windows Server 2008 R2, you're already on 2.0.
You'll also need the PowerShell SDK; download and install the PowerShell 2.0 SDK. The sample code requires a default install of the SDK. Included in the SDK are samples you'll find helpful. Also, remember to call:

Set-ExecutionPolicy Unrestricted

before you run any of the accompanying scripts. If you're unsure of the version you are currently running, invoke:

$Host.Version

to get the version number. Don't be fooled by the Version 1 folder name when you reference the PowerShell assemblies.
The Resources section below includes links to the appropriate downloads.
Now that the environment is configured, it's time to delve into some code.
One of the more difficult aspects of developing a Provider is duplicating the environment the provider executes in. As I mentioned earlier, a PowerShell Provider is different from other .NET applications a developer may be familiar with.
Testing tools can be adopted to do some of the testing, but once a developer has moved beyond unit testing the underlying objects, nothing exists to feed a PowerShell provider the commands it will receive from the PowerShell environment.
In the sample, I built some Scaffolding to feed the PowerShell Provider a script. A developer can use the Scaffolding to do things like test a PowerShell script to exercise the Provider or walk through script execution against the underlying PowerShell Provider code using, for example, the Microsoft Visual Studio Debugger. Also, a novice Provider developer can leverage the Scaffolding to understand how PowerShell invokes the provider, looking at, for example, when, where, and how PowerShell calls the Provider code.
Here are some of the key parts of the Scaffolding followed by an explanation of how it all works.
Collection<PSObject> results = null;
var ps = System.Management.Automation.PowerShell.Create();
var cmds = new CommandFileReader(pathToFile);
foreach (var cmd in cmds)
{
    var pipeline = ps.Runspace.CreatePipeline();
    pipeline.Commands.Add(new Command(cmd.ToString(), true));
    results = pipeline.Invoke();
    foreach (var res in results)
    {
        Console.WriteLine(res); // echo each result, much as the console host would
    }
}
Runspaces and Pipelines are part of the underlying data structures that the PowerShell command prompt leverages, so by using the same data structures the Scaffolding is effectively a PowerShell host. PowerShell executes its commands in the context of a Runspace. The Runspace must create a Pipeline, add commands to the Pipeline, and invoke the Pipeline.
Commands come from a script text file; each command is a line of text. The CommandFileReader exposes the underlying script file as a collection of strings. CommandFileReader implements the IEnumerable interface, allowing a developer to utilize the foreach statement to iterate through the file. Each time the foreach goes to the next item in the collection, it invokes the MoveNext function on the CommandFileReader's underlying enumerator.
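The CommandFileReader itself is part of the sample download rather than the article. A minimal sketch of such a reader, assuming one command per line and using a C# iterator to drive MoveNext, might look like this:

```csharp
using System.Collections;
using System.Collections.Generic;
using System.IO;

// Hypothetical sketch: exposes a script file as a sequence of strings,
// one command per line, via IEnumerable<string>.
public class CommandFileReader : IEnumerable<string>
{
    private readonly string _pathToFile;

    public CommandFileReader(string pathToFile)
    {
        _pathToFile = pathToFile;
    }

    public IEnumerator<string> GetEnumerator()
    {
        // yield return produces one item each time foreach calls MoveNext
        foreach (var line in File.ReadAllLines(_pathToFile))
        {
            if (line.Trim().Length > 0) // skip blank lines
                yield return line;
        }
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}
```

The real reader in the sample may handle comments or line continuations; this sketch only shows the IEnumerable plumbing the article describes.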
More resources on Runspaces and Pipelines appear at the end of the article.
That covers debugging; now I'm shifting to Provider architecture and design, starting with some "pointers".
Data Structure "Pointers"
Below are some common functions in a Provider.
protected virtual string GetChildName(string path);
protected virtual string GetParentPath(string path, string root);
protected virtual bool IsItemContainer(string path);
protected virtual string MakePath(string parent, string child);
protected virtual bool HasChildItems(string path);
protected virtual void NewItem(string path, string itemTypeName, object newItemValue);
protected virtual object ItemExistsDynamicParameters(string path);
protected virtual void SetItem(string path, object value);
Looking at the functions above, there are two important implementation concepts a developer must grasp.
- Providers receive and return .NET Objects
- Hierarchy is everywhere, encoded in the idea of the Path.
So, picking a design that moves data into and out of the Provider using a hierarchical path and .NET Objects will simplify a Provider design.
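Because every Provider function receives the location as a Path string, it helps to turn the Path into a list of segments once and hand that list to the navigation code. A minimal sketch (the helper name and separator handling are my own):

```csharp
using System;
using System.Collections.Generic;

public static class PathSplitter
{
    // Hypothetical helper: turn a provider path like "Config\Servers\Web01"
    // into the ordered list of segments the navigation code walks.
    public static List<string> ToSegments(string path)
    {
        return new List<string>(
            path.Split(new[] { '\\' }, StringSplitOptions.RemoveEmptyEntries));
    }
}
```

For example, ToSegments(@"Config\Servers\Web01") yields the three segments Config, Servers, and Web01, ready for the recursive lookup shown later in the article.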
Most Providers have a "\" navigation interface similar to the one commonly associated with a hierarchy of directories and their contents. Since this navigation is more or less the default behavior of the PowerShell Provider classes, and many IT professionals are already familiar with it, I highly recommend supporting it. The key to supporting the navigation is identifying whether a cmdlet is working on an Item or a Container (a collection of Items) and creating or returning an object of the appropriate class. .NET Reflection can serve both purposes.
Here is code from the sample illustrating what I mean.
private object FindMemberContainer(string objName, object obj)
{
    PropertyInfo[] infos = obj.GetType().GetProperties();
    object val = null;
    foreach (PropertyInfo info in infos)
    {
        if (info.Name.ToUpper() == objName.ToUpper())
        {
            val = info.GetValue(obj, null);
        }
    }
    return val;
}
Reflection allows a developer to create object instances and set object properties by working with the object's metadata. In the example, simple classes with a default constructor and fundamental types comprise Items; Dictionaries with a string key and class-typed values comprise the Containers. So, for example, a class with two "Child" branches will contain two Dictionaries. The code also leverages the recursive properties of a Path. A simple class like the one below can be leveraged to move through the hierarchy, from class, to the Dictionaries inside the class, to the Items inside the Dictionaries. Here is code from the ClassNavigator class in the sample.
public object FindObject(List<string> objPath, object obj)
{
    object objValue = null;
    if (obj == null) // stop right there
    {
        objValue = null;
    }
    else if (objPath.Count == 1) // found the bottom
    {
        objValue = this.SearchForObject(objPath[0], obj);
    }
    else
    {
        var newPath = new List<string>(objPath);
        newPath.RemoveAt(0); // drop the segment just resolved
        object objFound = this.SearchForObject(objPath[0], obj);
        var nav = new ClassNavigator();
        objValue = nav.FindObject(newPath, objFound);
    }
    return objValue;
}
private object SearchForObject(string objName, object obj)
{
    object val = this.FindInContainer(objName, obj);
    if (val == null)
    {
        val = this.FindMemberContainer(objName, obj);
    }
    return val;
}

private object FindInContainer(string objName, object obj)
{
    object val = null;
    // Lookup if container
    var dict = obj as IDictionary;
    if (dict != null && dict.Contains(objName))
    {
        val = dict[objName];
    }
    return val;
}
As you can see, this class leverages the function I showed earlier, mapping a Path to a destination into the hierarchy of classes. The class also takes advantage of some of the properties of containers in the .NET Framework, in particular properties of the Dictionary collection.
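To make the data model concrete, here is a hypothetical Item/Container pair in the style the article describes; the class and property names are my own, only the pattern (scalar properties comprise an Item, a Dictionary holds a "Child" branch) comes from the sample:

```csharp
using System.Collections.Generic;

// Hypothetical Item: a default constructor and fundamental-type properties.
public class Server
{
    public string Name { get; set; }
    public int Port { get; set; }
}

// Hypothetical parent class: one "Child" branch, so one Dictionary.
public class Farm
{
    public Farm()
    {
        // The default constructor creates the Container.
        Servers = new Dictionary<string, Server>();
    }

    public Dictionary<string, Server> Servers { get; set; }
}
```

With this model, a Path such as Servers\Web01 resolves in two steps: Reflection maps the first segment to the Servers property, and a Dictionary lookup maps the second segment to the Item.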
.NET Reflection enables other things, in particular Dynamic Parameters.
Dynamic Parameters via Reflection
Common PowerShell cmdlets like Get-Item, Remove-Item, and New-Item have a standard set of parameters. Providers can supplement the standard parameters with their own set of parameters, called Dynamic Parameters. Earlier I mentioned that .NET Reflection makes this easier. Here is the sample code leveraging Reflection to generate Dynamic Parameters.
public static RuntimeDefinedParameterDictionary ParametersFactory(Type classType)
{
    var dict = new RuntimeDefinedParameterDictionary();
    var parameterSetName = classType.Name;
    PropertyInfo[] infos = classType.GetProperties();
    foreach (PropertyInfo info in infos)
    {
        // Skip if it is a container
        if (!info.PropertyType.ToString().Contains("Dictionary"))
        {
            var attrib = new ParameterAttribute();
            attrib.Mandatory = true;
            attrib.ValueFromPipeline = false;
            attrib.ParameterSetName = parameterSetName;
            attrib.HelpMessage = "Datatype " + info.PropertyType.Name +
                " class " + parameterSetName;
            var col = new Collection<Attribute>();
            col.Add(attrib);
            var runDefParm = new RuntimeDefinedParameter(info.Name, info.PropertyType, col);
            dict.Add(info.Name, runDefParm);
        }
    }
    return dict;
}
As you can see in the sample above, Reflection simply scans a class and extracts the properties to generate the parameters. A developer could, for example, supplement the code above with a set of Custom Attributes to define groups of parameters. .NET Reflection can also be enlisted to create instances of a class and apply the Dynamic parameters to a new instance. The code below implements this behavior.
public static ItemComponent InstanceFactory(RuntimeDefinedParameterDictionary parms, Type classType)
{
    object obj = Activator.CreateInstance(classType);
    // Skip containers; only scalar properties map to Dynamic Parameters
    var propLookup =
        from p in obj.GetType().GetProperties()
        where !(p.PropertyType.ToString().Contains("Dictionary") ||
                p.PropertyType.ToString().Contains("List"))
        select p;
    foreach (var prop in propLookup)
    {
        prop.SetValue(obj, parms[prop.Name].Value, null);
    }
    var component = new ItemComponent(parms["NameKey"].Value.ToString(), obj, classType);
    return component;
}
In both examples above, the code skips over Containers like Dictionaries and Lists; the code assumes that the default constructor of the class will create the Container. As I mentioned earlier, Dictionary collections contain the children of a particular Item. Finally, I have a grab bag of other thoughts you may find helpful.
Separation of Concerns and other Ideas
Dividing the code into PowerShell Provider operational code and Provider "Business Logic" code allows a developer to separate things like exception handling and data structure navigation. So, for example, behaviors associated with being inside of PowerShell can be separated from operations on the core Provider data structure. The sample didn't quite adhere to this policy.
Also consider extension methods rather than Interfaces or subclassing, especially when dealing with a common data structure shared throughout an application. Often a shared data structure requires operations that should reside in two different assemblies. Rather than creating a third assembly and making everything public, separating the code into extension methods kept internal to each assembly decouples the assemblies from each other and allows a developer to keep related code together in the appropriate assembly.
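As a sketch of that idea, an assembly can keep its own operations on a shared type private to itself by declaring them as internal extension methods. The class and method names here are hypothetical; only the technique is from the article:

```csharp
using System;

// Minimal stand-in for a data structure shared across assemblies.
public class ItemComponent
{
    public string Name { get; set; }
}

// Lives in the assembly that needs the operation; 'internal' keeps it
// from leaking into the other assemblies that share ItemComponent.
internal static class ItemComponentExtensions
{
    // Hypothetical operation that belongs only in this assembly.
    internal static bool HasName(this ItemComponent item, string name)
    {
        return string.Equals(item.Name, name, StringComparison.OrdinalIgnoreCase);
    }
}
```

Each assembly can carry its own extension class like this one, so the shared type stays small and no third "common operations" assembly is needed.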
Often a developer must locate a Configuration file used by a Provider. When a Provider is running in a Command Shell the BaseDirectory for the CurrentDomain may not be where the Configuration file is located. Many Providers use PowerShell Variables or .NET Environment variables to locate the Directory with the Configuration file. I prefer something like the code below.
var dirName = (new FileInfo(this.GetType().Assembly.Location)).DirectoryName;
The code above assumes that the assembly is located in the same Directory as the Configuration file. So if, for example, the assembly is located in the Global Assembly Cache, the code above will not have the desired effect.
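Putting that together, locating a configuration file that sits beside the Provider assembly might look like the sketch below; the helper and file name are hypothetical:

```csharp
using System.IO;
using System.Reflection;

public static class ConfigLocator
{
    // Resolve a file sitting in the same directory as the given assembly.
    // As noted above, this breaks down if the assembly loads from the GAC.
    public static string GetConfigPath(Assembly asm, string fileName)
    {
        var dirName = Path.GetDirectoryName(asm.Location);
        return Path.Combine(dirName, fileName);
    }
}
```

A Provider could call ConfigLocator.GetConfigPath(typeof(ConfigLocator).Assembly, "Provider.config") at startup, independent of the Command Shell's current directory.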
A PowerShell Provider implementation must fulfill some unique requirements and therefore can be as varied as any other type of .NET application. At some level, though, all Providers must deal with returning objects, Path navigation, and debugging. This article and the accompanying Resources are a good starting point for learning how to design and debug a PowerShell Provider.
PowerShell 2.0 components
PowerShell 2.0 SDK
"How PowerShell Works" - this is a great overview of how the RunSpace and Pipeline work.