Automatically Create Full Power Platform Pipelines With Power Automate
I've shared my thoughts about Personal Power Platform Pipelines before, although in less than a month they went GA and added new features, so I may follow up with another article on those. But it did get me thinking. Pipelines should be "For All", as Microsoft are calling Personal Pipelines, but with their current limitations I still don't think they are mature enough for medium to enterprise businesses. So how do we make them for all? With automation!
The idea is that Makers should be using Pipelines, but there is some administration effort to set up a Custom Host and then orchestrate each Pipeline. While this is manageable in smaller organisations, large to enterprise organisations face a struggle if you have hundreds of Environments and thousands of Makers. Let's automate the creation of full Custom Host Pipelines with a Canvas App and a Power Automate Cloud flow.
Starting with the Canvas App. In my basic example below, I'm using 3 drop down menus; however, this can easily be extended to more environments if your pipeline is something like Dev -> UAT -> Test -> Pre Prod -> Production. The drop downs contain tables with environment names and GUIDs that I put into a single collection.
The code for the collection is:
ClearCollect(
    colEnvironments,
    {
        EnvironmentName: drpDev.Selected.EnvName,
        Order: 1,
        Guid: drpDev.Selected.GUID,
        Type: "Dev"
    },
    {
        EnvironmentName: DrpUAT.Selected.EnvName,
        Order: 2,
        Guid: DrpUAT.Selected.GUID,
        Type: "Target"
    },
    {
        EnvironmentName: drpProd.Selected.EnvName,
        Order: 3,
        Guid: drpProd.Selected.GUID,
        Type: "Target"
    }
)
From the collection, we can trigger a flow and send the collection. To do this, we first convert the collection to JSON and then send it as a single payload to the flow.
Set(varJSONObject, JSON(colEnvironments));
'PowerApps-CreatePipeline'.Run(varJSONObject)
Create a Power Automate Cloud Flow with a Power Apps (V2) trigger (notice the space in the name) and add a single input parameter of type Text.
Next, a Parse JSON step just lets us interact with the data a bit more easily, but if you are proficient at writing Workflow Expression Language, you can skip this part.
I'm not going to provide the full code for this, other than the above screenshot, as you could be adding different columns, and you can just generate your schema from a sample.
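For reference, generating the schema from a sample of the payload built above should give you something along these lines (a sketch, assuming only the four columns from the collection):
{
    "type": "array",
    "items": {
        "type": "object",
        "properties": {
            "EnvironmentName": { "type": "string" },
            "Order": { "type": "integer" },
            "Guid": { "type": "string" },
            "Type": { "type": "string" }
        },
        "required": [ "EnvironmentName", "Order", "Guid", "Type" ]
    }
}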
Next, initialise 2 variables of type String. One is used to build a dynamic FetchXML query, and the other stores the previous record ID in a loop so it can be written to the next record.
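As a quick sketch, the two Initialize variable actions look like this (the names match what is referenced later in the flow):
Initialize variable: Name = varFetchXML, Type = String, Value = (empty)
Initialize variable: Name = varPrevRecordID, Type = String, Value = (empty)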
Let's build the dynamic FetchXML query now; we'll use it later on. We loop over the GUIDs from the Parse JSON step and add each one to an Append to string variable action, with some FetchXML code around it.
body('Parse_JSON')
<condition attribute="environmentid" operator="eq" value="@{items('Loop_and_Create_FetchXML')['Guid']}"/>
What the above code does is create a FetchXML condition for each environment that we can slot into another step, allowing us to query only the records we need rather than bringing back everything and filtering it later. We'll use this variable much later in the flow.
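For example, with the three environments from the Canvas App, the variable would end up containing one condition per GUID (placeholder values shown):
<condition attribute="environmentid" operator="eq" value="00000000-0000-0000-0000-000000000001"/>
<condition attribute="environmentid" operator="eq" value="00000000-0000-0000-0000-000000000002"/>
<condition attribute="environmentid" operator="eq" value="00000000-0000-0000-0000-000000000003"/>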
The next few steps are basically a slightly altered version of one of my other blog posts about how to check for and then create environments in a Custom Host. The differences are that I'm not pulling all environments using the Power Platform for Admins connector, but instead using the data from the Power App, and that we decide what type of environment to provision, Developer or Target, as this determines where it can be used in your pipeline. I'll still add the full steps below.
We need to list the environments from the custom host to make sure we don't create duplicates.
<fetch version="1.0" mapping="logical" no-lock="false" distinct="true">
    <entity name="deploymentenvironment">
        <attribute name="statecode"/>
        <attribute name="name"/>
        <attribute name="createdon"/>
        <order attribute="name" descending="false"/>
        <attribute name="environmentid"/>
        <attribute name="environmenttype"/>
        <attribute name="validationstatus"/>
        <attribute name="deploymentenvironmentid"/>
        <filter type="and">
            <condition attribute="statecode" operator="eq" value="0"/>
        </filter>
    </entity>
</fetch>
Then we shape the data from the Custom Host using a trick with the Select action to get a plain array of the environment IDs, so we can compare them against the IDs from the Canvas App, as they will then be the same type.
For this, use a Select action. The "From" will be the body/value of the List rows from the Custom Host, and for the Map we will use a little trick: switch the input from key value mode using the toggle, which lets you define the output yourself, and add the Environment Id into this field as dynamic content.
This automatically just creates an array of the values, no key value pairs required. This is a really cool trick!
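As a rough sketch (the List rows step name here is just a placeholder, use whatever yours is called):
From: body('List_rows_from_Custom_Host')?['value']
Map (text mode): item()?['environmentid']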
Then we filter the two arrays against each other. The "From" is the body of the "Parse JSON" step. In the filter query, the value on the left is the output of the "Select Pipeline Host Environment to Shape" step and the one on the right is the Guid from the "Parse JSON" step; you may need to write the right-hand side as item()?['Guid'] in my example. Doing this filters out environments that already exist in the Custom Host, leaving only the new ones provided by the Canvas App.
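In advanced mode, the condition would look roughly like this (a sketch, assuming "does not contain" as the operator and that the Select step's internal name matches its display name):
@not(contains(body('Select_Pipeline_Host_Environment_to_Shape'), item()?['Guid']))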
Next we need a condition. This checks whether the filtered array still contains data, and therefore whether we need to create any new environments in our Custom Host. The expression for this is:
length(outputs('Filter_existing_env_by_list_env')['body'])
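In the Condition action itself, this is compared against 0, something along the lines of the sketch below (the comparison direction is my assumption; "is not equal to 0" works just as well):
greater(length(outputs('Filter_existing_env_by_list_env')['body']), 0)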
At this point, I add another Parse JSON step to make it easier to work with the output of the filtered array.
Next we loop through and create the environments that are needed for our pipeline.
The Environment ID and Name I'm taking from my previous "Parse JSON to format data" step. You can select them as dynamic content from the picker, but the code is:
items('Loop_and_create_Deployment_Env')['Guid']
items('Loop_and_create_Deployment_Env')['EnvironmentName']
For the environment type, I use a basic If() statement:
if(equals(items('Loop_and_create_Deployment_Env')['Type'], 'Dev'), 200000000, 200000001)
Where the values equal the Choice values in the system. This should be the same for all pipelines, but just double check your system.
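Putting that together, the Add a new row action on the Deployment Environments table maps roughly as follows (column logical names taken from the FetchXML above; treat this as a sketch rather than an exact screenshot):
Name (name): @{items('Loop_and_create_Deployment_Env')['EnvironmentName']}
Environment Id (environmentid): @{items('Loop_and_create_Deployment_Env')['Guid']}
Environment Type (environmenttype): @{if(equals(items('Loop_and_create_Deployment_Env')['Type'], 'Dev'), 200000000, 200000001)}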
That should create your environments in the right way to be able to use them in a pipeline. This is different from my old blog post, where we created all of them as target environments to prevent personal pipelines.
Now onto Pipeline creation! Add a List rows action outside of the condition so we can retrieve the GUIDs of the records we just created or that already exist in the system. We run it against the Deployment Environments table, and this is where our dynamic FetchXML is used.
<fetch version="1.0" mapping="logical" no-lock="false" distinct="true">
    <entity name="deploymentenvironment">
        <attribute name="statecode"/>
        <attribute name="name"/>
        <attribute name="createdon"/>
        <order attribute="name" descending="false"/>
        <attribute name="environmentid"/>
        <attribute name="environmenttype"/>
        <attribute name="validationstatus"/>
        <attribute name="deploymentenvironmentid"/>
        <filter type="and">
            <condition attribute="statecode" operator="eq" value="0"/>
            <filter type="or">
                @{variables('varFetchXML')}
            </filter>
        </filter>
    </entity>
</fetch>
The above FetchXML has an "or" filter; combined with our FetchXML variable, we are searching only for records matching the Environment GUIDs we've populated, which lets us bring back the Dataverse GUID/record ID for each one.
Again, we shape the data using a select to make it easier to work with.
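As a sketch, the Map of that "Select Shape Rows" action would be along these lines (the property names are what the later steps reference; the Dataverse columns come from the FetchXML above):
Guid: item()?['environmentid']
RecordID: item()?['deploymentenvironmentid']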
Then we loop through and build a single array: we take the array we got from the Canvas App, which contains additional data like the order and type, and combine it with the Dataverse GUID for each record. That way we have all the data in one array and can easily work with it when creating records.
The data we will loop through is from the "Parse JSON" step at the start.
body('Parse_JSON')
Then we do another Filter array. The "From" is the output of the "Select Shape Rows" step:
body('Select_Shape_Rows')
In the filter query, the left side is the Guid from the "Select Shape Rows" output and the right side is the Guid from the current item of the loop.
Left
item()?['Guid']
Right
items('Loop_to_create_single_array')?['Guid']
Then we can combine the array inside the loop with a compose step and the following code.
{
"EnvironmentName": @{items('Loop_to_create_single_array')?['EnvironmentName']},
"Guid": @{items('Loop_to_create_single_array')?['Guid']},
"Order": @{items('Loop_to_create_single_array')?['Order']},
"Type": @{items('Loop_to_create_single_array')?['Type']},
"RecordID": @{first(body('Filter_array'))?['RecordID']}
}
Outside of the loop, we can reference this Compose action's outputs, which gives us everything from each iteration as a single array. This is something we lovingly refer to as Pieter's Method.
I also add a sort to this, based on the order property, to ensure we get the right order for the array.
sort(outputs('Compose_Combined_Array'), 'Order')
Then we need to create the Pipeline record. You may want to pass a name for the Pipeline in from the Power App or come up with something relevant to your organisation; I created the one below for testing.
Next we need to relate this newly created Pipeline record to the development environment. In my Canvas App, I ensure my development environment is number 1 in the order, so with the sort above it is the first item in the array I'm working with.
Use the Relate a row action for Dataverse. The table is the Deployment Pipeline and the row ID is the Pipeline we just created. The relationship is the one in the screenshot.
The "Relate With" is a tricky code to get right. It's the URL of your dataverse environment or the oData ID/link for the record. You can find this as an output of the previous step if you are unsure. Then inside the brackets we add the GUID/Dataverse Record ID of the first row from the array which would be your development environment.
https://<YourOrgHere>/api/data/v9.1/deploymentenvironments(@{first(outputs('Compose_Outputs_of_Loops'))?['RecordID']})
Last but not least, we need to create our pipeline stages, using a loop, a create record action and a variable.
We're going to loop through the array of the environments but we will skip over the first row as that is our development environment and we just need all subsequent environments in the pipeline.
skip(outputs('Compose_Outputs_of_Loops'), 1)
Then we create the pipeline stages.
The deployment pipelines lookup is to the Pipeline record we created earlier. The environment name and the lookup to the target environment is from the array. The previous deployment stage is a small formula.
if(
    empty(variables('varPrevRecordID')),
    '',
    concat('deploymentstages(', variables('varPrevRecordID'), ')')
)
This code checks whether the variable contains data: if it does, we concatenate it into the lookup value; if it does not, we leave the field blank. The reason we have this formula, and the next step in the flow, is to determine the order in which the pipeline stages run. This is why we pass an order in from the app and why we sorted the array on it earlier, so that if a stage is supposed to run after another one, we populate that here.
The Set variable action then updates the variable with the ID of the record just created, so it can be used the next time the loop runs.
body('Create_Deployment_Stages_Runs')?['deploymentstageid']
And that's it. Once it runs, you have a full Custom Host pipeline orchestrated. I think this will be useful for large/enterprise organisations that don't want the admin overhead of creating multiple pipelines but also don't want to give people access to the pipeline model-driven app. This can also be extended to add in pre-deployment steps, approvals of pipeline creation, etc.
Hope this is useful and again, if there is demand for this, I can release the flow as a template.
Ciao for now
MCJ