AVD – Show total sessions across all pools / subs

The AVD workbooks from Microsoft and other sources like ITProCloud are nice, but they are based on individual host pools. What if you need to know your total session user count over time? The query below does exactly that. Run it in Azure Monitor against the Log Analytics workspace that captures your host pool diagnostic data.

let StartWin = startofday(ago(30d));
let EndWin = endofday(ago(1d));
let lookbackWindow = 1d;
let StartCountDay = StartWin - lookbackWindow;
WVDConnections
| project CorrelationId, State, TimeGenerated, UserName
| as Connections
| where State == "Started"
| extend StartTime = TimeGenerated
| join kind=fullouter
(
    Connections
    | where State == "Completed"
    | extend EndTime = TimeGenerated
)
on CorrelationId
| extend EndTime = coalesce(EndTime, EndWin) // if the connection hasn't ended yet, use the end of the window
| where EndTime >= StartCountDay  // drop connections that ended before our window started
| extend StartTime = coalesce(StartTime, StartCountDay)  // if the start aged off, set it to the start of the lookback window
| where StartTime <= EndWin
| extend CorrelationId = coalesce(CorrelationId, CorrelationId1)  // fix fields that only came from a completed record
| extend UserName = coalesce(UserName, UserName1)
| project StartTime, EndTime, CorrelationId, UserName  // trim columns down to just what we need
| extend StartTime=max_of(StartTime, StartCountDay), EndTime=min_of(EndTime, EndWin)  // chop connections to window
| extend _bin = bin_at(StartTime, 1d, StartCountDay)  // #1 start of first day connection appears
| extend _endRange = iff(EndTime + lookbackWindow > EndWin, EndWin,
                             iff(EndTime + lookbackWindow - 1d < StartTime, StartTime,
                                    iff(EndTime + lookbackWindow - 1d < _bin, _bin, _bin + lookbackWindow - 1d))) // #2 last day connection will appear
| extend _range = range(_bin, _endRange, 1d) // #3 create a start of day timestamp for every day connection existed and/or day it will be counted
| mv-expand _range to typeof(datetime) // #4 
| summarize Users = dcount(UserName) by Days=bin_at(_range, 1d, StartCountDay) // #5 sum startofday timestamps
| where Days >= StartWin // #6 drop days we don't want to display
| sort by Days asc
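The core idea behind steps #1 to #5 can be sketched outside KQL as well: expand each session interval into the days it spans, then count distinct users per day. Below is a minimal Python sketch with hypothetical sample data; it ignores the lookback carry-over for simplicity.

```python
from datetime import date, timedelta

# Hypothetical session records: (user, start day, end day), mirroring the
# Started/Completed pairs joined on CorrelationId in the KQL query.
sessions = [
    ("alice", date(2024, 1, 1), date(2024, 1, 3)),  # spans three days
    ("bob",   date(2024, 1, 2), date(2024, 1, 2)),  # single day
    ("alice", date(2024, 1, 3), date(2024, 1, 4)),  # second session, same user
]

def users_per_day(sessions):
    """Expand each session into the days it spans (the range/mv-expand
    steps), then count distinct users per day (the dcount step)."""
    daily = {}
    for user, start, end in sessions:
        day = start
        while day <= end:
            daily.setdefault(day, set()).add(user)
            day += timedelta(days=1)
    return {day: len(users) for day, users in sorted(daily.items())}

print(users_per_day(sessions))
```

Note that alice appears only once on 2024-01-03 even though two of her sessions touch that day; that is exactly what dcount(UserName) gives you over mere session counting.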

Power Automate – Combine multiple JSON objects for Response

Today I was looking at running multiple HTTP requests in Power Automate and combining all of the data into one JSON object to respond back to my PowerApp.

I browsed through various blogs and articles but didn't find an efficient solution that didn't require a lot of steps or complicated data manipulation. I wanted to simply return the object from Response, without any Apply to each or other sorcery in between.

My problem, though, was that some of the HTTP return objects had the same schema. In the beginning I was somewhat successful using the union(outputs('HTTP')['body'], outputs('HTTP_2')['body']) function. But this only helped for JSON objects with different schemas, as union simply overwrites properties that have the same name.

After some struggle I thought: why not simply build my own schema and refer to the output of the HTTP queries? And it worked! Power Automate can be so simple. It keeps amazing me 🙂

That way I was able to combine all JSON objects from my HTTP calls into one response and then properly process it from the calling PowerApp. Cool stuff!
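The overwrite behaviour of union() and the fix can be illustrated with a plain dict merge in Python; the payloads here are made up for the example.

```python
# Power Automate's union() behaves like a dict merge: when two objects
# share a key, the right-hand value wins and data is silently lost.
body_1 = {"users": ["alice"], "count": 1}
body_2 = {"users": ["bob"], "count": 1}

merged = {**body_1, **body_2}  # mimics union(body_1, body_2)
print(merged)                  # body_1's "users" is gone

# Wrapping each response body under its own key keeps both payloads
# intact, which is the "build your own schema" approach described above.
combined = {"first": body_1, "second": body_2}
print(combined)
```

In the flow itself, the equivalent of `combined` is a Compose (or the Response body directly) with your own property names, each referring to the body of one HTTP action.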

PowerApps + Gitlab + Terraform + Azure Virtual Desktop

Streamlining your AVD Project Intakes

Welcome to Part 1 (Frontend) of my multi-part series on automating your project intake request process. This article describes how to automatically roll out AVD host pools based on (M365) user input. I will try to give you a good enough picture to replicate it, but keep in mind that there are so many specifics in how we implement these technologies that this will not be an easy-to-copy guide. See it as a blueprint and swap out technologies or steps where needed. I see a lot of folks using ADO as their CI/CD tool, or even Bicep as the IaC tool; all of that can probably work. By no means is this the perfect solution, but it helped us automate a lot of our work.

My problem

I work for quite a big company that uses numerous vendors, which process a variety of tasks for us. The vendors we work with get (A)AD accounts from us, but not hardware, and most of them need access to our internal systems. We had been using Horizon View for this for almost a decade, but now Azure Virtual Desktop has become our new standard. When on-boarding new vendors, we want to separate them into their own Resource Group, AVD Host Pool, Storage Accounts and so on. Every vendor has different requirements, so we need to be able to separate them.

The design

That's a rough diagram of our setup. We will focus on the red circle; everything else is already stood up by our enterprise architecture group, and we tap into it.

basic architecture

The red circle will include:

  • Resource Group
  • Host Pool
  • Virtual Machine(s)
  • DSC extensions for domain join and host pool join

Our requirements

  • Let users request their own AVD pools (for either new projects, new vendors that they hired or simply for testing AVD)
  • These requests need to go through an approval process (cost center owner)
  • Deployment needs to be consistent with our Azure landing zone policy (tags, naming, etc.)
  • State files need to be kept in case the requester wants to modify the number of machines or my team wants to clean up the pool again

My solution

We want IaC automation. Even better, a solution users can self-service. And ideally (yes, there is more) the whole process should happen in a quality-controlled manner, where changes can be made and tested without impacting production. While our host pools are all connected to the same infrastructure, you can use the concept below to create separate vnets & subnets and put NSGs on them. All possible! Tool-wise, we defined Gitlab.com as our DevOps tool for pretty much all automation projects. And since we are also a Microsoft shop, I simply made use of the apps that come with that toolkit. So that is what I went with, and those are the prerequisites for this blog post.

Frontend: SharePoint, PowerApps and Flow

Frontend Logos

Codebase & CI/CD: Gitlab

Gitlab logo

IaC: Terraform

terraform logo

Frontend

SharePoint

First steps first. We have to create something that users can enter their data into and that keeps a history of records. While PowerApps is a very complex tool, all we have to start with is creating a SharePoint List. Set up that list with the columns required for your project. A good set of columns to start with:

  • ProjectTitle (Single line of text)
  • Owner (Person or Group)
  • Justification (Multiple lines of text)
  • Region (Choice)
  • Number of Machines (Number)
  • CostCenter (Single line of text)

In addition to these, SharePoint automatically adds more columns that we are going to use later (e.g. CreatedBy or ID). We can also add more features, like Spot instances for validation environments, multi-session or single-session deployments, or even a column for choosing VM size. But that will be part of an advanced section 🙂

PowerApps

After you have set up the SharePoint List, you can start adding a PowerApps form frontend. To do that, click "Integrate > Power Apps > Customize forms". Feel free to add whatever information you need for deploying your project later. Project Title, Owner, Cost Center and Region are enough to start with.

Below is an example of my production form.

powerapps example

Flow

Flow will be the frontend brain. When a user submits the form above, a new SharePoint list item is created. Once that item is created, Microsoft Flow triggers and picks up the work. We will use Flow to manage the user-facing interaction through O365 and hand over to Gitlab using the Pipeline Trigger API. Below is an example of the flow we are using to give you a quick start.

Watch out: to make API calls, the owner of the flow has to have a Flow Premium license assigned. The good news is you only need one, no matter how many people use the PowerApp that feeds it.

flow example

If the Owner's manager approves, the next flow kicks off, which does the actual work.

if approved

As you can see, we do some housekeeping during the flow, like updating the status column or sending out emails to keep users posted on the progress. The magic really happens inside the PipelineTrigger step, though. This is where we call the Gitlab API to hand over our payload (the user input) to Terraform. The flow then repeatedly calls the Gitlab API to report on the status of the pipeline. If it succeeds, that's fine; if not, we send a message to the AVD team to check what's going on.

The pipeline trigger as an example below. The <> values come from dynamic fields in flow.

https://gitlab.com/api/v4/projects/1234/trigger/pipeline?token=xxxxxxxx&ref=<branch-name>&variables[projectTitle]=<projectTitle>&variables[owner]=<Owner>&variables[CostCenter]=<CostCenter>&variables[Region]=<Region>
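GitLab's pipeline trigger endpoint takes a token, a ref, and the pipeline variables as variables[NAME] parameters (and is called as a POST). A minimal Python sketch of how the call Flow makes is assembled; the project ID, token and variable values are placeholders, not real credentials.

```python
from urllib.parse import urlencode

def trigger_url(project_id, token, ref, variables):
    """Build the GitLab pipeline-trigger URL. Pipeline variables are
    passed as variables[NAME]=value query parameters."""
    params = {"token": token, "ref": ref}
    for name, value in variables.items():
        params[f"variables[{name}]"] = value
    return (f"https://gitlab.com/api/v4/projects/{project_id}"
            f"/trigger/pipeline?{urlencode(params)}")

url = trigger_url(1234, "xxxxxxxx", "main", {
    "projectTitle": "Demo",
    "Owner": "alice",
    "CostCenter": "4711",
    "Region": "westeurope",
})
print(url)
```

In the flow, the same values come from the dynamic fields of the SharePoint item instead of the hard-coded dict above.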

This should cover the frontend aspect for now. In the next article I'll explain more about setting up the Gitlab pipeline.

Assign RBAC Custom Role on Management Group Level

Recently I had to add custom roles to our management group, and since this is not supported through the GUI, I had to go through PowerShell.

Create the .json file first and set your management group in the AssignableScopes section.

{
  "Name": "Start VM on Connect",
  "Id": null,
  "IsCustom": true,
  "Description": "Allowed starting up VMs",
  "Actions": [
    "Microsoft.Compute/virtualMachines/start/action",
    "Microsoft.Compute/virtualMachines/read"
  ],
  "NotActions": [],
  "AssignableScopes": [
    "/providers/Microsoft.Management/managementGroups/mgmt-wvd"
  ]
}

After that, run the following command specifying the json file you setup.

New-AzRoleDefinition -InputFile C:\temp\RBAC.json

That's it 🙂

(FPV) Quadcopter X Setup

So as of yesterday, I finally finished something that I had had in mind for such a long time.

My (FPV) Quadcopter X setup!

As I do not have any flying skills yet, I will not get the FPV (First Person View) equipment yet.
For now I definitely want to practice; otherwise I would just crash my stuff.

This is my parts list, and I am really satisfied with the quality and performance of the parts.
You can basically call it a budget build, but it does not trade quality for price. So it's not super cheap.

Quadcopter Parts

1x AfroFlight Naze32 Acro AbuseMark FunFly Controller – Soldered version (Horizontal Pin)
2x Turnigy nano-tech 1500mah 3S 25~50C Lipo Pack
6x Afro ESC 20Amp Multi-rotor Motor Speed Controller (SimonK Firmware)
6x DYS BE1806-13 Brushless Motor for Multirotor (2300KV) 24g
1x Turnigy 9X 9Ch Transmitter w/ Module & 8ch Receiver (Mode 2) (v2 Firmware)
10x Gemfan 5030 Multirotor ABS Propellers One Pair CW CCW (Black)
1x Tarot 250mm Mini Through The Machine Quadcopter With PCB

Accessories

1x 3W Red LED Alloy Light Strip 120mm x 10mm (2S-3S Compatible)
1x (Cutting Mat) WEDO Schneideunterlage Cutting Mat, selbstschließende Oberfläche, 45 x 30 x 0,3 cm CM 45, grün
1x (Solder Iron) OHE Profi Lötstation Starterset – Lötkolben + Station + Lötzinn + Lotsaugpumpe + Spitze
1x (Lipo Beeper) 1-8s Spannung Lipo Akku Alarm Checker Schutz Anzeiger
1x (Battery Charger) Andoer SKYRC iMAX B6 Mini Profi Balance Charger / Disch für RC-Akku Lade ( SKYRC iMAX B6, B6 Mini Balance Charger )
1x (Power Adapter for Battery Charger) LEICKE Netzteil 60W 12V 5A 5,5*2,5mm für LCD TFT Bildschirm Monitor, LED Strips, NAS, ext. Festplatten, Pico-PSU bis 60W

Some things to mention

The Afro 20A ESCs have different bullet connectors than the motors, so be aware of that if you do not want to solder.
Sadly the Afro 12A ESCs were out of stock when I ordered, but the 20A ones do the same job.

Build everything starting from the motors to the "wings" to the frame.
After that, start with the transmitter and flight controller.
Try to picture the whole setup at the beginning, or just check the internet for similar builds.

One thing to mention is this YouTube playlist that helped me a lot during the whole process:

 

Keep in mind that you need to check the motor layout (which motor goes where) on your flight controller.

At first I wired everything counting from 1 to 4, starting at the upper left.
Big mistake! You need to check the layout in your flight controller's configuration software.

Gallery

https://plus.google.com/u/0/photos/109230190124733572758/albums/6155262292044435185

:: PowerCLI | Remove VMs

Just a quick script from me that helps delete a bunch of VMs.
You need a list with all computer names in a .txt or .csv file for that.

Uncomment the Get-Credential part if you are not running the ISE as an admin that has access to the vCenter server.

<#
.SYNOPSIS
Removes a VM from View and vCenter
.EXAMPLE
get-content "list-of-machines.csv" | remove-vm.ps1
.EXAMPLE
remove-vm.ps1 vm1
.EXAMPLE
remove-vm.ps1 vm1, vm2, vm3
.PARAMETER VM
One or more Virtual Machine names
#>

[CmdletBinding()]
param(
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [string[]]$VM
)

BEGIN {
    $ErrorActionPreference = "Stop"
    # Uncomment if you are not already authenticated against vCenter:
    # $cred = Get-Credential
    # Connect-VIServer -Server <your-vcenter> -Credential $cred
}

PROCESS {
    ForEach ($a in $VM) {
        try {
            Remove-VM -VM $a -DeletePermanently -Confirm:$false
            Write-Output "$a successful!"
        }
        catch {
            $ErrorMessage = $_.Exception.Message
            Write-Output $ErrorMessage
            Write-Output "$a failed!"
        }
        finally {}
    }
}

END {}

PowerCLI | Increase and Expand VM Guest OS Disk

Hi all,

As I have finally worked out two scripts that can increase and extend the Guest OS disk (XP/Win7), I thought you might find them useful.

Please note that for the XP version to work, both VMs (or all VMs) need to be powered off. That way you can also increase/extend system disks.

 

hdd-increase-xp.ps1

<#
.SYNOPSIS
Increases Harddisks for Windows machines (including Guest OS extend)
.EXAMPLE
get-content "list-of-machines.csv" | hdd-increase.ps1
.EXAMPLE
hdd-increase.ps1 vm1
.EXAMPLE
hdd-increase.ps1 vm1, vm2, vm3
.PARAMETER VM
One or more Virtual Machine names
#>

[CmdletBinding()]
param(
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [string[]]$VM
)

BEGIN {
    $ErrorActionPreference = "Stop"
    $admincred = Get-Credential
    $capacityKB = "62914560"
    $helpervm = '<insert helper vm here>'
    }

PROCESS {
    ForEach ($a in $VM) {
        Get-VM $a | Get-Harddisk | where {$_.Name -eq "Hard disk 1" } | Set-HardDisk -CapacityKB $capacityKB -ResizeGuestPartition -HelperVM $helpervm -Confirm:$false -GuestCredential $admincred
        #Get-VM $a | Get-View -ViewType VirtualMachine -Filter @{"Name" = $_ } | write-output $_.Guest.Disk.Length
        Write-Output "$a successful!"
        Start-VM $a
        Write-Output "$a started."
        }
    }
END {}

hdd-increase-w7.ps1

<#
.SYNOPSIS
Increases Harddisks for Windows 7 machines (including Guest OS extend)
.EXAMPLE
get-content "list-of-machines.csv" | hdd-increase.ps1
.EXAMPLE
hdd-increase.ps1 vm1
.EXAMPLE
hdd-increase.ps1 vm1, vm2, vm3
.PARAMETER VM
One or more Virtual Machine names
#>

[CmdletBinding()]
param(
    [Parameter(Mandatory=$True,ValueFromPipeline=$True)]
    [string[]]$VM
)

BEGIN {
    $ErrorActionPreference = "Stop"
    $admincred = Get-Credential
    $capacityKB = "62914560"
    $harddisk = "Hard disk 1"
    }

PROCESS {
    try {
        foreach ($a in $VM) {
        Get-VM $a | Get-Harddisk | where { $_.Name -eq $harddisk } | Set-HardDisk -CapacityKB $capacityKB -ResizeGuestPartition -Confirm:$false
        Write-Output "$a successful!"
        }
    }
    catch {
        $ErrorMessage = $_.Exception.Message
        Write-Output $ErrorMessage
        Write-Output "$a failed!"
        }
    finally {
    }
}
END {}

 

Shellshock – Fix your security

Shellshock – A bug discovered in the widely used Bash command interpreter.

It poses a critical security risk to Unix and Linux systems and, thanks to Bash's ubiquity, to the internet at large.
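You can check whether your Bash is affected with the widely circulated one-liner, wrapped here in a small Python sketch (it assumes bash is on the system). A vulnerable bash executes the command smuggled into the environment variable and prints "vulnerable"; a patched bash prints only the echo output.

```python
import os
import subprocess

# Classic Shellshock check: define an exported "function" with a trailing
# command, then start a new bash. Vulnerable versions execute the trailing
# "echo vulnerable" while importing the function from the environment.
result = subprocess.run(
    ["bash", "-c", "echo this is a test"],
    env={**os.environ, "x": "() { :;}; echo vulnerable"},
    capture_output=True,
    text=True,
)
print(result.stdout)
```

On a patched system the output is just "this is a test"; if you also see "vulnerable", update Bash immediately.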

 

More info here:

http://www.theregister.co.uk/2014/09/24/bash_shell_vuln/

 

Some more nice info:

https://blog.cloudflare.com/inside-shellshock/