Setup Proxmox with Let’s Encrypt and Loopia DNS challenge

I recently installed Proxmox in my homelab and wanted to use Let’s Encrypt certificates for my nodes. My domain is registered at Loopia, a Swedish web hosting company. When using their DNS services it’s possible to create an API user that lets the ACME client automate the DNS challenge, which is needed to prove that you own the domain.

First, log in to the Loopia customer portal and create a subdomain that your Proxmox node will use. In my case it’s pve.homelab.domain.com. There is no need to create an associated A record; that should be handled by your internal DNS.

The next step is to create an API user and assign appropriate permissions.

It needs to have the following permissions:

  • addZoneRecord
  • getZoneRecords
  • removeZoneRecord
  • removeSubdomain

Next step is to configure Proxmox. Log in and locate ACME under Datacenter. Click Add under Challenge Plugins and enter your Loopia API user in the following format:
LOOPIA_User = proxmoxlab@loopiaapi
LOOPIA_Password = secretpassword

Select the node you are creating a certificate for. Under Certificates and ACME, click Add. Set DNS as the challenge type, select the plugin created in the previous step, and enter pve.homelab.domain.com as the domain.

The default Let’s Encrypt account is configured for the staging environment, which can be used for testing. For a trusted certificate, create a new account pointing at the Let’s Encrypt V2 production directory.


The final step is to press Order Certificate and wait for the validation. Make sure to select the production account.

Access the node at https://pve.homelab.domain.com:8006/ and verify your new certificate.

Use Terraform to get a free SSL certificate for Azure

I want to use a free SSL certificate from Let’s Encrypt to secure my Azure resources, for example an application behind an Application Gateway. I will use a key vault to store the certificate. Besides the azurerm Terraform provider, vancluever/acme is also used. For documentation, visit https://registry.terraform.io/providers/vancluever/acme/latest/docs

For this setup to work, my domain needs to be hosted in an Azure public DNS zone, since the DNS-01 challenge is used (https://letsencrypt.org/docs/challenge-types/#dns-01-challenge).

First we need to create the DNS zone and key vault in a TF file, for example main.tf:

resource "azurerm_resource_group" "rg_dns" {
  name     = "rg-dns"
  location = "Sweden Central"
}

resource "azurerm_resource_group" "rg_keyvault" {
  name     = "rg-keyvault"
  location = "Sweden Central"
}

resource "azurerm_key_vault" "keyvault" {
  name                      = "kv-secrets-keyvault"
  resource_group_name       = azurerm_resource_group.rg_keyvault.name
  location                  = azurerm_resource_group.rg_keyvault.location
  sku_name                  = "standard"
  tenant_id                 = "41ded048-xxxx-4758-a8a7-xxxxxxxxxxx"
  enable_rbac_authorization = true
}

resource "azurerm_dns_zone" "dns_public" {
  name                = "cloud.labbserver.online"
  resource_group_name = azurerm_resource_group.rg_dns.name
}

Point the domain or subdomain to your Azure DNS zone at your domain registrar. In this example Cloudflare is used.

The values for the name servers are found on the DNS zone overview page, to the right.
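If you would rather read them from Terraform, the azurerm_dns_zone resource exports the assigned name servers, so a small output (my addition to the sketch above) can print them after apply:

```hcl
# Prints the name servers to point your registrar at
output "dns_name_servers" {
  value = azurerm_dns_zone.dns_public.name_servers
}
```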

Before proceeding, make sure that your user has permission to create certificates in the key vault. With RBAC this can be granted directly on the resource or through inheritance. The server URL in this example points to the staging environment, used for testing your code before requesting a production certificate. Read more at https://letsencrypt.org/docs/staging-environment/
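Since the key vault has enable_rbac_authorization set, that permission can also be managed in Terraform. A minimal sketch, assuming the identity running Terraform should be the one issuing certificates (“Key Vault Certificates Officer” is a built-in Azure role):

```hcl
# Look up the identity Terraform is running as
data "azurerm_client_config" "current" {}

# Allow that identity to manage certificates in the key vault
resource "azurerm_role_assignment" "kv_certs" {
  scope                = azurerm_key_vault.keyvault.id
  role_definition_name = "Key Vault Certificates Officer"
  principal_id         = data.azurerm_client_config.current.object_id
}
```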

provider "acme" {
  server_url = "https://acme-staging-v02.api.letsencrypt.org/directory"
}

resource "tls_private_key" "private_key" {
  algorithm = "RSA"
}

resource "acme_registration" "reg" {
  account_key_pem = tls_private_key.private_key.private_key_pem
  email_address   = "blog@example.com"
}

resource "acme_certificate" "certificate" {
  account_key_pem           = acme_registration.reg.account_key_pem
  common_name               = "cloud.labbserver.online"
  dns_challenge {
    provider = "azuredns"

    config = {
        AZURE_SUBSCRIPTION_ID = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxx"
        AZURE_RESOURCE_GROUP = "rg-dns"
    }
  }
}
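Note that the azuredns challenge provider also needs credentials to edit the zone. If you are not relying on ambient authentication (environment variables or an Azure CLI login), one option is to pass a service principal in the config block; the var.client_id, var.client_secret and var.tenant_id below are hypothetical variables you would declare yourself:

```hcl
dns_challenge {
  provider = "azuredns"

  config = {
    AZURE_SUBSCRIPTION_ID = "xxxxxx-xxxx-xxxx-xxxx-xxxxxxx"
    AZURE_RESOURCE_GROUP  = "rg-dns"
    # Hypothetical service principal credentials
    AZURE_CLIENT_ID       = var.client_id
    AZURE_CLIENT_SECRET   = var.client_secret
    AZURE_TENANT_ID       = var.tenant_id
  }
}
```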

resource "azurerm_key_vault_certificate" "keyvault" {
  name         = "cloud-labbserver-online"
  key_vault_id = azurerm_key_vault.keyvault.id

  certificate {
    contents = acme_certificate.certificate.certificate_p12
  }
}

As we can see, we now have a certificate in our key vault: we can check the expiration date and download it to local disk in different formats. If it is used in Azure, it’s best to load it directly from the key vault to make renewals easier to automate.
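If the certificate is consumed by an Application Gateway, it can be referenced straight from the key vault through its secret id. A fragment like this (a sketch only; it belongs inside an azurerm_application_gateway resource, and the gateway also needs a user assigned identity with read access to the vault) avoids copying the certificate around:

```hcl
ssl_certificate {
  name                = "cloud-labbserver-online"
  # Reference the key vault certificate instead of embedding the PFX
  key_vault_secret_id = azurerm_key_vault_certificate.keyvault.secret_id
}
```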

Terraform and Active Directory

I have worked a lot with AD over the past years, mostly with PowerShell. This time I needed to create an AD computer object with Terraform, so I started to look into how to set up the Terraform AD provider.

First you need to configure the provider. I am running Terraform on my non domain joined laptop. Terraform needs WinRM access to a domain joined server with the Active Directory PowerShell modules installed. It’s important to use capital letters in all FQDNs for Kerberos to work, both in provider.tf and krb5.conf.

provider.tf

provider "ad" {
  winrm_hostname         = "SERVER.HOMELAB.DOMAIN.COM"
  winrm_username         = var.aduser
  winrm_password         = var.adpassword
  winrm_port             = 5986
  winrm_proto            = "https"
  winrm_pass_credentials = true
  krb_realm              = "HOMELAB.DOMAIN.COM"
  krb_conf               = "krb5.conf"
  krb_spn                = "SERVER"
  winrm_insecure         = true
}
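The var.aduser and var.adpassword references need matching declarations, for example in variables.tf (the sensitive flag requires Terraform 0.14 or later):

```hcl
variable "aduser" {
  type = string
}

variable "adpassword" {
  type      = string
  sensitive = true # keep the password out of plan output
}
```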

We also need to create krb5.conf in order to set up Kerberos authentication:

[libdefaults]
    default_realm = HOMELAB.DOMAIN.COM
    dns_lookup_realm = false
    dns_lookup_kdc = false

[realms]
    HOMELAB.DOMAIN.COM = {
        kdc = DC01.HOMELAB.DOMAIN.COM
        admin_server = DC01.HOMELAB.DOMAIN.COM
        default_domain = HOMELAB.DOMAIN.COM
        master_kdc = DC01.HOMELAB.DOMAIN.COM
    }

[domain_realm]
    .kerberos.server = HOMELAB.DOMAIN.COM
    .homelab.domain.com = HOMELAB.DOMAIN.COM
    homelab.domain.com = HOMELAB.DOMAIN.COM

Now the provider should be configured and ready to use. The next step is to create an AD computer object in main.tf:

resource "ad_computer" "c" {
  name        = "test01"
  container   = "OU=Servers,OU=Stockholm,OU=SWE,DC=homelab,DC=domain,DC=com"
  description = "My TF AD object"
}

To test the code, run terraform apply -var aduser=adadmin -var adpassword=secretpw123, wait for the output, and type yes if everything looks fine.

Now we have a new computer object in AD managed with Terraform.

The next thing I wanted to test was a bit more complex: creating an OU structure with several sub OUs for each office. In PowerShell you can solve it with a nested foreach loop:

$sites = ("Malmo", "Ystad", "Karlstad")
$subOU = ("Servers","Computers","Groups","Users")

foreach ($site in $sites){
    New-ADOrganizationalUnit -Name $site -Description "My office in $($site)" -Path "OU=SWE,DC=homelab,DC=domain,DC=com"
    foreach ($ou in $subOU){
        New-ADOrganizationalUnit -Name $ou -Description "OU for $($ou)" -Path "OU=$($site),OU=SWE,DC=homelab,DC=domain,DC=com"
    }
}

In Terraform we need to create two variables as lists: one containing each office and one with our sub OUs. We also use locals to combine them with the setproduct function.

variable "sites" {
  type = list
  default = ["Malmo", "Ystad", "Karlstad"]
}

variable "siteOUs" {
  type = list
  default = ["Servers", "Users", "Groups", "Computers"]
}

locals {
  ous = setproduct(var.sites, var.siteOUs)
}
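To see what the combination looks like, you can evaluate local.ous in terraform console; setproduct pairs every site with every sub OU, with the rightmost list varying fastest:

```
> local.ous
[
  ["Malmo", "Servers"],
  ["Malmo", "Users"],
  ["Malmo", "Groups"],
  ["Malmo", "Computers"],
  ["Ystad", "Servers"],
  ...
  ["Karlstad", "Computers"],
]
```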


To get this to work, we first create all office OUs with for_each and our sites variable. For the sub OUs we use our local named ous as the source and loop through all the combinations created with setproduct. We pick the name from the second element of each pair and part of the path from the first element, which contains the office name. Note that we make sure all office OUs are created first with depends_on.

resource "ad_ou" "ou" {
  for_each    = toset(var.sites)
  name        = each.value
  path        = "OU=SWE,DC=homelab,DC=domain,DC=com"
  description = "OU for ${each.value} Office"
  protected   = false
}

resource "ad_ou" "o" {
  for_each = {
    for o in local.ous : "${o[0]}-${o[1]}" => {
      name        = o[1]
      path        = "OU=${o[0]},OU=SWE,DC=homelab,DC=domain,DC=com"
      description = "OU for ${o[1]} in ${o[0]} Office"
    }
  }

  name        = each.value.name
  path        = each.value.path
  description = each.value.description
  protected   = false

  depends_on = [
    ad_ou.ou
  ]
}

This is the final result in the AD console. I learned a lot while figuring out how to solve this in Terraform.

Setup and run an Azure Automation runbook

A runbook can help you run scripts on a schedule or trigger them with a webhook. You can run them either in Azure or on your on-premises servers. This example will show how to run a script on an on-premises server that is connected to Azure Arc.


First step is to create an automation account in your favorite region. You might need to create a resource group as well.


Go to your newly created automation account and look for Hybrid worker groups.

Create a new hybrid worker group and select a name.


Now we have a hybrid worker group without any hybrid workers. Click Hybrid workers to the left.


Click add


Select one or more servers and add them to your hybrid worker group. If your list is empty, you need to enable Azure Arc on at least one server.



Go back to the automation account and press Runbooks.


Create a new runbook


Give the runbook a name and select a type. In this example, PowerShell and runtime version 5.1.

Now to the fun part, edit the new runbook and write or paste your script. Select publish when done.


When pressing Start, a menu to the right lets you choose to run the runbook on a server in your hybrid worker group.


When the runbook is finished we can view the output or errors. The last line shows the name of the server the script was executed on, which can be useful for troubleshooting if the hybrid worker group contains multiple servers.

To make more use of this capability to trigger scripts on a local server from Azure, start exploring schedules and webhooks.

Add your server to Azure Arc

Azure Arc helps you manage your on-prem servers from the Azure portal. To add a server to Azure Arc, just search for “Servers – Azure Arc” in the portal and press Add.


This time we will only add one server, so we can select the Generate Script option.


Select your subscription and a new or existing resource group. You also need to select a location.


In this step you can add your desired tags.


The script is ready to be downloaded or copied to your server.


Start PowerShell as a local admin and navigate to the folder where your onboarding script is stored.


The script will download and install the Azure Connected Machine agent and open a web browser where you need to sign in to Azure.


After a couple of minutes our on-prem server is visible in the Azure portal.


We can now see some details like operating system and the tags we defined during setup. In a future post I will show what we can achieve with Azure Arc enabled servers.

GPO and local Administrator group

It’s common to use a Group Policy preference to push an AD group into the local Administrators group on a server or client.
But something you might not know is that you can use a variable in the group name to make it a bit more dynamic. With this setup you only need to create the GPO and one AD group per server. There is no need to log on or use a PowerShell script to target each server and add a group to its local Administrators group.

First create an empty GPO and link it to an OU containing your servers you want to target.

Then edit your new GPO, go to Computer Configuration -> Preferences -> Local Users and Groups, right click on the white area to the right, and select New -> Local Group.


Set the action to Update and Administrators (built-in) as the group name. Then press Add… under Members.


As the member name, add your desired group name for server admin groups and end it with the variable %computername%. In my lab environment the group name is homelab\ServerLocalAdmin-%computername%. My AD domain is called homelab.


The next part is to create an AD group following your naming convention, with the computer name at the end. The server I want to test this on is called SQL2016, so the group name is ServerLocalAdmin-SQL2016.


The next step is to verify the result of the GPO. First log in on your server, then run gpupdate /force.


Open Computer Management to verify the members in the local admin group.


Export subnets from Meraki to phpIPAM

In order to populate phpIPAM I needed to export all subnets that were already present in the Meraki Dashboard without typing all the information manually. I found the PSMeraki module on GitHub, which is a prerequisite for this script. The script creates a CSV that can be imported into phpIPAM.

# Requires the PSMeraki module for the Get-Mrk* cmdlets
$networks = Get-MrkNetwork
$subnets = @()

foreach ($network in $networks) {
    $vlans = Get-MrkNetworkvlan -networkId $network.id

    foreach ($vlan in $vlans) {
        # Skip VLANs that have no subnet configured
        if (!$vlan.subnet) {
            continue
        }

        # Get-Subnet (from the separate Subnet module) parses the CIDR notation
        $sub = Get-Subnet $vlan.subnet
        $subnetname = $network.name + "_" + $vlan.name

        $net = @{
            VLAN        = $vlan.id
            Section     = 'Company'
            Subnet      = $sub.IPAddress
            Mask        = $sub.MaskBits
            Domain      = $network.name
            Description = $subnetname
        }

        $subnets += New-Object -TypeName psobject -Property $net
    }
}
$subnets | Export-Csv -Path subnets.csv -Delimiter "," -NoTypeInformation

PowerShell group export

I needed a list of the people in some AD groups for an audit, so I wrote a quick script to export each group matching my filter to a CSV file populated with name and samaccountname. Setting semicolon as the delimiter ensures that you can open the CSV file in Excel without any extra work to get the columns right.

$groups = Get-ADGroup -filter { name -like "Company-Fileserver-ACL*" }

foreach ($group in $groups)
{
	Write-Output $group.name
	$file=$group.name + ".csv"
	Get-ADGroupMember $group.name | Select-Object name, samaccountname | Export-Csv -path $file -NoTypeInformation -delimiter ";"
}

Setup AWX Vcenter inventory with tags part 2

This is the second part of how to set up a VMware inventory in AWX based on VM tags. Link to first part.

Create a file whose name ends with vmware.yml or vmware.yaml. I called my file invent.vmware.yml.

---
plugin: vmware_vm_inventory
strict: False
hostname: vc.homelab.domain.com
validate_certs: False
with_tags: True
hostnames: 
  - 'config.name'
compose:
  ansible_host: 'guest.hostName'
keyed_groups:
  - key: 'tags'
    separator: ''

Make sure to set your hostname and check the file in to a Git repository.

Create a project, mine is called Inventory, and select Git as SCM Type. Add your Git repository URL as the SCM URL. Don’t forget to create SCM credentials if you haven’t done that already. Set /opt/my-envs/vm-tags as the Ansible environment.

The next step is to create a custom credential type. I called mine Vmware_Inventory.

#Input configuration
fields:
  - id: username
    type: string
    label: Username
  - id: password
    type: string
    label: Password
    secret: true
required:
  - username
  - password

#Injector configuration
env:
  VMWARE_PASSWORD: '{{password}}'
  VMWARE_USERNAME: '{{username}}'

Create a new credential using your new credential type. You will need a user with read permissions on your Vcenter server.

Go to Inventories, create a new inventory, give it a name, and hit Save.

Click Sources and create a new source. Give it a name and select Sourced from a project as the source. Set /opt/my-envs/vm-tags as the Ansible environment. Search for and select your credential with read permission in Vcenter.

Now you should be able to sync your new inventory.

After the sync has finished you should see your hosts and groups.

For example, I have connection details as group variables on my group win; all my Windows servers are in that group. If needed, you can have a second group, like SQL servers, and add it to the win group so that it inherits all the win group variables.

---
ansible_winrm_server_cert_validation: ignore
ansible_port: 5986
ansible_connection: winrm
ansible_winrm_transport: kerberos

That’s it, hopefully this can be helpful to anyone seeking information about AWX and VMware tags as inventory groups.

Setup AWX Vcenter inventory with tags part 1

I am new to AWX and had a goal to set up Vcenter as an inventory source with groups based on VMware tags. I got that setup working with plain Ansible and started to investigate how to achieve the same result in AWX. After a couple of days of testing I got some hints on Reddit and was able to get it working as expected. Hopefully this guide can help someone (and me, next time) set up an inventory with tags.

The first step, if not already done, is to install AWX. I will not cover the installation itself; it is already covered at https://github.com/ansible/awx/blob/devel/INSTALL.md. I have chosen to install on a standalone Docker host in my home lab running CentOS.


Open the inventory file install/inventory in your favorite editor.
Look for and uncomment custom_venv_dir=/opt/my-envs/

Create dir: mkdir /opt/my-envs

Run the playbook: ansible-playbook install.yml -i inventory

Create the folder and Python venv, and install all the prerequisites:

mkdir /opt/my-envs/vm-tags
python3 -m venv /opt/my-envs/vm-tags/
source /opt/my-envs/vm-tags/bin/activate
yum install gcc
yum install python36-devel
pip3 install psutil
pip3 install ansible
pip3 install pyaml
pip3 install requests
pip3 install PyVmomi
pip3 install --upgrade pip setuptools
pip3 install --upgrade git+https://github.com/vmware/vsphere-automation-sdk-python.git
deactivate

Log on to AWX and navigate to Settings -> System.
Add /opt/my-envs/vm-tags to CUSTOM VIRTUAL ENVIRONMENT PATHS

The last step in this part is to verify our new custom env. Go to ORGANIZATIONS and push the pencil to edit the default organization.

Verify that you can see your new Ansible Environment.