Azure DevOps — Self-Hosted Servers

Salim Haniff
5 min read · Oct 22, 2020


While you can leverage the convenience of Microsoft’s hosted servers on your journey of incorporating DevOps into your workflows, there may be times when you want to use a server that is not hosted in the Azure DevOps network. According to the security highlights on Microsoft’s website, these are some of the limitations you may encounter:

  • Although Microsoft-hosted agents run on Azure public network, they are not assigned public IP addresses. So, external entities cannot target Microsoft-hosted agents.
  • Microsoft-hosted agents are run in individual VMs, which are re-imaged after each run. Each agent is dedicated to a single organization, and each VM hosts only a single agent.
  • There are several benefits to running your pipeline on Microsoft-hosted agents, from a security perspective. If you run untrusted code in your pipeline, such as contributions from forks, it is safer to run the pipeline on Microsoft-hosted agents than on self-hosted agents that reside in your corporate network.
  • When a pipeline needs to access your corporate resources behind a firewall, you have to allow the IP address range for the Azure geography. This may increase your exposure as the range of IP addresses is rather large and since machines in this range can belong to other customers as well. The best way to prevent this is to avoid the need to access internal resources.
  • Hosted images do not conform to CIS hardening benchmarks. To use CIS-hardened images, you must create either self-hosted agents or scale-set agents.

The third point is somewhat debatable and comes down to how well the organization has separated its networks, IAM, and servers into proper Dev/Stage/Prod zones. Either way, the list gives compelling reasons to have self-hosted machines that can tap into Azure Pipelines. In this post, we will go over how to set this up. It is also worth mentioning that a self-hosted server can reside in AWS, GCP, OpenStack, VMware, or anywhere else, since an agent installed on the VM is responsible for communicating with the Azure Pipelines service.

In an earlier post, I went over how to automate building a CentOS image using Packer. This time we will use Ubuntu, another popular Linux distribution. To speed up the process of creating a VM, simply clone the following Git repo and run this command:

packer build -force -only=virtualbox-iso Ubuntu.json

We will assume for now that VirtualBox is the hypervisor. QEMU is also supported in the builder and can be used by swapping ‘virtualbox-iso’ with ‘qemu’. The configuration used to build the VM is not secure, since the username and password are easily guessable; it is recommended to change them to meet your security standards.
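For example, assuming the same Ubuntu.json template is used, selecting the QEMU builder only requires changing the -only flag:

```shell
# Build the same template with the QEMU builder instead of VirtualBox
packer build -force -only=qemu Ubuntu.json
```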

When the process of building the VM finishes, you can import the VM into VirtualBox and start it. I tend to change the network on the default VirtualBox VM to bind it to my local network so I can SSH in.

To change the network settings,

  1. Select the VM
  2. Click on ‘Settings’
  3. Click on ‘Network’
  4. select ‘Adapter 1’
  5. Select ‘Bridged Adapter’ in the ‘Attached to’ pulldown
  6. The name should match your host machine’s network adapter. On my system, the main interface is ‘eno1’ on Ubuntu.
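If you prefer the command line, the same change can be made with VBoxManage; the VM name (‘ubuntu-vm’ here) and interface name are assumptions you should adapt to your setup:

```shell
# Attach NIC 1 to a bridged adapter on the host interface eno1
# (the VM must be powered off when running this)
VBoxManage modifyvm "ubuntu-vm" --nic1 bridged --bridgeadapter1 eno1
```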

Log in with the credentials used to configure the VM. If you used the defaults, the username and password are both ‘ubuntu’. Once logged in, note the IP address:

Screenshot of Ubuntu console listing IP Address
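On the VM itself, the address can be listed with the standard ip tool:

```shell
# Show the IPv4 addresses assigned to all interfaces
ip -4 addr show
```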

The agent needs to have a personal access token (PAT) generated by one of the users with access to the proper rights to create pipelines and access repos. To create the PAT, log into the Azure DevOps portal and perform the following steps:

  1. Under profile, click on ‘Personal Access Token’
  2. Click on ‘New Token’
  3. Enter a name for the token
  4. Click on ‘Show all scopes’
  5. Check ‘Read & manage’ under Agent Pools
  6. Click on ‘Create’
  7. Copy the generated token into a safe place for now.

Location of the Personal Access Token option

The personal access token is passed on the command line to link the server to Azure Pipelines. We can now switch to our VM and set this up with the following commands:

$ mkdir myagent
$ cd myagent
$ curl -fkSL -o vstsagent.tar.gz <agent package URL> # copy the Linux agent download link from the Agent Pools page
$ tar zxvf vstsagent.tar.gz
$ mkdir _work
$ export AGENTNAME=agent0
$ export AZUREURL=https://dev.azure.com/<organization>/ # replace this with your organization
$ export AGENTTOKEN=<your PAT> # the token generated above
$ ./config.sh --acceptteeeula --agent $AGENTNAME --url $AZUREURL --work _work --projectname 'ZapProxyTut' --auth PAT --token $AGENTTOKEN --pool 'Default'
$ sudo ./svc.sh install
$ sudo ./svc.sh start

Change the projectname value to match the one in your environment.
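To confirm on the VM that the agent service came up, the same helper script provides a status subcommand:

```shell
# Check the service created by 'svc.sh install'
sudo ./svc.sh status
```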

If you check the dashboard you will notice that agent0 is now online.

Screenshot of available agents

We will use the project from our first tutorial on Azure Pipelines, but update the azure-pipelines.yml file to use our self-hosted server. The following code snippet is the newer azure-pipelines.yml file:
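The embedded snippet did not survive, so here is a minimal sketch of what it describes, assuming a trivial build step carried over from the first tutorial (the script line is an illustrative placeholder):

```yaml
trigger:
- master

# Run on the self-hosted agents registered in the Default pool
pool: Default

steps:
- script: echo "Building on a self-hosted agent"
  displayName: 'Build step'
```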

Notice that under pool we have set it to our Default pool; when the pipeline is triggered, this pool will be used. Edit the azure-pipelines.yml file and push it back to the repo to start the pipeline.

Screenshot of the pipeline run using the self-hosted VM

You can check the VM to see the results of the build under ~/myagent/_work/1/a/.

That wraps up the build process on our self-hosted server. You now have a server tied into Azure Pipelines, which leaves you the freedom to customize it to your specific needs. If you encounter any issues or need clarification, feel free to comment below or message me.




Salim Haniff

Founder of Factory 127, an Industry 4.0 company. Specializing in cloud, coding and circuits.