
  • You will need a target device to act as a server; this can be anything from a laptop to a Raspberry Pi
  • You will also need a second target device to act as a runner
    • Both devices should be configured with an OS that allows SSH; we use Debian 11
    • You can obtain a Rock5 Debian image here.
  • For ease of use, add your host device's key to .ssh/authorized_keys on both target devices (server and runner)
  • Create a .ssh/config file on your host device to allow promptless SSH access to the target devices.
    • This file is documented here; some pertinent values are:
      • Host is what ssh matches against the host you give on the ssh command line. Set it to the value in your Ansible inventory file so it overrides any defaults for that server, e.g. IP address, key file or username.
      • HostName specifies the real hostname to log into: the IP address or a domain name that your DNS can resolve to a real device.
      • User identifies the user on the target machine, i.e. the user you are trying to access via SSH.

Getting Started

To deploy everything mentioned previously to an actual box, you will need to use Ansible to automate the process. Before you run any commands, it's best to get familiar with how we arrange our Ansible variables. You can find documentation on this here.

Our Ansible folder structure:

. (directories only)
├── playbooks
│   └── roles
│       ├── role1
│       │   ├── tasks
│       │   └── defaults
│       ├── role2
│       │   ├── tasks
│       │   └── defaults
│       └── more roles...
└── vars

There are many roles in the roles directory, each with its own defaults directory containing that role's default variable values. There is also a global vars directory; variables defined there override any variables defined in the individual roleX/defaults/ directories.
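As a sketch of this precedence (example_port is a hypothetical variable used only for illustration):

```yaml
# playbooks/roles/role1/defaults/main.yml (role default):
#   example_port: 8080
#
# vars/tiab_vars.yml (global override; this value wins):
example_port: 9090
```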

The global vars folder may also contain additional sensitive information such as usernames and passwords. Edit these values here so you don't leave anything confidential in your shell history.

To set up the box in one go, all you need to do is define your own confidential values in the global vars if you do not want our default choices. You will have to define docker_username and docker_password with your GitLab credentials in vars/tiab_vars.yml; these are used to fetch all the Docker containers T.I.A.B uses. docker_password has to be a personal access token (PAT) with at least read_registry permissions. You can find out how to generate a PAT here.
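In practice that means adding something like the following to vars/tiab_vars.yml (placeholder values, not real credentials):

```yaml
# vars/tiab_vars.yml
docker_username: "<your_gitlab_username>"
docker_password: "<personal_access_token>"  # needs at least the read_registry scope
```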

SSH Config

As mentioned in the prerequisites, an SSH config file (.ssh/config) is mandatory to allow access to both the server and the runner. It also allows you to use ssh without prompts. An example looks like this:

Host <server_name>
    HostName <server_ip/hostname>
    User <server_user>

Host <runner_name>
    HostName <runner_ip/hostname>
    User <runner_user>
    ProxyJump <server_name>

Once the above is in place, SSH into the server and runner once more to add your SSH key to .ssh/authorized_keys. You will then be able to run ssh <server_name> or ssh <runner_name>.
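One way to install your key on both targets is ssh-copy-id, using the Host aliases from the config above:

```shell
# Replace the placeholders with the Host aliases from your .ssh/config
ssh-copy-id <server_name>
ssh-copy-id <runner_name>

# Verify promptless access works
ssh <server_name> true
ssh <runner_name> true
```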


Create your own inventory file in ./inventory.yml; an example is shown below:

all:
  hosts:
    <server_name>:
      ansible_host: <server_ip/hostname>
    <runner_name>:
      ansible_host: <runner_ip/hostname>
      ansible_become_password: <runner_sudo>
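Before deploying, you can sanity-check that the inventory parses (assuming Ansible is already installed on your control node):

```shell
# Prints the parsed host/group graph; fails if the YAML is malformed
ansible-inventory -i inventory.yml --graph
```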

Manual changes

Before deploying the playbooks, there is one file that needs editing.

In vars/tiab_vars.yml, update domain: (line 10) from {{ VM_NAME }} to your hostname.

Deploy Playbooks

On your control node (i.e. the machine where Ansible is installed), run:

cd <Project Root Directory>

# Install required packages first
ansible-galaxy role install -r requirements.yml
ansible-galaxy collection install -r requirements.yml

# The actual deployment
ansible-playbook -i inventory.yml playbooks/deploy_tiab.yml

You may add -v for more verbose output, or add -Kk if the playbook fails with Missing sudo password.

Setup complete! You may now visit your GitLab instance at http://<server_ip>:80 and your OpenQA instance at https://<server_ip>:81, if you're using our default port choices. The playbook also downloads our pre-built qad binary to /root, sets up ser2net, and sets up a libvirt host. You can also clone a GitLab repo created by the playbook using this URL: http://<server_ip>:80/.

By default, two GitLab users will be created: one admin and one normal user. The admin user is called root; you can find its password in the output of the Print root password task. The normal user is the one defined in vars/tiab_vars.yml.

You can use the root user to log in to the deployed OpenQA instance; it will automatically become an OpenQA administrator as well.

Deploy of Separate Components

playbooks/deploy_tiab.yml serves as the top-level playbook and should deploy everything the box needs. However, you can also use the corresponding playbooks to deploy each component separately.

If you only need to deploy runners, you can use the playbooks/deploy_runner.yml playbook instead.

Note that the GitLab Runner, OpenQA web UI, OpenQA worker and ser2net are all deployed as separate Docker containers.

Gitlab Instance

After running the playbooks/testing-in-a-box.yml playbook you can access the GitLab instance it created.

To access it, use the IP address of the machine you ran the playbook on, together with the gitlab_http_external_port specified in vars/tiab_vars.yml.


A user will be set up automatically. You can change the username and password used in vars/tiab_vars.yml.

Configuring Gitlab

You can use playbooks/configure_server.yml to configure your server. This playbook will create a user, a group and a repo for you.

To run this playbook you will first have to define the following variables:

Name                   Default Value                                                  Description
initial_root_password  none                                                           The password for the root user of the GitLab instance
gitlab_external_url    https://{{ gitlab_domain }}:{{ gitlab_https_external_port }}   The URL of the GitLab instance you want to configure
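For example, the overrides could look like this (placeholder password; the URL shown is just the default made explicit):

```yaml
initial_root_password: "<gitlab_root_password>"
gitlab_external_url: "https://{{ gitlab_domain }}:{{ gitlab_https_external_port }}"
```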

To test deployment, you can set up a Docker container running GitLab to test the configuration playbook with the following command (note that --hostname takes a value, shown here as a placeholder):

docker run --detach \
  --hostname <gitlab_hostname> \
  --publish 443:443 \
  --publish 80:80 \
  --publish 2224:22 \
  --name gitlab \
  --restart always \
  --volume ./config:/etc/gitlab \
  --volume ./logs:/var/log/gitlab \
  --volume ./data:/var/opt/gitlab \
  --shm-size 256m \
  gitlab/gitlab-ce:16.3.6-ce.0
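If you use this test container, GitLab generates an initial root password on first startup; per the GitLab Docker documentation you can read it with:

```shell
# The file is removed automatically 24 hours after the first reconfigure run
sudo docker exec -it gitlab grep 'Password:' /etc/gitlab/initial_root_password
```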

OpenQA Instance

The deployment of OpenQA relies on the GitLab instance, as OpenQA needs an OAuth2 verification provider. The OpenQA deployment also assumes there is at least a root GitLab user. After OpenQA is created, you can click the login button in the top right corner.

If the OpenQA instance is running, you can use the minimal_openqa_test playbook to download the GNOME tests and set up a playground for you to explore OpenQA.

ansible-playbook -i inventory.yml playbooks/minimal_openqa_test.yml

Libvirt Host

The given script sets up one libvirt pool at /templates/pool and one bridged NAT network. If these are not enough or not appropriate, you can define your own pool/network by setting variables in roles/deploy_libvirt_host/tasks/main.yml. The configurable variables are explained in detail here.


There are some variables you need to set manually in vars/tiab_vars.yml to set up ser2net. A minimal sample configuration is as follows:

- name: "FT232R_USB_UART"
  desc: "This is a FT232 device"
  vid: "0403"
  pid: "6001"
  serialno: "AB0MLY92"

You can find a more detailed explanation here.