New incarnation of my dotfiles, plus much more.
Look at the roles that the dotfiles role depends on, here.
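Role dependencies are normally declared in the role's meta/main.yml, so a sketch of what roles/dotfiles/meta/main.yml could look like is (the role names below are only illustrative, not necessarily the ones this repo uses):
  dependencies:
    - role: common
    - role: libvirt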
$ apt install vagrant vagrant-libvirt vagrant-sshfs ansible
$ vagrant up
Or, if you want to call ansible outside of vagrant:
$ ansible-playbook -i inventories/vagrant/hosts -vv all.yml
The dotfiles VM deployed with this ansible-configs project is already prepared for nested KVM. Remember that the host needs to have nested KVM enabled (you can achieve this by applying the libvirt task or the dotfiles role).
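If you prefer to check or enable nested KVM on the host by hand, this is one way to do it (Intel example; use the kvm_amd module on AMD hosts):
$ cat /sys/module/kvm_intel/parameters/nested # prints Y or 1 when enabled
$ echo "options kvm_intel nested=1" | sudo tee /etc/modprobe.d/kvm-nested.conf
$ sudo modprobe -r kvm_intel && sudo modprobe kvm_intel # reload the module while no VMs are running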
If you want to spawn nested vagrant VMs inside a dotfiles VM, do:
$ VAGRANTNESTED=yes vagrant up dotfiles
(This will deploy your vagrant dotfiles VM with a different setup of the libvirt management network interface. For more info, look at the Vagrantfile.)
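A minimal sketch of the kind of switch the Vagrantfile can implement for this (the network name and address below are assumptions, not necessarily the repo's actual values):
  config.vm.provider :libvirt do |libvirt|
    if ENV["VAGRANTNESTED"] == "yes"
      # use a dedicated management network, so it does not clash with the
      # default vagrant-libvirt network already used on the outer host
      libvirt.management_network_name = "vagrant-libvirt-nested"
      libvirt.management_network_address = "192.168.124.0/24"
    end
  end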
Now you can log in to the dotfiles VM, and from there you can deploy this project again, or any other vagrant project, as usual.
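For example, using vagrant's own helper from the outer host:
$ vagrant ssh dotfiles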
Ansible will use the deploy user to ssh (see roles/common/vars).
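A sketch of the kind of setting roles/common/vars is expected to provide (the exact file name and variable here are assumptions):
  # roles/common/vars/main.yml
  ansible_user: deploy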
Install the private roles with:
$ ansible-galaxy install -r galaxy-roles.yml --roles-path ./roles --force
and then run the playbooks:
$ ansible-playbook --vault-password-file=vault_pass.sh -i inventories/production/hosts all.yml --limit=<host> --check
To decrypt/encrypt the vault:
$ ansible-vault encrypt inventories/production/group_vars/all/vault.yml --vault-password-file=vault_pass.sh
(use the decrypt subcommand instead of encrypt to decrypt it)
Or run just one role:
$ ansible localhost -m include_role -a name=foo --become --ask-become-pass
To bootstrap the deploy user on a new machine:
$ adduser deploy # empty password to disable login by password
# or set temp password, to be disabled with passwd -l deploy
$ usermod -aG sudo deploy
$ echo "deploy ALL=(ALL) NOPASSWD:ALL" > /etc/sudoers.d/deploy
$ apt install sudo # if minimal installation
$ ssh-copy-id -f -i roles/bootstrap/files/vic.pub deploy@<target host>
# or do it by hand if no password set up
Whether you are aiming for the vagrant or the production deployment, all you need to do is add localhost under the correct group in inventories/{vagrant,production}/hosts. For example:
[dotfiles]
adotfiles ansible_host=192.168.111.2 ansible_private_key_file=.vagrant/machines/dotfiles/libvirt/private_key
-> localhost
[dotfiles:children]
desktop
And then run the normal deployment, in this case:
$ ansible-playbook --vault-password-file=vault_pass.sh --ask-become-pass -i inventories/production/hosts -vv dotfiles.yml --check
or
$ ansible-playbook -i inventories/vagrant/hosts -vv dotfiles.yml
I have chosen this approach so that one can choose which playbook to apply to localhost, and at the same time minimize errors when applying playbooks locally.
Change the Vagrantfile so that it looks for an image called local/bullseye.
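For example, the box line in the Vagrantfile would then read something like:
  config.vm.box = "local/bullseye"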
Inside this repo, do:
$ git clone https://salsa.debian.org/cloud-team/vagrant-boxes.git
$ sudo make -C vagrant-boxes/vmdebootstrap-libvirt-vagrant bullseye
$ vagrant box add vagrant-boxes/vmdebootstrap-libvirt-vagrant/bullseye.box --name local/bullseye
$ sudo rm -rf vagrant-boxes/vmdebootstrap-libvirt-vagrant/*.box
To run the CI tests locally:
$ sudo apt install docker.io gitlab-ci-multi-runner
$ docker login registry.gitlab.com # with a valid token
$ gitlab-ci-multi-runner exec docker <test-to-run>
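Here <test-to-run> is the name of a job defined in the repository's .gitlab-ci.yml; for instance, assuming a job called lint exists there (the job name is only illustrative):
$ gitlab-ci-multi-runner exec docker lint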