I like my Linux installs heavily customized and security hardened, to the extent that copying over /home
won’t cut it, but not so much that it breaks when updating Debian. Whenever someone mentions reinstalling Linux, I am instinctively nervous thinking about the work it would take for me to get from a vanilla install to my current configuration.
It started a couple of years ago, when dreading the work of configuring Debian to my taste on a new laptop, I decided to instead just shrink my existing install to match the new laptop’s drive and dd it over. I later made a VM from my install, stripped out personal files and obvious junk, and condensed it to a 30 GB raw disk image, which I then deployed on the rest of my machines.
That was still a bit too janky, so once my configuration and installed packages stabilized, I bit the bullet, spun up a new VM, and painstakingly replicated my configuration from a fresh copy of Debian. I finished with a 24 GB raw disk image, which I can now deploy as a “fresh” yet pre-configured install, whether to prepare new machines, make new VMs, fix broken installs, or just because I want to.
All that needs to be done after dd’ing the image to a new disk is:
- Some machines: boot grubx64.efi/shimx64.efi from Ventoy and “bless” the new install with `grub-install` and `update-grub`
- Reencrypt LUKS root partition with new password
- Configure user and GRUB passwords
- Set hostname
- Install updates and drivers as needed
- Configure for high DPI if needed
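The checklist above can be sketched as a shell script. Device paths, partition numbers, and the hostname below are placeholders (my assumptions, not the OP's actual values), and everything is wrapped in a function so nothing runs until you call it with real values from a chroot or the booted install:

```shell
#!/usr/bin/env bash
# Sketch of the post-dd steps; all paths and names are placeholders.
post_dd_setup() {
  local disk=/dev/nvme0n1       # disk the image was dd'd onto (assumption)
  local luks_part=${disk}p3     # LUKS root partition (assumption)
  local new_hostname=my-laptop  # placeholder

  # Swap the image's LUKS passphrase for a machine-specific one.
  cryptsetup luksChangeKey "$luks_part"

  # "Bless" the new install so the firmware finds GRUB on this disk.
  grub-install "$disk"
  update-grub

  # Per-machine identity and credentials.
  hostnamectl set-hostname "$new_hostname"
  passwd                        # set the new user password

  # Bring the image up to date.
  apt update && apt full-upgrade -y
}
```

The GRUB password and high-DPI tweaks are distro- and desktop-specific, so they are left out of the sketch.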
I’m interested to hear if any of you have a similar workflow or any feedback on mine.
You should check out NixOS. You write one config file that you can copy over to as many machines as you want.
That or Ansible, if you will have a machine to deploy from
> if you will have a machine to deploy from
You can run ansible against localhost, so you don’t even need that.
You don’t need a machine to deploy from. You just need a git repo and `ansible-pull`. It will pull down and run playbooks against the host. (Target localhost to run it on the local machine.)
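A minimal sketch of that, assuming a repo with a `local.yml` playbook at its root (the URL and playbook name are placeholders, not anything from this thread); it's wrapped in a function so nothing runs on paste:

```shell
# ansible-pull clones the repo and runs the playbook against this machine.
# Repo URL and playbook name are assumptions for illustration.
self_configure() {
  ansible-pull -U https://git.example.com/me/config.git local.yml
}
```

For this to work, the playbook itself should target the local machine, e.g. `hosts: localhost` with `connection: local`.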
Sounds like you need nixos
That workflow seems fine if it works for you. It seems like overkill for Debian, but if it works I don’t see anything wrong with it.
One way I do it is `dpkg -l > package.txt` to get a list of all installed packages to feed into apt on the new machine. Then I set up two stow directories, one for global configs and one for dotfiles in my home directory; when a change is made, I commit and push to a personal git server.
Then when you want to set up a new system, it’s: do a minimal install, run `apt install git stow`,
clone your repos, grab the package.txt, feed it to apt, then run stow on each stow directory, and you are back up and running after a reboot.
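The package-list half of that can be sketched like this. Note this is a variant, not the commenter's exact commands: raw `dpkg -l` output needs filtering before apt will accept it, so the sketch uses `apt-mark showmanual` and `xargs` instead:

```shell
# On the old machine: record manually installed packages. apt-mark showmanual
# is cleaner than dpkg -l because it skips auto-installed dependencies.
save_packages() {
  apt-mark showmanual > package.txt
}

# On the new machine: reinstall everything in the list, one name per line.
restore_packages() {
  xargs -a package.txt sudo apt-get install -y
}
```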
Use configuration tooling such as Ansible.
You could also build an image builder for your system, using things like Docker and/or Ansible to repeatably get to the same result.
Is there a free/gratis version?
Ansible? It’s free software
Great! I had no idea!
Just put your system configuration in an Ansible playbook. When your distro has a new release, go through your changes and remove the ones that are no longer relevant.
For home, I recommend a dotfiles repository with subdirectories for each tool, like bash, git, vim, etc. Use GNU `stow` to symlink the required files into place on each machine.
I use Ansible + a Debian preseed for unattended installs.
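A sketch of that dotfiles layout and the stow invocation (the directory and package names are examples, not a prescribed layout):

```shell
# Assumed repo layout, one subdirectory ("stow package") per tool:
#   ~/dotfiles/bash/.bashrc
#   ~/dotfiles/git/.gitconfig
#   ~/dotfiles/vim/.vimrc
deploy_dotfiles() {
  cd ~/dotfiles || return 1
  # stow mirrors each package's tree into $HOME as symlinks,
  # so ~/.bashrc ends up pointing at ~/dotfiles/bash/.bashrc.
  stow --target="$HOME" bash git vim
}
```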
I have the exact same workflow except I have two images: one for legacy/MBR and another for EFI/GPT – once I read your post I was glad to see I’m not alone haha!
You might be able to script something with Debootstrap. I tested Bcachefs on a spare device once and couldn’t get through the standard Debian install process, so I ended up using a live image to Debootstrap the drive. You should be able to give a list of packages to install and copy over configs to the partition.
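A rough sketch of that approach, assuming the target partition is already formatted and mounted at /mnt; the suite, mirror, package list, and config path are all placeholders:

```shell
# Bootstrap a minimal Debian system into /mnt from a live environment,
# preinstalling a few extra packages. Suite and mirror are assumptions.
bootstrap_target() {
  debootstrap --include=git,vim,sudo bookworm /mnt http://deb.debian.org/debian
  # Copy prepared configs into the new root before first boot
  # (source path is a placeholder).
  cp -a ./configs/etc/. /mnt/etc/
}
```

You would still need to chroot into /mnt afterwards to set up fstab, the bootloader, and users.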
Ansible and docker would work nicely for this
I did the same, exactly the way you did, but my “zygote” isn’t as advanced.
I should make a raw ISO too, but currently I just use Clonezilla (which shrinks and resizes automatically) and have a small SSD with a nearly vanilla system.
Just because the Fedora ISO didn’t boot.
I believe that Proxmox does this because I have installed/created containers from their available images. I wonder how they create those container images?
There are many ways to make an image.
This is designed for Gentoo but I’ve used it for Ubuntu before: https://github.com/TheChymera/mkstage4/