When I first got started with computers, there really was no choice in how you installed software. You tended to do things like boot off a different cassette tape, pluggable cartridge, or later, floppies. Then there was a little bit of evolution: you would boot the operating system off of one set of media, and insert a new floppy to run the program. There were no hard drives for these computers; those were reserved for the systems which colleges/universities and businesses had, and in some cases they cost tens of thousands of dollars. And even then, they were not that common.

When I started in college, the IBM PC was just starting to hit the market, with some units running $20K or more without a hard drive. I was helping one of my advisors run programs by lugging three full boxes of punch cards and a partial box (just shy of 7000 cards total) down to the place where we would load programs onto the mainframe. It was not until 1983, when the IBM PC/XT came onto the market with a price tag of over $7500, that a "personal computer" had a hard drive. In the meantime, machines like the DEC PDP and DEC VAX were also coming into common use, which had actual terminals and removable disk cartridges like the RL02, with a whopping 10MB of storage to go along with the main system storage, which in later years would reach 450MB per disk drive. Meanwhile, hard drives continued to improve.

My first chance to really begin to form a philosophy on software installs came later, while I was still in college, and it was mostly forced upon folks like myself. The department had a VAX 11/750, and later also bought a couple of Sun 3/60s with a Sun 3/280 server. And the philosophy was simple... load the OS, then download what software you wanted from magnetic tape onto the limited hard drive space you had. So it was very piecemeal, took place over weeks or months, and was entirely manual. If Professor X wanted to use a program called SPICE (an electrical engineering circuit simulation package), he handed you, the administrator, a tape, and if Professor Y wanted to use NEC (an electromagnetics simulation package), he handed you a different tape. And this was the state of things in the mid to late 1980s, as PCs and minicomputers continued to drop in price. But even as they did, things really did not change much, that is, until the 1990s, especially for the PCs.

During the 1990s, two things combined to change the philosophy. The first was the widespread adoption of Ethernet. First developed in the 1970s, it originally involved thick coaxial cable which was very difficult to work with because of its size (0.375in/9.5mm), and which required special adaptors and techniques to add each computer, all of which added to the cost. We called this "Thicknet", with some of us using the term somewhat derogatorily even if we loved the speed. Then we got a version we called "Thinnet", which replaced the thick cable with thinner RG-58, and which replaced the taps and adaptors with a simple BNC T-connector. And that really broke open the race between Ethernet and competing technologies, allowing Ethernet to take off.

The second was the introduction of drives in the roughly 1-gigabyte range with smaller form factors, using 5.25" disk platters internally instead of the 14" platters which had been common in higher-capacity drives. Now a PC really stood a chance of becoming a server, as was the case when I started at CompuServe, where I ended up being responsible for the operating system, and thus the ultimate owner of the install process. We still booted from floppies, but I managed to get the floppies to automatically configure the network address using the newly released DHCP protocol, so that the machine could access a server holding the volumes of software which we might want to install. This was a radical improvement over loading the early versions of Windows on a typical desktop machine, which, IIRC, involved a slow process of installing from over a dozen floppies. But the philosophy was still to load the base OS, then use special programs to install the other software which might differentiate the machine between a chatroom server, mail server, or some other form of server.

A few years later, my install philosophy was to change radically, when I started as a UN*X expert at Lucent's Bell Labs Messaging group. There, we really wanted to streamline the factory's install (Initial System Load, or ISL) process. And so, while we might still boot using floppies and install from a tape drive, or later a CDROM, the idea was to load everything in one step if possible. And this affected my own personal installs for years. Rather than loading the absolute bare minimum of the OS during the initial install, then installing other software by hand selectively, I transitioned to coming up with customized installs depending on what the machine was doing, to get most, if not all, of the software I wanted installed in one massive first step. And with my switch to RHEL based systems, with Anaconda and Kickstart, this became even more the case. Indeed, with my old Cobbler server, I had 19 different profiles, 11 of which were just for the different types of machines running CentOS 7 which I would install for personal and work use. And each one had a different set of packages which would be installed, depending on whether the machine was a fileserver, a web server, etc. This was the predominant way I would do my installs for much of the past two-plus decades, since my philosophy boiled down to "Do as much as you can automatically in a single step". But things are changing.
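To give a flavor of what those profiles looked like, each Kickstart boiled down to a shared skeleton plus a role-specific package list. Here is a minimal sketch (the mirror URL and package picks are hypothetical, not taken from my actual profiles):

# Hypothetical CentOS 7 Kickstart fragment for a "fileserver" profile;
# a web server profile would differ mainly in the %packages list.
url --url=http://mirror.example.com/centos/7/os/x86_64
lang en_US.UTF-8
keyboard us
timezone US/Eastern
autopart --type=lvm
reboot

%packages
@core
chrony
nfs-utils    # the fileserver-specific picks start here
samba
%end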

Why are they changing? Well, while I had my own ways of doing additional software installs, they were mostly built around scripts written in shell, Tcl/Tk/Expect, or Python. Indeed, I had been using all three of these for years; my first use of Tcl/Tk/Expect was back at CompuServe, to come up with a way to change the administrator passwords on 1000+ servers, mostly automatically, every month in just a couple of hours. And my disk partitioning for Kickstarts (modeled after what I first came up with for partitioning disks for the messaging servers under UnixWare) has been written in Python for nearly 20 years. But there was very little re-use possible... so it did not sit well with the corollary philosophy of "Don't Repeat Yourself", aka "DRY", which is a key driver behind automation itself. And when I started a new job in 2019, I had to build development servers, and I discovered a package called Ansible.
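A quick aside for those who have not seen Python driving disk partitioning in a Kickstart: the usual trick is a %pre script which writes the partitioning commands out to a temporary file, which the main Kickstart then pulls in with %include. A bare-bones sketch of that pattern follows; the device name, sizes, and sizing logic here are made up for illustration, and this is not my actual script:

# Sketch of dynamic partitioning in Kickstart: a %pre script (here in
# Python) generates the partitioning stanza, which %include pulls in.
%include /tmp/partitioning.ks

%pre --interpreter=/usr/bin/python3
# Hypothetical logic: size swap from installed RAM (capped at 8GB),
# then give the rest of the (assumed) first disk to an LVM volume group.
with open("/proc/meminfo") as fp:
    mem_kb = int(fp.readline().split()[1])
swap_mb = min(mem_kb // 1024, 8192)
with open("/tmp/partitioning.ks", "w") as out:
    out.write("clearpart --all --initlabel --drives=sda\n")
    out.write("part /boot --fstype=xfs --size=1024 --ondisk=sda\n")
    out.write("part swap --size=%d --ondisk=sda\n" % swap_mb)
    out.write("part pv.01 --size=1 --grow --ondisk=sda\n")
    out.write("volgroup vg00 pv.01\n")
    out.write("logvol / --fstype=xfs --vgname=vg00 --size=1 --grow --name=root\n")
%end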

While my use of Ansible did not initially impact how I did my OS installs, it has been doing so slowly. Part of what held this back was the fact that I was doing very little in the way of new installs or OS upgrades. No new machines were needed for work, and between the changes to CentOS and not having the time to upgrade Cobbler to where it would support a different RHEL 9 based distro (a chicken-and-egg situation), I simply could not work through things for that philosophy to evolve. But in recent months, that changed, and radically.

Firstly, while I was in favor of doing everything I could automatically, I have never been a fan of the "install everything" ideology. It wastes disk, and in the case of bastion hosts, such as web servers which reside in a "DMZ" network zone, it means that if you are compromised, you are just giving a black-hat all the tools they need to attack the core of your network. In short, it is a security risk, and one reason why I am strongly in favor of dedicated firewalls, where almost nothing is installed.

Secondly, I really REALLY wanted to retire that old server where Cobbler has been running for ages, and upgrade to the latest version of Cobbler at the same time. But there were other things which needed to be done first, such as using my NAS server for my home directories as I always intended. I got that solved, picked out a new RHEL based distro (Rocky 9), and spent some time writing shell scripts to fire off VM installs of Rocky 9 based systems, to learn some of the changes to the Kickstart infrastructure. Then I had to address some issues with Cobbler, which resulted in some patches fed back into the project for installing it on RHEL 9 based systems (I am also a huge fan of packages "owning" files, as opposed to the old-fashioned approach of running a build script which copies files into place... it is far easier to clean up after things). And then there was extracting the configuration from the old Cobbler server and setting the new one back up. Now I am finally back to using Cobbler and Koan, along with Kickstart, to their fullest. I will do a follow-up post later, but to do my latest base OS install, I just ran the following command:

koan --server=cobbler.ka8zrt.com --virt --system=builds
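For context, the --system=builds argument names a system record on the Cobbler server; koan fetches that record and uses it to create and kick off the virtual machine install. A record along those lines would have been set up beforehand with something like the following (a sketch using the Cobbler 3.x CLI; the profile name, autoinstall file, MAC, and IP are made-up placeholders):

# Hypothetical Cobbler 3.x setup for the "builds" system record.
cobbler profile add --name=rocky9-builds --distro=Rocky-9-x86_64 \
    --autoinstall=builds.ks
cobbler system add --name=builds --profile=rocky9-builds \
    --interface=default --mac=52:54:00:12:34:56 --ip-address=192.168.1.50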

There are still a few tweaks I need to make (such as fixing the import of the set of recognized SSH server keys, since this is a re-install), but it is a very minimal install. Maybe not the most minimal install possible... I am sure I could find a few additional packages to remove during the install, just like I did for all the firmware for the unneeded wireless cards I will never use... but darn near. And I even have an authorized SSH key pre-installed (see the sketch after the commands below), so that I do not need to enter a password to run Ansible playbooks against the server being rebuilt. But when I am done, with one minor configuration change on the server running Cobbler and another for Ansible, things will boil down to that command plus one other to get multiple servers running Jenkins and Docker to perform my software builds, be it for personal use or for work. And that second command, which uses script(1) to capture the playbook's output into builds.errs:

script builds.errs -c 'ansible-playbook builds.yml'
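As for that pre-installed SSH key, here is the sketch I promised: the standard Kickstart mechanism is a %post snippet along these lines (the key is, of course, a truncated placeholder):

# Sketch of a Kickstart %post snippet which pre-installs an authorized
# SSH key for Ansible; the key below is a placeholder, not a real one.
%post
mkdir -p -m 0700 /root/.ssh
echo 'ssh-ed25519 AAAAC3Nz...placeholder ansible@cobbler' >> /root/.ssh/authorized_keys
chmod 0600 /root/.ssh/authorized_keys
restorecon -R /root/.ssh    # keep the SELinux contexts straight
%end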

With those two, I can do a total reinstall from scratch, or produce a 2nd, 3rd, and so-on build server. Or, with similar playbooks reusing the components shared between the build server playbook and those for my Cobbler and other servers, I can turn out web servers or any other type of server, just by putting the blocks together like a bunch of Legos. And indeed, some of those blocks are ones I did not even have to write, such as the ones to install Jenkins or Docker. It is the same way I installed 5 different versions of PHP on the same server which I am using to update a project for which I volunteer... all I had to do on that one was the base OS install and a run of the playbook, and the website for the project was up and running.
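To make the Lego metaphor concrete, a playbook like builds.yml can be little more than a list of roles. This is a minimal sketch, not my actual playbook; the "common" role is a hypothetical stand-in for a shared baseline, while roles along the lines of geerlingguy.docker and geerlingguy.jenkins from Ansible Galaxy are examples of the sort of blocks you do not have to write yourself:

# builds.yml -- hypothetical sketch of a role-composition playbook.
# Swap out the role list to turn out a web server, a Cobbler server, etc.
- name: Configure a Jenkins/Docker build server
  hosts: builds
  become: true
  roles:
    - common                # assumed shared baseline: users, SSH, updates
    - geerlingguy.docker    # community role from Ansible Galaxy
    - geerlingguy.jenkins   # ditto... blocks I did not have to write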