We should be able to expand this far enough that it could become a standard checklist we work through for all releases.
Things we should test for
This is a list of things that I ( KaranbirSingh ) feel we should be testing for. Perhaps we can take this doc, merge it with the tests already in the 5.x and 4.x QA series, and come up with a master template. It could also serve, as a byproduct, as a qa-testing endpoint. So once we have ticks against each of these, we'd know that the product is at a 'ship' stage.
At the end of this document I will attempt to create a test-matrix and define how we might create some software ( if nothing public is found ) to manage and track these.
Distro Stuff
Tree
Well, this is the basics really. All these tasks can be done at the buildsystem level, and should be done there. Packages that don't meet these specs should not be moved into the QA repos at all. However, if required, notes on which packages failed these sanity tests should be posted somewhere. A minimal sketch of one of these checks follows the list below.
- Ensure all packages are in the tree
- Ensure all packages are signed with the right key
- Ensure the package metadata is accurate
- Ensure that package date/time stamps are preserved for packages being moved from an older release
- Ensure multilib sanity ( more on this later )
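As a starting point, the signing check could look something like this minimal sketch; the tree path and key ID here are placeholder assumptions, not the real values:

    # Flag packages that are unsigned or signed with the wrong key.
    # /srv/qa-tree and KEYID stand in for the real tree path and key ID.
    find /srv/qa-tree -name '*.rpm' | while read -r pkg; do
        sig=$(rpm -qp --qf '%{SIGPGP:pgpsig}' "$pkg" 2>/dev/null)
        case "$sig" in
            *KEYID*) ;;                     # signed with the expected key
            *) echo "BAD SIG: $pkg" ;;      # unsigned, or wrong key
        esac
    done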
Branding
During the entire testing process, it should be an aim to list places that are potential upstream branding points. Even if they are not to be changed, a mention could be made somewhere on the QA Wiki ( private please ? ) about the places where these exist. Special care should be taken with packages not seen before this release.
Also, if possible we should identify places where the branding is revealed as a part of the installed OS, e.g. the Apache vendor string; Firefox and Thunderbird have something similar. We want to make every reasonable effort to identify and track these. One way to trawl for these is sketched below.
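A rough way to trawl a single package for upstream branding strings; the search string is illustrative and certainly not a complete list:

    # Unpack one rpm into a scratch dir and grep for likely branding strings.
    pkg="$1"                              # path to an .rpm, passed as an argument
    tmp=$(mktemp -d)
    rpm2cpio "$pkg" | ( cd "$tmp" && cpio -idm --quiet )
    grep -rl 'Red Hat' "$tmp" | sed "s|^$tmp|$pkg:|"
    rm -rf "$tmp"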
The clustering ( conga ) packages need special attention for branding removal checks and testing. Fabian has offered to help further define these tests and write them up for the QA process.
Installer
We need to make sure that these things are tested for each of the installer interfaces ( via kickstart, via GUI and via TUI ). We could even automate some of these things via dogtail, at some stage. Each test for the installer should be run, both in text mode and in graphical mode, for each of the potential install-media options, including :
- http
- local CDROM
- local DVD
- ftp
- nfs
- disk images from a local hard disk
And then repeat each of those tests for both real iron and the virtualised environments we care about. At the moment, real iron + vmware + xen + kvm + VirtualBox seem to be the 'cared about' environments, but add to taste. Before removing anything, I'd like to request that people initiate a specific, focused conversation on the qa-list first, to make sure we're not stepping on anyone's toes or removing a test case that others might be relying on. The combinations multiply out as sketched below.
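For illustration, a trivial enumeration of the matrix this implies ( environment and media names copied from above ):

    for env in baremetal vmware xen kvm virtualbox; do
        for mode in text graphical; do
            for media in http cdrom dvd ftp nfs hdd-image; do
                echo "install test: $env / $mode / $media"
            done
        done
    done

That is already 60 combinations, which is a good argument for automating as much of this as we can.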
- Being able to add an external repo to the installer and run a successful install, including packages from the external repo
- Ability to install via anaconda from CD1 only and have functional ( a post-install smoke-test sketch follows this block ):
- yum
- sshd
- lvm
- software raid (mdraid)
- hardware raid (e.g. promise)
- iptables
- dhcp
- selinux
- Install a system from CD(s) or DVD without network, then configure/install a driver ( such as http://atl1.sourceforge.net/ ) and configure updates across LAN/WAN
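One possible smoke test for the CD1-only install above; the exact checks are an assumption about what 'functional' means for each item:

    # Quick post-install checks for the items listed above.
    for svc in sshd iptables; do
        service "$svc" status || echo "FAIL: $svc"
    done
    yum list installed > /dev/null && echo "yum: ok"
    lvs > /dev/null 2>&1           && echo "lvm: ok"
    cat /proc/mdstat               # software raid state
    getenforce                     # selinux mode ( expect Enforcing )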
- Ability to install a Desktop, with a functional:
- Firefox
- Thunderbird
- Evolution
- Pidgin
- Ability to install a GNOME Desktop, with functional:
- gconf-editor
- nautilus cd-burner
- gFTP
- XXX: needs some additional GNOME-specific package lists
- Ability to install a KDE Desktop, with functional:
- XXX: needs some KDE-specific package lists
- Konqueror
- KMail
- Ability to install server roles. Maybe we can have a few standard kickstarts published, each of which could represent a different server role. Even if there were a few dozen, automating the builds using these in a virtualised environment is trivial ( there may also be some value in publishing these ). So for now, I'm going to add two tasks under this ( an example kickstart follows the list ):
- Create some kickstarts for :
- Typical webserver, including php & mysql
- Typical Office server
- Typical router / gateway machine
- Typical email server
- XXX: add more roles as required / discovered
- Use these kickstarts to automate test installs in a virtualised environment
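As a strawman for the webserver role, something along these lines; the url, root password and partitioning lines are placeholders to be adjusted:

    # illustrative kickstart for the 'typical webserver' role
    install
    url --url http://mirror.centos.org/centos/5/os/i386/
    lang en_US.UTF-8
    keyboard us
    network --bootproto dhcp
    rootpw --iscrypted CHANGEME
    firewall --enabled --port=80:tcp,443:tcp
    selinux --enforcing
    timezone UTC
    bootloader --location=mbr
    clearpart --all --initlabel
    autopart
    reboot

    %packages
    @base
    httpd
    php
    php-mysql
    mysql-server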
- Ability to use the installer in 'upgradeany' mode ( see the boot-prompt note below ) to move from some other distros or older versions of CentOS to this version. Perhaps for :
- Debian
- Ubuntu
- Scientific Linux 4
- Scientific Linux 5
- CentOS-3
- CentOS-4
- CentOS-2.1(!)
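For reference, anaconda's upgradeany behaviour is switched on at the installer boot prompt, which makes it offer upgrades of installs it would not normally recognise:

    boot: linux upgradeany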
- Testing installs over an ipv6 network. Given that most people only have and use ipv4 networks, I think we can abstract ipv6 into its own test rather than adding another dimension to the existing network install tests.
- Testing on corner-case hardware ( perhaps only when the other tests are done? We might not want to spend too much time testing things that are not mainstream and hold up the release. Just adding this point so we don't miss it. )
Package Management
Most of the tests under this heading would cover specific changes in yum/rpm and in mirror network behaviour.
- Ensure yum update from CentOS-X.y moves to CentOS-X.y+1, ensuring:
- relevant config files have their state preserved.
- obsoleted and dropped packages are removed ( or handled in a sane manner )
- on Multilib machines, ensure that multilib sanity is maintained (XXX: needs more specific info on what 'sanity' is )
- Ensure yum/rpm updates do not change any behavior on the system itself ( and document anything that does change )
- Ensure yum/rpm updates do not change the functioning of mirror.centos.org and the mirrorlist setups
- Ensure that any mirror.centos.org / mirrorlist changes going in at the same time have near zero impact on the end-user update experience
- Test for behaviour changes in yum itself, and in the main plugins people are likely to be using ( a smoke-test sketch follows this list ), including at least :
- fastestmirror
- priorities
- protectbase
- versionlock
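A rough smoke-test sketch for those plugins; it assumes the plugin packages ( yum-fastestmirror, yum-priorities, yum-protectbase, yum-versionlock on 5.x ) are installed, and 'thirdparty' is a placeholder repo id:

    # Compare repo behaviour with plugins off and on.
    yum --noplugins repolist
    yum repolist                  # fastestmirror should reorder the mirror list
    # priorities / protectbase: enable a repo carrying a newer build of a
    # base package, and confirm the base copy is still the one offered.
    yum --enablerepo=thirdparty check-update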
Existing installs
Once the things above have been tested, we should move on to actually deploying the new QA tree onto our own testing / production / desktop machines, or clones of such machines, to see what issues the new distro raises. If required, we could come up with some basic steps recommending how people might clone their existing setups for these tests; however, there is some merit in also using a diverse set of processes to get better coverage.
Functionality
- Filesystems:
- ext3
- ext4
- xfs (64-bit only)
- fuse
- Virt
- kvm (64-bit only)
- xen
- vmware
- conga
( we need details on what is considered a passing test for each of these; a bare-bones filesystem smoke test is sketched below )
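One possible bare-bones check per filesystem; the device path is a placeholder, this assumes the relevant mkfs tools are installed, and it must only ever be run against a scratch disk:

    # Create, mount, write to, and unmount each filesystem in turn.
    dev=/dev/sdX1                  # placeholder: a disposable partition
    for fs in ext3 ext4 xfs; do
        mkfs -t "$fs" "$dev" &&    # mkfs may need -f to overwrite an old fs
        mount "$dev" /mnt &&
        dd if=/dev/zero of=/mnt/testfile bs=1M count=100 &&
        umount /mnt &&
        echo "$fs: ok"
    done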
Bug triaging / review
Once the basic package trees are available, nominate one or maybe two people who are going to be available for a few days to do QA work ( even outside work hours ), to track and identify bugs reported on bugs.centos.org and ensure that everything we are able to fix has already gone in. We will need a process for this, and that needs a bit more thought; however, something like this can work right now :
- Create an issue on bugs.centos.org that lists, or depends on, all issues that we need to close or fix for 5.6.
- Create an issue on bugs.centos.org for all the issues that we need to *test* for and the outcomes recorded into the Release Notes
So we end up with two bugs that all the people doing qa need to track and can comment on. It would also be good if one person took ownership of this task, with perhaps another person along to help. If it's left up to a generic 'someone will take it', it's almost certain that no one will take the task.
Pre release Testing
Once the trees are pushed to the production mirror.centos.org, run through some basic sanity tests. Exactly what those tests are needs to be defined, but they should include:
- http install
- dvd install
- cd install
- install from isos on a local hard disk and over nfs
- a few yum operations that should behave the same as what was seen in the QA tree ( a comparison sketch closes this document )
- LiveCD
- Boot to live environment ( and do some tests there )
- Make sure the net-installer images are correct and can be used to run an install
- Test the persistence features ( at least modify / remove / full erase )
- Make sure the release notes are done and linked from the proper places
- check that www.centos.org/docs/ has the relevant content for release notes, tech notes, CentOS specific notes
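A rough sketch of the yum comparison mentioned above; 'qa' and 'updates' are placeholder repo ids standing in for the QA tree and the production tree:

    # Record what yum would do against each tree, and compare.
    yum --disablerepo='*' --enablerepo=qa      list updates > qa.txt
    yum --disablerepo='*' --enablerepo=updates list updates > prod.txt
    diff qa.txt prod.txt           # ideally empty once production matches QA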