Just like the "reposchutz" and "dependabot" workflows, run "release" on
Ubuntu 20.04. Podman on 22.04 creates truncated tarballs when streaming
them through stdout.
This is a new script for building Cockpit releases. It essentially does
what we do now, except:
- we use the container version specified in our .cockpit-ci/container
file
- we download everything with a single invocation of `git clone`, which
downloads all the required submodules, but only does a shallow fetch
- we run the build process entirely offline, inside of its own
container
The new script is meant to be more accessible to anyone who may be
interested in running it, including outside of GitHub, and is written in
a way that attempts to convince its reader that nothing tricky is going
on. It should always produce a byte-for-byte identical release to the
official one. It acts as a convenient way to verify the reproducibility
and validity of a Cockpit release, and also as an alternative to
downloading the tarball from GitHub.
We previously ran tasks-container-update and cockpit-lib-update at the
same time, which sometimes caused collisions. Move it to the night
between Sunday and Monday instead, so that the updates are ready for us
to inspect when we start our week.
Instead, use test/common/make-bots directly. The Makefile only exists
after running autogen.sh, and we really don't need to do that just to
get the bots.
Commit bc7e2bf38f was wrong: this workflow needs no permissions only if
all URLs are valid. If some are not, it wants to update or create an
issue to notify us about that.
This gives our tools like `ruff` a single source of truth (all our
other projects already run their unit tests and linting in the tasks
container). It also removes a lot of moving parts that are only
relevant for CI. In practice, we developers run the unit tests in
toolbox or on our own dev machines anyway.
Move building the guide in the release workflow to the tasks container
as well.
It hasn't helped us in years: modern GCC has good static analysis (plus
of course CodeQL and Coverity), none of our supported downstream distros
care, and we are not going to add significant amounts of C code any
more.
We won't add a lot of new C code any more, valgrinding Python code isn't
very useful (or architecture specific), and more and more distributions
drop i386 support. Also, we still run the unit tests during RPM package
build through packit/COPR, which cover even more architectures.
This paves the way for dropping the unit test container altogether in
favor of running the tests in the cockpit/tasks container, once we agree
on how to build a proper staging setup.
Drop tools/valgrind.supp which was only relevant for i386.
This was added by commit 09fe9eeef9, which needed it for
`xml.etree.ElementTree.indent`. The `ubuntu-latest` GitHub Actions
image has gained Python 3.10 since then, so this is now superfluous.
Drop it.
GitHub updates their actions once in a while and then warns in runs
about a deprecated Node environment. That is not easily spotted by a
developer until it's too late, so let dependabot handle the updates,
just as it already does for npm.
https://github.com/cockpit-project/bots/pull/5569 changed the allowlist
from a GitHub team to a hardcoded Python list, so we don't need the
cockpituous token with its `read:org` privilege any more.
At some point this made sense as a very rudimentary smoke test that we
weren't using language features that weren't compatible with Python 3.6,
but we've long been running our entire unit test suite under Python 3.6
in our tox workflow.
What's more: this workflow often fails due to inability to fetch patches
from the CentOS mirrors.
Let's drop this one.
Expecting the branch's HEAD SHA does not work for proposed branches
which are behind the target branch (i.e. usually `main`). For those,
the packit source RPM build does a merge first, which produces an
unpredictable SHA, which ends up as the COPR package's version.
Switch to a time-based approach: parse the timestamp from the package
version, and wait until it is newer than the most recent push to the
target branch.
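The time-based comparison could be sketched like this; the 14-digit
`YYYYMMDDhhmmss` stamp format and the example values are assumptions
about how packit encodes the build time, not the real encoding:

```shell
#!/bin/sh
set -eu
# Convert a hypothetical "YYYYMMDDhhmmss" stamp to seconds since the epoch.
stamp_to_epoch() {
    date -u -d "$(echo "$1" | sed 's/\(....\)\(..\)\(..\)\(..\)\(..\)\(..\)/\1-\2-\3 \4:\5:\6/')" +%s
}

push_time=$(stamp_to_epoch 20240517120000)  # most recent push to the target branch
pkg_time=$(stamp_to_epoch 20240517123456)   # parsed from the COPR package version
if [ "$pkg_time" -ge "$push_time" ]; then
    echo "COPR build is newer than the last push"
fi
```

In the actual workflow this comparison would sit inside a polling loop
that sleeps and re-queries COPR until the condition holds.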
https://issues.redhat.com/browse/COCKPIT-1071
We want to make sure not to break Anaconda with changes which affect it,
i.e. the bridge or the Storage page. As Anaconda's tests are "special"
(require booting boot.iso, can't run in tmt/Testing Farm), we need to
run them in Cockpit's CI.
Add a workflow which runs if the PR affects Anaconda (changes to the
bridge or Storage page), polls the packit COPR until it has the current
PR version available, and then test-triggers a "cockpit PR" scenario.
https://issues.redhat.com/browse/COCKPIT-1064
So far the daily updates tend to run in our mornings between 7:00 and
8:00, which blocks our CI for a long time, and thus collides with
developers sending PRs. Move them to the evening instead, when they can
use the quiet bots time.
Also reduce the number of parallel PRs from 5 to 3. Parallel ones always
need to be rebased, and thus are very expensive. We still want to be
able to have a complicated PatternFly PR open for several days without
blocking other updates.
Exercise the beiboot → SSH code path with actual interaction (user and
password).
Only run the test when `$COCKPIT_TEST_SSH_{HOST,PASS}` are given. That
way, developers can run the test locally against our usual test VM:

    COCKPIT_TEST_SSH_HOST=admin@127.0.0.2:2201 COCKPIT_TEST_SSH_PASS=foobar
Create a user in the GitHub VM.
Replace --enable-pybridge with --enable-old-bridge, and flip the logic.
That way, it will slowly disappear as old distro releases become
unsupported. This also means that builders from upstream now get the
Python bridge by default.
The distcheck scenarios now apply to the Python bridge. Add a
`$DEB_PYTHON_INSTALL_LAYOUT` hack to work around
https://bugs.debian.org/1035546 to unbreak the installation of the
generated wrapper binaries, as by default they'd go into
prefix/usr/local/bin on Debian.
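A minimal sketch of the workaround, assuming the standard Debian value
`deb` for the layout:

```shell
# Work around https://bugs.debian.org/1035546 during distcheck on Debian:
# by default setuptools there uses the "local" install layout, so the
# generated wrapper binaries would land in <prefix>/usr/local/bin.
# "deb" is Debian's name for the distro-wide layout.
export DEB_PYTHON_INSTALL_LAYOUT=deb
```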
Add a new distcheck scenario for the C bridge, to ensure that we don't
break that.
https://issues.redhat.com/browse/COCKPIT-1037
In recent Debian testing, valgrind now shows a gazillion
Memcheck:{Cond,Value4} errors all over the place: in GMP, GnuTLS, glib
hashtables, and even strcmp(). Give up at last, and run valgrind on
64-bit only. This also becomes less important as we move more and more
code away from C.
Add a comment about the explicit "check" run. It was introduced in
commit 1ef0001abf to run static code checks, which don't work against a
tarball in "distcheck".
Change the order in unit-tests-refresh.yml to match the order in
unit-tests.yml to reduce confusion.
Port containers/flatpak/prepare to Python, with the following changes:
- cockpit-beiboot is now unconditionally enabled
- instead of keeping a static list of .tar.xz files for extra packages
in-tree, we add a --packages= option which presents two extra
options:
- create the file by scanning upstreams for the latest release
- download the extra packages from downstream (read: flathub)
- our sed-templated .yml.in quasi-format is abandoned in favour of just
writing the manifest data directly into the prepare script. This is
easier than figuring out a better approach to templating, and allows
us to remove yaml from the process entirely. All produced files are
now JSON, which flatpak-builder is also happy to consume.
Modify the release process to scan upstream for new packages and update
the downstream list accordingly.
For other users (humans, CI): the first time containers/flatpak/prepare
is run, --packages=downstream is the default. It will write a copy of
the downloaded packages file to the current directory, and after that,
this local copy will be used.
The idea here is two-fold:
- downloading a single file from downstream is a lot faster and easier
than scanning upstream for new releases all the time
- this provides something like a "stable downstream image" to test
upstream cockpit changes against, which will prevent a new release of
one of our modules from causing cockpit's CI to go red.