riscos.info has a Continuous Integration server. Continuous Integration (usually abbreviated to CI; the related CD stands for Continuous Delivery) involves one or more servers that repeatedly build software projects, either on every version-control commit or at a regular interval. Motivations for CI vary, but include checking that software remains buildable in the face of developer or upstream changes, running test suites, and building chains of software according to their dependencies. Wikipedia has more, and there are plenty of other resources in the software engineering community.
We use Jenkins, a web-based CI system. The web interface can be found at ci.riscos.info. Jenkins is read-only for visitors, while those with an account can start and edit builds.
The first aim of building under CI is to run builds in a controlled environment, free of the vagaries of a developer's machine and whatever packages happen to be installed there: each Jenkins build starts with a blank slate, pulling in the compiler, libraries and so on from other Jenkins jobs, which are 'upstream' of it. A successful build emits RISC OS packages and libraries which can be consumed by 'downstream' builds. The output files of a build (usually zips or tarballs in our case) are called 'artifacts' in Jenkins terminology.
The second aim of building under CI is to ensure timely builds: if upstream changes, we should rebuild. Due to the way the complex package dependency graph interacts with Jenkins, we aren't quite there yet; building is currently a matter of logged-in users clicking a button on the package(s) in question.
The riscos.info service comprises the Jenkins web GUI - ci.riscos.info - and a number of build 'slaves', where the actual compilation takes place. These all run Ubuntu 14.04 LTS and communicate with the GUI over SSH, using Jenkins operations set up in the GUI. The idea is that the slaves are stateless vanilla Ubuntu installs, with all special setup represented in the Jenkins configuration, which in turn derives most of the package-building information from the Subversion repository.
Jenkins has a number of top-level jobs for notable components such as GCC 4.1 and GCC 4.7, plus pre-setup jobs for autobuilder projects. Each job follows roughly the same pattern: check out sources from Subversion or another repository, possibly import build outputs from other jobs, build using a shell script, save the output files as artifacts, and report success or failure to Jenkins. Each job's red/yellow/green state shows the result of its last build: green means it succeeded, yellow means it built but failed its tests, and red means the build failed.
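The common job pattern can be sketched as a shell build step. This is an illustrative sketch only, not the configuration of any real job on the server - the file names are made up, and the checkout and compile steps are simulated - but it shows the key convention: Jenkins treats a non-zero exit status from the shell step as a failed (red) build, and files left in the workspace can be archived as artifacts.

```shell
#!/bin/sh
# Sketch of a typical Jenkins shell build step. With `set -e` the script
# stops at the first failing command, so Jenkins marks the build red.
set -e

# 1. Sources would normally come from Subversion (e.g. `svn checkout ...`);
#    here we just simulate a checked-out source tree.
mkdir -p src
echo 'example source file' > src/hello.c

# 2. "Build": in the real jobs this is a shell script driving the compiler;
#    simulated here by transforming the source into an output file.
sed 's/source/output/' src/hello.c > src/hello

# 3. Package the output; Jenkins would archive this tarball as an artifact
#    for downstream jobs to import.
tar -czf hello.tar.gz -C src hello
```

If any step fails - the checkout, the compile, or the packaging - the script exits non-zero and Jenkins records the failure.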
You can drill down into a job: for instance, clicking on gcc-4.7-native (the GCC compiler that runs natively on RISC OS) gives you a list of builds on the left-hand side, some successful, some not. If you click on a build number (#nnn) and go to Console Output, you can see what happened during the build; if it was red, something went wrong. The build's page also shows the versions of the sources used to build the project, and any commit messages since the last time Jenkins built it. The build number is thus a record of the state of the project at a given point in time, and the build log records what happened; both pages have stable URLs, so they can be linked to when discussing a particular build. A successful build also lets you download its build products as artifacts, and Jenkins keeps the files from the most recent build in the 'Workspace', so you can access intermediate files.
Autobuilder packages and their dependencies in Jenkins
Jenkins is also used to build GCCSDK Autobuilder packages. This is a more advanced use of Jenkins, and it still has some rough edges.
Before we can build a package, we must import its prerequisites. These are built by the Jenkins jobs gcc-4.7-cross (the GCC cross-compiler) and riscpkg-tools (tools for building packages). To save doing the import afresh for each package build, a job called autobuilder_setup pulls in the prerequisites and outputs a tarball, gcc-4.7-ab-ready.tar.bz2, containing a compile tree ready to start building a package. This is exported as an artifact.
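The idea behind autobuilder_setup can be sketched like this. The directory names and file contents below are purely illustrative (the real prerequisite trees come from the gcc-4.7-cross and riscpkg-tools artifacts); only the tarball name matches the real job.

```shell
#!/bin/sh
# Sketch: assemble the prerequisites into one tree, export it as a single
# tarball artifact, then show a downstream package job unpacking it into
# a fresh workspace. Paths and contents are illustrative only.
set -e

# Simulated prerequisite trees (really imported from upstream jobs).
mkdir -p tree/cross/bin tree/riscpkg
echo 'cross-compiler'  > tree/cross/bin/arm-unknown-riscos-gcc
echo 'packaging tool'  > tree/riscpkg/riscpkg-tool

# autobuilder_setup exports the ready-to-build tree as one artifact.
tar -cjf gcc-4.7-ab-ready.tar.bz2 -C tree .

# A downstream package job then starts from a clean workspace and
# unpacks the tarball before building its package.
mkdir -p pkgbuild
tar -xjf gcc-4.7-ab-ready.tar.bz2 -C pkgbuild
```

Doing the assembly once and shipping a single tarball means each of the many package jobs performs one import and one unpack, rather than repeating the prerequisite imports.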
Next, we must convert the GCCSDK autobuilder's model of dependencies into one Jenkins understands. In the autobuilder, each project has a simple text file listing the projects it depends on. In Jenkins this is represented in the GUI, with rules saying that a build imports artifacts from an upstream build, and can trigger another job after it finishes. To convert from one to the other we use a job, autobuilder-generator, which runs a script in Jenkins' Groovy scripting language to read the package descriptions in the GCCSDK autobuilder SVN repository and generate Jenkins jobs with the appropriate dependencies. The generated jobs are placed in the packages/ subdirectory.
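The conversion the generator performs is conceptually simple, and can be sketched in shell (the real generator is a Groovy script, and the `depends` file name and package names here are made up for illustration): walk the package tree, read each package's dependency list, and emit the upstream relationships a Jenkins job would be configured with.

```shell
#!/bin/sh
# Sketch of the autobuilder-generator idea: one plain-text dependency
# file per package becomes a set of Jenkins "import artifacts from
# upstream job" relationships. File and package names are illustrative.
set -e

# Simulated autobuilder package tree.
mkdir -p packages/libfoo packages/bar
: > packages/libfoo/depends              # libfoo has no dependencies
echo 'libfoo' > packages/bar/depends     # bar depends on libfoo

# Read each package's dependency file and print the relationships a
# generated Jenkins job would encode.
for pkg in packages/*/; do
  name=$(basename "$pkg")
  while read -r dep; do
    if [ -n "$dep" ]; then
      echo "job $name: import artifacts from upstream job $dep"
    fi
  done < "${pkg}depends"
done
```

The real generator additionally writes out full Jenkins job configurations, but the dependency extraction is the heart of it.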
Each package job is therefore set up to import zero or more artifacts from other packages, build, and report the results. At present we can't enforce build ordering (if building A requires B and C first, build them and proceed only if both succeed), so we have to order the builds manually, or repeatedly build all the packages until all the prerequisites are met.
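Computing a workable manual ordering is a topological sort, which the standard `tsort` utility can do from a list of "prerequisite dependent" pairs. The package names below are made up; this is just a sketch of how the manual ordering could be derived rather than anything the server currently runs.

```shell
#!/bin/sh
# Topologically sort a made-up dependency graph with tsort: each input
# line is "prerequisite dependent", and tsort prints every node after
# all of its prerequisites. Here appA needs libB and libC, and libB
# needs libC, so the only valid order is libC, libB, appA.
set -e

tsort <<'EOF' > build-order.txt
libB appA
libC appA
libC libB
EOF

cat build-order.txt
```

Building the packages in the printed order guarantees every prerequisite is built before anything that imports its artifacts; `tsort` also reports an error if the dependency graph contains a cycle.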
One problem with GCC is that it hard-codes into its own binaries the pathname it expects to run from. This doesn't work on the autobuilder, because each package build imports the tarball containing GCC and unpacks it into a separate directory named after the package being built, so GCC can't find its includes and other necessary files. To work around this, we use proot to pretend to the build that it is always running at /home/riscos, irrespective of the actual location on the Jenkins build slave. This works, but carries a substantial performance penalty.
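The workaround uses proot's `-b` (bind) option, which makes one host path visible at another location inside the proot. The sketch below is hypothetical - the directory name is made up and the command run inside the proot is just a demonstration - and it is guarded so it degrades gracefully where proot is not installed.

```shell
#!/bin/sh
# Hypothetical sketch of the proot workaround: bind the per-package
# build directory over /home/riscos, so binaries with /home/riscos/...
# paths baked in find their files wherever the build really lives.
set -e

if command -v proot >/dev/null 2>&1; then
  BUILDDIR=$PWD/ab-somepackage     # illustrative per-package directory name
  mkdir -p "$BUILDDIR"
  echo 'pretend GCC tree' > "$BUILDDIR/marker"
  # Inside the proot, $BUILDDIR appears as /home/riscos.
  proot -b "$BUILDDIR:/home/riscos" /bin/sh -c 'cat /home/riscos/marker'
else
  echo 'proot not installed; skipping the demonstration'
fi
touch proot-demo.done   # marker so the sketch's completion can be checked
```

proot achieves this by intercepting the build's system calls and rewriting paths, which is where the performance penalty discussed below comes from.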
Building existing packages and adding new ones
To build existing packages, you need a Jenkins web account - contact Theo Markettos for details. This gives you the ability to click Build on the package jobs, and to re-run the autobuilder generator when the dependencies in GCCSDK change. The aim is that there is no special magic in the Jenkins job configuration, which is thrown away every time the generator script runs: if you need to change part of the build process, change it in the script in GCCSDK SVN.
To add a new package, first get it building on your own machine using the GCCSDK autobuilder; see the autobuilder documentation for how to do that. To make life easier, use an Ubuntu 14.04 virtual machine, which irons out any distribution-specific requirements; a Raspberry Pi running Ubuntu 14.04, or possibly Raspbian, is a slower alternative. When you are happy that it builds reliably, submit patches to the GCCSDK mailing list and we will commit them to GCCSDK SVN. You can then check that it builds via the Jenkins web interface - re-run the generator to make your new job appear.
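A local test build might look something like the following. Everything here is an assumption for illustration - the checkout location, the package name, and the exact way the autobuilder is invoked should all be taken from the autobuilder documentation rather than from this sketch - and it is guarded so it does nothing harmful where no checkout exists.

```shell
#!/bin/sh
# Hypothetical local test-build of a new package with the GCCSDK
# autobuilder. AUTOBUILDER (the checkout location) and PACKAGE (the
# package name) are assumptions - adjust both to your setup.
AUTOBUILDER=${AUTOBUILDER:-$HOME/gccsdk/autobuilder}
PACKAGE=mynewpackage

if [ -x "$AUTOBUILDER/build" ]; then
  # Build in a scratch directory, keeping the autobuilder checkout clean.
  mkdir -p ab-build
  (cd ab-build && "$AUTOBUILDER/build" "$PACKAGE")
else
  echo "No autobuilder checkout at $AUTOBUILDER - see the autobuilder documentation"
fi
touch ab-demo.done   # marker so this sketch's completion can be checked
```

Iterating locally like this is much faster than debugging through Jenkins, and matches the environment the build slaves use if you run it on Ubuntu 14.04.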
Currently, the build dependency system is hampered by Jenkins' requirements, which means it is not possible to build a full tree of jobs automatically. It would be good to change this.
Also, we don't yet generate a RiscPkg index of all the built packages. Doing so would be simple, but care is required to capture changes in package versions appropriately: a package may keep the same upstream version number yet have relevant local changes in the build environment that mean its package version should be bumped. On the other hand, we shouldn't bump the version on every build, because that would force users to re-download every package every night, even if the only thing that changed was the build date. Alan Buckley has done some work towards fixing this, but it has not yet been integrated.
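One possible policy, sketched below purely for illustration (it is not what the server does, and not necessarily Alan's approach): checksum the package's file contents while ignoring timestamps, and only treat the package as changed - and so needing a version bump - when the checksum differs from the previous build's.

```shell
#!/bin/sh
# Sketch of a content-based version-bump policy. The pkg/ tree and its
# contents are made up for the demonstration.
set -e

mkdir -p pkg
echo 'payload v1' > pkg/file

checksum() {
  # Hash the file contents in a stable order, deliberately ignoring
  # mtimes, so a rebuild with identical contents yields the same value.
  (cd pkg && find . -type f | LC_ALL=C sort | xargs cat | cksum)
}

old=$(checksum)

echo 'payload v1' > pkg/file        # rebuilt: same contents, new timestamp
if [ "$(checksum)" = "$old" ]; then echo 'no version bump needed'; fi

echo 'payload v2' > pkg/file        # a real change to the contents
if [ "$(checksum)" != "$old" ]; then echo 'bump the package version'; fi
```

Under such a policy a nightly rebuild whose only difference is the build date produces an identical checksum and no bump, while a genuine change in the build environment that alters the output triggers one.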
proot costs us about 50% of the CPU performance of the build slaves, making for slow builds. It would be good to find another containerisation solution that plays nicely with Jenkins and doesn't need Jenkins to have root.
For questions and comments, please contact the GCCSDK mailing list.