[gccsdk] FW: Autobuilder packaging progress

alan buckley alan_baa at hotmail.com
Wed Jan 16 01:41:34 PST 2008

> On Tue, 15 Jan 2008 21:50:03 +0100 John Tytgat wrote:
> In message, Peter Naulls wrote:
>> I don't recall exactly what John said, but firstly, let's not fragment
>> and spread ourselves too thinly. We don't have hundreds or thousands of
>> developers like Debian to look after different variations of packages
>> nor any particular reason to do so. Nor should we create user
>> confusion by naming things "unstable", with the ensuing explanations
>> that'll be required. All we'll do in the end is ensure that such
>> named software won't get tested.
> I agree that the word "unstable" does not reflect what we mean by it.

I'm quite happy with whatever names seem most appropriate.
>> The best anyone can ask or we can provide is a single distribution on
>> a best effort basis. There might be older versions of software in
>> that distribution, but that's ok too.
> I'm feeling confident enough about the static ELF builds, but the
> moment we're building with shared libs I think we had better put those
> applications and libraries in a separate "testing" category until we feel
> we don't have any major issues left.
> I would go for "stable" & "testing" categories, like we also have with
> the GCCSDK releases vs pre-releases.
>> As for manual intervention, let's avoid that too, it'll just mean
>> more manual effort later. If we can come up with more generic
>> ways of doing things, even if the initial result takes longer,
>> that'll be better for everyone.
> I don't mind seeing a bit of extra effort done upfront if this is going
> to pay off multiple times back later. I like the SCP idea and the package
> website build happening on riscos.info.

My original intention was that the current autobuilt list is basically an
automatic dump of all the packages created in a run of the autobuilder.

i.e. the process would be run once every few months somewhere. A
script would be set off that completely rebuilt everything and
transferred all the packages to the website.
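That run might look something like the sketch below. The `build-all`
driver and the scp destination are assumptions standing in for the real
autobuilder entry point and the riscos.info upload location:

```shell
#!/bin/sh
# Sketch of the periodic "rebuild everything and upload" run.
# build-all and the scp target are hypothetical names.

full_rebuild() {
    out=$1                          # directory that receives the packages
    rm -rf "$out" && mkdir -p "$out"
    # ./build-all "$out"            # stand-in: rebuild every package from scratch
    # scp "$out"/*.zip user@riscos.info:/var/www/autobuilt/   # assumed target
}
```

The point is simply that the whole run is unattended: wipe, rebuild,
upload, with no human deciding what gets replaced.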

This automatic process could cause things to break, or could replace a
package having the same version number with a slightly different binary.
(This is inevitable when the compiler changes.)

This could cause confusion for the novice user or someone who just
wants to get something that "just works".

From here it seemed sensible to offer a parallel distribution where
we can copy versions of packages that are known to work and ensure
that version numbers are changed when necessary. This has to be a
manual process, as only a human can decide when an app/library is
working well enough to be included.

However, it doesn't have to be arduous. If riscos.info could run a
modified build-website script, it would just be a matter of copying
the package and source to a known location. Even if this is not
the case, I could probably write a program that builds a mini
"differences" website locally that could then be copied up.
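As a sketch of that manual promotion step (the function name and
directory layout here are assumptions, not the real riscos.info
arrangement):

```shell
#!/bin/sh
# Hypothetical "promote" step: a human copies a known-good package
# and its source into the manual distribution's drop directory,
# where a build-website script would later pick them up.

promote() {
    pkg=$1; src=$2; dest=$3
    mkdir -p "$dest"
    cp "$pkg" "$src" "$dest"/
}
```

For example, `promote foo.zip foo-src.tar.gz /srv/stable` would be the
entire manual effort per package; everything after that is scripted.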

Sorry if I seem to be repeating myself.

So the question becomes, should we be providing this manual
distribution as well as the autobuilt distribution?

If the answer is yes, the follow-on questions are:

1. What should it be called?

We can add others if it works out in the future, but for now 
I would assume a general lack of time will mean we only get one
manual site. (John - the current autobuilt distribution probably covers
your "testing" distribution).

2. Can we have something running on riscos.info to update the website
based on new packages being copied to a specific location?
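One minimal way to do that on the server side would be a periodic (e.g.
cron-driven) check along these lines. The paths and the commented
`build-website` call are assumptions, not the real setup:

```shell
#!/bin/sh
# Hypothetical riscos.info job: regenerate the package website whenever
# the drop directory contains files newer than the last build.

rebuild_if_new() {
    incoming=$1                    # assumed drop location for new packages
    stamp=$2                       # timestamp file marking the last build
    if [ ! -f "$stamp" ] || \
       [ -n "$(find "$incoming" -newer "$stamp" -type f 2>/dev/null)" ]; then
        # ./build-website "$incoming"   # stand-in for the real script
        touch "$stamp"
        echo rebuilt
    fi
}
```

Run from crontab every few minutes, this only does work when someone
has actually copied a new package into the drop directory.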


