
[11/11] MAINTAINERS: take edk2

Message ID 20220308145521.3106395-12-kraxel@redhat.com (mailing list archive)
State New, archived
Series edk2: update to stable202202

Commit Message

Gerd Hoffmann March 8, 2022, 2:55 p.m. UTC
Philippe Mathieu-Daudé <f4bug@amsat.org>
Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
---
 MAINTAINERS | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

Comments

Philippe Mathieu-Daudé March 8, 2022, 3:08 p.m. UTC | #1
On 8/3/22 15:55, Gerd Hoffmann wrote:
> Philippe Mathieu-Daudé <f4bug@amsat.org>

Hmm?

> Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> ---
>   MAINTAINERS | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/MAINTAINERS b/MAINTAINERS
> index 68adaac373c7..ad1c9a7ea133 100644
> --- a/MAINTAINERS
> +++ b/MAINTAINERS
> @@ -3144,7 +3144,7 @@ F: docs/interop/firmware.json
>   
>   EDK2 Firmware
>   M: Philippe Mathieu-Daudé <f4bug@amsat.org>
> -R: Gerd Hoffmann <kraxel@redhat.com>
> +M: Gerd Hoffmann <kraxel@redhat.com>

Thanks :)

Reviewed-by: Philippe Mathieu-Daudé <f4bug@amsat.org>

>   S: Supported
>   F: hw/i386/*ovmf*
>   F: pc-bios/descriptors/??-edk2-*.json
Gerd Hoffmann March 9, 2022, 8:16 a.m. UTC | #2
On Tue, Mar 08, 2022 at 04:08:40PM +0100, Philippe Mathieu-Daudé wrote:
> On 8/3/22 15:55, Gerd Hoffmann wrote:
> > Philippe Mathieu-Daudé <f4bug@amsat.org>
> 
> Hmm?

Oops, Cc: prefix missing.

> 
> > Signed-off-by: Gerd Hoffmann <kraxel@redhat.com>
> > ---
> >   MAINTAINERS | 2 +-
> >   1 file changed, 1 insertion(+), 1 deletion(-)
> > 
> > diff --git a/MAINTAINERS b/MAINTAINERS
> > index 68adaac373c7..ad1c9a7ea133 100644
> > --- a/MAINTAINERS
> > +++ b/MAINTAINERS
> > @@ -3144,7 +3144,7 @@ F: docs/interop/firmware.json
> >   EDK2 Firmware
> >   M: Philippe Mathieu-Daudé <f4bug@amsat.org>
> > -R: Gerd Hoffmann <kraxel@redhat.com>
> > +M: Gerd Hoffmann <kraxel@redhat.com>
> 
> Thanks :)

Any chance you can take over the macos support bits in return?

thanks,
  Gerd
Philippe Mathieu-Daudé March 9, 2022, 10:05 a.m. UTC | #3
On 9/3/22 09:16, Gerd Hoffmann wrote:
> On Tue, Mar 08, 2022 at 04:08:40PM +0100, Philippe Mathieu-Daudé wrote:

>>> diff --git a/MAINTAINERS b/MAINTAINERS
>>> index 68adaac373c7..ad1c9a7ea133 100644
>>> --- a/MAINTAINERS
>>> +++ b/MAINTAINERS
>>> @@ -3144,7 +3144,7 @@ F: docs/interop/firmware.json
>>>    EDK2 Firmware
>>>    M: Philippe Mathieu-Daudé <f4bug@amsat.org>
>>> -R: Gerd Hoffmann <kraxel@redhat.com>
>>> +M: Gerd Hoffmann <kraxel@redhat.com>
>>
>> Thanks :)
> 
> Any chance you can take over the macos support bits in return?

I suppose you mean the "Core Audio framework backend" section?

There is indeed a need for macOS host support maintenance, as patches
hang on the list until Peter finally takes them via the arm tree.

Not sure what you have in mind. I'm totally new to the macOS/Darwin
world, and have no choice but to use it as primary workstation and
for CI builds, so I can help with overall testing / maintenance.

Peter, since you take some macOS patches, would you like to maintain
this officially? Since I doubt you want to take yet another
responsibility, what about having a co-maintained section, including
technical expertise from Akihiko / Joelle / Christian? (Cc'ed)

Regards,

Phil.
Christian Schoenebeck March 9, 2022, 10:40 a.m. UTC | #4
On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> On 9/3/22 09:16, Gerd Hoffmann wrote:
> > On Tue, Mar 08, 2022 at 04:08:40PM +0100, Philippe Mathieu-Daudé wrote:
> >>> diff --git a/MAINTAINERS b/MAINTAINERS
> >>> index 68adaac373c7..ad1c9a7ea133 100644
> >>> --- a/MAINTAINERS
> >>> +++ b/MAINTAINERS
> >>> @@ -3144,7 +3144,7 @@ F: docs/interop/firmware.json
> >>> 
> >>>    EDK2 Firmware
> >>>    M: Philippe Mathieu-Daudé <f4bug@amsat.org>
> >>> 
> >>> -R: Gerd Hoffmann <kraxel@redhat.com>
> >>> +M: Gerd Hoffmann <kraxel@redhat.com>
> >> 
> >> Thanks :)
> > 
> > Any chance you can take over the macos support bits in return?
> 
> I suppose you mean the "Core Audio framework backend" section?
> 
> There is indeed a need for macOS host support maintenance, as patches
> hang on the list until Peter finally takes them via the arm tree.

Most of them are macOS UI patches I think. There are not many CoreAudio 
patches coming in.

> Not sure what you have in mind. I'm totally new to the macOS/Darwin
> world, and have no choice but to use it as primary workstation and
> for CI builds, so I can help with overall testing / maintenance.
> 
> Peter, since you take some macOS patches, would you like to maintain
> this officially? Since I doubt you want to take yet another
> responsibility, what about having a co-maintained section, including
> technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> 
> Regards,

Also CCing Cameron on this, just in case someone at Apple could spend some 
slices on QEMU macOS patches in general as well.

As for my part: I'll try to help out more on the macOS front. As there's now 
macOS host support for 9p, I have to start QEMU testing on macOS locally 
anyway. Too bad that macOS CI tests on Github are no longer available BTW.

Best regards,
Christian Schoenebeck
Gerd Hoffmann March 9, 2022, 11:18 a.m. UTC | #5
Hi,

> > Any chance you can take over the macos support bits in return?
> 
> I suppose you mean the "Core Audio framework backend" section?

cocoa too.

> There is indeed a need for macOS host support maintenance, as patches
> hang on the list until Peter finally takes them via the arm tree.
> 
> Not sure what you have in mind. I'm totally new to the macOS/Darwin
> world, and have no choice but to use it as primary workstation and
> for CI builds, so I can help with overall testing / maintenance.

Having test hardware is already more than I have ;)

Also it seems you have collected stuff from the mailing list
in your macos host patch series.  If that isn't maintenance,
what is it?

I have only a virtual machine.  Updating that is a major PITA;
it's stuck at macos 10, so it's increasingly useless for builds and
testing, and I haven't booted it for months.

So I'm sitting here looking at the patches, where I often simply
can't judge whether they are correct or not because I'm not
familiar with the macos threading model, macos app bundles etc.

My maintenance model for the macos bits is basically sit & wait: now
and then I scan my mail folder for patches & reviews, then try to
figure out what is ready for merge based on the review comments.
On top of that I'm often busy with edk2 stuff, so patches can sit
on the list for quite a while.

This isn't great for me or for the people submitting
patches ...

take care,
  Gerd
Daniel P. Berrangé March 9, 2022, 11:44 a.m. UTC | #6
On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> > Not sure what you have in mind. I'm totally new to the macOS/Darwin
> > world, and have no choice but to use it as primary workstation and
> > for CI builds, so I can help with overall testing / maintenance.
> > 
> > Peter, since you take some macOS patches, would you like to maintain
> > this officially? Since I doubt you want to take yet another
> > responsibility, what about having a co-maintained section, including
> > technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> > 
> > Regards,
> 
> Also CCing Cameron on this, just in case someone at Apple could spend some 
> slices on QEMU macOS patches in general as well.
> 
> As for my part: I try to help out more on the macOS front. As there's now 
> macOS host support for 9p I have to start QEMU testing on macOS locally 
> anyway. Too bad that macOS CI tests on Github are no longer available BTW.

Note QEMU gets macOS CI coverage in GitLab. We use a clever trick
whereby the GitLab job runs 'cirrus-run' to trigger a build in
Cirrus CI's macOS builders, and pulls the results back when it's done.

Any contributor can get this working on their QEMU fork too, if they
configure the needed Cirrus CI API token. See the docs in

   .gitlab-ci.d/cirrus/README.rst

This is enough for build + automated tests.
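
As a rough sketch of the flow from a contributor's side (the variable
names and template path below are assumptions for illustration; the
README above is the authoritative recipe):

   # cirrus-run is a small Python helper, installable from PyPI
   pip install cirrus-run

   # point it at the GitHub mirror that Cirrus CI watches, plus an API token
   export CIRRUS_GITHUB_REPO="<your-github-user>/qemu"
   export CIRRUS_API_TOKEN="<your Cirrus CI API token>"

   # submit the generated task template to Cirrus CI and wait for the result
   cirrus-run .gitlab-ci.d/cirrus/<generated-macos-task>.yml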

Obviously of limited use for testing UI functionality or general host
OS integration like audio, which pretty much requires access to a real
machine for a maintainer to use interactively.

Regards,
Daniel
Christian Schoenebeck March 10, 2022, 11 a.m. UTC | #7
On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
> On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> > On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> > > Not sure what you have in mind. I'm totally new to the macOS/Darwin
> > > world, and have no choice but to use it as primary workstation and
> > > for CI builds, so I can help with overall testing / maintenance.
> > > 
> > > Peter, since you take some macOS patches, would you like to maintain
> > > this officially? Since I doubt you want to take yet another
> > > responsibility, what about having a co-maintained section, including
> > > technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> > > 
> > > Regards,
> > 
> > Also CCing Cameron on this, just in case someone at Apple could spend some
> > slices on QEMU macOS patches in general as well.
> > 
> > As for my part: I try to help out more on the macOS front. As there's now
> > macOS host support for 9p I have to start QEMU testing on macOS locally
> > anyway. Too bad that macOS CI tests on Github are no longer available BTW.
> 
> Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
> which we use 'cirrus-run' from the GitLab job to trigger a build in
> Cirrus CI's macOS builders, and pull the results back when its done.
> 
> Any contributor can get this working on their QEMU fork too, if they
> configure the needed Cirrus CI API token. See the docs in
> 
>    .gitlab-ci.d/cirrus/README.rst
> 
> This is enough for build + automated tests.

Does this mean that people no longer have to pull their credit card just for 
running CI tests on Gitlab?

And as this approach seems to use an indirection with Cirrus CI via Github: 
will it be sufficient to just run QEMU CI jobs on Github?

Why have the previously existing QEMU CI jobs been pulled from Github anyway?

Best regards,
Christian Schoenebeck
Daniel P. Berrangé March 10, 2022, 11:07 a.m. UTC | #8
On Thu, Mar 10, 2022 at 12:00:35PM +0100, Christian Schoenebeck wrote:
> On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
> > On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> > > On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> > > > Not sure what you have in mind. I'm totally new to the macOS/Darwin
> > > > world, and have no choice but to use it as primary workstation and
> > > > for CI builds, so I can help with overall testing / maintenance.
> > > > 
> > > > Peter, since you take some macOS patches, would you like to maintain
> > > > this officially? Since I doubt you want to take yet another
> > > > responsibility, what about having a co-maintained section, including
> > > > technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> > > > 
> > > > Regards,
> > > 
> > > Also CCing Cameron on this, just in case someone at Apple could spend some
> > > slices on QEMU macOS patches in general as well.
> > > 
> > > As for my part: I try to help out more on the macOS front. As there's now
> > > macOS host support for 9p I have to start QEMU testing on macOS locally
> > > anyway. Too bad that macOS CI tests on Github are no longer available BTW.
> > 
> > Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
> > which we use 'cirrus-run' from the GitLab job to trigger a build in
> > Cirrus CI's macOS builders, and pull the results back when its done.
> > 
> > Any contributor can get this working on their QEMU fork too, if they
> > configure the needed Cirrus CI API token. See the docs in
> > 
> >    .gitlab-ci.d/cirrus/README.rst
> > 
> > This is enough for build + automated tests.
> 
> Does this mean that people no longer have to pull their credit card just for 
> running CI tests on Gitlab?

Not really. The CC validation is something GitLab have had to force
onto all new accounts due to cryptominer abuse of their free shared
CI runners :-( If you have VMs somewhere you could theoretically
spin up your own CI runners instead of using the shared runners and
that could avoid the CC validation need.
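
As a rough sketch of that self-hosted option (the flags are from memory
and the token is a placeholder; GitLab's runner documentation is the
authoritative reference):

   # register a private runner against your fork, so jobs run on your
   # own hardware instead of the shared (CC-validated) runners
   sudo gitlab-runner register \
     --non-interactive \
     --url https://gitlab.com/ \
     --registration-token "<token from your fork's CI/CD settings>" \
     --executor docker \
     --docker-image alpine:latest \
     --description "private qemu ci runner"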

> And as this approach seems to use an indirection with Cirrus CI via Github. 
> Will it be sufficient to just run QEMU CI jobs on Github?
> 
> Why have the previously existing QEMU CI jobs been pulled from Github anyway?

We've never used GitHub for CI with QEMU upstream. Before this we used
Travis first, then Cirrus CI. Travis effectively killed off their free
plan for x86 builders, and Cirrus CI is too restrictive to run enough
jobs.  GitLab is our primary target.

Regards,
Daniel
Philippe Mathieu-Daudé March 10, 2022, 11:40 a.m. UTC | #9
+Stefan for overall project resources.

On 10/3/22 12:07, Daniel P. Berrangé wrote:
> On Thu, Mar 10, 2022 at 12:00:35PM +0100, Christian Schoenebeck wrote:
>> On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
>>> On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
>>>> On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
>>>>> Not sure what you have in mind. I'm totally new to the macOS/Darwin
>>>>> world, and have no choice but to use it as primary workstation and
>>>>> for CI builds, so I can help with overall testing / maintenance.
>>>>>
>>>>> Peter, since you take some macOS patches, would you like to maintain
>>>>> this officially? Since I doubt you want to take yet another
>>>>> responsibility, what about having a co-maintained section, including
>>>>> technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
>>>>>
>>>>> Regards,
>>>>
>>>> Also CCing Cameron on this, just in case someone at Apple could spend some
>>>> slices on QEMU macOS patches in general as well.
>>>>
>>>> As for my part: I try to help out more on the macOS front. As there's now
>>>> macOS host support for 9p I have to start QEMU testing on macOS locally
>>>> anyway. Too bad that macOS CI tests on Github are no longer available BTW.
>>>
>>> Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
>>> which we use 'cirrus-run' from the GitLab job to trigger a build in
>>> Cirrus CI's macOS builders, and pull the results back when its done.
>>>
>>> Any contributor can get this working on their QEMU fork too, if they
>>> configure the needed Cirrus CI API token. See the docs in
>>>
>>>     .gitlab-ci.d/cirrus/README.rst
>>>
>>> This is enough for build + automated tests.
>>
>> Does this mean that people no longer have to pull their credit card just for
>> running CI tests on Gitlab?
> 
> Not really. The CC validation is something GitLab have had to force
> onto all new accounts due to cryptominer abuse of their free shared
> CI runners :-( If you have VMs somewhere you could theoretically
> spin up your own CI runners instead of using the shared runners and
> that could avoid the CC validation need.

Not that trivial: first you need to figure out the list of dependencies
the GitLab images come with, then you realize you need 50GiB+ of
available storage for a single pipeline (due to all the Docker images
pulled / built), and you also need a decent internet link otherwise
various jobs time out randomly, then you have to wait 20h+ with a
quad-core CPU / 16GiB RAM, and eventually you realize you lost 3 days
of your life just to avoid registering your CC, which you'll be forced
to give anyway.

Long term maintainers don't realize that because they had the luxury to
open their GitLab account soon enough and are now privileged.

It is unfortunate that the project strongly suggests new maintainers
go through that hassle and doesn't provide access to project resources
instead.

But then, I know that while the project has access to FOSS hardware
resources, it doesn't have the human resources to maintain them, so it
can't use them; back to square one.

Regards,

Phil.
Christian Schoenebeck March 11, 2022, 9:13 a.m. UTC | #10
On Donnerstag, 10. März 2022 12:40:06 CET Philippe Mathieu-Daudé wrote:
> +Stefan for overall project resources.
> 
> On 10/3/22 12:07, Daniel P. Berrangé wrote:
> > On Thu, Mar 10, 2022 at 12:00:35PM +0100, Christian Schoenebeck wrote:
> >> On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
> >>> On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> >>>> On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> >>>>> Not sure what you have in mind. I'm totally new to the macOS/Darwin
> >>>>> world, and have no choice but to use it as primary workstation and
> >>>>> for CI builds, so I can help with overall testing / maintenance.
> >>>>> 
> >>>>> Peter, since you take some macOS patches, would you like to maintain
> >>>>> this officially? Since I doubt you want to take yet another
> >>>>> responsibility, what about having a co-maintained section, including
> >>>>> technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> >>>>> 
> >>>>> Regards,
> >>>> 
> >>>> Also CCing Cameron on this, just in case someone at Apple could spend
> >>>> some
> >>>> slices on QEMU macOS patches in general as well.
> >>>> 
> >>>> As for my part: I try to help out more on the macOS front. As there's
> >>>> now
> >>>> macOS host support for 9p I have to start QEMU testing on macOS locally
> >>>> anyway. Too bad that macOS CI tests on Github are no longer available
> >>>> BTW.
> >>> 
> >>> Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
> >>> which we use 'cirrus-run' from the GitLab job to trigger a build in
> >>> Cirrus CI's macOS builders, and pull the results back when its done.
> >>> 
> >>> Any contributor can get this working on their QEMU fork too, if they
> >>> configure the needed Cirrus CI API token. See the docs in
> >>> 
> >>>     .gitlab-ci.d/cirrus/README.rst
> >>> 
> >>> This is enough for build + automated tests.
> >> 
> >> Does this mean that people no longer have to pull their credit card just
> >> for running CI tests on Gitlab?
> > 
> > Not really. The CC validation is something GitLab have had to force
> > onto all new accounts due to cryptominer abuse of their free shared
> > CI runners :-( If you have VMs somewhere you could theoretically
> > spin up your own CI runners instead of using the shared runners and
> > that could avoid the CC validation need.
> 
> Not that trivial, first you need to figure out the list of dependencies
> GitLab images come with, then you realize you need 50GiB+ of available
> storage a single pipeline (due to all the Docker images pulled / built)
> and you also need a decent internet link otherwise various jobs timeout
> randomly, then you have to wait 20h+ with a quad-core CPU / 16GiB RAM,

Considering that CI jobs currently take about 1 hour on Gitlab, which 
processor generation are you referring to that would take 20 hours?

> and eventually you realize you lost 3 days of your life to not register
> your CC which you'll be forced to give anyway.

It's an obstacle. And that keeps people away. Plus the trend seems to be that 
free CI services disappear one by one, so I am not so sure that giving your 
credit card once solves this issue for good.

> Long term maintainers don't realize that because they had the luxury to
> open their GitLab account soon enough and are now privileged.

Would it be possible to deploy all CI jobs via Cirrus-CI?

> It is unfortunate the project strongly suggest new maintainers to pass
> by that hassle and doesn't provide access to project resources instead.
> 
> But then I know, while the project has access to FOSS hardware resources
> it doesn't have human resources to maintain them so can't use them, back
> to square one.
> 
> Regards,
> 
> Phil.
Daniel P. Berrangé March 11, 2022, 9:26 a.m. UTC | #11
On Fri, Mar 11, 2022 at 10:13:24AM +0100, Christian Schoenebeck wrote:
> On Donnerstag, 10. März 2022 12:40:06 CET Philippe Mathieu-Daudé wrote:
> > +Stefan for overall project resources.
> > 
> > On 10/3/22 12:07, Daniel P. Berrangé wrote:
> > > On Thu, Mar 10, 2022 at 12:00:35PM +0100, Christian Schoenebeck wrote:
> > >> On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
> > >>> On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> > >>>> On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> > >>>>> Not sure what you have in mind. I'm totally new to the macOS/Darwin
> > >>>>> world, and have no choice but to use it as primary workstation and
> > >>>>> for CI builds, so I can help with overall testing / maintenance.
> > >>>>> 
> > >>>>> Peter, since you take some macOS patches, would you like to maintain
> > >>>>> this officially? Since I doubt you want to take yet another
> > >>>>> responsibility, what about having a co-maintained section, including
> > >>>>> technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> > >>>>> 
> > >>>>> Regards,
> > >>>> 
> > >>>> Also CCing Cameron on this, just in case someone at Apple could spend
> > >>>> some
> > >>>> slices on QEMU macOS patches in general as well.
> > >>>> 
> > >>>> As for my part: I try to help out more on the macOS front. As there's
> > >>>> now
> > >>>> macOS host support for 9p I have to start QEMU testing on macOS locally
> > >>>> anyway. Too bad that macOS CI tests on Github are no longer available
> > >>>> BTW.
> > >>> 
> > >>> Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
> > >>> which we use 'cirrus-run' from the GitLab job to trigger a build in
> > >>> Cirrus CI's macOS builders, and pull the results back when its done.
> > >>> 
> > >>> Any contributor can get this working on their QEMU fork too, if they
> > >>> configure the needed Cirrus CI API token. See the docs in
> > >>> 
> > >>>     .gitlab-ci.d/cirrus/README.rst
> > >>> 
> > >>> This is enough for build + automated tests.
> > >> 
> > >> Does this mean that people no longer have to pull their credit card just
> > >> for running CI tests on Gitlab?
> > > 
> > > Not really. The CC validation is something GitLab have had to force
> > > onto all new accounts due to cryptominer abuse of their free shared
> > > CI runners :-( If you have VMs somewhere you could theoretically
> > > spin up your own CI runners instead of using the shared runners and
> > > that could avoid the CC validation need.
> > 
> > Not that trivial, first you need to figure out the list of dependencies
> > GitLab images come with, then you realize you need 50GiB+ of available
> > storage a single pipeline (due to all the Docker images pulled / built)
> > and you also need a decent internet link otherwise various jobs timeout
> > randomly, then you have to wait 20h+ with a quad-core CPU / 16GiB RAM,
> 
> Considering that CI jobs currently take about 1 hour on Gitlab, which 
> processor generation are you referring to that would take 20 hours?

You're not taking into account parallelism. The GitLab pipeline takes
1 hour wallclock time, which is not the same as 1 hour CPU time. We
probably have 20+ jobs running in parallel on gitlab, as they get
farmed out to many machines. If you have only a single machine at your
disposal, then you'll have much less prallelism, so overall time can
be much longer.

> > and eventually you realize you lost 3 days of your life to not register
> > your CC which you'll be forced to give anyway.
> 
> It's an obstacle. And that keeps people away. Plus the trend seems to be that 
> free CI services disappear one by one, so I am not so sure that giving your 
> credit card once solves this issue for good.

The CC requirement there is primarily to act as an identity check
on accounts, so they have some mechanism to discourage and/or trace
abusive users. You can use it to purchase extra CI time, but they've
stated multiple times their intention to continue to grant free CI
time to open source projects and their contributors. They are actively
discussing their plans with a number of open source project contributors
including myself on behalf of QEMU, to better understand our needs. I
outlined my current understanding of their intentions here:

 https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg03962.html

> > Long term maintainers don't realize that because they had the luxury to
> > open their GitLab account soon enough and are now privileged.
> 
> Would it be possible to deploy all CI jobs via Cirrus-CI?

Not unless you want to wait 10 hours for the pipeline to finish. Cirrus
CI only lets you run 2 jobs at a time. It also doesn't have any integrated
container registry, which we rely on for creating our build env.

Regards,
Daniel
Christian Schoenebeck March 12, 2022, 1:51 p.m. UTC | #12
On Freitag, 11. März 2022 10:26:47 CET Daniel P. Berrangé wrote:
> On Fri, Mar 11, 2022 at 10:13:24AM +0100, Christian Schoenebeck wrote:
> > On Donnerstag, 10. März 2022 12:40:06 CET Philippe Mathieu-Daudé wrote:
> > > +Stefan for overall project resources.
> > > 
> > > On 10/3/22 12:07, Daniel P. Berrangé wrote:
> > > > On Thu, Mar 10, 2022 at 12:00:35PM +0100, Christian Schoenebeck wrote:
> > > >> On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
> > > >>> On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> > > >>>> On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> > > >>>>> Not sure what you have in mind. I'm totally new to the
> > > >>>>> macOS/Darwin
> > > >>>>> world, and have no choice but to use it as primary workstation and
> > > >>>>> for CI builds, so I can help with overall testing / maintenance.
> > > >>>>> 
> > > >>>>> Peter, since you take some macOS patches, would you like to
> > > >>>>> maintain
> > > >>>>> this officially? Since I doubt you want to take yet another
> > > >>>>> responsibility, what about having a co-maintained section,
> > > >>>>> including
> > > >>>>> technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> > > >>>>> 
> > > >>>>> Regards,
> > > >>>> 
> > > >>>> Also CCing Cameron on this, just in case someone at Apple could
> > > >>>> spend
> > > >>>> some
> > > >>>> slices on QEMU macOS patches in general as well.
> > > >>>> 
> > > >>>> As for my part: I try to help out more on the macOS front. As
> > > >>>> there's
> > > >>>> now
> > > >>>> macOS host support for 9p I have to start QEMU testing on macOS
> > > >>>> locally
> > > >>>> anyway. Too bad that macOS CI tests on Github are no longer
> > > >>>> available
> > > >>>> BTW.
> > > >>> 
> > > >>> Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
> > > >>> which we use 'cirrus-run' from the GitLab job to trigger a build in
> > > >>> Cirrus CI's macOS builders, and pull the results back when its done.
> > > >>> 
> > > >>> Any contributor can get this working on their QEMU fork too, if they
> > > >>> configure the needed Cirrus CI API token. See the docs in
> > > >>> 
> > > >>>     .gitlab-ci.d/cirrus/README.rst
> > > >>> 
> > > >>> This is enough for build + automated tests.
> > > >> 
> > > >> Does this mean that people no longer have to pull their credit card
> > > >> just
> > > >> for running CI tests on Gitlab?
> > > > 
> > > > Not really. The CC validation is something GitLab have had to force
> > > > onto all new accounts due to cryptominer abuse of their free shared
> > > > CI runners :-( If you have VMs somewhere you could theoretically
> > > > spin up your own CI runners instead of using the shared runners and
> > > > that could avoid the CC validation need.
> > > 
> > > Not that trivial, first you need to figure out the list of dependencies
> > > GitLab images come with, then you realize you need 50GiB+ of available
> > > storage a single pipeline (due to all the Docker images pulled / built)
> > > and you also need a decent internet link otherwise various jobs timeout
> > > randomly, then you have to wait 20h+ with a quad-core CPU / 16GiB RAM,
> > 
> > Considering that CI jobs currently take about 1 hour on Gitlab, which
> > processor generation are you referring to that would take 20 hours?
> 
> You're not taking into account parallelism. The GitLab pipeline takes
> 1 hour wallclock time, which is not the same as 1 hour CPU time. We
> probably have 20+ jobs running in parallel on gitlab, as they get
> farmed out to many machines. If you have only a single machine at your
> disposal, then you'll have much less prallelism, so overall time can
> be much longer.
> 
> > > and eventually you realize you lost 3 days of your life to not register
> > > your CC which you'll be forced to give anyway.
> > 
> > It's an obstacle. And that keeps people away. Plus the trend seems to be
> > that free CI services disappear one by one, so I am not so sure that
> > giving your credit card once solves this issue for good.
> 
> The CC requirement there is primarily to act as an identity check
> on accounts, so they have some mechanism to discourage and/or trace
> abusive users. You can use it to purchase extra CI time, but they've
> stated multiple times their intention to continue to grant free CI
> time to open source projects and their contributors. They are actively
> discussing their plans with a number of open source project contributors
> including myself on behalf of QEMU, to better understand our needs. I
> outlined my current understanding of their intentions here:
> 
>  https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg03962.html

Please send an announcement (in the subject) and/or CC maintainers if
there is any news on this topic. This discussion went completely unseen
on my end.

> > > Long term maintainers don't realize that because they had the luxury to
> > > open their GitLab account soon enough and are now privileged.
> > 
> > Would it be possible to deploy all CI jobs via Cirrus-CI?
> 
> Not unless you want to wait 10 hours for the pipeline to finish. Cirrus
> CI only lets you run 2 jobs at a time. It also doesn't have any integrated
> container registry which we rely on for creatnig our build env.

For the vast majority of contributors that would be absolutely fine. What 
matters is running tests for the various architectures. Average response time 
on submitted patches is much longer than 10 hours. Still better than not 
running CI tests at all.

Best regards,
Christian Schoenebeck
Daniel P. Berrangé March 14, 2022, 9:31 a.m. UTC | #13
On Sat, Mar 12, 2022 at 02:51:21PM +0100, Christian Schoenebeck wrote:
> On Freitag, 11. März 2022 10:26:47 CET Daniel P. Berrangé wrote:
> > On Fri, Mar 11, 2022 at 10:13:24AM +0100, Christian Schoenebeck wrote:
> > > On Donnerstag, 10. März 2022 12:40:06 CET Philippe Mathieu-Daudé wrote:
> > > > +Stefan for overall project resources.
> > > > 
> > > > On 10/3/22 12:07, Daniel P. Berrangé wrote:
> > > > > On Thu, Mar 10, 2022 at 12:00:35PM +0100, Christian Schoenebeck wrote:
> > > > >> On Mittwoch, 9. März 2022 12:44:16 CET Daniel P. Berrangé wrote:
> > > > >>> On Wed, Mar 09, 2022 at 11:40:42AM +0100, Christian Schoenebeck wrote:
> > > > >>>> On Mittwoch, 9. März 2022 11:05:02 CET Philippe Mathieu-Daudé wrote:
> > > > >>>>> Not sure what you have in mind. I'm totally new to the
> > > > >>>>> macOS/Darwin
> > > > >>>>> world, and have no choice but to use it as primary workstation and
> > > > >>>>> for CI builds, so I can help with overall testing / maintenance.
> > > > >>>>> 
> > > > >>>>> Peter, since you take some macOS patches, would you like to
> > > > >>>>> maintain
> > > > >>>>> this officially? Since I doubt you want to take yet another
> > > > >>>>> responsibility, what about having a co-maintained section,
> > > > >>>>> including
> > > > >>>>> technical expertise from Akihiko / Joelle / Christian? (Cc'ed)
> > > > >>>>> 
> > > > >>>>> Regards,
> > > > >>>> 
> > > > >>>> Also CCing Cameron on this, just in case someone at Apple could
> > > > >>>> spend
> > > > >>>> some
> > > > >>>> slices on QEMU macOS patches in general as well.
> > > > >>>> 
> > > > >>>> As for my part: I try to help out more on the macOS front. As
> > > > >>>> there's
> > > > >>>> now
> > > > >>>> macOS host support for 9p I have to start QEMU testing on macOS
> > > > >>>> locally
> > > > >>>> anyway. Too bad that macOS CI tests on Github are no longer
> > > > >>>> available
> > > > >>>> BTW.
> > > > >>> 
> > > > >>> Note QEMU gets macOS CI coverage in GitLab. We use a clever trick by
> > > > >>> which we use 'cirrus-run' from the GitLab job to trigger a build in
> > > > >>> Cirrus CI's macOS builders, and pull the results back when its done.
> > > > >>> 
> > > > >>> Any contributor can get this working on their QEMU fork too, if they
> > > > >>> configure the needed Cirrus CI API token. See the docs in
> > > > >>> 
> > > > >>>     .gitlab-ci.d/cirrus/README.rst
> > > > >>> 
> > > > >>> This is enough for build + automated tests.
> > > > >> 
> > > > >> Does this mean that people no longer have to pull their credit card
> > > > >> just
> > > > >> for running CI tests on Gitlab?
> > > > > 
> > > > > Not really. The CC validation is something GitLab have had to force
> > > > > onto all new accounts due to cryptominer abuse of their free shared
> > > > > CI runners :-( If you have VMs somewhere you could theoretically
> > > > > spin up your own CI runners instead of using the shared runners and
> > > > > that could avoid the CC validation need.
> > > > 
> > > > Not that trivial, first you need to figure out the list of dependencies
> > > > GitLab images come with, then you realize you need 50GiB+ of available
> > > > storage a single pipeline (due to all the Docker images pulled / built)
> > > > and you also need a decent internet link otherwise various jobs timeout
> > > > randomly, then you have to wait 20h+ with a quad-core CPU / 16GiB RAM,
> > > 
> > > Considering that CI jobs currently take about 1 hour on Gitlab, which
> > > processor generation are you referring to that would take 20 hours?
> > 
> > You're not taking into account parallelism. The GitLab pipeline takes
> > 1 hour wallclock time, which is not the same as 1 hour CPU time. We
> > probably have 20+ jobs running in parallel on gitlab, as they get
> > farmed out to many machines. If you have only a single machine at your
> > disposal, then you'll have much less prallelism, so overall time can
> > be much longer.
> > 
> > > > and eventually you realize you lost 3 days of your life to not register
> > > > your CC which you'll be forced to give anyway.
> > > 
> > > It's an obstacle. And that keeps people away. Plus the trend seems to be
> > > that free CI services disappear one by one, so I am not so sure that
> > > giving your credit card once solves this issue for good.
> > 
> > The CC requirement there is primarily to act as an identity check
> > on accounts, so they have some mechanism to discourage and/or trace
> > abusive users. You can use it to purchase extra CI time, but they've
> > stated multiple times their intention to continue to grant free CI
> > time to open source projects and their contributors. They are actively
> > discussing their plans with a number of open source project contributors
> > including myself on behalf of QEMU, to better understand our needs. I
> > outlined my current understanding of their intentions here:
> > 
> >  https://lists.gnu.org/archive/html/qemu-devel/2022-02/msg03962.html
> 
> Please send an announcement (in subject) and/or CC maintainers if there are 
> any news on this topic. This discussion went completely unseen on my end.
> 
> > > > Long term maintainers don't realize that because they had the luxury to
> > > > open their GitLab account soon enough and are now privileged.
> > > 
> > > Would it be possible to deploy all CI jobs via Cirrus-CI?
> > 
> > Not unless you want to wait 10 hours for the pipeline to finish. Cirrus
> > CI only lets you run 2 jobs at a time. It also doesn't have any integrated
> > container registry which we rely on for creatnig our build env.
> 
> For the vast majority of contributors that would be absolutely fine. What 
> matters is running tests for the various architectures. Average response time 
> on submitted patches is much longer than 10 hours. Still better than not
> running CI tests at all.

I don't think that's absolutely fine at all, nor a common view amongst
maintainers/contributors. People already complain that the 1 hour time
of our GitLab CI is too long for them to wait. Having a CI run take 10
hours would be horrendous. Run a CI pipeline on Monday, it fails, fix
the bug on Tuesday, run another CI pipeline, and get the results on
Wednesday. Your work is split over 3 days, instead of 2 hours today
with GitLab as it stands. That's assuming you got the fix right first
time too. A CI pipeline that takes 10 hours is a pipeline that people
will not bother running most of the time.

Regards,
Daniel

Patch

diff --git a/MAINTAINERS b/MAINTAINERS
index 68adaac373c7..ad1c9a7ea133 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -3144,7 +3144,7 @@  F: docs/interop/firmware.json
 
 EDK2 Firmware
 M: Philippe Mathieu-Daudé <f4bug@amsat.org>
-R: Gerd Hoffmann <kraxel@redhat.com>
+M: Gerd Hoffmann <kraxel@redhat.com>
 S: Supported
 F: hw/i386/*ovmf*
 F: pc-bios/descriptors/??-edk2-*.json