Message ID | E1kEyDy-0006nN-5A@lizzy.crudebyte.com (mailing list archive)
---|---
State | New, archived
Series | 9pfs: disable msize warning for synth driver
On Sun, 6 Sep 2020 18:50:32 +0200
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> Previous patch introduced a performance warning being logged on host
> side if client connected with an 'msize' <= 8192. Disable this
> performance warning for the synth driver to prevent that warning from
> being printed whenever the 9pfs (qtest) test cases are running.
>
> Introduce a new export flag V9FS_NO_PERF_WARN for that purpose, which
> might also be used to disable such warnings from the CLI in future.
>
> We could have also prevented the warning by simply raising P9_MAX_SIZE
> in virtio-9p-test.c to any value larger than 8192, however in the
> context of test cases it makes sense running for edge cases, which
> includes the lowest 'msize' value supported by the server which is
> 4096, hence we want to preserve an msize of 4096 for the test client.
>
> Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> ---

Heh, yes I guess it makes sense :)

I guess you could maybe queue this patch before the other one.

Reviewed-by: Greg Kurz <groug@kaod.org>

>  fsdev/file-op-9p.h | 4 ++++
>  hw/9pfs/9p-synth.c | 2 ++
>  hw/9pfs/9p.c       | 2 +-
>  3 files changed, 7 insertions(+), 1 deletion(-)
>
> diff --git a/fsdev/file-op-9p.h b/fsdev/file-op-9p.h
> index f2f7772c86..d51cec2f3b 100644
> --- a/fsdev/file-op-9p.h
> +++ b/fsdev/file-op-9p.h
> @@ -64,6 +64,10 @@ typedef struct ExtendedOps {
>   */
>  #define V9FS_REMAP_INODES       0x00000200
>  #define V9FS_FORBID_MULTIDEVS   0x00000400
> +/*
> + * Disables certain performance warnings from being logged on host side.
> + */
> +#define V9FS_NO_PERF_WARN       0x00000800
>
>  #define V9FS_SEC_MASK           0x0000003C
>
> diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
> index 7eb210ffa8..cec8c0eefc 100644
> --- a/hw/9pfs/9p-synth.c
> +++ b/hw/9pfs/9p-synth.c
> @@ -541,6 +541,8 @@ static int synth_init(FsContext *ctx, Error **errp)
>      QLIST_INIT(&synth_root.child);
>      qemu_mutex_init(&synth_mutex);
>
> +    ctx->export_flags |= V9FS_NO_PERF_WARN;
> +
>      /* Add "." and ".." entries for root */
>      v9fs_add_dir_node(&synth_root, synth_root.attr->mode,
>                        "..", synth_root.attr, synth_root.attr->inode);
> diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
> index 99b6f24fd6..741d222c3f 100644
> --- a/hw/9pfs/9p.c
> +++ b/hw/9pfs/9p.c
> @@ -1354,7 +1354,7 @@ static void coroutine_fn v9fs_version(void *opaque)
>      }
>
>      /* 8192 is the default msize of Linux clients */
> -    if (s->msize <= 8192) {
> +    if (s->msize <= 8192 && !(s->ctx.export_flags & V9FS_NO_PERF_WARN)) {
>          warn_report_once(
>              "9p: degraded performance: a reasonable high msize should be "
>              "chosen on client/guest side (chosen msize is <= 8192). See "
On Monday, 7 September 2020 11:57:58 CEST Greg Kurz wrote:
> On Sun, 6 Sep 2020 18:50:32 +0200
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > Previous patch introduced a performance warning being logged on host
> > side if client connected with an 'msize' <= 8192. Disable this
> > performance warning for the synth driver to prevent that warning from
> > being printed whenever the 9pfs (qtest) test cases are running.
> >
> > Introduce a new export flag V9FS_NO_PERF_WARN for that purpose, which
> > might also be used to disable such warnings from the CLI in future.
> >
> > We could have also prevented the warning by simply raising P9_MAX_SIZE
> > in virtio-9p-test.c to any value larger than 8192, however in the
> > context of test cases it makes sense running for edge cases, which
> > includes the lowest 'msize' value supported by the server which is
> > 4096, hence we want to preserve an msize of 4096 for the test client.
> >
> > Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > ---
>
> Heh, yes I guess it makes sense :)
>
> I guess you could maybe queue this patch before the other one.
>
> Reviewed-by: Greg Kurz <groug@kaod.org>

Thanks!

BTW, have you ever tried dealing with patchwork client's 'state' feature
for already transmitted patches on the list (new, rejected, etc.)?

Not that there were many 9p patches actually needing that; just curious
if that's an LKML-only feature.

Best regards,
Christian Schoenebeck
On Mon, 07 Sep 2020 13:55:52 +0200
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> On Monday, 7 September 2020 11:57:58 CEST Greg Kurz wrote:
> > On Sun, 6 Sep 2020 18:50:32 +0200
> > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > Previous patch introduced a performance warning being logged on host
> > > side if client connected with an 'msize' <= 8192. Disable this
> > > performance warning for the synth driver to prevent that warning from
> > > being printed whenever the 9pfs (qtest) test cases are running.
> > >
> > > Introduce a new export flag V9FS_NO_PERF_WARN for that purpose, which
> > > might also be used to disable such warnings from the CLI in future.
> > >
> > > We could have also prevented the warning by simply raising P9_MAX_SIZE
> > > in virtio-9p-test.c to any value larger than 8192, however in the
> > > context of test cases it makes sense running for edge cases, which
> > > includes the lowest 'msize' value supported by the server which is
> > > 4096, hence we want to preserve an msize of 4096 for the test client.
> > >
> > > Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > ---
> >
> > Heh, yes I guess it makes sense :)
> >
> > I guess you could maybe queue this patch before the other one.
> >
> > Reviewed-by: Greg Kurz <groug@kaod.org>
>
> Thanks!
>
> BTW, have you ever tried dealing with patchwork client's 'state' feature
> for already transmitted patches on the list (new, rejected, etc.)?
>

Nope, never used patchwork's state at all and I've no idea how it
works... but I can ask my former IBM colleagues at Ozlabs.

> Not that there were many 9p patches actually needing that; just curious
> if that's an LKML-only feature.
>
> Best regards,
> Christian Schoenebeck
>
On Mon, 7 Sep 2020 14:27:19 +0200
Greg Kurz <groug@kaod.org> wrote:

> On Mon, 07 Sep 2020 13:55:52 +0200
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
>
> > On Monday, 7 September 2020 11:57:58 CEST Greg Kurz wrote:
> > > On Sun, 6 Sep 2020 18:50:32 +0200
> > > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> > > > Previous patch introduced a performance warning being logged on host
> > > > side if client connected with an 'msize' <= 8192. Disable this
> > > > performance warning for the synth driver to prevent that warning from
> > > > being printed whenever the 9pfs (qtest) test cases are running.
> > > >
> > > > Introduce a new export flag V9FS_NO_PERF_WARN for that purpose, which
> > > > might also be used to disable such warnings from the CLI in future.
> > > >
> > > > We could have also prevented the warning by simply raising P9_MAX_SIZE
> > > > in virtio-9p-test.c to any value larger than 8192, however in the
> > > > context of test cases it makes sense running for edge cases, which
> > > > includes the lowest 'msize' value supported by the server which is
> > > > 4096, hence we want to preserve an msize of 4096 for the test client.
> > > >
> > > > Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
> > > > ---
> > >
> > > Heh, yes I guess it makes sense :)
> > >
> > > I guess you could maybe queue this patch before the other one.
> > >
> > > Reviewed-by: Greg Kurz <groug@kaod.org>
> >
> > Thanks!
> >
> > BTW, have you ever tried dealing with patchwork client's 'state' feature
> > for already transmitted patches on the list (new, rejected, etc.)?
> >
>
> Nope, never used patchwork's state at all and I've no idea how it
> works... but I can ask my former IBM colleagues at Ozlabs.
>

It seems that you need to be a "maintainer" from a patchwork standpoint
to manipulate patch states.

https://patchwork.readthedocs.io/en/latest/usage/overview/#maintainers

===========
Maintainers

Maintainers are a special type of user with permissions to do certain
operations that regular Patchwork users can’t. Patchwork maintainers
usually have a 1:1 mapping with a project’s code maintainers though
this is not necessary.

The operations that a maintainer can invoke include:

Change the state of a patch
Archive a patch
Delegate a patch, or be delegated a patch
===========

No clue how to upgrade to maintainer though...

> > Not that there were many 9p patches actually needing that; just curious
> > if that's an LKML-only feature.
> >
> > Best regards,
> > Christian Schoenebeck
> >
> >
On Tuesday, 8 September 2020 10:10:36 CEST Greg Kurz wrote:
> > > BTW, have you ever tried dealing with patchwork client's 'state'
> > > feature for already transmitted patches on the list (new, rejected,
> > > etc.)?
> >
> > Nope, never used patchwork's state at all and I've no idea how it
> > works... but I can ask my former IBM colleagues at Ozlabs.
>
> It seems that you need to be a "maintainer" from a patchwork standpoint
> to manipulate patch states.
>
> https://patchwork.readthedocs.io/en/latest/usage/overview/#maintainers
>
> ===========
> Maintainers
>
> Maintainers are a special type of user with permissions to do certain
> operations that regular Patchwork users can’t. Patchwork maintainers
> usually have a 1:1 mapping with a project’s code maintainers though
> this is not necessary.
>
> The operations that a maintainer can invoke include:
>
> Change the state of a patch
> Archive a patch
> Delegate a patch, or be delegated a patch
> ===========
>
> No clue how to upgrade to maintainer though...

The command to change a patch state is, e.g.:

pwclient update -s Queued 11759645

When I do that I get this error:

The update action requires authentication, but no username or password
is configured

So looks like it would require somebody to create an account somewhere,
wherever that is.

Best regards,
Christian Schoenebeck
On Tue, 08 Sep 2020 11:01:15 +0200
Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:

> On Tuesday, 8 September 2020 10:10:36 CEST Greg Kurz wrote:
> > > > BTW, have you ever tried dealing with patchwork client's 'state'
> > > > feature for already transmitted patches on the list (new, rejected,
> > > > etc.)?
> > >
> > > Nope, never used patchwork's state at all and I've no idea how it
> > > works... but I can ask my former IBM colleagues at Ozlabs.
> >
> > It seems that you need to be a "maintainer" from a patchwork standpoint
> > to manipulate patch states.
> >
> > https://patchwork.readthedocs.io/en/latest/usage/overview/#maintainers
> >
> > ===========
> > Maintainers
> >
> > Maintainers are a special type of user with permissions to do certain
> > operations that regular Patchwork users can’t. Patchwork maintainers
> > usually have a 1:1 mapping with a project’s code maintainers though
> > this is not necessary.
> >
> > The operations that a maintainer can invoke include:
> >
> > Change the state of a patch
> > Archive a patch
> > Delegate a patch, or be delegated a patch
> > ===========
> >
> > No clue how to upgrade to maintainer though...
>
> The command to change a patch state is, e.g.:
>
> pwclient update -s Queued 11759645
>
> When I do that I get this error:
>
> The update action requires authentication, but no username or password
> is configured
>
> So looks like it would require somebody to create an account somewhere,
> wherever that is.
>

Which patchwork site are you using?

https://patchwork.ozlabs.org/ or https://patchwork.kernel.org/?

Anyway, both support self account creation:

https://patchwork.ozlabs.org/register/
https://patchwork.kernel.org/register/

This allows you to update your own patches, but you need to be a
maintainer to update others'.

> Best regards,
> Christian Schoenebeck
>
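For reference: once such an account exists, pwclient reads its credentials
from ~/.pwclientrc. A minimal sketch for the ozlabs instance could look like
the following; the project link name qemu-devel and the username/password
fields are assumptions to check against your pwclient version, which may
expect an API token instead:

    # ~/.pwclientrc -- minimal sketch, adapt the values to your account
    [options]
    default = qemu-devel

    [qemu-devel]
    url = https://patchwork.ozlabs.org/xmlrpc/
    # credentials of the self-registered patchwork account (assumed keys)
    username = <your-login>
    password = <your-password>

With something like that in place, the 'pwclient update -s ...' call quoted
above should at least get past the authentication error for your own patches.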
On Tuesday, 8 September 2020 11:01:15 CEST Christian Schoenebeck wrote:
> The command to change a patch state is, e.g.:
>
> pwclient update -s Queued 11759645
>
> When I do that I get this error:
>
> The update action requires authentication, but no username or password
> is configured
>
> So looks like it would require somebody to create an account somewhere,
> wherever that is.
>
> Best regards,
> Christian Schoenebeck

This is just a test email to set the patch state via an X-Patchwork-State
header, as documented here:

https://patchwork.readthedocs.io/_/downloads/en/latest/pdf/

Best regards,
Christian Schoenebeck
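For illustration only, such a header could be attached to an outgoing mail
with git send-email's --add-header option; the state value and patch file
name below are placeholders, and as the follow-up message notes, this
approach did not appear to take effect on the ozlabs instance:

    # Hypothetical example: attach a patchwork state hint to an outgoing patch.
    # Whether the receiving patchwork instance honours X-Patchwork-State at all
    # depends on its configuration.
    git send-email \
        --to=qemu-devel@nongnu.org \
        --add-header='X-Patchwork-State: Accepted' \
        0001-example-follow-up.patch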
On Tuesday, 8 September 2020 11:34:28 CEST Greg Kurz wrote:
> On Tue, 08 Sep 2020 11:01:15 +0200
> Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
>
> > On Tuesday, 8 September 2020 10:10:36 CEST Greg Kurz wrote:
> > > > > BTW, have you ever tried dealing with patchwork client's 'state'
> > > > > feature for already transmitted patches on the list (new,
> > > > > rejected, etc.)?
> > > >
> > > > Nope, never used patchwork's state at all and I've no idea how it
> > > > works... but I can ask my former IBM colleagues at Ozlabs.
> > >
> > > It seems that you need to be a "maintainer" from a patchwork standpoint
> > > to manipulate patch states.
> > >
> > > https://patchwork.readthedocs.io/en/latest/usage/overview/#maintainers
> > >
> > > ===========
> > > Maintainers
> > >
> > > Maintainers are a special type of user with permissions to do certain
> > > operations that regular Patchwork users can’t. Patchwork maintainers
> > > usually have a 1:1 mapping with a project’s code maintainers though
> > > this is not necessary.
> > >
> > > The operations that a maintainer can invoke include:
> > > Change the state of a patch
> > > Archive a patch
> > > Delegate a patch, or be delegated a patch
> > >
> > > ===========
> > >
> > > No clue how to upgrade to maintainer though...
> >
> > The command to change a patch state is, e.g.:
> > pwclient update -s Queued 11759645
> >
> > When I do that I get this error:
> > The update action requires authentication, but no username or password
> > is configured
> >
> > So looks like it would require somebody to create an account somewhere,
> > wherever that is.
>
> Which patchwork site are you using?
>
> https://patchwork.ozlabs.org/ or https://patchwork.kernel.org/?
>
> Anyway, both support self account creation:
>
> https://patchwork.ozlabs.org/register/
>
> https://patchwork.kernel.org/register/
>
> This allows you to update your own patches, but you need to be a
> maintainer to update others'.

Currently there are 3 maintainers registered for the QEMU patchwork
project on ozlabs.org:

https://patchwork.ozlabs.org/api/1.0/projects/14/

None for qemu on kernel.org:

https://patchwork.kernel.org/api/1.0/projects/301/

My other test, setting the state by email header, didn't work BTW.

Best regards,
Christian Schoenebeck
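For what it's worth, the maintainer list can be inspected without any
credentials by simply fetching the project endpoint quoted above, e.g.:

    # Read-only query; the "maintainers" array in the JSON response lists
    # the accounts currently registered as maintainers for the project.
    curl -s https://patchwork.ozlabs.org/api/1.0/projects/14/ | python3 -m json.tool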
On Tuesday, 8 September 2020 12:03:35 CEST Christian Schoenebeck wrote:
> On Tuesday, 8 September 2020 11:34:28 CEST Greg Kurz wrote:
> > On Tue, 08 Sep 2020 11:01:15 +0200
> > Christian Schoenebeck <qemu_oss@crudebyte.com> wrote:
> >
> > > On Tuesday, 8 September 2020 10:10:36 CEST Greg Kurz wrote:
> > > > > > BTW, have you ever tried dealing with patchwork client's 'state'
> > > > > > feature for already transmitted patches on the list (new,
> > > > > > rejected, etc.)?
> > > > >
> > > > > Nope, never used patchwork's state at all and I've no idea how it
> > > > > works... but I can ask my former IBM colleagues at Ozlabs.
> > > >
> > > > It seems that you need to be a "maintainer" from a patchwork
> > > > standpoint to manipulate patch states.
> > > >
> > > > https://patchwork.readthedocs.io/en/latest/usage/overview/#maintainers
> > > >
> > > > ===========
> > > > Maintainers
> > > >
> > > > Maintainers are a special type of user with permissions to do certain
> > > > operations that regular Patchwork users can’t. Patchwork maintainers
> > > > usually have a 1:1 mapping with a project’s code maintainers though
> > > > this is not necessary.
> > > >
> > > > The operations that a maintainer can invoke include:
> > > > Change the state of a patch
> > > > Archive a patch
> > > > Delegate a patch, or be delegated a patch
> > > >
> > > > ===========
> > > >
> > > > No clue how to upgrade to maintainer though...
> > >
> > > The command to change a patch state is, e.g.:
> > > pwclient update -s Queued 11759645
> > >
> > > When I do that I get this error:
> > > The update action requires authentication, but no username or password
> > > is configured
> > >
> > > So looks like it would require somebody to create an account somewhere,
> > > wherever that is.
> >
> > Which patchwork site are you using?
> >
> > https://patchwork.ozlabs.org/ or https://patchwork.kernel.org/?
> >
> > Anyway, both support self account creation:
> >
> > https://patchwork.ozlabs.org/register/
> >
> > https://patchwork.kernel.org/register/
> >
> > This allows you to update your own patches, but you need to be a
> > maintainer to update others'.
>
> Currently there are 3 maintainers registered for the QEMU patchwork
> project on ozlabs.org:
>
> https://patchwork.ozlabs.org/api/1.0/projects/14/
>
> None for qemu on kernel.org:
>
> https://patchwork.kernel.org/api/1.0/projects/301/
>
> My other test, setting the state by email header, didn't work BTW.

Looks like adding/removing maintainers is done by first getting the
project's current json data, e.g.:

https://patchwork.ozlabs.org/api/1.0/projects/14/

and then HTTP PUTting the (entire) modified json data with an updated
"maintainers" field:

PUT /api/1.0/projects/14/ HTTP/1.1
Host: patchwork.ozlabs.org
Content-Type: application/json

{
    <json-data-here>
}

or by just sending the updated "maintainers" json field with the HTTP
"PATCH" command instead:

PATCH /api/1.0/projects/14/ HTTP/1.1
Host: patchwork.ozlabs.org
Content-Type: application/json

{
    "maintainers": ...
}

As documented here:

https://patchwork.readthedocs.io/_/downloads/en/latest/pdf/
[page 90, 92]

As far as I can see, this is currently not implemented in the client,
so that's manual, raw HTTP handling.

No word in the docs though whether it's sufficient to be a project
maintainer for doing that, or if you even need to have a patchwork
admin account. And unfortunately patchwork does not seem to have a
concept of submaintainers, so it seems to be all-or-nothing.

Best regards,
Christian Schoenebeck
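A raw-HTTP sketch of that PATCH request with curl might look like the
following; the Token-based Authorization header and the numeric user IDs
are assumptions (patchwork's REST API commonly uses per-account API
tokens), and as noted above it is unclear which permission level the
server actually requires:

    # Sketch only: replace the project's maintainer list via the REST API.
    # <api-token> would come from the patchwork account settings (assumed),
    # and 1234, 5678 are placeholder patchwork user IDs.
    curl -X PATCH https://patchwork.ozlabs.org/api/1.0/projects/14/ \
         -H 'Authorization: Token <api-token>' \
         -H 'Content-Type: application/json' \
         -d '{"maintainers": [1234, 5678]}'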
diff --git a/fsdev/file-op-9p.h b/fsdev/file-op-9p.h
index f2f7772c86..d51cec2f3b 100644
--- a/fsdev/file-op-9p.h
+++ b/fsdev/file-op-9p.h
@@ -64,6 +64,10 @@ typedef struct ExtendedOps {
  */
 #define V9FS_REMAP_INODES       0x00000200
 #define V9FS_FORBID_MULTIDEVS   0x00000400
+/*
+ * Disables certain performance warnings from being logged on host side.
+ */
+#define V9FS_NO_PERF_WARN       0x00000800
 
 #define V9FS_SEC_MASK           0x0000003C
 
diff --git a/hw/9pfs/9p-synth.c b/hw/9pfs/9p-synth.c
index 7eb210ffa8..cec8c0eefc 100644
--- a/hw/9pfs/9p-synth.c
+++ b/hw/9pfs/9p-synth.c
@@ -541,6 +541,8 @@ static int synth_init(FsContext *ctx, Error **errp)
     QLIST_INIT(&synth_root.child);
     qemu_mutex_init(&synth_mutex);
 
+    ctx->export_flags |= V9FS_NO_PERF_WARN;
+
     /* Add "." and ".." entries for root */
     v9fs_add_dir_node(&synth_root, synth_root.attr->mode,
                       "..", synth_root.attr, synth_root.attr->inode);
diff --git a/hw/9pfs/9p.c b/hw/9pfs/9p.c
index 99b6f24fd6..741d222c3f 100644
--- a/hw/9pfs/9p.c
+++ b/hw/9pfs/9p.c
@@ -1354,7 +1354,7 @@ static void coroutine_fn v9fs_version(void *opaque)
     }
 
     /* 8192 is the default msize of Linux clients */
-    if (s->msize <= 8192) {
+    if (s->msize <= 8192 && !(s->ctx.export_flags & V9FS_NO_PERF_WARN)) {
         warn_report_once(
             "9p: degraded performance: a reasonable high msize should be "
             "chosen on client/guest side (chosen msize is <= 8192). See "
Previous patch introduced a performance warning being logged on host
side if client connected with an 'msize' <= 8192. Disable this
performance warning for the synth driver to prevent that warning from
being printed whenever the 9pfs (qtest) test cases are running.

Introduce a new export flag V9FS_NO_PERF_WARN for that purpose, which
might also be used to disable such warnings from the CLI in future.

We could have also prevented the warning by simply raising P9_MAX_SIZE
in virtio-9p-test.c to any value larger than 8192, however in the
context of test cases it makes sense running for edge cases, which
includes the lowest 'msize' value supported by the server which is
4096, hence we want to preserve an msize of 4096 for the test client.

Signed-off-by: Christian Schoenebeck <qemu_oss@crudebyte.com>
---
 fsdev/file-op-9p.h | 4 ++++
 hw/9pfs/9p-synth.c | 2 ++
 hw/9pfs/9p.c       | 2 +-
 3 files changed, 7 insertions(+), 1 deletion(-)