
[v2] platform/chrome: cros_ec_spi: Transfer messages at high priority

Message ID 20190403160526.257088-1-dianders@chromium.org (mailing list archive)
State New, archived

Commit Message

Doug Anderson April 3, 2019, 4:05 p.m. UTC
The software running on the Chrome OS Embedded Controller (cros_ec)
handles SPI transfers in a bit of a wonky way.  Specifically, if the EC
sees too long a delay in a SPI transfer it will give up and the
transfer will be counted as failed.  Unfortunately the timeout is
fairly short, though the actual number may differ between EC
codebases.

We can end up tripping the timeout pretty easily if we happen to
preempt the task running the SPI transfer and don't get back to it for
a little while.

Historically this hasn't been a _huge_ deal because:
1. On old devices Chrome OS used to run PREEMPT_VOLUNTARY.  That meant
   we were pretty unlikely to take a big break from the transfer.
2. On recent devices we had faster / more processors.
3. Recent devices didn't use "cros-ec-spi-pre-delay".  Using that
   delay makes us more likely to trip this use case.
4. For whatever reason (I didn't dig), old kernels seem to be less
   likely to trip this.
5. For the most part it's kinda OK if a few transfers to the EC fail.
   Mostly we're just polling the battery or doing some other task
   where we'll try again.

Even with the above things, this issue has reared its ugly head
periodically.  We could solve this in a nice way by adding reliable
retries to the EC protocol [1] or by re-designing the code in the EC
codebase to allow it to wait longer, but that code doesn't ever seem
to get changed.  ...and even if it did, it wouldn't help old devices.

It's now time to finally take a crack at making this a little better.
This patch isn't guaranteed to make every cros_ec SPI transfer
perfect, but it should improve things by a few orders of magnitude.
Specifically, you can try this on a rk3288-veyron Chromebook (which is
slower and also _does_ need "cros-ec-spi-pre-delay"):
  md5sum /dev/zero &
  md5sum /dev/zero &
  md5sum /dev/zero &
  md5sum /dev/zero &
  while true; do
    cat /sys/class/power_supply/sbs-20-000b/charge_now > /dev/null;
  done
...before this patch you'll see boatloads of errors.  After this patch I
don't see any in the testing I did.

The way this patch works is by effectively boosting the priority of
the cros_ec transfers.  As far as I know there is no simple way to
temporarily boost the priority of just the current process, so we
accomplish this by queuing the work on the system_highpri_wq.

NOTE: this patch relies on the fact that the SPI framework attempts to
push the messages out on the calling context (which is the one that is
boosted to high priority).  As I understand from earlier (long ago)
discussions with Mark Brown this should be a fine assumption.  Even if
it isn't always true, this patch still won't make things worse.
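
Boiled down, the pattern looks roughly like this (the real
implementation is in the patch at the bottom of this page;
do_transfer() is just a placeholder for the actual timing-sensitive
transfer routine):
  #include <linux/completion.h>
  #include <linux/workqueue.h>

  /* Placeholder for the real timing-sensitive transfer routine. */
  static int do_transfer(void)
  {
          return 0;
  }

  struct xfer_work_params {
          struct work_struct work;
          struct completion completion;
          int ret;
  };

  static void xfer_high_pri_work(struct work_struct *work)
  {
          struct xfer_work_params *params =
                  container_of(work, struct xfer_work_params, work);

          params->ret = do_transfer();
          complete(&params->completion);
  }

  static int xfer_high_pri(void)
  {
          struct xfer_work_params params;

          INIT_WORK(&params.work, xfer_high_pri_work);
          init_completion(&params.completion);

          /*
           * The transfer runs from a worker on system_highpri_wq, so it
           * is unlikely to be scheduled out for long; the (possibly low
           * priority) caller just blocks here until it is done.
           */
          queue_work(system_highpri_wq, &params.work);
          wait_for_completion(&params.completion);

          return params.ret;
  }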

[1] https://crbug.com/678675

Signed-off-by: Douglas Anderson <dianders@chromium.org>
---

Changes in v2:
- Use system_highpri_wq + completion (Matthias)
- Avoid duplication by using a function pointer (Matthias)

 drivers/platform/chrome/cros_ec_spi.c | 83 +++++++++++++++++++++++++--
 1 file changed, 77 insertions(+), 6 deletions(-)

Comments

Brian Norris April 3, 2019, 5:04 p.m. UTC | #1
I know some of this was hashed out last night, but I wasn't reading my
email then to interject ;)

On Wed, Apr 3, 2019 at 9:05 AM Douglas Anderson <dianders@chromium.org> wrote:
> +static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
> +                                struct cros_ec_command *ec_msg,
> +                                cros_ec_xfer_fn_t fn)
> +{
> +       struct cros_ec_xfer_work_params params;
> +
> +       INIT_WORK(&params.work, cros_ec_xfer_high_pri_work);
> +       params.ec_dev = ec_dev;
> +       params.ec_msg = ec_msg;
> +       params.fn = fn;
> +       init_completion(&params.completion);
> +
> +       /*
> +        * This looks a bit ridiculous.  Why do the work on a
> +        * different thread if we're just going to block waiting for
> +        * the thread to finish?  The key here is that the thread is
> +        * running at high priority but the calling context might not
> +        * be.  We need to be at high priority to avoid getting
> +        * context switched out for too long and the EC giving up on
> +        * the transfer.
> +        */
> +       queue_work(system_highpri_wq, &params.work);

Does anybody know what the definition of "too long" is for the phrase
"Don't queue works which can run for too long" in the documentation?

> +       wait_for_completion(&params.completion);

I think flush_workqueue() was discussed and rejected, but what about
flush_work()? Then you don't have to worry about the rest of the
contents of the workqueue -- just your own work--and I think you could
avoid the 'completion'.

You might also have a tiny race in the current implementation, since
(a) you can't queue the same work item twice and
(b) technically, the complete() call is still while the work item is
running -- you don't really guarantee the work item has finished
before you continue.
So the combination of (a) and (b) means that moving from one xfer to
the next, you might not successfully queue your work at all. You could
probably test this by checking the return value of queue_work() under
a heavy EC workload. Avoiding the completion would also avoid this
race.

Brian

> +
> +       return params.ret;
> +}
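
For illustration, a rough sketch of what the flush_work() suggestion
above could look like (not an actual v3, just the idea;
INIT_WORK_ONSTACK()/destroy_work_on_stack() are optional but match how
params is allocated here):
  static void cros_ec_xfer_high_pri_work(struct work_struct *work)
  {
          struct cros_ec_xfer_work_params *params;

          params = container_of(work, struct cros_ec_xfer_work_params, work);
          params->ret = params->fn(params->ec_dev, params->ec_msg);
          /* No complete() here; the caller waits with flush_work(). */
  }

  static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
                                   struct cros_ec_command *ec_msg,
                                   cros_ec_xfer_fn_t fn)
  {
          struct cros_ec_xfer_work_params params;

          INIT_WORK_ONSTACK(&params.work, cros_ec_xfer_high_pri_work);
          params.ec_dev = ec_dev;
          params.ec_msg = ec_msg;
          params.fn = fn;

          /*
           * flush_work() only returns once the worker has finished
           * executing this specific work item, so there is no separate
           * completion to race with and the on-stack work_struct can
           * safely go out of scope afterwards.
           */
          queue_work(system_highpri_wq, &params.work);
          flush_work(&params.work);
          destroy_work_on_stack(&params.work);

          return params.ret;
  }
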
Doug Anderson April 3, 2019, 5:49 p.m. UTC | #2
Hi,

On Wed, Apr 3, 2019 at 10:04 AM Brian Norris <briannorris@chromium.org> wrote:
>
> I know some of this was hashed out last night, but I wasn't reading my
> email then to interject ;)
>
> On Wed, Apr 3, 2019 at 9:05 AM Douglas Anderson <dianders@chromium.org> wrote:
> > +static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
> > +                                struct cros_ec_command *ec_msg,
> > +                                cros_ec_xfer_fn_t fn)
> > +{
> > +       struct cros_ec_xfer_work_params params;
> > +
> > +       INIT_WORK(&params.work, cros_ec_xfer_high_pri_work);
> > +       params.ec_dev = ec_dev;
> > +       params.ec_msg = ec_msg;
> > +       params.fn = fn;
> > +       init_completion(&params.completion);
> > +
> > +       /*
> > +        * This looks a bit ridiculous.  Why do the work on a
> > +        * different thread if we're just going to block waiting for
> > +        * the thread to finish?  The key here is that the thread is
> > +        * running at high priority but the calling context might not
> > +        * be.  We need to be at high priority to avoid getting
> > +        * context switched out for too long and the EC giving up on
> > +        * the transfer.
> > +        */
> > +       queue_work(system_highpri_wq, &params.work);
>
> Does anybody know what the definition of "too long" is for the phrase
> "Don't queue works which can run for too long" in the documentation?

Using the docs that Matthias pointed at:

https://www.kernel.org/doc/html/v5.0/core-api/workqueue.html

...take a look at the "Example Execution Scenarios".  They show tasks
that burn CPU cycles for 5 ms, and this seems to illustrate the kind of
tasks that workqueues are designed for.  That makes me feel like 5 ms
is not too long.

I don't think we'll be running for that long in our case.  While a few
Tegra device trees have a big delay of 2000 us for
"google,cros-ec-spi-msg-delay", I've always been of the opinion that the
delay was added to work around other bugs and is probably not really
needed, so if this somehow regresses performance there we could try
tuning it down.  ...but even in that case we're only delaying for
2000 us = 2 ms.  The "google,cros-ec-spi-pre-delay" on rk3288-veyron
Chromebooks is 30 us.  There are some other delays hardcoded in the
driver but those are all expressed in terms of nanoseconds.

It looks like in the extreme case, though, we still could end up
having our work take 200 ms for completion as per the comments in
"EC_MSG_DEADLINE_MS".  IIRC this is a super uncommon case and only
happened on a few systems when the battery was in a particularly wonky
state and decided to do clock stretching.  Also: we shouldn't even be
busy-waiting during this time since most of it will be sitting
(presumably blocked) waiting for SPI bytes to come in.


So I think the summary is that I'm back to being convinced that the
"system_highpri_wq" should be OK.  Running lots of work with no
blocking in the system high priority workqueue can truly block other
work items there, but it seems like the amount of time we're running
is within the time bounds expected.  The work also isn't going to
fully block other workers -- they will still be able to run on other
CPUs and (if they are already running) presumably they can still be
scheduled back in.


> > +       wait_for_completion(&params.completion);
>
> I think flush_workqueue() was discussed and rejected, but what about
> flush_work()? Then you don't have to worry about the rest of the
> contents of the workqueue -- just your own work--and I think you could
> avoid the 'completion'.
>
> You might also have a tiny race in the current implementation, since
> (a) you can't queue the same work item twice and
> (b) technically, the complete() call is still while the work item is
> running -- you don't really guarantee the work item has finished
> before you continue.
> So the combination of (a) and (b) means that moving from one xfer to
> the next, you might not successfully queue your work at all. You could
> probably test this by checking the return value of queue_work() under
> a heavy EC workload. Avoiding the completion would also avoid this
> race.

Thanks, I'll do that in v3 assuming someone doesn't convince me to
switch back to a private workqueue.

-Doug
Brian Norris April 3, 2019, 5:55 p.m. UTC | #3
On Wed, Apr 3, 2019 at 10:49 AM Doug Anderson <dianders@chromium.org> wrote:
> On Wed, Apr 3, 2019 at 10:04 AM Brian Norris <briannorris@chromium.org> wrote:
> > Does anybody know what the definition of "too long" is for the phrase
> > "Don't queue works which can run for too long" in the documentation?
...
> So I think the summary is that I'm back to being convinced that the
> "system_highpri_wq" should be OK.

Makes sense.

> > > +       wait_for_completion(&params.completion);
> >
> > I think flush_workqueue() was discussed and rejected, but what about
> > flush_work()? Then you don't have to worry about the rest of the
> > contents of the workqueue -- just your own work--and I think you could
> > avoid the 'completion'.
...
> Thanks, I'll do that in v3 assuming someone doesn't convince me to
> switch back to a private workqueue.

You can do flush_work() in either case :) Then, if for some reason you
start queueing up other work on your private workqueue, you still
won't have to worry about inter-'work' dependencies.

Brian
Matthias Kaehlcke April 3, 2019, 6:14 p.m. UTC | #4
On Wed, Apr 03, 2019 at 10:04:16AM -0700, Brian Norris wrote:
> I know some of this was hashed out last night, but I wasn't reading my
> email then to interject ;)
> 
> On Wed, Apr 3, 2019 at 9:05 AM Douglas Anderson <dianders@chromium.org> wrote:
> > +static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
> > +                                struct cros_ec_command *ec_msg,
> > +                                cros_ec_xfer_fn_t fn)
> > +{
> > +       struct cros_ec_xfer_work_params params;
> > +
> > +       INIT_WORK(&params.work, cros_ec_xfer_high_pri_work);
> > +       params.ec_dev = ec_dev;
> > +       params.ec_msg = ec_msg;
> > +       params.fn = fn;
> > +       init_completion(&params.completion);
> > +
> > +       /*
> > +        * This looks a bit ridiculous.  Why do the work on a
> > +        * different thread if we're just going to block waiting for
> > +        * the thread to finish?  The key here is that the thread is
> > +        * running at high priority but the calling context might not
> > +        * be.  We need to be at high priority to avoid getting
> > +        * context switched out for too long and the EC giving up on
> > +        * the transfer.
> > +        */
> > +       queue_work(system_highpri_wq, &params.work);
> 
> Does anybody know what the definition of "too long" is for the phrase
> "Don't queue works which can run for too long" in the documentation?
> 
> > +       wait_for_completion(&params.completion);
> 
> I think flush_workqueue() was discussed and rejected, but what about
> flush_work()? Then you don't have to worry about the rest of the
> contents of the workqueue -- just your own work--and I think you could
> avoid the 'completion'.

Indeed, flush_work() seems the right thing to do.

I seemed to remember that there is a function to wait for a work to
complete and scanned through workqueue.h for it, but somehow missed it.

> You might also have a tiny race in the current implementation, since
> (a) you can't queue the same work item twice and
> (b) technically, the complete() call is still while the work item is
> running -- you don't really guarantee the work item has finished
> before you continue.
> So the combination of (a) and (b) means that moving from one xfer to
> the next, you might not successfully queue your work at all. You could
> probably test this by checking the return value of queue_work() under
> a heavy EC workload. Avoiding the completion would also avoid this
> race.

Each transfer has its own work struct (allocated on the stack), hence
a) does not occur. b) is still true, but shouldn't be a problem on
its own.

Anyway, using flush_work() as you suggested is the better solution :)
Doug Anderson April 3, 2019, 6:17 p.m. UTC | #5
Hi,

On Wed, Apr 3, 2019 at 11:14 AM Matthias Kaehlcke <mka@chromium.org> wrote:
> Each transfer has its own work struct (allocated on the stack), hence
> a) does not occur. b) is still true, but shouldn't be a problem on
> its own.

Actually, it could be much worse _because_ it's on the stack.  The
worker could write something back to the work after the work has been
de-allocated.  That's bad.


> Anyway, using flush_work() as you suggested is the better solution :)

Yup, thanks!
Matthias Kaehlcke April 3, 2019, 6:30 p.m. UTC | #6
On Wed, Apr 03, 2019 at 11:17:27AM -0700, Doug Anderson wrote:
> Hi,
> 
> On Wed, Apr 3, 2019 at 11:14 AM Matthias Kaehlcke <mka@chromium.org> wrote:
> > Each transfer has its own work struct (allocated on the stack), hence
> > a) does not occur. b) is still true, but shouldn't be a problem on
> > its own.
> 
> Actually, it could be much worse _because_ it's on the stack.  The
> worker could write something back to the work after the work has been
> de-allocated.  That's bad.

Sure, I said "not a problem on its own."
                            ~~~~~~~~~~

The worker is owned by this driver and supposedly we know what we are
doing. Changing a member in the struct after calling complete() would
likely be a bug anyway (though not necessarily a fatal one).
Brian Norris April 3, 2019, 6:39 p.m. UTC | #7
On Wed, Apr 3, 2019 at 11:30 AM Matthias Kaehlcke <mka@chromium.org> wrote:
> The worker is owned by this driver and supposedly we know what we are
> doing. Changing a member in the struct after calling complete() would
> likely be a bug anyway (though not necessarily a fatal one).

The work_struct is owned by the driver, but the *worker* is not. If we
haven't ensured the worker is totally done with the work_struct, then
we should not be freeing the struct. (i.e., we should not return from
the context where it was stack-allocated.)

Anyway, I think we've all agreed that this should be changed,

Brian

Patch

diff --git a/drivers/platform/chrome/cros_ec_spi.c b/drivers/platform/chrome/cros_ec_spi.c
index ffc38f9d4829..8aaea5d93c4f 100644
--- a/drivers/platform/chrome/cros_ec_spi.c
+++ b/drivers/platform/chrome/cros_ec_spi.c
@@ -75,6 +75,29 @@  struct cros_ec_spi {
 	unsigned int end_of_msg_delay;
 };
 
+typedef int (*cros_ec_xfer_fn_t) (struct cros_ec_device *ec_dev,
+				  struct cros_ec_command *ec_msg);
+
+/**
+ * struct cros_ec_xfer_work_params - params for our high priority workers
+ *
+ * @work: The work_struct needed to queue work
+ * @fn: The function to use to transfer
+ * @completion: Will be signaled when function is done
+ * @ec_dev: ChromeOS EC device
+ * @ec_msg: Message to transfer
+ * @ret: The return value of the function
+ */
+
+struct cros_ec_xfer_work_params {
+	struct work_struct work;
+	cros_ec_xfer_fn_t fn;
+	struct completion completion;
+	struct cros_ec_device *ec_dev;
+	struct cros_ec_command *ec_msg;
+	int ret;
+};
+
 static void debug_packet(struct device *dev, const char *name, u8 *ptr,
 			 int len)
 {
@@ -350,13 +373,13 @@  static int cros_ec_spi_receive_response(struct cros_ec_device *ec_dev,
 }
 
 /**
- * cros_ec_pkt_xfer_spi - Transfer a packet over SPI and receive the reply
+ * do_cros_ec_pkt_xfer_spi - Transfer a packet over SPI and receive the reply
  *
  * @ec_dev: ChromeOS EC device
  * @ec_msg: Message to transfer
  */
-static int cros_ec_pkt_xfer_spi(struct cros_ec_device *ec_dev,
-				struct cros_ec_command *ec_msg)
+static int do_cros_ec_pkt_xfer_spi(struct cros_ec_device *ec_dev,
+				   struct cros_ec_command *ec_msg)
 {
 	struct ec_host_response *response;
 	struct cros_ec_spi *ec_spi = ec_dev->priv;
@@ -493,13 +516,13 @@  static int cros_ec_pkt_xfer_spi(struct cros_ec_device *ec_dev,
 }
 
 /**
- * cros_ec_cmd_xfer_spi - Transfer a message over SPI and receive the reply
+ * do_cros_ec_cmd_xfer_spi - Transfer a message over SPI and receive the reply
  *
  * @ec_dev: ChromeOS EC device
  * @ec_msg: Message to transfer
  */
-static int cros_ec_cmd_xfer_spi(struct cros_ec_device *ec_dev,
-				struct cros_ec_command *ec_msg)
+static int do_cros_ec_cmd_xfer_spi(struct cros_ec_device *ec_dev,
+				   struct cros_ec_command *ec_msg)
 {
 	struct cros_ec_spi *ec_spi = ec_dev->priv;
 	struct spi_transfer trans;
@@ -611,6 +634,54 @@  static int cros_ec_cmd_xfer_spi(struct cros_ec_device *ec_dev,
 	return ret;
 }
 
+static void cros_ec_xfer_high_pri_work(struct work_struct *work)
+{
+	struct cros_ec_xfer_work_params *params;
+
+	params = container_of(work, struct cros_ec_xfer_work_params, work);
+	params->ret = params->fn(params->ec_dev, params->ec_msg);
+	complete(&params->completion);
+}
+
+static int cros_ec_xfer_high_pri(struct cros_ec_device *ec_dev,
+				 struct cros_ec_command *ec_msg,
+				 cros_ec_xfer_fn_t fn)
+{
+	struct cros_ec_xfer_work_params params;
+
+	INIT_WORK(&params.work, cros_ec_xfer_high_pri_work);
+	params.ec_dev = ec_dev;
+	params.ec_msg = ec_msg;
+	params.fn = fn;
+	init_completion(&params.completion);
+
+	/*
+	 * This looks a bit ridiculous.  Why do the work on a
+	 * different thread if we're just going to block waiting for
+	 * the thread to finish?  The key here is that the thread is
+	 * running at high priority but the calling context might not
+	 * be.  We need to be at high priority to avoid getting
+	 * context switched out for too long and the EC giving up on
+	 * the transfer.
+	 */
+	queue_work(system_highpri_wq, &params.work);
+	wait_for_completion(&params.completion);
+
+	return params.ret;
+}
+
+static int cros_ec_pkt_xfer_spi(struct cros_ec_device *ec_dev,
+				struct cros_ec_command *ec_msg)
+{
+	return cros_ec_xfer_high_pri(ec_dev, ec_msg, do_cros_ec_pkt_xfer_spi);
+}
+
+static int cros_ec_cmd_xfer_spi(struct cros_ec_device *ec_dev,
+				struct cros_ec_command *ec_msg)
+{
+	return cros_ec_xfer_high_pri(ec_dev, ec_msg, do_cros_ec_cmd_xfer_spi);
+}
+
 static void cros_ec_spi_dt_probe(struct cros_ec_spi *ec_spi, struct device *dev)
 {
 	struct device_node *np = dev->of_node;