
[RESEND] mmc: core: fix race condition in mmc_wait_data_done

Message ID 1440731589-22241-1-git-send-email-shawn.lin@rock-chips.com (mailing list archive)
State New, archived

Commit Message

Shawn Lin Aug. 28, 2015, 3:13 a.m. UTC
From: Jialing Fu <jlfu@marvell.com>

The following panic was captured on kernel 3.14, but the issue still
exists in the latest kernel.
---------------------------------------------------------------------
[   20.738217] c0 3136 (Compiler) Unable to handle kernel NULL pointer dereference
at virtual address 00000578
......
[   20.738499] c0 3136 (Compiler) PC is at _raw_spin_lock_irqsave+0x24/0x60
[   20.738527] c0 3136 (Compiler) LR is at _raw_spin_lock_irqsave+0x20/0x60
[   20.740134] c0 3136 (Compiler) Call trace:
[   20.740165] c0 3136 (Compiler) [<ffffffc0008ee900>] _raw_spin_lock_irqsave+0x24/0x60
[   20.740200] c0 3136 (Compiler) [<ffffffc0000dd024>] __wake_up+0x1c/0x54
[   20.740230] c0 3136 (Compiler) [<ffffffc000639414>] mmc_wait_data_done+0x28/0x34
[   20.740262] c0 3136 (Compiler) [<ffffffc0006391a0>] mmc_request_done+0xa4/0x220
[   20.740314] c0 3136 (Compiler) [<ffffffc000656894>] sdhci_tasklet_finish+0xac/0x264
[   20.740352] c0 3136 (Compiler) [<ffffffc0000a2b58>] tasklet_action+0xa0/0x158
[   20.740382] c0 3136 (Compiler) [<ffffffc0000a2078>] __do_softirq+0x10c/0x2e4
[   20.740411] c0 3136 (Compiler) [<ffffffc0000a24bc>] irq_exit+0x8c/0xc0
[   20.740439] c0 3136 (Compiler) [<ffffffc00008489c>] handle_IRQ+0x48/0xac
[   20.740469] c0 3136 (Compiler) [<ffffffc000081428>] gic_handle_irq+0x38/0x7c
----------------------------------------------------------------------
In SMP, there is a race condition on "mrq" between the two paths below:
path1: CPU0: <tasklet context>
  static void mmc_wait_data_done(struct mmc_request *mrq)
  {
     mrq->host->context_info.is_done_rcv = true;
     //
     // Suppose CPU0 has just finished "is_done_rcv = true" in path1 and,
     // at that moment, an IRQ or an icache miss stalls CPU0.
     // What happens on CPU1 (path2)?
     //
     // If the mmcqd thread on CPU1 (path2) has not yet gone to sleep,
     // path2 can break out of wait_event_interruptible in
     // mmc_wait_for_data_req_done and continue with the next
     // mmc_request (mmc_blk_rw_rq_prep).
     //
     // Within mmc_blk_rw_rq_prep, the request is cleared to 0.
     // If the line below still loads "host" from "mrq" in the code the
     // compiler generated, the panic happens as we traced.
     wake_up_interruptible(&mrq->host->context_info.wait);
  }

path2: CPU1: <The mmcqd thread runs mmc_queue_thread>
  static int mmc_wait_for_data_req_done(...)
  {
     ...
     while (1) {
           wait_event_interruptible(context_info->wait,
                   (context_info->is_done_rcv ||
                    context_info->is_new_req));
           ...
  then, while preparing the next request:
  static void mmc_blk_rw_rq_prep(...)
  {
     ...
     memset(brq, 0, sizeof(struct mmc_blk_request));

This issue only happens by rare coincidence; however, adding an mdelay(1)
in mmc_wait_data_done as below makes it easy to reproduce.

   static void mmc_wait_data_done(struct mmc_request *mrq)
   {
     mrq->host->context_info.is_done_rcv = true;
+    mdelay(1);
     wake_up_interruptible(&mrq->host->context_info.wait);
    }

At runtime, an IRQ or an icache miss may occur at exactly the point where
the mdelay(1) was inserted.

This patch reads the mmc_context_info pointer once at the beginning of
the function, so the wake-up no longer dereferences "mrq" after the done
flag has been set; this avoids the race condition.

Signed-off-by: Jialing Fu <jlfu@marvell.com>
Tested-by: Shawn Lin <shawn.lin@rock-chips.com>
---

 drivers/mmc/core/core.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

Comments

Shawn Lin Aug. 28, 2015, 3:25 a.m. UTC | #1
On 2015/8/28 11:13, Shawn Lin wrote:
> [...]

Hi, Ulf

We hit this bug on the Intel-C3230RK platform with very low probability.

However, I can easily reproduce this case if I add an mdelay(1) or a
longer delay as Jialing did.

This patch seems useful to me. Should we push it forward? :)


> [...]
Ulf Hansson Aug. 28, 2015, 8:55 a.m. UTC | #2
On 28 August 2015 at 05:25, Shawn Lin <shawn.lin@rock-chips.com> wrote:
> On 2015/8/28 11:13, Shawn Lin wrote:
>> [...]
>
> Hi, ulf
>
> We find this bug on Intel-C3230RK platform for very small probability.
>
> Whereas I can easily reproduce this case if I add a mdelay(1) or  longer
> delay as Jialing did.
>
> This patch seems useful to me. Should we push it forward? :)

It seems like a very good idea!

Should we add a fixes tag to it?

[...]

Kind regards
Uffe
--
To unsubscribe from this list: send the line "unsubscribe linux-mmc" in
the body of a message to majordomo@vger.kernel.org
More majordomo info at  http://vger.kernel.org/majordomo-info.html
Shawn Lin Aug. 28, 2015, 9:53 a.m. UTC | #3
On 2015/8/28 16:55, Ulf Hansson wrote:
> On 28 August 2015 at 05:25, Shawn Lin <shawn.lin@rock-chips.com> wrote:
>> On 2015/8/28 11:13, Shawn Lin wrote:
>>> [...]
>>
>> Hi, ulf
>>
>> We find this bug on Intel-C3230RK platform for very small probability.
>>
>> Whereas I can easily reproduce this case if I add a mdelay(1) or  longer
>> delay as Jialing did.
>>
>> This patch seems useful to me. Should we push it forward? :)
>
> It seems like a very good idea!
>
> Should we add a fixes tag to it?

That's cool, but how to add a fixes tag?

[Fixes] mmc: core: fix race condition in mmc_wait_data_done ?   :)

>
> [...]
>
> Kind regards
> Uffe
Ulf Hansson Aug. 28, 2015, 10:09 a.m. UTC | #4
[...]

>>> Hi, ulf
>>>
>>> We find this bug on Intel-C3230RK platform for very small probability.
>>>
>>> Whereas I can easily reproduce this case if I add a mdelay(1) or  longer
>>> delay as Jialing did.
>>>
>>> This patch seems useful to me. Should we push it forward? :)
>>
>>
>> It seems like a very good idea!
>>
>> Should we add a fixes tag to it?
>
>
> That's cool, but how to add a fixes tag?
>
> [Fixes] mmc: core: fix race condition in mmc_wait_data_done ?   :)
>

A fixes tag points to an old commit which introduced the bug. If we
can't find one, we can add a Cc tag to "stable". Just search the git
log and you will find examples.
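As a sketch of the workflow Ulf describes, the tag can be generated straight from git (shown here in a throwaway repo so the commands are runnable anywhere; in a real tree you would first locate the culprit commit, e.g. with `git log -S is_done_rcv -- drivers/mmc/core/core.c` -- that search key is an assumption, not from the thread):

```shell
# Build a kernel-style "Fixes:" tag: 12-char abbreviated SHA plus the
# quoted subject line of the culprit commit. A throwaway repo stands in
# for the kernel tree.
repo="$(mktemp -d)"
cd "$repo"
git init -q
git -c user.name=demo -c user.email=demo@example.com \
    commit -q --allow-empty \
    -m 'mmc: fix async request mechanism for sequential read scenarios'

# Format the most recent commit as a Fixes: tag.
git log -1 --abbrev=12 --format='Fixes: %h ("%s")'
```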

Kind regards
Uffe
Jialing Fu Aug. 28, 2015, 10:22 a.m. UTC | #5
[...]

>> It seems like a very good idea!
>>
>> Should we add a fixes tag to it?
>
> That's cool, but how to add a fixes tag?
>
> [Fixes] mmc: core: fix race condition in mmc_wait_data_done ?   :)

A fixes tag points to an old commit which introduced the bug. If we
can't find one, we can add a Cc tag to "stable". Just search the git
log and you will find examples.

Like add one line as below?
Fixes: 2220eedfd7ae ("mmc: fix async request mechanism for sequential read scenarios")
Shawn Lin Aug. 28, 2015, 1:51 p.m. UTC | #6
On 2015/8/28 18:22, Jialing Fu wrote:
>
> [...]
>
>>>> Hi, ulf
>>>>
>>>> We find this bug on Intel-C3230RK platform for very small probability.
>>>>
>>>> Whereas I can easily reproduce this case if I add a mdelay(1) or
>>>> longer delay as Jialing did.
>>>>
>>>> This patch seems useful to me. Should we push it forward? :)
>>>
>>>
>>> It seems like a very good idea!
>>>
>>> Should we add a fixes tag to it?
>>
>>
>> That's cool, but how to add a fixes tag?
>>
>> [Fixes] mmc: core: fix race condition in mmc_wait_data_done ?   :)
>>
>
> A fixes tag points to an old commit which introduced the bug. If we can't find one, we can add a Cc tag to "stable". Just search the git log and you will find examples.
>
> Like add one line as below?
> Fixes: 2220eedfd7ae ("mmc: fix async request mechanism for sequential read scenarios")
>

That's it, Jialing. From my git blame, it seems this bug has been around
for a long time, but I find it strange that no one captured it before
you did.

Anyway, I will add a fixes tag and send v2 ASAP. :)

>
> Kind regards
> Uffe
>
Jialing Fu Aug. 31, 2015, 2:03 a.m. UTC | #7
> [...]
>
> That's it, Jialing. From my git blame, it seems this bug has been
> around for a long time, but I find it strange that no one captured it
> before you did.

[Jialing Fu] Shawn,
Yes, this bug is very hard to duplicate in my experiments.
But it does happen: I suffered from this bug for about 2 years before I
fixed it. In total I got 3 bug reports and about 3~4 ramdump files.
At first I failed to get a useful clue and even thought it was a DDR
stability issue.

Below is my analysis:
As I commented in the fix patch, the bug can only be triggered if the
line "LineB" below still reads "wait" through "mrq" in the code the
compiler generated. If the compiler happens to generate code in which
LineB doesn't fetch "wait" through "mrq" again, the issue can't happen.

  static void mmc_wait_data_done(struct mmc_request *mrq)
  {
     mrq->host->context_info.is_done_rcv = true;	// LineA
     // If the line below still gets host from "mrq" in the generated
     // code, the panic happens as we traced.
     wake_up_interruptible(&mrq->host->context_info.wait); // LineB
  }

Also, I suspect the bug may be triggered if an IRQ or an icache miss
happens just between LineA and LineB. Especially the icache-miss case is
easier to hit than an IRQ. I disassembled my code and found that LineA's
and LineB's instructions fall in two different cache lines in my failing
case. If you are interested, you can check your assembly code too.

> Anyway, I will add a fixes tag and send v2 ASAP. :)

Patch

diff --git a/drivers/mmc/core/core.c b/drivers/mmc/core/core.c
index 664b617..0520064 100644
--- a/drivers/mmc/core/core.c
+++ b/drivers/mmc/core/core.c
@@ -358,8 +358,10 @@  EXPORT_SYMBOL(mmc_start_bkops);
  */
 static void mmc_wait_data_done(struct mmc_request *mrq)
 {
-	mrq->host->context_info.is_done_rcv = true;
-	wake_up_interruptible(&mrq->host->context_info.wait);
+	struct mmc_context_info *context_info = &mrq->host->context_info;
+
+	context_info->is_done_rcv = true;
+	wake_up_interruptible(&context_info->wait);
 }
 
 static void mmc_wait_done(struct mmc_request *mrq)