diff mbox

[v5,7/8] ima: based on policy warn about loading firmware (pre-allocated buffer)

Message ID 1530542283-26145-8-git-send-email-zohar@linux.vnet.ibm.com (mailing list archive)
State New, archived
Headers show

Commit Message

Mimi Zohar July 2, 2018, 2:38 p.m. UTC
Some systems are memory constrained but they need to load very large
firmwares.  The firmware subsystem allows drivers to request this
firmware be loaded from the filesystem, but this requires that the
entire firmware be loaded into kernel memory before it's provided
to the driver.  This can lead to a situation where we map the firmware
twice, once to load the firmware into kernel memory and once to copy the
firmware into the final resting place.

To resolve this problem, commit a098ecd2fa7d ("firmware: support loading
into a pre-allocated buffer") introduced the request_firmware_into_buf()
API, which allows drivers to request that firmware be loaded directly
into a pre-allocated buffer.  (Based on the mailing list discussions,
calling dma_alloc_coherent() is unnecessary and confusing.)

(Very broken/buggy) devices using pre-allocated memory run the risk of
the firmware being accessible to the device prior to the completion of
IMA's signature verification.  For the time being, this patch emits a
warning, but does not prevent the loading of the firmware.

Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
Cc: Luis R. Rodriguez <mcgrof@suse.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Kees Cook <keescook@chromium.org>
Cc: Serge E. Hallyn <serge@hallyn.com>
Cc: Stephen Boyd <sboyd@kernel.org>
Cc: Bjorn Andersson <bjorn.andersson@linaro.org>

---
Changelog v5:
- Instead of preventing loading firmware into a pre-allocated buffer,
  emit a warning.

 security/integrity/ima/ima_main.c | 25 ++++++++++++++++---------
 1 file changed, 16 insertions(+), 9 deletions(-)

Comments

Ard Biesheuvel July 2, 2018, 3:30 p.m. UTC | #1
On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
> Some systems are memory constrained but they need to load very large
> firmwares.  The firmware subsystem allows drivers to request this
> firmware be loaded from the filesystem, but this requires that the
> entire firmware be loaded into kernel memory first before it's provided
> to the driver.  This can lead to a situation where we map the firmware
> twice, once to load the firmware into kernel memory and once to copy the
> firmware into the final resting place.
>
> To resolve this problem, commit a098ecd2fa7d ("firmware: support loading
> into a pre-allocated buffer") introduced request_firmware_into_buf() API
> that allows drivers to request firmware be loaded directly into a
> pre-allocated buffer. (Based on the mailing list discussions, calling
> dma_alloc_coherent() is unnecessary and confusing.)
>
> (Very broken/buggy) devices using pre-allocated memory run the risk of
> the firmware being accessible to the device prior to the completion of
> IMA's signature verification.  For the time being, this patch emits a
> warning, but does not prevent the loading of the firmware.
>

As I attempted to explain in the exchange with Luis, this has nothing
to do with broken or buggy devices, but is simply the reality we have
to deal with on platforms that lack IOMMUs.

Even if you load into one buffer, carry out the signature verification
and *only then* copy it to another buffer, a bus master could
potentially read it from the first buffer as well. Mapping for DMA
does *not* mean 'making the memory readable by the device' unless
IOMMUs are being used. Otherwise, a bus master can read it from the
first buffer, or even patch the code that performs the security check
in the first place. For such platforms, copying the data around to
prevent the device from reading it is simply pointless, as is any
other software mitigation intended to protect yourself from misbehaving
bus masters.

So issuing a warning in this particular case is rather arbitrary. On
these platforms, all bus masters can read (and modify) all of your
memory all of the time, and as long as the firmware loader code takes
care not to provide the DMA address to the device until after the
verification is complete, it really has done all it reasonably can in
the environment that it is expected to operate in.

(The use of dma_alloc_coherent() is a bit of a red herring here, as it
incorporates the DMA map operation. However, DMA map is a no-op on
systems with cache coherent 1:1 DMA [iow, all PCs and most arm64
platforms unless they have IOMMUs], and so there is not much
difference between memory allocated with kmalloc() or with
dma_alloc_coherent() in terms of whether the device can access it
freely.)






> Signed-off-by: Mimi Zohar <zohar@linux.vnet.ibm.com>
> Cc: Luis R. Rodriguez <mcgrof@suse.com>
> Cc: David Howells <dhowells@redhat.com>
> Cc: Kees Cook <keescook@chromium.org>
> Cc: Serge E. Hallyn <serge@hallyn.com>
> Cc: Stephen Boyd <sboyd@kernel.org>
> Cc: Bjorn Andersson <bjorn.andersson@linaro.org>
>
> ---
> Changelog v5:
> - Instead of preventing loading firmware from a pre-allocate buffer,
> emit a warning.
>
>  security/integrity/ima/ima_main.c | 25 ++++++++++++++++---------
>  1 file changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
> index e467664965e7..7da123d980ea 100644
> --- a/security/integrity/ima/ima_main.c
> +++ b/security/integrity/ima/ima_main.c
> @@ -416,6 +416,15 @@ void ima_post_path_mknod(struct dentry *dentry)
>                 iint->flags |= IMA_NEW_FILE;
>  }
>
> +static int read_idmap[READING_MAX_ID] = {
> +       [READING_FIRMWARE] = FIRMWARE_CHECK,
> +       [READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
> +       [READING_MODULE] = MODULE_CHECK,
> +       [READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
> +       [READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
> +       [READING_POLICY] = POLICY_CHECK
> +};
> +
>  /**
>   * ima_read_file - pre-measure/appraise hook decision based on policy
>   * @file: pointer to the file to be measured/appraised/audit
> @@ -439,18 +448,16 @@ int ima_read_file(struct file *file, enum kernel_read_file_id read_id)
>                 }
>                 return 0;       /* We rely on module signature checking */
>         }
> +
> +       if (read_id == READING_FIRMWARE_PREALLOC_BUFFER) {
> +               if ((ima_appraise & IMA_APPRAISE_FIRMWARE) &&
> +                   (ima_appraise & IMA_APPRAISE_ENFORCE)) {
> +                       pr_warn("device might be able to access firmware prior to signature verification completion.\n");
> +               }
> +       }
>         return 0;
>  }
>
> -static int read_idmap[READING_MAX_ID] = {
> -       [READING_FIRMWARE] = FIRMWARE_CHECK,
> -       [READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
> -       [READING_MODULE] = MODULE_CHECK,
> -       [READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
> -       [READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
> -       [READING_POLICY] = POLICY_CHECK
> -};
> -
>  /**
>   * ima_post_read_file - in memory collect/appraise/audit measurement
>   * @file: pointer to the file to be measured/appraised/audit
> --
> 2.7.5
>
Mimi Zohar July 9, 2018, 7:41 p.m. UTC | #2
On Mon, 2018-07-02 at 17:30 +0200, Ard Biesheuvel wrote:
> On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
> > Some systems are memory constrained but they need to load very large
> > firmwares.  The firmware subsystem allows drivers to request this
> > firmware be loaded from the filesystem, but this requires that the
> > entire firmware be loaded into kernel memory first before it's provided
> > to the driver.  This can lead to a situation where we map the firmware
> > twice, once to load the firmware into kernel memory and once to copy the
> > firmware into the final resting place.
> >
> > To resolve this problem, commit a098ecd2fa7d ("firmware: support loading
> > into a pre-allocated buffer") introduced request_firmware_into_buf() API
> > that allows drivers to request firmware be loaded directly into a
> > pre-allocated buffer. (Based on the mailing list discussions, calling
> > dma_alloc_coherent() is unnecessary and confusing.)
> >
> > (Very broken/buggy) devices using pre-allocated memory run the risk of
> > the firmware being accessible to the device prior to the completion of
> > IMA's signature verification.  For the time being, this patch emits a
> > warning, but does not prevent the loading of the firmware.
> >
> 
> As I attempted to explain in the exchange with Luis, this has nothing
> to do with broken or buggy devices, but is simply the reality we have
> to deal with on platforms that lack IOMMUs.

> Even if you load into one buffer, carry out the signature verification
> and *only then* copy it to another buffer, a bus master could
> potentially read it from the first buffer as well. Mapping for DMA
> does *not* mean 'making the memory readable by the device' unless
> IOMMUs are being used. Otherwise, a bus master can read it from the
> first buffer, or even patch the code that performs the security check
> in the first place. For such platforms, copying the data around to
> prevent the device from reading it is simply pointless, as well as any
> other mitigation in software to protect yourself from misbehaving bus
> masters.

Thank you for taking the time to explain this again.

> So issuing a warning in this particular case is rather arbitrary. On
> these platforms, all bus masters can read (and modify) all of your
> memory all of the time, and as long as the firmware loader code takes
> care not to provide the DMA address to the device until after the
> verification is complete, it really has done all it reasonably can in
> the environment that it is expected to operate in.

So for the non-IOMMU system case, differentiating between using a
pre-allocated buffer and using two buffers doesn't make sense.

> 
> (The use of dma_alloc_coherent() is a bit of a red herring here, as it
> incorporates the DMA map operation. However, DMA map is a no-op on
> systems with cache coherent 1:1 DMA [iow, all PCs and most arm64
> platforms unless they have IOMMUs], and so there is not much
> difference between memory allocated with kmalloc() or with
> dma_alloc_coherent() in terms of whether the device can access it
> freely)
  
What about systems with an IOMMU?

Mimi

Ard Biesheuvel July 10, 2018, 6:51 a.m. UTC | #3
On 9 July 2018 at 21:41, Mimi Zohar <zohar@linux.ibm.com> wrote:
> On Mon, 2018-07-02 at 17:30 +0200, Ard Biesheuvel wrote:
>> On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
>> > Some systems are memory constrained but they need to load very large
>> > firmwares.  The firmware subsystem allows drivers to request this
>> > firmware be loaded from the filesystem, but this requires that the
>> > entire firmware be loaded into kernel memory first before it's provided
>> > to the driver.  This can lead to a situation where we map the firmware
>> > twice, once to load the firmware into kernel memory and once to copy the
>> > firmware into the final resting place.
>> >
>> > To resolve this problem, commit a098ecd2fa7d ("firmware: support loading
>> > into a pre-allocated buffer") introduced request_firmware_into_buf() API
>> > that allows drivers to request firmware be loaded directly into a
>> > pre-allocated buffer. (Based on the mailing list discussions, calling
>> > dma_alloc_coherent() is unnecessary and confusing.)
>> >
>> > (Very broken/buggy) devices using pre-allocated memory run the risk of
>> > the firmware being accessible to the device prior to the completion of
>> > IMA's signature verification.  For the time being, this patch emits a
>> > warning, but does not prevent the loading of the firmware.
>> >
>>
>> As I attempted to explain in the exchange with Luis, this has nothing
>> to do with broken or buggy devices, but is simply the reality we have
>> to deal with on platforms that lack IOMMUs.
>
>> Even if you load into one buffer, carry out the signature verification
>> and *only then* copy it to another buffer, a bus master could
>> potentially read it from the first buffer as well. Mapping for DMA
>> does *not* mean 'making the memory readable by the device' unless
>> IOMMUs are being used. Otherwise, a bus master can read it from the
>> first buffer, or even patch the code that performs the security check
>> in the first place. For such platforms, copying the data around to
>> prevent the device from reading it is simply pointless, as well as any
>> other mitigation in software to protect yourself from misbehaving bus
>> masters.
>
> Thank you for taking the time to explain this again.
>
>> So issuing a warning in this particular case is rather arbitrary. On
>> these platforms, all bus masters can read (and modify) all of your
>> memory all of the time, and as long as the firmware loader code takes
>> care not to provide the DMA address to the device until after the
>> verification is complete, it really has done all it reasonably can in
>> the environment that it is expected to operate in.
>
> So for the non-IOMMU system case, differentiating between pre-
> allocated buffers vs. using two buffers doesn't make sense.
>
>>
>> (The use of dma_alloc_coherent() is a bit of a red herring here, as it
>> incorporates the DMA map operation. However, DMA map is a no-op on
>> systems with cache coherent 1:1 DMA [iow, all PCs and most arm64
>> platforms unless they have IOMMUs], and so there is not much
>> difference between memory allocated with kmalloc() or with
>> dma_alloc_coherent() in terms of whether the device can access it
>> freely)
>
> What about systems with an IOMMU?
>

On systems with an IOMMU, performing the DMA map will create an entry
in the IOMMU page tables for the physical region associated with the
buffer, making the region accessible to the device. For platforms in
this category, using dma_alloc_coherent() for allocating a buffer to
pass firmware to the device does open a window where the device could
theoretically access this data while the validation is still in
progress.

Note that the device still needs to be informed about the address of
the buffer: just calling dma_alloc_coherent() will not allow the
device to find the firmware image in its memory space, and arbitrary
DMA accesses performed by the device will trigger faults that are
reported to the OS. So the window between DMA map (or
dma_alloc_coherent()) and the device specific command to pass the DMA
buffer address to the device is not inherently unsafe IMO, but I do
understand the need to cover this scenario.

As I pointed out before, using coherent DMA buffers to perform
streaming DMA is generally a bad idea, since they may be allocated
from a finite pool, and may use uncached mappings, making the access
slower than necessary (while streaming DMA can use any kmalloc'ed
buffer and will just flush the contents of the caches to main memory
when the DMA map is performed).

So to summarize again: in my opinion, using a single buffer is not a
problem as long as the validation completes before the DMA map is
performed. This will provide the expected guarantees on systems with
IOMMUs, and will not complicate matters on systems where there is no
point in obsessing about this anyway given that devices can access all
of memory whenever they want to.
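
To make that ordering concrete, here is a rough sketch of what I mean
(the register offset and the "go" sequence are made up; the firmware
request and the streaming DMA calls are the standard APIs):

#include <linux/dma-mapping.h>
#include <linux/firmware.h>
#include <linux/io.h>
#include <linux/kernel.h>

#define EXAMPLE_FW_ADDR_REG	0x00	/* hypothetical device register */

static int example_boot(struct device *dev, void __iomem *regs,
			void *buf, size_t buf_size)
{
	const struct firmware *fw;
	dma_addr_t dma_addr;
	int ret;

	/* IMA appraisal runs inside the request, before it returns. */
	ret = request_firmware_into_buf(&fw, "example.fw", dev, buf, buf_size);
	if (ret)
		return ret;

	/*
	 * Streaming map only after verification: this flushes the caches
	 * and, on an IOMMU system, creates the page table entry that makes
	 * the buffer visible to the device.
	 */
	dma_addr = dma_map_single(dev, buf, fw->size, DMA_TO_DEVICE);
	if (dma_mapping_error(dev, dma_addr)) {
		release_firmware(fw);
		return -ENOMEM;
	}

	/* Only now hand the address to the device. */
	writel(lower_32_bits(dma_addr), regs + EXAMPLE_FW_ADDR_REG);
	/* ... device-specific "go" sequence, wait for completion ... */

	dma_unmap_single(dev, dma_addr, fw->size, DMA_TO_DEVICE);
	release_firmware(fw);
	return 0;
}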

As for the Qualcomm case: dma_alloc_coherent() is not needed here but
simply ends up being used because it was already wired up in the
qualcomm specific secure world API, which amounts to doing syscalls
into a higher privilege level than the one the kernel itself runs at.
So again, reasoning about whether the secure world will look at your
data before you checked the sig is rather pointless, and adding
special cases to the IMA api to cater for this use case seems like a
waste of engineering and review effort to me. If we have to do
something to tie up this loose end, let's try switching it to the
streaming DMA api instead.
Ard Biesheuvel July 10, 2018, 6:56 a.m. UTC | #4
On 10 July 2018 at 08:51, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> On 9 July 2018 at 21:41, Mimi Zohar <zohar@linux.ibm.com> wrote:
>> On Mon, 2018-07-02 at 17:30 +0200, Ard Biesheuvel wrote:
>>> On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
>>> > Some systems are memory constrained but they need to load very large
>>> > firmwares.  The firmware subsystem allows drivers to request this
>>> > firmware be loaded from the filesystem, but this requires that the
>>> > entire firmware be loaded into kernel memory first before it's provided
>>> > to the driver.  This can lead to a situation where we map the firmware
>>> > twice, once to load the firmware into kernel memory and once to copy the
>>> > firmware into the final resting place.
>>> >
>>> > To resolve this problem, commit a098ecd2fa7d ("firmware: support loading
>>> > into a pre-allocated buffer") introduced request_firmware_into_buf() API
>>> > that allows drivers to request firmware be loaded directly into a
>>> > pre-allocated buffer. (Based on the mailing list discussions, calling
>>> > dma_alloc_coherent() is unnecessary and confusing.)
>>> >
>>> > (Very broken/buggy) devices using pre-allocated memory run the risk of
>>> > the firmware being accessible to the device prior to the completion of
>>> > IMA's signature verification.  For the time being, this patch emits a
>>> > warning, but does not prevent the loading of the firmware.
>>> >
>>>
>>> As I attempted to explain in the exchange with Luis, this has nothing
>>> to do with broken or buggy devices, but is simply the reality we have
>>> to deal with on platforms that lack IOMMUs.
>>
>>> Even if you load into one buffer, carry out the signature verification
>>> and *only then* copy it to another buffer, a bus master could
>>> potentially read it from the first buffer as well. Mapping for DMA
>>> does *not* mean 'making the memory readable by the device' unless
>>> IOMMUs are being used. Otherwise, a bus master can read it from the
>>> first buffer, or even patch the code that performs the security check
>>> in the first place. For such platforms, copying the data around to
>>> prevent the device from reading it is simply pointless, as well as any
>>> other mitigation in software to protect yourself from misbehaving bus
>>> masters.
>>
>> Thank you for taking the time to explain this again.
>>
>>> So issuing a warning in this particular case is rather arbitrary. On
>>> these platforms, all bus masters can read (and modify) all of your
>>> memory all of the time, and as long as the firmware loader code takes
>>> care not to provide the DMA address to the device until after the
>>> verification is complete, it really has done all it reasonably can in
>>> the environment that it is expected to operate in.
>>
>> So for the non-IOMMU system case, differentiating between pre-
>> allocated buffers vs. using two buffers doesn't make sense.
>>
>>>
>>> (The use of dma_alloc_coherent() is a bit of a red herring here, as it
>>> incorporates the DMA map operation. However, DMA map is a no-op on
>>> systems with cache coherent 1:1 DMA [iow, all PCs and most arm64
>>> platforms unless they have IOMMUs], and so there is not much
>>> difference between memory allocated with kmalloc() or with
>>> dma_alloc_coherent() in terms of whether the device can access it
>>> freely)
>>
>> What about systems with an IOMMU?
>>
>
> On systems with an IOMMU, performing the DMA map will create an entry
> in the IOMMU page tables for the physical region associated with the
> buffer, making the region accessible to the device. For platforms in
> this category, using dma_alloc_coherent() for allocating a buffer to
> pass firmware to the device does open a window where the device could
> theoretically access this data while the validation is still in
> progress.
>
> Note that the device still needs to be informed about the address of
> the buffer: just calling dma_alloc_coherent() will not allow the
> device to find the firmware image in its memory space, and arbitrary
> DMA accesses performed by the device will trigger faults that are
> reported to the OS. So the window between DMA map (or
> dma_alloc_coherent()) and the device specific command to pass the DMA
> buffer address to the device is not inherently unsafe IMO, but I do
> understand the need to cover this scenario.
>
> As I pointed out before, using coherent DMA buffers to perform
> streaming DMA is generally a bad idea, since they may be allocated
> from a finite pool, and may use uncached mappings, making the access
> slower than necessary (while streaming DMA can use any kmalloc'ed
> buffer and will just flush the contents of the caches to main memory
> when the DMA map is performed).
>
> So to summarize again: in my opinion, using a single buffer is not a
> problem as long as the validation completes before the DMA map is
> performed. This will provide the expected guarantees on systems with
> IOMMUs, and will not complicate matters on systems where there is no
> point in obsessing about this anyway given that devices can access all
> of memory whenever they want to.
>
> As for the Qualcomm case: dma_alloc_coherent() is not needed here but
> simply ends up being used because it was already wired up in the
> qualcomm specific secure world API, which amounts to doing syscalls
> into a higher privilege level than the one the kernel itself runs at.
> So again, reasoning about whether the secure world will look at your
> data before you checked the sig is rather pointless, and adding
> special cases to the IMA api to cater for this use case seems like a
> waste of engineering and review effort to me. If we have to do
> something to tie up this loose end, let's try switching it to the
> streaming DMA api instead.
>

Forgot to mention: the Qualcomm case is about passing data to the CPU
running at another privilege level, so IOMMU vs !IOMMU is not a factor
here.
Mimi Zohar July 10, 2018, 6:47 p.m. UTC | #5
On Tue, 2018-07-10 at 08:56 +0200, Ard Biesheuvel wrote:
> On 10 July 2018 at 08:51, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > On 9 July 2018 at 21:41, Mimi Zohar <zohar@linux.ibm.com> wrote:
> >> On Mon, 2018-07-02 at 17:30 +0200, Ard Biesheuvel wrote:
> >>> On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
> >>> > Some systems are memory constrained but they need to load very large
> >>> > firmwares.  The firmware subsystem allows drivers to request this
> >>> > firmware be loaded from the filesystem, but this requires that the
> >>> > entire firmware be loaded into kernel memory first before it's provided
> >>> > to the driver.  This can lead to a situation where we map the firmware
> >>> > twice, once to load the firmware into kernel memory and once to copy the
> >>> > firmware into the final resting place.
> >>> >
> >>> > To resolve this problem, commit a098ecd2fa7d ("firmware: support loading
> >>> > into a pre-allocated buffer") introduced request_firmware_into_buf() API
> >>> > that allows drivers to request firmware be loaded directly into a
> >>> > pre-allocated buffer. (Based on the mailing list discussions, calling
> >>> > dma_alloc_coherent() is unnecessary and confusing.)
> >>> >
> >>> > (Very broken/buggy) devices using pre-allocated memory run the risk of
> >>> > the firmware being accessible to the device prior to the completion of
> >>> > IMA's signature verification.  For the time being, this patch emits a
> >>> > warning, but does not prevent the loading of the firmware.
> >>> >
> >>>
> >>> As I attempted to explain in the exchange with Luis, this has nothing
> >>> to do with broken or buggy devices, but is simply the reality we have
> >>> to deal with on platforms that lack IOMMUs.
> >>
> >>> Even if you load into one buffer, carry out the signature verification
> >>> and *only then* copy it to another buffer, a bus master could
> >>> potentially read it from the first buffer as well. Mapping for DMA
> >>> does *not* mean 'making the memory readable by the device' unless
> >>> IOMMUs are being used. Otherwise, a bus master can read it from the
> >>> first buffer, or even patch the code that performs the security check
> >>> in the first place. For such platforms, copying the data around to
> >>> prevent the device from reading it is simply pointless, as well as any
> >>> other mitigation in software to protect yourself from misbehaving bus
> >>> masters.
> >>
> >> Thank you for taking the time to explain this again.
> >>
> >>> So issuing a warning in this particular case is rather arbitrary. On
> >>> these platforms, all bus masters can read (and modify) all of your
> >>> memory all of the time, and as long as the firmware loader code takes
> >>> care not to provide the DMA address to the device until after the
> >>> verification is complete, it really has done all it reasonably can in
> >>> the environment that it is expected to operate in.
> >>
> >> So for the non-IOMMU system case, differentiating between pre-
> >> allocated buffers vs. using two buffers doesn't make sense.
> >>
> >>>
> >>> (The use of dma_alloc_coherent() is a bit of a red herring here, as it
> >>> incorporates the DMA map operation. However, DMA map is a no-op on
> >>> systems with cache coherent 1:1 DMA [iow, all PCs and most arm64
> >>> platforms unless they have IOMMUs], and so there is not much
> >>> difference between memory allocated with kmalloc() or with
> >>> dma_alloc_coherent() in terms of whether the device can access it
> >>> freely)
> >>
> >> What about systems with an IOMMU?
> >>
> >
> > On systems with an IOMMU, performing the DMA map will create an entry
> > in the IOMMU page tables for the physical region associated with the
> > buffer, making the region accessible to the device. For platforms in
> > this category, using dma_alloc_coherent() for allocating a buffer to
> > pass firmware to the device does open a window where the device could
> > theoretically access this data while the validation is still in
> > progress.
> >
> > Note that the device still needs to be informed about the address of
> > the buffer: just calling dma_alloc_coherent() will not allow the
> > device to find the firmware image in its memory space, and arbitrary
> > DMA accesses performed by the device will trigger faults that are
> > reported to the OS. So the window between DMA map (or
> > dma_alloc_coherent()) and the device specific command to pass the DMA
> > buffer address to the device is not inherently unsafe IMO, but I do
> > understand the need to cover this scenario.
> >
> > As I pointed out before, using coherent DMA buffers to perform
> > streaming DMA is generally a bad idea, since they may be allocated
> > from a finite pool, and may use uncached mappings, making the access
> > slower than necessary (while streaming DMA can use any kmalloc'ed
> > buffer and will just flush the contents of the caches to main memory
> > when the DMA map is performed).
> >
> > So to summarize again: in my opinion, using a single buffer is not a
> > problem as long as the validation completes before the DMA map is
> > performed. This will provide the expected guarantees on systems with
> > IOMMUs, and will not complicate matters on systems where there is no
> > point in obsessing about this anyway given that devices can access all
> > of memory whenever they want to.

It sounds like as long as the pre-allocated buffer is not being
re-used, either by being mapped to multiple devices or used to load
multiple firmware blobs, it is safe.

> >
> > As for the Qualcomm case: dma_alloc_coherent() is not needed here but
> > simply ends up being used because it was already wired up in the
> > qualcomm specific secure world API, which amounts to doing syscalls
> > into a higher privilege level than the one the kernel itself runs at.
> > So again, reasoning about whether the secure world will look at your
> > data before you checked the sig is rather pointless, and adding
> > special cases to the IMA api to cater for this use case seems like a
> > waste of engineering and review effort to me. If we have to do
> > something to tie up this loose end, let's try switching it to the
> > streaming DMA api instead.
> >
> 
> Forgot to mention: the Qualcomm case is about passing data to the CPU
> running at another privilege level, so IOMMU vs !IOMMU is not a factor
> here.

Agreed.  It sounds like the dependency would be on whether the buffer
has been DMA mapped.

Mimi

Bjorn Andersson July 10, 2018, 7:19 p.m. UTC | #6
On Mon 09 Jul 23:56 PDT 2018, Ard Biesheuvel wrote:

> On 10 July 2018 at 08:51, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
> > On 9 July 2018 at 21:41, Mimi Zohar <zohar@linux.ibm.com> wrote:
> >> On Mon, 2018-07-02 at 17:30 +0200, Ard Biesheuvel wrote:
> >>> On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
[..]
> > So to summarize again: in my opinion, using a single buffer is not a
> > problem as long as the validation completes before the DMA map is
> > performed. This will provide the expected guarantees on systems with
> > IOMMUs, and will not complicate matters on systems where there is no
> > point in obsessing about this anyway given that devices can access all
> > of memory whenever they want to.
> >
> > As for the Qualcomm case: dma_alloc_coherent() is not needed here but
> > simply ends up being used because it was already wired up in the
> > qualcomm specific secure world API, which amounts to doing syscalls
> > into a higher privilege level than the one the kernel itself runs at.

As I said before, the dma_alloc_coherent() referred to in this
discussion holds parameters for the Trustzone call, i.e. it will hold
the address of the buffer that the firmware was loaded into - it won't
hold any data that comes from the actual firmware.

> > So again, reasoning about whether the secure world will look at your
> > data before you checked the sig is rather pointless, and adding
> > special cases to the IMA api to cater for this use case seems like a
> > waste of engineering and review effort to me.

Forgive me if I'm missing something in the implementation here, but
aren't the IMA checks done before request_firmware*() returns?

> > If we have to do
> > something to tie up this loose end, let's try switching it to the
> > streaming DMA api instead.
> >
> 
> Forgot to mention: the Qualcomm case is about passing data to the CPU
> running at another privilege level, so IOMMU vs !IOMMU is not a factor
> here.

Furthermore, all scenarios we've looked at so far are completely
sequential, so if the firmware request fails we won't invoke the
Trustzone operation that would consume the memory, nor will we turn on
the power to the CPU that would execute the firmware.
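
As a rough sketch of that sequential pattern (the helper and firmware
name below are placeholders, not our actual driver code):

#include <linux/device.h>
#include <linux/firmware.h>

/* Placeholder for the platform-specific authenticate-and-boot step. */
int example_auth_and_boot(struct device *dev, void *region, size_t size);

static int example_start_remote(struct device *dev, void *region,
				size_t region_size)
{
	const struct firmware *fw;
	int ret;

	ret = request_firmware_into_buf(&fw, "remote.mbn", dev,
					region, region_size);
	if (ret)
		return ret;	/* nothing handed to Trustzone, CPU stays off */

	ret = example_auth_and_boot(dev, region, fw->size);
	release_firmware(fw);
	return ret;
}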


Tbh the only case I can think of where there would be a "race condition"
here is if we have a device that is polling the last byte of a
predefined firmware memory area for the firmware loader to read some
specific data into it. In cases where the firmware request is followed
by some explicit signalling to the device (or a power on sequence) I'm
unable to see the issue discussed here.

Regards,
Bjorn
Ard Biesheuvel July 11, 2018, 6:24 a.m. UTC | #7
On 10 July 2018 at 21:19, Bjorn Andersson <bjorn.andersson@linaro.org> wrote:
> On Mon 09 Jul 23:56 PDT 2018, Ard Biesheuvel wrote:
>
>> On 10 July 2018 at 08:51, Ard Biesheuvel <ard.biesheuvel@linaro.org> wrote:
>> > On 9 July 2018 at 21:41, Mimi Zohar <zohar@linux.ibm.com> wrote:
>> >> On Mon, 2018-07-02 at 17:30 +0200, Ard Biesheuvel wrote:
>> >>> On 2 July 2018 at 16:38, Mimi Zohar <zohar@linux.vnet.ibm.com> wrote:
> [..]
>> > So to summarize again: in my opinion, using a single buffer is not a
>> > problem as long as the validation completes before the DMA map is
>> > performed. This will provide the expected guarantees on systems with
>> > IOMMUs, and will not complicate matters on systems where there is no
>> > point in obsessing about this anyway given that devices can access all
>> > of memory whenever they want to.
>> >
>> > As for the Qualcomm case: dma_alloc_coherent() is not needed here but
>> > simply ends up being used because it was already wired up in the
>> > qualcomm specific secure world API, which amounts to doing syscalls
>> > into a higher privilege level than the one the kernel itself runs at.
>
> As I said before, the dma_alloc_coherent() referred to in this
> discussion holds parameters for the Trustzone call, i.e. it will hold
> the address to the buffer that the firmware was loaded into - it won't
> hold any data that comes from the actual firmware.
>

Ah yes, I forgot that detail. Thanks for reminding me.

>> > So again, reasoning about whether the secure world will look at your
>> > data before you checked the sig is rather pointless, and adding
>> > special cases to the IMA api to cater for this use case seems like a
>> > waste of engineering and review effort to me.
>
> Forgive me if I'm missing something in the implementation here, but
> aren't the IMA checks done before request_firmware*() returns?
>

The issue under discussion is whether calling request_firmware() to
load firmware into a buffer that may be readable by the device while
the IMA checks are in progress constitutes a security hazard.

>> > If we have to do
>> > something to tie up this loose end, let's try switching it to the
>> > streaming DMA api instead.
>> >
>>
>> Forgot to mention: the Qualcomm case is about passing data to the CPU
>> running at another privilege level, so IOMMU vs !IOMMU is not a factor
>> here.
>
> Further more, all scenarios we've look at so far is completely
> sequential, so if the firmware request fails we won't invoke the
> Trustzone operation that would consume the memory or we won't turn on
> the power to the CPU that would execute the firmware.
>
>
> Tbh the only case I can think of where there would be a "race condition"
> here is if we have a device that is polling the last byte of a
> predefined firmware memory area for the firmware loader to read some
> specific data into it. In cases where the firmware request is followed
> by some explicit signalling to the device (or a power on sequence) I'm
> unable to see the issue discussed here.
>

I agree. But the latter part is platform specific, and so it requires
some degree of trust in the driver author on the part of the IMA
routines that request_firmware() is called at an appropriate time.

The point I am trying to make in this thread is that there are cases
where it makes no sense for the kernel to reason about these things,
given that higher privilege levels such as the TrustZone secure world
own the kernel's execution context entirely already, and given that
masters that are not behind an IOMMU can read and write all of memory
all of the time anyway.

The bottom line is that reality does not respect the layering that IMA
assumes, and so the only meaningful way to treat some of the use cases
is simply to ignore them entirely. So we should still perform all the
checks, but we will have to live with the limited utility of doing so
in some scenarios (and not print nasty warnings to the kernel log for
such cases).
Mimi Zohar July 12, 2018, 8:03 p.m. UTC | #8
On Wed, 2018-07-11 at 08:24 +0200, Ard Biesheuvel wrote:
> On 10 July 2018 at 21:19, Bjorn Andersson <bjorn.andersson@linaro.org> wrote:

> > Tbh the only case I can think of where there would be a "race condition"
> > here is if we have a device that is polling the last byte of a
> > predefined firmware memory area for the firmware loader to read some
> > specific data into it. In cases where the firmware request is followed
> > by some explicit signalling to the device (or a power on sequence) I'm
> > unable to see the issue discussed here.
> >
> 
> I agree. But the latter part is platform specific, and so it requires
> some degree of trust in the driver author on the part of the IMA
> routines that request_firmware() is called at an appropriate time.

Exactly.  Qualcomm could be using the pre-allocated buffer
appropriately, but that doesn't guarantee how it will be used in the
future.

> The point I am trying to make in this thread is that there are cases
> where it makes no sense for the kernel to reason about these things,
> given that higher privilege levels such as the TrustZone secure world
> own the kernel's execution context entirely already, and given that
> masters that are not behind an IOMMU can read and write all of memory
> all of the time anyway.

> The bottom line is that reality does not respect the layering that IMA
> assumes, and so the only meaningful way to treat some of the use cases
> is simply to ignore them entirely. So we should still perform all the
> checks, but we will have to live with the limited utility of doing so
> in some scenarios (and not print nasty warnings to the kernel log for
> such cases)

You have convinced me that the warning shouldn't be emitted in either
the non-IOMMU case or the IOMMU case, assuming the buffer has not been
DMA mapped.

The remaining concern is using the same buffer mapped to multiple
devices or re-using the same buffer to load multiple firmware blobs.
I'm not sure how easy that would be to detect.

I need to stage the rest of the patch set to be upstreamed.  Could we
just add a comment in the code reflecting this discussion?
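
For illustration only, such a comment above the
READING_FIRMWARE_PREALLOC_BUFFER check in ima_read_file() might read
something like this (just a sketch summarizing this thread, wording to
be agreed on):

	/*
	 * On systems without an IOMMU, any bus master can read or modify
	 * all of memory at any time, so loading firmware into a
	 * pre-allocated buffer is no worse than loading it elsewhere and
	 * copying it.  On systems with an IOMMU, the device cannot see the
	 * buffer until it is DMA mapped and its address is handed to the
	 * device, which well-behaved drivers do only after
	 * request_firmware*() - and hence IMA appraisal - has completed.
	 * Re-using a buffer that is already mapped to a device, or for
	 * multiple firmware blobs, would defeat the verification and is
	 * not detected here.
	 */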

Mimi

Bjorn Andersson July 12, 2018, 8:37 p.m. UTC | #9
On Thu 12 Jul 13:03 PDT 2018, Mimi Zohar wrote:

> On Wed, 2018-07-11 at 08:24 +0200, Ard Biesheuvel wrote:
> > On 10 July 2018 at 21:19, Bjorn Andersson <bjorn.andersson@linaro.org> wrote:
> 
> > > Tbh the only case I can think of where there would be a "race condition"
> > > here is if we have a device that is polling the last byte of a
> > > predefined firmware memory area for the firmware loader to read some
> > > specific data into it. In cases where the firmware request is followed
> > > by some explicit signalling to the device (or a power on sequence) I'm
> > > unable to see the issue discussed here.
> > >
> > 
> > I agree. But the latter part is platform specific, and so it requires
> > some degree of trust in the driver author on the part of the IMA
> > routines that request_firmware() is called at an appropriate time.
> 
> Exactly.  Qualcomm could be using the pre-allocated buffer
> appropriately, but that doesn't guarantee how it will be used in the
> future.
> 

Agreed.

But a device that starts to operate on the firmware memory before it's
fully loaded (and verified) will likely hit other nasty issues. Using a
traditional request_firmware() and memcpy(), or simply mapping the buffer
into the remote piecemeal, would have the same issue.

So I think that regardless of using IMA, if you don't have the ability
to control your device's view of the entire firmware buffer atomically
you must write your device drivers accordingly.

Regards,
Bjorn
diff mbox

Patch

diff --git a/security/integrity/ima/ima_main.c b/security/integrity/ima/ima_main.c
index e467664965e7..7da123d980ea 100644
--- a/security/integrity/ima/ima_main.c
+++ b/security/integrity/ima/ima_main.c
@@ -416,6 +416,15 @@  void ima_post_path_mknod(struct dentry *dentry)
 		iint->flags |= IMA_NEW_FILE;
 }
 
+static int read_idmap[READING_MAX_ID] = {
+	[READING_FIRMWARE] = FIRMWARE_CHECK,
+	[READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
+	[READING_MODULE] = MODULE_CHECK,
+	[READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
+	[READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
+	[READING_POLICY] = POLICY_CHECK
+};
+
 /**
  * ima_read_file - pre-measure/appraise hook decision based on policy
  * @file: pointer to the file to be measured/appraised/audit
@@ -439,18 +448,16 @@  int ima_read_file(struct file *file, enum kernel_read_file_id read_id)
 		}
 		return 0;	/* We rely on module signature checking */
 	}
+
+	if (read_id == READING_FIRMWARE_PREALLOC_BUFFER) {
+		if ((ima_appraise & IMA_APPRAISE_FIRMWARE) &&
+		    (ima_appraise & IMA_APPRAISE_ENFORCE)) {
+			pr_warn("device might be able to access firmware prior to signature verification completion.\n");
+		}
+	}
 	return 0;
 }
 
-static int read_idmap[READING_MAX_ID] = {
-	[READING_FIRMWARE] = FIRMWARE_CHECK,
-	[READING_FIRMWARE_PREALLOC_BUFFER] = FIRMWARE_CHECK,
-	[READING_MODULE] = MODULE_CHECK,
-	[READING_KEXEC_IMAGE] = KEXEC_KERNEL_CHECK,
-	[READING_KEXEC_INITRAMFS] = KEXEC_INITRAMFS_CHECK,
-	[READING_POLICY] = POLICY_CHECK
-};
-
 /**
  * ima_post_read_file - in memory collect/appraise/audit measurement
  * @file: pointer to the file to be measured/appraised/audit