
[2/2] mm: don't allow executable ioremap mappings

Message ID 20210824091259.1324527-3-hch@lst.de (mailing list archive)
State New
Series [1/2] mm: move ioremap_page_range to vmalloc.c

Commit Message

Christoph Hellwig Aug. 24, 2021, 9:12 a.m. UTC
There is no need to execute from iomem (and on most platforms it is
impossible anyway), so add a pgprot_nx() call, similar to what vmap()
already does.

Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 mm/vmalloc.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
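
[Editor's note] For context, pgprot_nx() is a no-op on architectures that
cannot mark a mapping non-executable and masks in the NX bit where one
exists. A rough sketch of that pattern follows; the header locations and the
x86 form are paraphrased from memory and may not match the tree exactly:

/* include/linux/pgtable.h: generic fallback, a no-op where the
 * architecture has no way to mark a mapping non-executable. */
#ifndef pgprot_nx
#define pgprot_nx(prot)	(prot)
#endif

/* arch/x86/include/asm/pgtable.h (roughly): set _PAGE_NX so the
 * resulting mapping can never be executed. */
#define pgprot_nx(prot) \
	__pgprot_mask(pgprot_val(prot) | _PAGE_NX)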

Comments

Nicholas Piggin Aug. 26, 2021, 2:46 a.m. UTC | #1
Excerpts from Christoph Hellwig's message of August 24, 2021 7:12 pm:
> There is no need to execute from iomem (and on most platforms it is
> impossible anyway), so add a pgprot_nx() call, similar to what vmap()
> already does.
> 
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> ---
>  mm/vmalloc.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/mm/vmalloc.c b/mm/vmalloc.c
> index e44983fb2d15..3055f04b486b 100644
> --- a/mm/vmalloc.c
> +++ b/mm/vmalloc.c
> @@ -316,7 +316,7 @@ int ioremap_page_range(unsigned long addr, unsigned long end,
>  {
>  	int err;
>  
> -	err = vmap_range_noflush(addr, end, phys_addr, prot,
> +	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
>  				 ioremap_max_page_shift);

I can't see why this would be a problem. powerpc can execute from iomem,
but it seems like a bad idea anyway.

Is there any point in a WARN_ON or returning -EINVAL? Hmm, maybe that
doesn't work for archs that don't support NX. We could add a check for
the ones that do support it, though... But that's for another patch.

Thanks,
Nick
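
[Editor's note] For reference, a hypothetical sketch of the kind of check
Nick suggests; it is not part of this series, and it only catches anything
on architectures where pgprot_nx() actually changes the protection,
degrading to a no-op everywhere else:

int ioremap_page_range(unsigned long addr, unsigned long end,
		       phys_addr_t phys_addr, pgprot_t prot)
{
	int err;

	/* Reject executable requests instead of silently masking them
	 * off.  This only has teeth where pgprot_nx() is not a no-op,
	 * i.e. where the comparison below can actually differ. */
	if (WARN_ON_ONCE(pgprot_val(prot) != pgprot_val(pgprot_nx(prot))))
		return -EINVAL;

	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
				 ioremap_max_page_shift);
	flush_cache_vmap(addr, end);
	return err;
}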
Christoph Hellwig Aug. 26, 2021, 5:37 a.m. UTC | #2
On Thu, Aug 26, 2021 at 12:46:34PM +1000, Nicholas Piggin wrote:
> I can't see why this would be a problem. powerpc can execute from iomem,
> but it seems like a bad idea anyway.
> 
> Is there any point in a WARN_ON or returning -EINVAL? Hmm, maybe that
> doesn't work for archs that don't support NX. We could add a check for
> the ones that do support it, though... But that's for another patch.

This is the same as we do for regular vmap.  I can't remember why we
decided on this particular approach, as it's been a while.
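
[Editor's note] For comparison, the corresponding fragment of vmap() at the
time of this series applies the same masking. This is paraphrased from
mm/vmalloc.c of that era, with the surrounding context elided, so the exact
wording may differ:

	/* In vmap(): caller-supplied protections are stripped of the
	 * executable bit before the pages are mapped. */
	if (vmap_pages_range(addr, addr + size, pgprot_nx(prot),
			     pages, PAGE_SHIFT) < 0) {
		vunmap(area->addr);
		return NULL;
	}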

Patch

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index e44983fb2d15..3055f04b486b 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -316,7 +316,7 @@  int ioremap_page_range(unsigned long addr, unsigned long end,
 {
 	int err;
 
-	err = vmap_range_noflush(addr, end, phys_addr, prot,
+	err = vmap_range_noflush(addr, end, phys_addr, pgprot_nx(prot),
 				 ioremap_max_page_shift);
 	flush_cache_vmap(addr, end);
 	return err;