[v12,08/11] arm64/kasan: add and use kasan_map_populate()

Message ID 20171013173214.27300-9-pasha.tatashin@oracle.com (mailing list archive)
State New, archived

Commit Message

Pavel Tatashin Oct. 13, 2017, 5:32 p.m. UTC
During early boot, kasan uses vmemmap_populate() to establish its shadow
memory. But that interface is intended for populating struct pages.

With this series, vmemmap memory is no longer zeroed during allocation,
while kasan expects its shadow memory to be zeroed. Therefore, we need a
new interface that allocates and maps kasan shadow memory and also
zeroes it for us: add kasan_map_populate() to resolve this difference.
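
In essence, the new helper wraps vmemmap_populate() and zeroes whatever
got mapped. A simplified sketch of the idea (the real implementation in
the patch below also walks the page tables so that only entries which
actually exist get zeroed, and handles PUD/PMD section mappings):

	static int __meminit kasan_map_populate(unsigned long start,
						unsigned long end, int node)
	{
		int ret = vmemmap_populate(start, end, node);

		/* vmemmap_populate() no longer zeroes, so zero the range here */
		if (!ret)
			memset((void *)start, 0, end - start);
		return ret;
	}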

Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
---
 arch/arm64/mm/kasan_init.c | 72 ++++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 66 insertions(+), 6 deletions(-)

Comments

Andrey Ryabinin Oct. 18, 2017, 4:55 p.m. UTC | #1
On 10/13/2017 08:32 PM, Pavel Tatashin wrote:
> During early boot, kasan uses vmemmap_populate() to establish its shadow
> memory. But that interface is intended for populating struct pages.
> 
> With this series, vmemmap memory is no longer zeroed during allocation,
> while kasan expects its shadow memory to be zeroed. Therefore, we need a
> new interface that allocates and maps kasan shadow memory and also
> zeroes it for us: add kasan_map_populate() to resolve this difference.
> 

What's the point of this patch? We have "arm64: kasan: Avoid using vmemmap_populate to initialise shadow",
which does the right thing and basically reverts this patch.
As an intermediate step, this patch looks absolutely useless. We can just avoid vmemmap_populate() right away.
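
For reference, the replacement approach maps the shadow at page
granularity, backed by zeroed memblock allocations, instead of going
through vmemmap_populate(). A rough sketch of the pte-level part,
modelled on the arm64 replacement patch (treat the helper names here as
illustrative rather than final):

	static phys_addr_t __init kasan_alloc_zeroed_page(int node)
	{
		void *p = memblock_virt_alloc_try_nid(PAGE_SIZE, PAGE_SIZE,
						      __pa(MAX_DMA_ADDRESS),
						      MEMBLOCK_ALLOC_ACCESSIBLE,
						      node);
		return __pa(p);
	}

	static void __init kasan_pte_populate(pmd_t *pmd, unsigned long addr,
					      unsigned long end, int node)
	{
		pte_t *pte = pte_offset_kernel(pmd, addr);

		do {
			/* each shadow page is backed by freshly zeroed memory */
			phys_addr_t page_phys = kasan_alloc_zeroed_page(node);

			set_pte(pte, pfn_pte(__phys_to_pfn(page_phys),
					     PAGE_KERNEL));
		} while (pte++, addr += PAGE_SIZE, addr != end);
	}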

> Signed-off-by: Pavel Tatashin <pasha.tatashin@oracle.com>
> ---
>  arch/arm64/mm/kasan_init.c | 72 ++++++++++++++++++++++++++++++++++++++++++----
>  1 file changed, 66 insertions(+), 6 deletions(-)
> 
> diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
> index 81f03959a4ab..cb4af2951c90 100644
> --- a/arch/arm64/mm/kasan_init.c
> +++ b/arch/arm64/mm/kasan_init.c
> @@ -28,6 +28,66 @@
>
Pavel Tatashin Oct. 18, 2017, 5:03 p.m. UTC | #2
Hi Andrey,

I asked Will about it, and he preferred to have his patch added to the
end of my series instead of replacing "arm64/kasan: add and use
kasan_map_populate()".

In addition, Will's patch stops using large pages for kasan memory, and
thus might introduce a regression, in which case it is easier to revert
just that patch instead of the whole series. It is unlikely that the
regression would be detectable, because kasan by itself already makes
the system quite slow.
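
For a sense of scale: with 4 KiB granules, each 2 MiB PMD section
mapping that the current code can use gets replaced by 512 base-page
entries. A trivial standalone illustration of the arithmetic:

	#include <stdio.h>

	int main(void)
	{
		const unsigned long page_size = 4096UL;    /* 4 KiB granule */
		const unsigned long pmd_size  = 2UL << 20; /* 2 MiB section */

		/* base-page PTEs needed to cover one section mapping */
		printf("%lu PTEs per PMD block\n", pmd_size / page_size);
		return 0; /* prints: 512 PTEs per PMD block */
	}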

Pasha
Will Deacon Oct. 18, 2017, 5:06 p.m. UTC | #3
On Wed, Oct 18, 2017 at 01:03:10PM -0400, Pavel Tatashin wrote:
> I asked Will about it, and he preferred to have his patch added to the
> end of my series instead of replacing "arm64/kasan: add and use
> kasan_map_populate()".

As I said, I'm fine either way, I just didn't want to cause extra work
or rebasing:

http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html

> In addition, Will's patch stops using large pages for kasan memory, and thus
> might introduce a regression, in which case it is easier to revert just that
> patch instead of the whole series. It is unlikely that the regression would
> be detectable, because kasan by itself already makes the system quite slow.

If it causes problems, I'll just fix them. No need to revert.

Will
Pavel Tatashin Oct. 18, 2017, 5:08 p.m. UTC | #4
> 
> As I said, I'm fine either way, I just didn't want to cause extra work
> or rebasing:
> 
> http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html

Makes sense. I am also fine either way; I can submit a new patch merging
the two together if needed.

Pavel
Andrey Ryabinin Oct. 18, 2017, 5:18 p.m. UTC | #5
On 10/18/2017 08:08 PM, Pavel Tatashin wrote:
>>
>> As I said, I'm fine either way, I just didn't want to cause extra work
>> or rebasing:
>>
>> http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
> 
> Makes sense. I am also fine either way; I can submit a new patch merging the two together if needed.
> 

Please do this. A single patch makes more sense.


> Pavel
Pavel Tatashin Oct. 18, 2017, 5:23 p.m. UTC | #6
Hi Andrew and Michal,

There are a few changes I need to make to my series:

1. Replace these two patches:

arm64/kasan: add and use kasan_map_populate()
x86/kasan: add and use kasan_map_populate()

With:

x86/mm/kasan: don't use vmemmap_populate() to initialize
  shadow
arm64/mm/kasan: don't use vmemmap_populate() to initialize
  shadow

2. Fix a kbuild warning about a section mismatch in
"mm: deferred_init_memmap improvements"

How should I proceed to get these replaced in the mm-tree? Send three
new patches, or send a new series?

Thank you,
Pavel

On 10/18/2017 01:18 PM, Andrey Ryabinin wrote:
> On 10/18/2017 08:08 PM, Pavel Tatashin wrote:
>>>
>>> As I said, I'm fine either way, I just didn't want to cause extra work
>>> or rebasing:
>>>
>>> http://lists.infradead.org/pipermail/linux-arm-kernel/2017-October/535703.html
>>
>> Makes sense. I am also fine either way; I can submit a new patch merging the two together if needed.
>>
> 
> Please do this. A single patch makes more sense.
> 
> 
>> Pavel
Andrey Ryabinin Nov. 3, 2017, 3:40 p.m. UTC | #7
On 10/18/2017 08:23 PM, Pavel Tatashin wrote:
> Hi Andrew and Michal,
> 
> There are a few changes I need to make to my series:
> 
> 1. Replace these two patches:
> 
> arm64/kasan: add and use kasan_map_populate()
> x86/kasan: add and use kasan_map_populate()
> 
> With:
> 
> x86/mm/kasan: don't use vmemmap_populate() to initialize
>  shadow
> arm64/mm/kasan: don't use vmemmap_populate() to initialize
>  shadow
> 

Pavel, could you please send the patches? These patches don't interfere with the rest of the series,
so I think it should be enough to send just two patches to replace the old ones.

> 2. Fix a kbuild warning about a section mismatch in
> "mm: deferred_init_memmap improvements"
> 
> How should I proceed to get these replaced in the mm-tree? Send three new patches, or send a new series?
> 
> Thank you,
> Pavel
>
Pavel Tatashin Nov. 3, 2017, 3:50 p.m. UTC | #8
>> 1. Replace these two patches:
>>
>> arm64/kasan: add and use kasan_map_populate()
>> x86/kasan: add and use kasan_map_populate()
>>
>> With:
>>
>> x86/mm/kasan: don't use vmemmap_populate() to initialize
>>   shadow
>> arm64/mm/kasan: don't use vmemmap_populate() to initialize
>>   shadow
>>
> 
> Pavel, could you please send the patches? These patches don't interfere with the rest of the series,
> so I think it should be enough to send just two patches to replace the old ones.
> 

Hi Andrey,

I asked Michal and Andrew how to proceed but never received a reply from
them. The patches are independent of the deferred page init series, as
long as they are applied before the series.

Anyway, I will post these two patches to the mailing list soon, but I am
not really sure whether they will be taken into the mm-tree.

Pavel

Patch

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index 81f03959a4ab..cb4af2951c90 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -28,6 +28,66 @@ 
 
 static pgd_t tmp_pg_dir[PTRS_PER_PGD] __initdata __aligned(PGD_SIZE);
 
+/* Creates mappings for kasan during early boot. The mapped memory is zeroed */
+static int __meminit kasan_map_populate(unsigned long start, unsigned long end,
+					int node)
+{
+	unsigned long addr, pfn, next;
+	unsigned long long size;
+	pgd_t *pgd;
+	pud_t *pud;
+	pmd_t *pmd;
+	pte_t *pte;
+	int ret;
+
+	ret = vmemmap_populate(start, end, node);
+	/*
+	 * We might have partially populated memory, so check for no entries,
+	 * and zero only those that actually exist.
+	 */
+	for (addr = start; addr < end; addr = next) {
+		pgd = pgd_offset_k(addr);
+		if (pgd_none(*pgd)) {
+			next = pgd_addr_end(addr, end);
+			continue;
+		}
+
+		pud = pud_offset(pgd, addr);
+		if (pud_none(*pud)) {
+			next = pud_addr_end(addr, end);
+			continue;
+		}
+		if (pud_sect(*pud)) {
+			/* This is PUD size page */
+			next = pud_addr_end(addr, end);
+			size = PUD_SIZE;
+			pfn = pud_pfn(*pud);
+		} else {
+			pmd = pmd_offset(pud, addr);
+			if (pmd_none(*pmd)) {
+				next = pmd_addr_end(addr, end);
+				continue;
+			}
+			if (pmd_sect(*pmd)) {
+				/* This is PMD size page */
+				next = pmd_addr_end(addr, end);
+				size = PMD_SIZE;
+				pfn = pmd_pfn(*pmd);
+			} else {
+				pte = pte_offset_kernel(pmd, addr);
+				next = addr + PAGE_SIZE;
+				if (pte_none(*pte))
+					continue;
+				/* This is base size page */
+				size = PAGE_SIZE;
+				pfn = pte_pfn(*pte);
+			}
+		}
+		memset(phys_to_virt(PFN_PHYS(pfn)), 0, size);
+	}
+	return ret;
+}
+
 /*
  * The p*d_populate functions call virt_to_phys implicitly so they can't be used
  * directly on kernel symbols (bm_p*d). All the early functions are called too
@@ -161,11 +221,11 @@  void __init kasan_init(void)
 
 	clear_pgds(KASAN_SHADOW_START, KASAN_SHADOW_END);
 
-	vmemmap_populate(kimg_shadow_start, kimg_shadow_end,
-			 pfn_to_nid(virt_to_pfn(lm_alias(_text))));
+	kasan_map_populate(kimg_shadow_start, kimg_shadow_end,
+			   pfn_to_nid(virt_to_pfn(lm_alias(_text))));
 
 	/*
-	 * vmemmap_populate() has populated the shadow region that covers the
+	 * kasan_map_populate() has populated the shadow region that covers the
 	 * kernel image with SWAPPER_BLOCK_SIZE mappings, so we have to round
 	 * the start and end addresses to SWAPPER_BLOCK_SIZE as well, to prevent
 	 * kasan_populate_zero_shadow() from replacing the page table entries
@@ -191,9 +251,9 @@  void __init kasan_init(void)
 		if (start >= end)
 			break;
 
-		vmemmap_populate((unsigned long)kasan_mem_to_shadow(start),
-				(unsigned long)kasan_mem_to_shadow(end),
-				pfn_to_nid(virt_to_pfn(start)));
+		kasan_map_populate((unsigned long)kasan_mem_to_shadow(start),
+				   (unsigned long)kasan_mem_to_shadow(end),
+				   pfn_to_nid(virt_to_pfn(start)));
 	}
 
 	/*