Message ID | 20230113035000.480021-3-ricarkol@google.com (mailing list archive)
State      | New, archived
Series     | KVM: arm64: Eager Huge-page splitting for dirty-logging
On Thu, Jan 12, 2023 at 7:50 PM Ricardo Koller <ricarkol@google.com> wrote:
>
> Add a stage2 helper, kvm_pgtable_stage2_create_removed(), for creating
> removed tables (the opposite of kvm_pgtable_stage2_free_removed()).
> Creating a removed table is useful for splitting block PTEs into
> subtrees of 4K PTEs. For example, a 1G block PTE can be split into 4K
> PTEs by first creating a fully populated tree, and then using it to
> replace the 1G PTE in a single step. This will be used in a
> subsequent commit for eager huge-page splitting (a dirty-logging
> optimization).
>
> No functional change intended.
>
> Signed-off-by: Ricardo Koller <ricarkol@google.com>

[...]

> +/**
> + * kvm_pgtable_stage2_create_removed() - Create a removed stage-2 paging structure.
> + * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
> + * @new:	Unlinked stage-2 paging structure to be created.

Oh, I see so the "removed" page table is actually a new page table
that has never been part of the paging structure. In that case I would
find it much more intuitive to call it "unlinked" or similar.

[...]
On Mon, Jan 23, 2023 at 04:55:40PM -0800, Ben Gardon wrote:
> On Thu, Jan 12, 2023 at 7:50 PM Ricardo Koller <ricarkol@google.com> wrote:
> >
> > Add a stage2 helper, kvm_pgtable_stage2_create_removed(), for creating
> > removed tables (the opposite of kvm_pgtable_stage2_free_removed()).

[...]

> > +/**
> > + * kvm_pgtable_stage2_create_removed() - Create a removed stage-2 paging structure.
> > + * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
> > + * @new:	Unlinked stage-2 paging structure to be created.
>
> Oh, I see so the "removed" page table is actually a new page table
> that has never been part of the paging structure. In that case I would
> find it much more intuitive to call it "unlinked" or similar.

Sounds good, I like "unlinked".

Oliver, are you OK if I rename free_removed() as well? Just to keep them
symmetric.
On Tue, Jan 24, 2023 at 08:35:40AM -0800, Ricardo Koller wrote:
> On Mon, Jan 23, 2023 at 04:55:40PM -0800, Ben Gardon wrote:

[...]

> > > +/**
> > > + * kvm_pgtable_stage2_create_removed() - Create a removed stage-2 paging structure.
> > > + * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
> > > + * @new:	Unlinked stage-2 paging structure to be created.
> >
> > Oh, I see so the "removed" page table is actually a new page table
> > that has never been part of the paging structure. In that case I would
> > find it much more intuitive to call it "unlinked" or similar.
>
> Sounds good, I like "unlinked".
>
> Oliver, are you OK if I rename free_removed() as well? Just to keep them
> symmetric.

Fine by me, and sorry for the silly naming :)

--
Thanks,
Oliver
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index 84a271647007..8ad78d61af7f 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -450,6 +450,31 @@ void kvm_pgtable_stage2_destroy(struct kvm_pgtable *pgt);
  */
 void kvm_pgtable_stage2_free_removed(struct kvm_pgtable_mm_ops *mm_ops, void *pgtable, u32 level);
 
+/**
+ * kvm_pgtable_stage2_create_removed() - Create a removed stage-2 paging structure.
+ * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
+ * @new:	Unlinked stage-2 paging structure to be created.
+ * @phys:	Physical address of the memory to map.
+ * @level:	Level of the stage-2 paging structure to be created.
+ * @prot:	Permissions and attributes for the mapping.
+ * @mc:		Cache of pre-allocated and zeroed memory from which to allocate
+ *		page-table pages.
+ *
+ * Create a removed page-table tree of PAGE_SIZE leaf PTEs under *new.
+ * This new page-table tree is not reachable (i.e., it is removed) from
+ * the root pgd, and it is therefore unreachable by the hardware
+ * page-table walker. No TLB invalidation or CMOs are performed.
+ *
+ * If device attributes are not explicitly requested in @prot, then the
+ * mapping will be normal, cacheable.
+ *
+ * Return: 0 only if a fully populated tree was created, negative error
+ * code on failure. No partially-populated table can be returned.
+ */
+int kvm_pgtable_stage2_create_removed(struct kvm_pgtable *pgt,
+				      kvm_pte_t *new, u64 phys, u32 level,
+				      enum kvm_pgtable_prot prot, void *mc);
+
 /**
  * kvm_pgtable_stage2_map() - Install a mapping in a guest stage-2 page-table.
  * @pgt:	Page-table structure initialised by kvm_pgtable_stage2_init*().
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 87fd40d09056..0dee13007776 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -1181,6 +1181,53 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size)
 	return kvm_pgtable_walk(pgt, addr, size, &walker);
 }
 
+/*
+ * map_data->force_pte is true in order to force creating PAGE_SIZE PTEs.
+ * data->addr is 0 because the IPA is irrelevant for a removed table.
+ */
+int kvm_pgtable_stage2_create_removed(struct kvm_pgtable *pgt,
+				      kvm_pte_t *new, u64 phys, u32 level,
+				      enum kvm_pgtable_prot prot, void *mc)
+{
+	struct stage2_map_data map_data = {
+		.phys		= phys,
+		.mmu		= pgt->mmu,
+		.memcache	= mc,
+		.force_pte	= true,
+	};
+	struct kvm_pgtable_walker walker = {
+		.cb		= stage2_map_walker,
+		.flags		= KVM_PGTABLE_WALK_LEAF |
+				  KVM_PGTABLE_WALK_REMOVED,
+		.arg		= &map_data,
+	};
+	struct kvm_pgtable_walk_data data = {
+		.walker	= &walker,
+		.addr	= 0,
+		.end	= kvm_granule_size(level),
+	};
+	struct kvm_pgtable_mm_ops *mm_ops = pgt->mm_ops;
+	kvm_pte_t *pgtable;
+	int ret;
+
+	ret = stage2_set_prot_attr(pgt, prot, &map_data.attr);
+	if (ret)
+		return ret;
+
+	pgtable = mm_ops->zalloc_page(mc);
+	if (!pgtable)
+		return -ENOMEM;
+
+	ret = __kvm_pgtable_walk(&data, mm_ops, pgtable, level + 1);
+	if (ret) {
+		kvm_pgtable_stage2_free_removed(mm_ops, pgtable, level);
+		mm_ops->put_page(pgtable);
+		return ret;
+	}
+
+	*new = kvm_init_table_pte(pgtable, mm_ops);
+	return 0;
+}
 
 int __kvm_pgtable_stage2_init(struct kvm_pgtable *pgt, struct kvm_s2_mmu *mmu,
 			      struct kvm_pgtable_mm_ops *mm_ops,
Add a stage2 helper, kvm_pgtable_stage2_create_removed(), for creating
removed tables (the opposite of kvm_pgtable_stage2_free_removed()).
Creating a removed table is useful for splitting block PTEs into
subtrees of 4K PTEs. For example, a 1G block PTE can be split into 4K
PTEs by first creating a fully populated tree, and then using it to
replace the 1G PTE in a single step. This will be used in a
subsequent commit for eager huge-page splitting (a dirty-logging
optimization).

No functional change intended.

Signed-off-by: Ricardo Koller <ricarkol@google.com>
---
 arch/arm64/include/asm/kvm_pgtable.h | 25 +++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 47 ++++++++++++++++++++++++++++
 2 files changed, 72 insertions(+)