| Message ID | 20210726153552.1535838-2-maz@kernel.org (mailing list archive) |
|---|---|
| State | New |
| Series | KVM: Remove kvm_is_transparent_hugepage() and friends |
On Monday 26 Jul 2021 at 16:35:47 (+0100), Marc Zyngier wrote:
> It is becoming a common need to fetch the PTE for a given address
> together with its level. Add such a helper.

Reviewed-by: Quentin Perret <qperret@google.com>

Thanks,
Quentin
Hi Marc,

On 7/26/21 4:35 PM, Marc Zyngier wrote:
> It is becoming a common need to fetch the PTE for a given address
> together with its level. Add such a helper.
>
> Signed-off-by: Marc Zyngier <maz@kernel.org>
> ---
>  arch/arm64/include/asm/kvm_pgtable.h | 19 ++++++++++++++
>  arch/arm64/kvm/hyp/pgtable.c         | 39 ++++++++++++++++++++++++++++
>  2 files changed, 58 insertions(+)
>
> diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> index f004c0115d89..082b9d65f40b 100644
> --- a/arch/arm64/include/asm/kvm_pgtable.h
> +++ b/arch/arm64/include/asm/kvm_pgtable.h
> @@ -432,6 +432,25 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
>  int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
>                       struct kvm_pgtable_walker *walker);
>
> +/**
> + * kvm_pgtable_get_leaf() - Walk a page-table and retrieve the leaf entry
> + *                          with its level.
> + * @pgt:     Page-table structure initialised by kvm_pgtable_*_init().

Yet in the next patch you use a struct kvm_pgtable_pgt not initialized by any of
the kvm_pgtable_*_init() functions. It doesn't hurt correctness, but it might
confuse potential users of this function.

> + * @addr:    Input address for the start of the walk.
> + * @ptep:    Pointer to storage for the retrieved PTE.
> + * @level:   Pointer to storage for the level of the retrieved PTE.
> + *
> + * The offset of @addr within a page is ignored.
> + *
> + * The walker will walk the page-table entries corresponding to the input
> + * address specified, retrieving the leaf corresponding to this address.
> + * Invalid entries are treated as leaf entries.
> + *
> + * Return: 0 on success, negative error code on failure.
> + */
> +int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
> +                         kvm_pte_t *ptep, u32 *level);
> +
>  /**
>   * kvm_pgtable_stage2_find_range() - Find a range of Intermediate Physical
>   *                                   Addresses with compatible permission
> diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> index 05321f4165e3..78f36bd5df6c 100644
> --- a/arch/arm64/kvm/hyp/pgtable.c
> +++ b/arch/arm64/kvm/hyp/pgtable.c
> @@ -326,6 +326,45 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
>  	return _kvm_pgtable_walk(&walk_data);
>  }
>
> +struct leaf_walk_data {
> +	kvm_pte_t pte;
> +	u32 level;
> +};
> +
> +static int leaf_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> +		       enum kvm_pgtable_walk_flags flag, void * const arg)
> +{
> +	struct leaf_walk_data *data = arg;
> +
> +	data->pte = *ptep;
> +	data->level = level;
> +
> +	return 0;
> +}
> +
> +int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
> +			 kvm_pte_t *ptep, u32 *level)
> +{
> +	struct leaf_walk_data data;
> +	struct kvm_pgtable_walker walker = {
> +		.cb	= leaf_walker,
> +		.flags	= KVM_PGTABLE_WALK_LEAF,
> +		.arg	= &data,
> +	};
> +	int ret;
> +
> +	ret = kvm_pgtable_walk(pgt, ALIGN_DOWN(addr, PAGE_SIZE),
> +			       PAGE_SIZE, &walker);

kvm_pgtable_walk() already aligns addr down to PAGE_SIZE, I don't think that's
needed here. But not harmful either.

Otherwise, the patch looks good to me:

Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

Thanks,

Alex

> +	if (!ret) {
> +		if (ptep)
> +			*ptep = data.pte;
> +		if (level)
> +			*level = data.level;
> +	}
> +
> +	return ret;
> +}
> +
>  struct hyp_map_data {
>  	u64 phys;
>  	kvm_pte_t attr;
Hi Alex,

On Tue, 27 Jul 2021 16:25:34 +0100,
Alexandru Elisei <alexandru.elisei@arm.com> wrote:
>
> Hi Marc,
>
> On 7/26/21 4:35 PM, Marc Zyngier wrote:
> > It is becoming a common need to fetch the PTE for a given address
> > together with its level. Add such a helper.
> >
> > Signed-off-by: Marc Zyngier <maz@kernel.org>
> > ---
> >  arch/arm64/include/asm/kvm_pgtable.h | 19 ++++++++++++++
> >  arch/arm64/kvm/hyp/pgtable.c         | 39 ++++++++++++++++++++++++++++
> >  2 files changed, 58 insertions(+)
> >
> > diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
> > index f004c0115d89..082b9d65f40b 100644
> > --- a/arch/arm64/include/asm/kvm_pgtable.h
> > +++ b/arch/arm64/include/asm/kvm_pgtable.h
> > @@ -432,6 +432,25 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
> >  int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
> >                       struct kvm_pgtable_walker *walker);
> >
> > +/**
> > + * kvm_pgtable_get_leaf() - Walk a page-table and retrieve the leaf entry
> > + *                          with its level.
> > + * @pgt:     Page-table structure initialised by kvm_pgtable_*_init().
>
> Yet in the next patch you use a struct kvm_pgtable_pgt not
> initialized by any of the kvm_pgtable_*_init() functions. It doesn't
> hurt correctness, but it might confuse potential users of this
> function.

Fair enough. I'll add something like "[...] or any similar
initialisation".

>
> > + * @addr:    Input address for the start of the walk.
> > + * @ptep:    Pointer to storage for the retrieved PTE.
> > + * @level:   Pointer to storage for the level of the retrieved PTE.
> > + *
> > + * The offset of @addr within a page is ignored.
> > + *
> > + * The walker will walk the page-table entries corresponding to the input
> > + * address specified, retrieving the leaf corresponding to this address.
> > + * Invalid entries are treated as leaf entries.
> > + *
> > + * Return: 0 on success, negative error code on failure.
> > + */
> > +int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
> > +                         kvm_pte_t *ptep, u32 *level);
> > +
> >  /**
> >   * kvm_pgtable_stage2_find_range() - Find a range of Intermediate Physical
> >   *                                   Addresses with compatible permission
> > diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
> > index 05321f4165e3..78f36bd5df6c 100644
> > --- a/arch/arm64/kvm/hyp/pgtable.c
> > +++ b/arch/arm64/kvm/hyp/pgtable.c
> > @@ -326,6 +326,45 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
> >  	return _kvm_pgtable_walk(&walk_data);
> >  }
> >
> > +struct leaf_walk_data {
> > +	kvm_pte_t pte;
> > +	u32 level;
> > +};
> > +
> > +static int leaf_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
> > +		       enum kvm_pgtable_walk_flags flag, void * const arg)
> > +{
> > +	struct leaf_walk_data *data = arg;
> > +
> > +	data->pte = *ptep;
> > +	data->level = level;
> > +
> > +	return 0;
> > +}
> > +
> > +int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
> > +			 kvm_pte_t *ptep, u32 *level)
> > +{
> > +	struct leaf_walk_data data;
> > +	struct kvm_pgtable_walker walker = {
> > +		.cb	= leaf_walker,
> > +		.flags	= KVM_PGTABLE_WALK_LEAF,
> > +		.arg	= &data,
> > +	};
> > +	int ret;
> > +
> > +	ret = kvm_pgtable_walk(pgt, ALIGN_DOWN(addr, PAGE_SIZE),
> > +			       PAGE_SIZE, &walker);
>
> kvm_pgtable_walk() already aligns addr down to PAGE_SIZE, I don't
> think that's needed here. But not harmful either.

It is more that if you don't align it down, the size becomes awkward
to express. Masking is both cheap and readable.
>
> Otherwise, the patch looks good to me:
>
> Reviewed-by: Alexandru Elisei <alexandru.elisei@arm.com>

Thanks!

	M.
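The ALIGN_DOWN() exchange above is purely a readability argument. The following is a minimal standalone demonstration of the arithmetic, not part of the patch or of kernel code; PAGE_SIZE and ALIGN_DOWN are redefined locally just for the demo:

```c
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SIZE		4096ULL
#define ALIGN_DOWN(x, a)	((x) & ~((a) - 1))

int main(void)
{
	uint64_t addr = 0x40001234ULL;	/* arbitrary, not page-aligned */

	/* What the patch does: mask the start, then walk one full page. */
	printf("masked:   start=%#" PRIx64 " size=%#llx\n",
	       (uint64_t)ALIGN_DOWN(addr, PAGE_SIZE), PAGE_SIZE);

	/* The alternative: unmasked start, clunkier size up to the page end. */
	printf("unmasked: start=%#" PRIx64 " size=%#llx\n",
	       addr, PAGE_SIZE - (addr & (PAGE_SIZE - 1)));

	return 0;
}
```

Either way one page-sized window is walked; masking simply keeps the size argument a plain PAGE_SIZE.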
diff --git a/arch/arm64/include/asm/kvm_pgtable.h b/arch/arm64/include/asm/kvm_pgtable.h
index f004c0115d89..082b9d65f40b 100644
--- a/arch/arm64/include/asm/kvm_pgtable.h
+++ b/arch/arm64/include/asm/kvm_pgtable.h
@@ -432,6 +432,25 @@ int kvm_pgtable_stage2_flush(struct kvm_pgtable *pgt, u64 addr, u64 size);
 int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
                      struct kvm_pgtable_walker *walker);
 
+/**
+ * kvm_pgtable_get_leaf() - Walk a page-table and retrieve the leaf entry
+ *                          with its level.
+ * @pgt:     Page-table structure initialised by kvm_pgtable_*_init().
+ * @addr:    Input address for the start of the walk.
+ * @ptep:    Pointer to storage for the retrieved PTE.
+ * @level:   Pointer to storage for the level of the retrieved PTE.
+ *
+ * The offset of @addr within a page is ignored.
+ *
+ * The walker will walk the page-table entries corresponding to the input
+ * address specified, retrieving the leaf corresponding to this address.
+ * Invalid entries are treated as leaf entries.
+ *
+ * Return: 0 on success, negative error code on failure.
+ */
+int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
+                         kvm_pte_t *ptep, u32 *level);
+
 /**
  * kvm_pgtable_stage2_find_range() - Find a range of Intermediate Physical
  *                                   Addresses with compatible permission
diff --git a/arch/arm64/kvm/hyp/pgtable.c b/arch/arm64/kvm/hyp/pgtable.c
index 05321f4165e3..78f36bd5df6c 100644
--- a/arch/arm64/kvm/hyp/pgtable.c
+++ b/arch/arm64/kvm/hyp/pgtable.c
@@ -326,6 +326,45 @@ int kvm_pgtable_walk(struct kvm_pgtable *pgt, u64 addr, u64 size,
 	return _kvm_pgtable_walk(&walk_data);
 }
 
+struct leaf_walk_data {
+	kvm_pte_t pte;
+	u32 level;
+};
+
+static int leaf_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
+		       enum kvm_pgtable_walk_flags flag, void * const arg)
+{
+	struct leaf_walk_data *data = arg;
+
+	data->pte = *ptep;
+	data->level = level;
+
+	return 0;
+}
+
+int kvm_pgtable_get_leaf(struct kvm_pgtable *pgt, u64 addr,
+			 kvm_pte_t *ptep, u32 *level)
+{
+	struct leaf_walk_data data;
+	struct kvm_pgtable_walker walker = {
+		.cb	= leaf_walker,
+		.flags	= KVM_PGTABLE_WALK_LEAF,
+		.arg	= &data,
+	};
+	int ret;
+
+	ret = kvm_pgtable_walk(pgt, ALIGN_DOWN(addr, PAGE_SIZE),
+			       PAGE_SIZE, &walker);
+	if (!ret) {
+		if (ptep)
+			*ptep = data.pte;
+		if (level)
+			*level = data.level;
+	}
+
+	return ret;
+}
+
 struct hyp_map_data {
 	u64 phys;
 	kvm_pte_t attr;
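For readers less familiar with the walker API the patch builds on, here is a second walker written on the same KVM_PGTABLE_WALK_LEAF pattern as leaf_walker() above, purely as an illustrative sketch: it counts how many valid leaf entries back a range. The names count_walker() and count_valid_leaves() are made up, and kvm_pte_valid() is assumed to be callable from the same file (it is local to pgtable.c in this kernel version):

```c
struct count_walk_data {
	u64 valid_leaves;
};

static int count_walker(u64 addr, u64 end, u32 level, kvm_pte_t *ptep,
			enum kvm_pgtable_walk_flags flag, void * const arg)
{
	struct count_walk_data *data = arg;

	/* Only valid leaf entries contribute to the count. */
	if (kvm_pte_valid(*ptep))
		data->valid_leaves++;

	return 0;
}

static int count_valid_leaves(struct kvm_pgtable *pgt, u64 addr, u64 size,
			      u64 *count)
{
	struct count_walk_data data = { .valid_leaves = 0 };
	struct kvm_pgtable_walker walker = {
		.cb	= count_walker,
		.flags	= KVM_PGTABLE_WALK_LEAF,
		.arg	= &data,
	};
	int ret;

	ret = kvm_pgtable_walk(pgt, addr, size, &walker);
	if (!ret)
		*count = data.valid_leaves;

	return ret;
}
```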
It is becoming a common need to fetch the PTE for a given address
together with its level. Add such a helper.

Signed-off-by: Marc Zyngier <maz@kernel.org>
---
 arch/arm64/include/asm/kvm_pgtable.h | 19 ++++++++++++++
 arch/arm64/kvm/hyp/pgtable.c         | 39 ++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)
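To make the intent of the new helper concrete, here is a hypothetical caller sketch, not taken from this series: it uses kvm_pgtable_get_leaf() to decide whether a guest address is currently backed by a block mapping. The function name ipa_mapped_at_block_level() is invented for illustration, and kvm_pte_valid() is again assumed to be visible to the caller:

```c
#include <asm/kvm_pgtable.h>

static bool ipa_mapped_at_block_level(struct kvm_pgtable *pgt, u64 ipa)
{
	kvm_pte_t pte = 0;
	u32 level = 0;

	/* Both output pointers are optional; pass NULL for what you don't need. */
	if (kvm_pgtable_get_leaf(pgt, ipa, &pte, &level))
		return false;

	/* An invalid entry is still reported as a leaf; filter it out. */
	if (!kvm_pte_valid(pte))
		return false;

	/* Anything above the last level is a block mapping. */
	return level < KVM_PGTABLE_MAX_LEVELS - 1;
}
```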