Message ID | 20191125042011.3002372-18-jhubbard@nvidia.com (mailing list archive)
---|---
State | New, archived
Series | pin_user_pages(): reduced-risk series for Linux 5.5
On Sun 24-11-19 20:20:09, John Hubbard wrote:
> 1. Convert from get_user_pages() to pin_user_pages().
> 
> 2. As required by pin_user_pages(), release these pages via
> put_user_page(). In this case, do so via put_user_pages_dirty_lock().
> 
> That has the side effect of calling set_page_dirty_lock(), instead
> of set_page_dirty(). This is probably more accurate.
> 
> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
> dealing with a file backed page where we have reference on the inode it
> hangs off." [1]
> 
> 3. Release each page in mem->hpages[] (instead of mem->hpas[]), because
> that is the array that pin_longterm_pages() filled in. This is more
> accurate and should be a little safer from a maintenance point of
> view.

Except that this breaks the code. hpages is unioned with hpas...

> [1] https://lore.kernel.org/r/20190723153640.GB720@lst.de
> 
> Signed-off-by: John Hubbard <jhubbard@nvidia.com>
> ---
>  arch/powerpc/mm/book3s64/iommu_api.c | 12 +++++-------
>  1 file changed, 5 insertions(+), 7 deletions(-)
> 
> diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
> index 56cc84520577..196383e8e5a9 100644
> --- a/arch/powerpc/mm/book3s64/iommu_api.c
> +++ b/arch/powerpc/mm/book3s64/iommu_api.c
> @@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>  	for (entry = 0; entry < entries; entry += chunk) {
>  		unsigned long n = min(entries - entry, chunk);
>  
> -		ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
> +		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
>  				FOLL_WRITE | FOLL_LONGTERM,
>  				mem->hpages + entry, NULL);
>  		if (ret == n) {
> @@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
>  	return 0;
>  
>  free_exit:
> -	/* free the reference taken */
> -	for (i = 0; i < pinned; i++)
> -		put_page(mem->hpages[i]);
> +	/* free the references taken */
> +	put_user_pages(mem->hpages, pinned);
>  
>  	vfree(mem->hpas);
>  	kfree(mem);
> @@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>  		if (!page)
>  			continue;
>  
> -		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
> -			SetPageDirty(page);
> +		put_user_pages_dirty_lock(&mem->hpages[i], 1,
> +				MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);

And the dirtying condition is wrong here as well. Currently it is always
true.

								Honza
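A minimal, self-contained sketch of the two problems Jan points out (the hpages/hpas aliasing and the always-true dirty argument). The struct is abbreviated and the flag value is illustrative; only the union layout and the flag-in-the-low-bits convention are taken from the thread:

```c
#include <stdio.h>

/* Userspace stand-ins for the kernel types involved. */
typedef unsigned long phys_addr_t;
struct page;

/*
 * The dirty flag lives in the low bits of the stored physical address
 * (the value here is illustrative; the flag test and the ">> PAGE_SHIFT"
 * conversion are taken from the thread).
 */
#define MM_IOMMU_TABLE_GROUP_PAGE_DIRTY 0x1UL

/* Abbreviated sketch of the container: hpages and hpas alias the same
 * allocation. After pinning, the struct page pointers are converted in
 * place to physical addresses, so by unpin time hpages[i] no longer
 * holds a valid page pointer, and releasing via hpages[] breaks. */
struct mem_sketch {
	unsigned long entries;
	union {
		struct page **hpages;
		phys_addr_t *hpas;
	};
};

int main(void)
{
	/* Jan's second point: the patch passes the constant itself as
	 * the make_dirty argument. A nonzero constant is always true;
	 * the intended argument is the per-entry flag test. */
	phys_addr_t hpa_clean = 0x10000UL;	/* dirty bit not set */

	printf("constant as condition: %d\n",
	       !!MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);		/* always 1 */
	printf("intended condition:    %d\n",
	       !!(hpa_clean & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY));	/* 0 here */
	return 0;
}
```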
On 11/25/19 12:59 AM, Jan Kara wrote:
> On Sun 24-11-19 20:20:09, John Hubbard wrote:
>> 1. Convert from get_user_pages() to pin_user_pages().
>>
>> 2. As required by pin_user_pages(), release these pages via
>> put_user_page(). In this case, do so via put_user_pages_dirty_lock().
>>
>> That has the side effect of calling set_page_dirty_lock(), instead
>> of set_page_dirty(). This is probably more accurate.
>>
>> As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
>> dealing with a file backed page where we have reference on the inode it
>> hangs off." [1]
>>
>> 3. Release each page in mem->hpages[] (instead of mem->hpas[]), because
>> that is the array that pin_longterm_pages() filled in. This is more
>> accurate and should be a little safer from a maintenance point of
>> view.
> 
> Except that this breaks the code. hpages is unioned with hpas...

OK.

>> @@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
>>  		if (!page)
>>  			continue;
>>  
>> -		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
>> -			SetPageDirty(page);
>> +		put_user_pages_dirty_lock(&mem->hpages[i], 1,
>> +				MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);
> 
> And the dirtying condition is wrong here as well. Currently it is always
> true.
> 
> 								Honza

Yes. Fixed up locally. The function now looks like this (for this patch,
not for the entire series, which renames "put" to "unpin"):

static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
{
	long i;
	struct page *page = NULL;

	if (!mem->hpas)
		return;

	for (i = 0; i < mem->entries; ++i) {
		if (!mem->hpas[i])
			continue;

		page = pfn_to_page(mem->hpas[i] >> PAGE_SHIFT);
		if (!page)
			continue;

		put_user_pages_dirty_lock(&page, 1,
				mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);

		mem->hpas[i] = 0;
	}
}

thanks,
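For reference, a simplified userspace model of the release helper the fixed function relies on. This is an assumption-level sketch of put_user_pages_dirty_lock()'s behavior as described in the thread (the real kernel helper also handles compound pages and skips already-dirty pages), not the kernel implementation:

```c
#include <stdbool.h>
#include <stddef.h>

struct page;

/* No-op stand-ins for the kernel primitives named in the thread. */
static void set_page_dirty_lock_stub(struct page *page) { (void)page; }
static void put_user_page_stub(struct page *page) { (void)page; }

/*
 * Model: optionally dirty each page (under the page lock in the real
 * kernel), then drop the pin reference. Passing the per-entry flag test
 * as make_dirty is what makes the dirtying conditional, which is
 * exactly the fix applied in mm_iommu_unpin() above.
 */
static void put_user_pages_dirty_lock_model(struct page **pages,
					    size_t npages, bool make_dirty)
{
	for (size_t i = 0; i < npages; i++) {
		if (make_dirty)
			set_page_dirty_lock_stub(pages[i]);
		put_user_page_stub(pages[i]);
	}
}

int main(void)
{
	struct page *pages[2] = { NULL, NULL };

	/* Release two "pinned" pages, dirtying them first. */
	put_user_pages_dirty_lock_model(pages, 2, true);
	return 0;
}
```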
diff --git a/arch/powerpc/mm/book3s64/iommu_api.c b/arch/powerpc/mm/book3s64/iommu_api.c
index 56cc84520577..196383e8e5a9 100644
--- a/arch/powerpc/mm/book3s64/iommu_api.c
+++ b/arch/powerpc/mm/book3s64/iommu_api.c
@@ -103,7 +103,7 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	for (entry = 0; entry < entries; entry += chunk) {
 		unsigned long n = min(entries - entry, chunk);
 
-		ret = get_user_pages(ua + (entry << PAGE_SHIFT), n,
+		ret = pin_user_pages(ua + (entry << PAGE_SHIFT), n,
 				FOLL_WRITE | FOLL_LONGTERM,
 				mem->hpages + entry, NULL);
 		if (ret == n) {
@@ -167,9 +167,8 @@ static long mm_iommu_do_alloc(struct mm_struct *mm, unsigned long ua,
 	return 0;
 
 free_exit:
-	/* free the reference taken */
-	for (i = 0; i < pinned; i++)
-		put_page(mem->hpages[i]);
+	/* free the references taken */
+	put_user_pages(mem->hpages, pinned);
 
 	vfree(mem->hpas);
 	kfree(mem);
@@ -212,10 +211,9 @@ static void mm_iommu_unpin(struct mm_iommu_table_group_mem_t *mem)
 		if (!page)
 			continue;
 
-		if (mem->hpas[i] & MM_IOMMU_TABLE_GROUP_PAGE_DIRTY)
-			SetPageDirty(page);
+		put_user_pages_dirty_lock(&mem->hpages[i], 1,
+				MM_IOMMU_TABLE_GROUP_PAGE_DIRTY);
 
-		put_page(page);
 		mem->hpas[i] = 0;
 	}
 }
1. Convert from get_user_pages() to pin_user_pages().

2. As required by pin_user_pages(), release these pages via
put_user_page(). In this case, do so via put_user_pages_dirty_lock().

That has the side effect of calling set_page_dirty_lock(), instead
of set_page_dirty(). This is probably more accurate.

As Christoph Hellwig put it, "set_page_dirty() is only safe if we are
dealing with a file backed page where we have reference on the inode it
hangs off." [1]

3. Release each page in mem->hpages[] (instead of mem->hpas[]), because
that is the array that pin_longterm_pages() filled in. This is more
accurate and should be a little safer from a maintenance point of
view.

[1] https://lore.kernel.org/r/20190723153640.GB720@lst.de

Signed-off-by: John Hubbard <jhubbard@nvidia.com>
---
 arch/powerpc/mm/book3s64/iommu_api.c | 12 +++++-------
 1 file changed, 5 insertions(+), 7 deletions(-)
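Stepping back, the per-call-site conversion this series applies follows one pattern. Below is a kernel-context sketch using the 5.5-era names from this thread (not a standalone program; the wrapper names are hypothetical, and setup and error handling are elided):

```c
#include <linux/mm.h>

/*
 * Hypothetical wrappers illustrating the pairing rule the series
 * enforces: pages obtained via pin_user_pages() must be released
 * through the put_user_page*() family, never plain put_page(), so
 * that pinned-page accounting stays balanced.
 */
static long pin_buffer_sketch(unsigned long uaddr, unsigned long n,
			      struct page **pages)
{
	/* FOLL_LONGTERM: the pin may be held indefinitely, e.g. for DMA. */
	return pin_user_pages(uaddr, n, FOLL_WRITE | FOLL_LONGTERM,
			      pages, NULL);
}

static void unpin_buffer_sketch(struct page **pages, unsigned long n,
				bool dirty)
{
	/* Drops the pins; dirties the pages first when "dirty" is set. */
	put_user_pages_dirty_lock(pages, n, dirty);
}
```

Per the discussion above, later patches in the series rename "put" to "unpin" on the release side, but the pairing rule stays the same.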