Message ID: 20210115170907.24498-1-peterx@redhat.com
Series: userfaultfd-wp: Support shmem and hugetlbfs
On Fri, Jan 15, 2021 at 12:08:37PM -0500, Peter Xu wrote:
> This is an RFC series to support userfaultfd upon shmem and hugetlbfs.
>
> PS. Note that there's a known issue [0] with tlb flushing against
> uffd-wp/soft-dirty in general, and Nadav is working on it. It may or may
> not directly affect shmem/hugetlbfs since there's normally no COW on
> shared mappings. Private shmem could hit it, but that's another problem to
> solve in general, and this RFC is mainly to see whether there's any
> objection to the concept of the idea specific to uffd-wp on
> shmem/hugetlbfs.
>
> The whole series can also be found online [1].
>
> The major comment I'd like to get is on the new idea of the swap special
> pte. That comes from suggestions from both Hugh and Andrea, and I
> appreciate those discussions a lot.
>
> In short, it's a new type of pte that didn't exist in the past, used in
> file-backed memory to persist information across ptes being erased (while
> the page cache could still exist, for example, so on the next page fault
> we can reload the page cache with that specific information when
> necessary).
>
> I'm copy-pasting some commit message from the patch "mm/swap: Introduce
> the idea of special swap ptes", where uffd-wp becomes the first user of
> it:
>
>   We used to have special swap entries, like migration entries, hw-poison
>   entries, device private entries, etc.
>
>   Those "special swap entries" have in common that they need to be at
>   least swap entries first, and their types are decided by
>   swp_type(entry).
>
>   This patch introduces another idea called "special swap ptes".
>
>   It's very easy to confuse them with "special swap entries", but a
>   special swap pte should never contain a swap entry at all. That means
>   it's illegal to call pte_to_swp_entry() upon a special swap pte.
>
>   Make the uffd-wp special pte the first special swap pte.
>
> Before this patch, is_swap_pte()==true means one of the below:
>
>   (a.1) The pte has a normal swap entry (non_swap_entry()==false). For
>         example, when an anonymous page got swapped out.
>
>   (a.2) The pte has a special swap entry (non_swap_entry()==true). For
>         example, a migration entry, a hw-poison entry, etc.
>
> After this patch, is_swap_pte()==true means one of the below, where case
> (b) is added:
>
>  (a) The pte contains a swap entry.
>
>   (a.1) The pte has a normal swap entry (non_swap_entry()==false). For
>         example, when an anonymous page got swapped out.
>
>   (a.2) The pte has a special swap entry (non_swap_entry()==true). For
>         example, a migration entry, a hw-poison entry, etc.
>
>  (b) The pte does not contain a swap entry at all (so it cannot be passed
>      into pte_to_swp_entry()). For example, the uffd-wp special swap pte.
>
> Hugetlbfs needs a similar thing because it's also file-backed. I directly
> reused the same special pte there, though the shmem/hugetlb changes to
> support this new pte are different since they don't share much code path.

Hugh & Mike,

Would any of you have comments/concerns on the high-level design of this
series? It would be great to know, especially any major objection, before
moving on to a non-RFC version.

Thanks,
On 1/29/21 2:49 PM, Peter Xu wrote:
> On Fri, Jan 15, 2021 at 12:08:37PM -0500, Peter Xu wrote:
>> This is an RFC series to support userfaultfd upon shmem and hugetlbfs.
...
> Hugh & Mike,
>
> Would any of you have comments/concerns on the high-level design of this
> series?
>
> It would be great to know, especially any major objection, before moving
> on to a non-RFC version.

My apologies for not looking at this sooner. Even now, I have only taken a
very brief look at the hugetlbfs patches.

Coincidentally, I am working on the 'BUG' that soft dirty does not work for
hugetlbfs. As you can imagine, there is some overlap in the handling of wp
ptes set for soft dirty. In addition, pmd sharing must be disabled for soft
dirty, as it is here and in Axel's uffd minor fault code.

No objections to the overall approach based on my quick look.

I'll try to take a closer look at the areas where the efforts overlap.
On Fri, 29 Jan 2021, Peter Xu wrote:
>
> Hugh & Mike,
>
> Would any of you have comments/concerns on the high-level design of this
> series?
>
> It would be great to know, especially any major objection, before moving
> on to a non-RFC version.

Seeing Mike's update prompts me to speak up: I have been looking, and will
continue to look through it - will report when done; but I find I've been
making very little forward progress from one day to the next.

It is very confusing, inevitably; but you have done an *outstanding* job of
acknowledging the confusion and commenting it in great detail.

Hugh
On Fri, Feb 05, 2021 at 01:53:34PM -0800, Mike Kravetz wrote:
> On 1/29/21 2:49 PM, Peter Xu wrote:
> > On Fri, Jan 15, 2021 at 12:08:37PM -0500, Peter Xu wrote:
> >> This is an RFC series to support userfaultfd upon shmem and hugetlbfs.
> ...
> > Hugh & Mike,
> >
> > Would any of you have comments/concerns on the high-level design of
> > this series?
> >
> > It would be great to know, especially any major objection, before
> > moving on to a non-RFC version.
>
> My apologies for not looking at this sooner. Even now, I have only taken
> a very brief look at the hugetlbfs patches.
>
> Coincidentally, I am working on the 'BUG' that soft dirty does not work
> for hugetlbfs. As you can imagine, there is some overlap in the handling
> of wp ptes set for soft dirty. In addition, pmd sharing must be disabled
> for soft dirty, as it is here and in Axel's uffd minor fault code.

Interesting to know that we'll reach and need something common from
different directions, especially when they all mostly happen at the same
time. :)

Is there a real "BUG" that you mentioned? I'd be glad to read about it if
there is a link or something.

> No objections to the overall approach based on my quick look.

Thanks for having a look.

So for hugetlb one major thing is indeed the pmd sharing part, on which it
seems we've got very good consensus.

The other thing I'd love to get some comments on is a topic shared with
shmem: for a file-backed memory type, uffd-wp needs a consolidated way to
record wr-protect information even if the pgtable entries are flushed. That
comes from a fundamental difference between anonymous and file-backed
memory: anonymous pages keep all their info in the pgtable entry, but
file-backed memory does not - e.g., pgtable entries can be dropped at any
time as long as the page cache is there.
I then went to look at soft-dirty regarding this issue, and there's
actually a paragraph about it:

  While in most cases tracking memory changes by #PF-s is more than enough
  there is still a scenario when we can lose soft dirty bits -- a task
  unmaps a previously mapped memory region and then maps a new one at
  exactly the same place. When unmap is called, the kernel internally
  clears PTE values including soft dirty bits. To notify user space
  application about such memory region renewal the kernel always marks
  new memory regions (and expanded regions) as soft dirty.

I feel like it just means soft-dirty currently allows false positives: we
could have set the soft dirty bit even if the page is clean. And that's
what this series wants to avoid: it uses the new concept called "swap
special pte" to persist that information even for file-backed memory. That
all goes toward avoiding those false positives.

> I'll try to take a closer look at the areas where the efforts overlap.

I dumped the above just hoping it could help a little bit more with the
reviews, but if not, I totally agree we can focus on the overlapping part.
And I'd be more than glad to read your work too if it helps me understand
more of what you're working on with the hugetlb soft dirty bug, since I do
feel uffd-wp serves goals similar to soft-dirty's, so we could share a lot
of common knowledge there. :)

Thanks again!
On Fri, Feb 05, 2021 at 02:21:47PM -0800, Hugh Dickins wrote:
> On Fri, 29 Jan 2021, Peter Xu wrote:
> >
> > Hugh & Mike,
> >
> > Would any of you have comments/concerns on the high-level design of
> > this series?
> >
> > It would be great to know, especially any major objection, before
> > moving on to a non-RFC version.
>
> Seeing Mike's update prompts me to speak up: I have been looking, and
> will continue to look through it - will report when done; but I find I've
> been making very little forward progress from one day to the next.
>
> It is very confusing, inevitably; but you have done an *outstanding* job
> of acknowledging the confusion and commenting it in great detail.

I'm honored to receive such an evaluation, thanks Hugh!

As a quick summary - what I did in this series is mostly what you
suggested: using swp_type==1 && swp_offset==0 as a special pte, so the swap
code can trap it. The only difference is that "swp_type==1 &&
swp_offset==0" still uses valid swp_entry address space, so I introduced
the "swap special pte" idea hoping to make it clearer, which is also based
on Andrea's suggestion. I hope I didn't make it even worse. :)

It's just that I don't want to make this an idea that "only works for
uffd-wp". What I'm thinking about is whether we can provide a common way to
keep some records in pgtable entries that point to file-backed memory. Say,
currently for file-backed memory we can only have either a valid pte
(either RO or RW) or a none pte. So maybe we could provide a way to start
using the rest of the pte address space that we haven't used yet.

Please take your time reviewing the series. Any future comments would be
greatly welcomed. Thanks,
On 2/5/21 6:36 PM, Peter Xu wrote:
> On Fri, Feb 05, 2021 at 01:53:34PM -0800, Mike Kravetz wrote:
>> On 1/29/21 2:49 PM, Peter Xu wrote:
>>> On Fri, Jan 15, 2021 at 12:08:37PM -0500, Peter Xu wrote:
>>>> This is an RFC series to support userfaultfd upon shmem and
>>>> hugetlbfs.
>> ...
>>> Hugh & Mike,
>>>
>>> Would any of you have comments/concerns on the high-level design of
>>> this series?
>>>
>>> It would be great to know, especially any major objection, before
>>> moving on to a non-RFC version.
>>
>> My apologies for not looking at this sooner. Even now, I have only
>> taken a very brief look at the hugetlbfs patches.
>>
>> Coincidentally, I am working on the 'BUG' that soft dirty does not work
>> for hugetlbfs. As you can imagine, there is some overlap in the
>> handling of wp ptes set for soft dirty. In addition, pmd sharing must
>> be disabled for soft dirty, as it is here and in Axel's uffd minor
>> fault code.
>
> Interesting to know that we'll reach and need something common from
> different directions, especially when they all mostly happen at the same
> time. :)
>
> Is there a real "BUG" that you mentioned? I'd be glad to read about it
> if there is a link or something.

Sorry, I was referring to a bugzilla bug, not a BUG(). The bottom line is
that hugetlb was mostly overlooked when soft dirty support was added. A
thread, mostly from me, is at:
lore.kernel.org/r/999775bf-4204-2bec-7c3d-72d81b4fce30@oracle.com
I am close to sending out an RFC, but keep getting distracted.

>> No objections to the overall approach based on my quick look.
>
> Thanks for having a look.
>
> So for hugetlb one major thing is indeed the pmd sharing part, on which
> it seems we've got very good consensus.

Yes

> The other thing I'd love to get some comments on is a topic shared with
> shmem: for a file-backed memory type, uffd-wp needs a consolidated way
> to record wr-protect information even if the pgtable entries are
> flushed.
> That comes from a fundamental difference between anonymous and
> file-backed memory: anonymous pages keep all their info in the pgtable
> entry, but file-backed memory does not - e.g., pgtable entries can be
> dropped at any time as long as the page cache is there.

Sorry, but I cannot recall this difference for hugetlb pages. What
operations lead to flushing of pagetable entries? It would need to be
something other than unmap, as it seems we want to lose the information on
unmap IIUC.

> I then went to look at soft-dirty regarding this issue, and there's
> actually a paragraph about it:
>
>   While in most cases tracking memory changes by #PF-s is more than
>   enough there is still a scenario when we can lose soft dirty bits -- a
>   task unmaps a previously mapped memory region and then maps a new one
>   at exactly the same place. When unmap is called, the kernel internally
>   clears PTE values including soft dirty bits. To notify user space
>   application about such memory region renewal the kernel always marks
>   new memory regions (and expanded regions) as soft dirty.
>
> I feel like it just means soft-dirty currently allows false positives:
> we could have set the soft dirty bit even if the page is clean. And
> that's what this series wants to avoid: it uses the new concept called
> "swap special pte" to persist that information even for file-backed
> memory. That all goes toward avoiding those false positives.

Yes, I have seen this with soft dirty. It really does not seem right. When
you first create a mapping, even before faulting in anything, the vma is
marked VM_SOFTDIRTY and from the user's perspective all addresses/pages
appear dirty.

To be honest, I am not sure you want to try to carry
per-process/per-mapping wp information in the file. In the comment about
soft dirty above, it seems reasonable that unmapping would clear all soft
dirty information. Also, unmapping would clear any uffd
state/information.
On Tue, Feb 09, 2021 at 11:29:56AM -0800, Mike Kravetz wrote:
> On 2/5/21 6:36 PM, Peter Xu wrote:
> > On Fri, Feb 05, 2021 at 01:53:34PM -0800, Mike Kravetz wrote:
> >> On 1/29/21 2:49 PM, Peter Xu wrote:
> >>> On Fri, Jan 15, 2021 at 12:08:37PM -0500, Peter Xu wrote:
> >>>> This is an RFC series to support userfaultfd upon shmem and
> >>>> hugetlbfs.
> >> ...
> >>> Hugh & Mike,
> >>>
> >>> Would any of you have comments/concerns on the high-level design of
> >>> this series?
> >>>
> >>> It would be great to know, especially any major objection, before
> >>> moving on to a non-RFC version.
> >>
> >> My apologies for not looking at this sooner. Even now, I have only
> >> taken a very brief look at the hugetlbfs patches.
> >>
> >> Coincidentally, I am working on the 'BUG' that soft dirty does not
> >> work for hugetlbfs. As you can imagine, there is some overlap in the
> >> handling of wp ptes set for soft dirty. In addition, pmd sharing must
> >> be disabled for soft dirty, as it is here and in Axel's uffd minor
> >> fault code.
> >
> > Interesting to know that we'll reach and need something common from
> > different directions, especially when they all mostly happen at the
> > same time. :)
> >
> > Is there a real "BUG" that you mentioned? I'd be glad to read about it
> > if there is a link or something.
>
> Sorry, I was referring to a bugzilla bug, not a BUG(). The bottom line
> is that hugetlb was mostly overlooked when soft dirty support was added.
> A thread, mostly from me, is at:
> lore.kernel.org/r/999775bf-4204-2bec-7c3d-72d81b4fce30@oracle.com
> I am close to sending out an RFC, but keep getting distracted.

Thanks. Indeed I see no reason not to have hugetlb supported for soft
dirty. Tracking 1G huge pages could be too coarse and heavy, but 2M at
least still seems reasonable.

> >> No objections to the overall approach based on my quick look.
> >
> > Thanks for having a look.
> >
> > So for hugetlb one major thing is indeed the pmd sharing part, on
> > which it seems we've got very good consensus.
>
> Yes
>
> > The other thing I'd love to get some comments on is a topic shared
> > with shmem: for a file-backed memory type, uffd-wp needs a
> > consolidated way to record wr-protect information even if the pgtable
> > entries are flushed. That comes from a fundamental difference between
> > anonymous and file-backed memory: anonymous pages keep all their info
> > in the pgtable entry, but file-backed memory does not - e.g., pgtable
> > entries can be dropped at any time as long as the page cache is there.
>
> Sorry, but I cannot recall this difference for hugetlb pages. What
> operations lead to flushing of pagetable entries? It would need to be
> something other than unmap, as it seems we want to lose the information
> on unmap IIUC.

For hugetlbfs I know of two cases.

One is exactly the huge pmd sharing mentioned above, where we'll drop the
pgtable entries for a specific process but the page cache will still
exist.

The other one is hugetlbfs_punch_hole(), where hugetlb_vmdelete_list() is
called before remove_inode_hugepages(). For uffd-wp, there will be a very
small window in which a wr-protected huge page can be written after the
pgtable entry is flushed but before the page is finally dropped in
remove_inode_hugepages(). In some apps that could cause data loss.

> > I then went to look at soft-dirty regarding this issue, and there's
> > actually a paragraph about it:
> >
> >   While in most cases tracking memory changes by #PF-s is more than
> >   enough there is still a scenario when we can lose soft dirty bits --
> >   a task unmaps a previously mapped memory region and then maps a new
> >   one at exactly the same place. When unmap is called, the kernel
> >   internally clears PTE values including soft dirty bits.
> >   To notify user space application about such memory region renewal
> >   the kernel always marks new memory regions (and expanded regions) as
> >   soft dirty.
> >
> > I feel like it just means soft-dirty currently allows false positives:
> > we could have set the soft dirty bit even if the page is clean. And
> > that's what this series wants to avoid: it uses the new concept called
> > "swap special pte" to persist that information even for file-backed
> > memory. That all goes toward avoiding those false positives.
>
> Yes, I have seen this with soft dirty. It really does not seem right.
> When you first create a mapping, even before faulting in anything, the
> vma is marked VM_SOFTDIRTY and from the user's perspective all
> addresses/pages appear dirty.

Right, that seems not optimal. It is understandable, since dirty info is
indeed tolerant of false positives, so soft-dirty sidestepped the issue
that uffd-wp wants to solve in this series. It would be great to know
whether the current approach in this series would work for removing those
false positives.

> To be honest, I am not sure you want to try to carry
> per-process/per-mapping wp information in the file.

What this series does is try to persist that information in pgtable
entries, rather than in the file (or page cache). Frankly I can't say
whether that's optimal either, so I'm always open to any comments. So far
I think it's a valid solution, but it could always be possible that I
missed something important.

> In the comment about soft dirty above, it seems reasonable that
> unmapping would clear all soft dirty information. Also, unmapping would
> clear any uffd state/information.

Right, unmap should always mean "dropping all information in the ptes".
It's in the patch below that we tried to treat this differently:

https://github.com/xzpeter/linux/commit/e958e9ee8d33e9a6602f93cdbe24a0c3614ab5e2

A quick summary of the above patch: only if we unmap or truncate the
hugetlbfs file would we call hugetlb_vmdelete_list() with
ZAP_FLAG_DROP_FILE_UFFD_WP (which means we'll drop all the information,
including the uffd-wp bit).

Thanks,