Message ID | 20140411172456.GA20506@redhat.com (mailing list archive)
---|---
State | New, archived
On 04/11, Oleg Nesterov wrote:
>
> +static void arch_uprobe_copy_ixol(struct xol_area *area, unsigned long vaddr,
> +					struct arch_uprobe *auprobe)
> +{
> +#ifndef ARCH_UPROBE_XXX
> +	copy_to_page(area->page, vaddr, &auprobe->ixol, sizeof(&auprobe->ixol));
> +	/*
> +	 * We probably need flush_icache_user_range() but it needs vma.
> +	 * If this doesn't work define ARCH_UPROBE_XXX.
> +	 */
> +	flush_dcache_page(area->page);
> +#else
> +	struct mm_struct *mm = current->mm;
> +	struct vm_area_struct *vma;
> +
> +	down_read(&mm->mmap_sem);
> +	vma = find_exact_vma(mm, area->vaddr, area->vaddr + PAGE_SIZE);
> +	if (vma) {
> +		void *kaddr = kmap_atomic(area->page);
> +		copy_to_user_page(vma, area->page,
> +				  vaddr, kaddr + (vaddr & ~PAGE_MASK),
> +				  &auprobe->ixol, sizeof(&auprobe->ixol));
> +		kunmap_atomic(kaddr);
> +	}
> +	up_read(&mm->mmap_sem);
> +#endif

And perhaps the patch is not complete. "if (vma)" is not enough, a probed
task can mmap something else at this vaddr.

copy_to_user_page() should only change the contents of area->page, so memcpy
should be fine. But I am not sure that flush_icache_user_range() or
flush_ptrace_access() is always safe on every arch if "struct page *page"
doesn't match vma.

Oleg.
On Fri, Apr 11, 2014 at 10:24 AM, Oleg Nesterov <oleg@redhat.com> wrote:
> +static void arch_uprobe_copy_ixol(struct xol_area *area, unsigned long vaddr,
> +					struct arch_uprobe *auprobe)
> +{
> +#ifndef ARCH_UPROBE_XXX
> +	copy_to_page(area->page, vaddr, &auprobe->ixol, sizeof(&auprobe->ixol));
> +	/*
> +	 * We probably need flush_icache_user_range() but it needs vma.
> +	 * If this doesn't work define ARCH_UPROBE_XXX.
> +	 */
> +	flush_dcache_page(area->page);
> +#else
> +	struct mm_struct *mm = current->mm;
> +	struct vm_area_struct *vma;
> +
> +	down_read(&mm->mmap_sem);
> +	vma = find_exact_vma(mm, area->vaddr, area->vaddr + PAGE_SIZE);
> +	if (vma) {
> +		void *kaddr = kmap_atomic(area->page);
> +		copy_to_user_page(vma, area->page,
> +				  vaddr, kaddr + (vaddr & ~PAGE_MASK),
> +				  &auprobe->ixol, sizeof(&auprobe->ixol));
> +		kunmap_atomic(kaddr);
> +	}
> +	up_read(&mm->mmap_sem);
> +#endif

Yeah, no, this is wrong.

The fact is, the *only* possible use for the whole "vma" argument is the
"can this be executable" optimization, afaik.

So I really think we should just have a fixed
"flush_icache_page(page,vaddr)" function. Maybe add a "len" argument, to
allow architectures that have to loop over cachelines to do just a
minimal loop.

Then, to do the vma optimization, let's introduce a new
arch_needs_icache_flush(vma, page) function, which on cache coherent
architectures can just be zero (or one, since the icache flush itself
will be a no-op, so it doesn't really matter), and on others it can do
the "have we executed from this page" check, which may involve just
looking at the vma..

Then the uprobe case can just do

    copy_to_page()
    flush_dcache_page()
    flush_icache_page()

and be done with it.

Hmm?

             Linus
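The flow Linus proposes (a plain copy followed by two fixed-signature flushes, no vma anywhere) can be illustrated with a userspace mock. Everything below is a hypothetical stand-in sketched for this thread: `struct xol_area`, `flush_icache_page_len`, and the stubbed flushes mimic the shapes under discussion and are not the real kernel API.

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096u
#define PAGE_MASK (~(PAGE_SIZE - 1))

/* Mock of the xol area: one page of out-of-line instruction slots. */
struct xol_area {
	unsigned long vaddr;            /* user-space address of the page */
	unsigned char page[PAGE_SIZE];  /* kernel-side view of the page */
};

/* Stub: a real arch would write back the D-cache lines for this page. */
static void flush_dcache_page(struct xol_area *area) { (void)area; }

/* Stub of the proposed fixed-signature flush: page + vaddr + len, no vma.
 * A looping arch would walk cachelines over [vaddr, vaddr + len). */
static void flush_icache_page_len(struct xol_area *area,
				  unsigned long vaddr, size_t len)
{ (void)area; (void)vaddr; (void)len; }

/* Mock of copy_to_page(): memcpy into the page at vaddr's offset. */
static void copy_to_page(struct xol_area *area, unsigned long vaddr,
			 const void *src, size_t len)
{
	memcpy(area->page + (vaddr & ~PAGE_MASK), src, len);
}

/* The uprobe path as proposed: copy, then the two fixed flushes. */
static void uprobe_copy_ixol(struct xol_area *area, unsigned long slot_vaddr,
			     const void *insn, size_t len)
{
	copy_to_page(area, slot_vaddr, insn, len);
	flush_dcache_page(area);
	flush_icache_page_len(area, slot_vaddr, len);
}
```

The point of the mock is the call shape: nothing in this path needs to look up or trust a vma, which is exactly the property the `find_exact_vma()` variant lacks.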
From: Oleg Nesterov <oleg@redhat.com>
Date: Fri, 11 Apr 2014 19:38:53 +0200

> And perhaps the patch is not complete. "if (vma)" is not enough, a probed
> task can mmap something else at this vaddr.
>
> copy_to_user_page() should only change the contents of area->page, so memcpy
> should be fine. But I am not sure that flush_icache_user_range() or
> flush_ptrace_access() is always safe on every arch if "struct page *page"
> doesn't match vma.

The architectures want the VMA for two reasons:

1) To get at the 'mm'. The 'mm' is absolutely essential so that we can look
   at the MM cpumask and therefore determine what cpus this address space
   has executed upon, and therefore what cpus need the flush broadcast to.

2) To determine if the VMA is executable, in order to avoid the I-cache
   flush if possible.

I think you can get at the 'mm' trivially in this uprobes path, and we can
just as well assume that the VMA is executable since this thing is always
writing instructions.

So we could create a __copy_to_user_page() that takes an 'mm' and a boolean
'executable' which uprobes could unconditionally set true, and
copy_to_user_page() would then be implemented in terms of
__copy_to_user_page().
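David's suggestion can be sketched as a userspace mock: a `__copy_to_user_page()` primitive that takes only the `mm` and an `executable` flag, with the existing `copy_to_user_page()` layered on top. All types and names here are simplified stand-ins (a one-word cpumask, a counter instead of real IPIs), not the actual kernel implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Mock mm: just a bitmask of CPUs this address space has run on. */
struct mm_struct {
	unsigned long cpu_vm_mask;  /* one bit per CPU */
};

struct vm_area_struct {
	struct mm_struct *vm_mm;
	unsigned long vm_flags;
};
#define VM_EXEC 0x4ul

static int broadcast_flushes;  /* counts simulated cross-CPU flush IPIs */

/* The proposed primitive: no vma, just the mm and an 'executable' bool.
 * uprobes would pass executable = true unconditionally. */
static void __copy_to_user_page(struct mm_struct *mm, void *dst,
				const void *src, size_t len, bool executable)
{
	memcpy(dst, src, len);          /* the actual data copy */
	if (!executable)
		return;                 /* no I-cache work needed */
	/* Broadcast the I-cache flush to every CPU in the mm's cpumask
	 * other than the current one (CPU 0 in this mock). */
	unsigned long others = mm->cpu_vm_mask & ~1ul;
	while (others) {
		broadcast_flushes++;
		others &= others - 1;   /* clear lowest set bit */
	}
}

/* Existing interface, reimplemented in terms of the new primitive. */
static void copy_to_user_page(struct vm_area_struct *vma, void *dst,
			      const void *src, size_t len)
{
	__copy_to_user_page(vma->vm_mm, dst, src, len,
			    (vma->vm_flags & VM_EXEC) != 0);
}
```

This shows why the `mm` is the part the architectures genuinely need: it drives the broadcast decision, while the vma only contributed the (replaceable) executable hint.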
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Fri, 11 Apr 2014 10:50:31 -0700

> So I really think we should just have a fixed
> "flush_icache_page(page,vaddr)" function. Maybe add a "len" argument,
> to allow architectures that have to loop over cachelines to do just a
> minimal loop.

It's not enough, we need to have the 'mm' so we can know what cpus this
address space has executed upon, and therefore what cpus need the
broadcast flush.

See my other reply, we can just make a __copy_to_user_page() that takes
'mm' and a boolean 'executable' which uprobes can unconditionally pass
as true.
On Fri, Apr 11, 2014 at 11:02 AM, David Miller <davem@davemloft.net> wrote:
>
> It's not enough, we need to have the 'mm' so we can know what cpus this
> address space has executed upon, and therefore what cpus need the
> broadcast flush.

Ok. But still, it shouldn't need "vma".

> See my other reply, we can just make a __copy_to_user_page() that takes
> 'mm' and a boolean 'executable' which uprobes can unconditionally pass
> as true.

Sure, that doesn't look disgusting.

That said, I thought at least one architecture (powerpc) did more than
just check the executable bit: I think somebody actually does a
page-per-page "has this been mapped executably" thing, because their
icache flush is *so* expensive. So that boolean "executable" bit is
potentially architecture-specific.

And quite frankly, using the "vma->vm_flags" sounds potentially
*incorrect* to me, since it really isn't about the vma. If you change a
page through a non-executable vma, you'd want to flush the icache entry
for that page mapped in a totally different vma. So I really get the
feeling that passing in "vma" is actively *wrong*. The vma interface
really makes little to no sense.

Hmm?

             Linus
On 11 April 2014 11:02, David Miller <davem@davemloft.net> wrote:
> From: Linus Torvalds <torvalds@linux-foundation.org>
> Date: Fri, 11 Apr 2014 10:50:31 -0700
>
>> So I really think we should just have a fixed
>> "flush_icache_page(page,vaddr)" function. Maybe add a "len" argument,
>> to allow architectures that have to loop over cachelines to do just a
>> minimal loop.
>
> It's not enough, we need to have the 'mm' so we can know what cpus this
> address space has executed upon, and therefore what cpus need the
> broadcast flush.

But in the uprobes case the xol slot where the instruction write
happened will be used only by the current CPU. The way I read the
uprobes code, another core that hits the same uprobe address will use a
different xol slot. The xol slot size is a cache line, so it will not be
moved around. So as long as we know for sure that a task performing a
single step on an instruction in the uprobe xol area won't be migrated
to another core, we don't need to broadcast to any other cores.

Thanks,
Victor

> See my other reply, we can just make a __copy_to_user_page() that takes
> 'mm' and a boolean 'executable' which uprobes can unconditionally pass
> as true.
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Fri, 11 Apr 2014 11:11:33 -0700

> And quite frankly, using the "vma->vm_flags" sounds potentially
> *incorrect* to me, since it really isn't about the vma. If you change
> a page through a non-executable vma, you'd want to flush the icache
> entry for that page mapped in a totally different vma. So I really get
> the feeling that passing in "vma" is actively *wrong*. The vma
> interface really makes little to no sense.
>
> Hmm?

The vm_flags check is about "could it have gotten into the I-cache via
this VMA". If the VMA protections change, we'd do a flush of some sort
during that change.
On Fri, Apr 11, 2014 at 11:19 AM, David Miller <davem@davemloft.net> wrote:
>
> The vm_flags check is about "could it have gotten into the I-cache
> via this VMA".

.. and that's obviously complete bullshit and wrong. Which is my point.

Now, it's possible that doing things right is just too much work for
architectures that don't even matter, but dammit, it's still wrong. If
you change a page, and it's executably mapped into some other vma, the
icache is possibly stale there. The whole _point_ of our cache flushing
is to make caches coherent, and anything that uses "vma" to do so is
*wrong*.

So your argument makes no sense. You're just re-stating that "it's
wrong", but you're re-stating it in a way that makes it sound like it
could be right.

The "this page has been mapped executably" approach, in contrast, is
*correct*. It has a chance in hell of actually making caches coherent.

             Linus
On 04/11, David Miller wrote:
>
> From: Oleg Nesterov <oleg@redhat.com>
> Date: Fri, 11 Apr 2014 19:38:53 +0200
>
> > And perhaps the patch is not complete. "if (vma)" is not enough, a probed
> > task can mmap something else at this vaddr.
> >
> > copy_to_user_page() should only change the contents of area->page, so
> > memcpy should be fine. But I am not sure that flush_icache_user_range()
> > or flush_ptrace_access() is always safe on every arch if
> > "struct page *page" doesn't match vma.
>
> The architectures want the VMA for two reasons:
>
> 1) To get at the 'mm'. The 'mm' is absolutely essential so that we can look
>    at the MM cpumask and therefore determine what cpus this address space
>    has executed upon, and therefore what cpus need the flush broadcast to.
>
> 2) To determine if the VMA is executable, in order to avoid the I-cache
>    flush if possible.

Yes, thanks, this is clear.

> I think you can get at the 'mm' trivially in this uprobes path,

sure, it is always current->mm.

> and we can just as well assume that the VMA is executable since this thing
> is always writing instructions.

yes.

> So we could create a __copy_to_user_page() that takes an 'mm' and a boolean
> 'executable' which uprobes could unconditionally set true, and
> copy_to_user_page() would then be implemented in terms of
> __copy_to_user_page().

This needs a lot of per-arch changes. Plus, it seems, in general VM_EXEC
is not the only thing __copy_to_user_page() should take into account...

But at least we are starting to agree that we need something else ;)

Oleg.
On 04/11, Victor Kamensky wrote:
>
> On 11 April 2014 11:02, David Miller <davem@davemloft.net> wrote:
>
> But in the uprobes case the xol slot where the instruction write
> happened will be used only by the current CPU. The way I read the
> uprobes code, another core that hits the same uprobe address will use a
> different xol slot. The xol slot size is a cache line, so it will not
> be moved around.

Yes.

> So as long as we know for sure that a task performing a single step on
> an instruction in the uprobe xol area won't be migrated to another
> core, we don't need to broadcast to any other cores.

It can migrate to another CPU before it does the single-step.

Oleg.
From: Linus Torvalds <torvalds@linux-foundation.org>
Date: Fri, 11 Apr 2014 11:24:58 -0700

> On Fri, Apr 11, 2014 at 11:19 AM, David Miller <davem@davemloft.net> wrote:
>>
>> The vm_flags check is about "could it have gotten into the I-cache
>> via this VMA".
>
> .. and that's obviously complete bullshit and wrong. Which is my point.
>
> Now, it's possible that doing things right is just too much work for
> architectures that don't even matter, but dammit, it's still wrong. If
> you change a page, and it's executably mapped into some other vma, the
> icache is possibly stale there. The whole _point_ of our cache
> flushing is to make caches coherent, and anything that uses "vma" to
> do so is *wrong*.
>
> So your argument makes no sense. You're just re-stating that "it's
> wrong", but you're re-stating it in a way that makes it sound like it
> could be right.
>
> The "this page has been mapped executably" approach, in contrast, is
> *correct*. It has a chance in hell of actually making caches coherent.

You're right that using the VMA as a hint during ptrace accesses is
bogus. If it's writeable, shared, and executable, we won't do the right
thing.

Since we do most of the cache flushing stuff during normal operations at
the PTE modification point, perhaps a piece of page state could be used
to handle this. We already use such a thing for D-cache alias flushing.
On Fri, Apr 11, 2014 at 11:58 AM, David Miller <davem@davemloft.net> wrote:
>
> Since we do most of the cache flushing stuff during normal operations
> at the PTE modification point, perhaps a piece of page state could be
> used to handle this. We already use such a thing for D-cache alias
> flushing.

So looking at the powerpc code, I thought ppc already did this, but it
seems to do something different: it lazily does the icache flush at page
fault time if the page has been marked by the dcache flush (with the
PG_arch_1 bit indicating whether the page is coherent in the I$).

But I don't see it trying to actually flush the icache of already mapped
processes when modifying the dcache. So while we *could* do that,
apparently no architecture does this. Even the one architecture that I
thought did it doesn't really try to make things globally coherent.

(My "analysis" was mainly using "git grep", so maybe I missed something).

             Linus
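The lazy PG_arch_1 scheme described here reduces to a small state machine, which can be mocked in userspace. The names below (`icache_clean`, `map_page_exec`) are illustrative stand-ins for the real PG_arch_1 bit and the fault-time `update_mmu_cache()`-style hook, not actual kernel interfaces.

```c
#include <assert.h>
#include <stdbool.h>

/* Mock page: a PG_arch_1-style "I-cache is coherent with D-cache" flag. */
struct page {
	bool icache_clean;
};

static int icache_flushes;  /* counts simulated I-cache flushes */

/* Writing to the page invalidates the coherency flag; this is the part
 * a flush_dcache_page() implementation would do on such an arch. */
static void flush_dcache_page(struct page *pg)
{
	pg->icache_clean = false;
}

/* Fault-time hook: flush the I-cache lazily, only when the page is
 * dirty w.r.t. the I-cache and is being mapped executable. */
static void map_page_exec(struct page *pg)
{
	if (!pg->icache_clean) {
		icache_flushes++;       /* simulated flush_icache_page() */
		pg->icache_clean = true;
	}
}
```

The property Linus points at is visible in the mock: the flush happens at the *next* executable fault, so processes that already have the page mapped never get their I-cache refreshed by this scheme.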
On 04/11, Linus Torvalds wrote:
>
> On Fri, Apr 11, 2014 at 10:24 AM, Oleg Nesterov <oleg@redhat.com> wrote:
> > +static void arch_uprobe_copy_ixol(struct xol_area *area, unsigned long vaddr,
> > +					struct arch_uprobe *auprobe)
> > +{
> > +#ifndef ARCH_UPROBE_XXX
> > +	copy_to_page(area->page, vaddr, &auprobe->ixol, sizeof(&auprobe->ixol));
> > +	/*
> > +	 * We probably need flush_icache_user_range() but it needs vma.
> > +	 * If this doesn't work define ARCH_UPROBE_XXX.
> > +	 */
> > +	flush_dcache_page(area->page);
> > +#else
> > +	struct mm_struct *mm = current->mm;
> > +	struct vm_area_struct *vma;
> > +
> > +	down_read(&mm->mmap_sem);
> > +	vma = find_exact_vma(mm, area->vaddr, area->vaddr + PAGE_SIZE);
> > +	if (vma) {
> > +		void *kaddr = kmap_atomic(area->page);
> > +		copy_to_user_page(vma, area->page,
> > +				  vaddr, kaddr + (vaddr & ~PAGE_MASK),
> > +				  &auprobe->ixol, sizeof(&auprobe->ixol));
> > +		kunmap_atomic(kaddr);
> > +	}
> > +	up_read(&mm->mmap_sem);
> > +#endif
>
> Yeah, no, this is wrong.

Yesss, agreed.

> So I really think we should just have a fixed
> "flush_icache_page(page,vaddr)" function.
> ...
> Then the uprobe case can just do
>
>     copy_to_page()
>     flush_dcache_page()
>     flush_icache_page()

And I obviously like this idea because (iiuc) it more or less matches
the flush_icache_page_xxx() I tried to suggest.

But we need a short term solution for arm. And unless I misunderstood
Russell (this is quite possible), arm needs to disable preemption around
copy + flush.

Russell, so what do you think we can do for arm right now? Does the
patch above (and subsequent discussion) answer the "why reinvent"
question?

Oleg.
On 14 April 2014 11:59, Oleg Nesterov <oleg@redhat.com> wrote:
> On 04/11, Linus Torvalds wrote:
>>
>> On Fri, Apr 11, 2014 at 10:24 AM, Oleg Nesterov <oleg@redhat.com> wrote:
>> > +static void arch_uprobe_copy_ixol(struct xol_area *area, unsigned long vaddr,
>> > +					struct arch_uprobe *auprobe)
>> > +{
>> > +#ifndef ARCH_UPROBE_XXX
>> > +	copy_to_page(area->page, vaddr, &auprobe->ixol, sizeof(&auprobe->ixol));
>> > +	/*
>> > +	 * We probably need flush_icache_user_range() but it needs vma.
>> > +	 * If this doesn't work define ARCH_UPROBE_XXX.
>> > +	 */
>> > +	flush_dcache_page(area->page);
>> > +#else
>> > +	struct mm_struct *mm = current->mm;
>> > +	struct vm_area_struct *vma;
>> > +
>> > +	down_read(&mm->mmap_sem);
>> > +	vma = find_exact_vma(mm, area->vaddr, area->vaddr + PAGE_SIZE);
>> > +	if (vma) {
>> > +		void *kaddr = kmap_atomic(area->page);
>> > +		copy_to_user_page(vma, area->page,
>> > +				  vaddr, kaddr + (vaddr & ~PAGE_MASK),
>> > +				  &auprobe->ixol, sizeof(&auprobe->ixol));
>> > +		kunmap_atomic(kaddr);
>> > +	}
>> > +	up_read(&mm->mmap_sem);
>> > +#endif
>>
>> Yeah, no, this is wrong.
>
> Yesss, agreed.
>
>> So I really think we should just have a fixed
>> "flush_icache_page(page,vaddr)" function.
>> ...
>> Then the uprobe case can just do
>>
>>     copy_to_page()
>>     flush_dcache_page()
>>     flush_icache_page()
>
> And I obviously like this idea because (iiuc) it more or less matches
> flush_icache_page_xxx() I tried to suggest.

Would not page granularity be too expensive? Note that you need to do
that on each probe hit, and you are flushing a whole data and
instruction page every time. IMHO it will work correctly when you flush
just the few dcache/icache lines that correspond to the xol slot that
got modified. Note that copy_to_user_page takes a len that describes the
size of the area that has to be flushed. Given that we are flushing the
xol area page in this case, and nothing except one xol slot is of any
interest to the current task, and if the CPU can flush one dcache and
icache page as quickly as it can flush a range, my remark may not
matter.

I personally would prefer if we could have a function like
copy_to_user_page but without the requirement to pass a vma to it.

Thanks,
Victor

> But we need a short term solution for arm. And unless I misunderstood
> Russell (this is quite possible), arm needs to disable preemption around
> copy + flush.
>
> Russell, so what do you think we can do for arm right now? Does the patch
> above (and subsequent discussion) answer the "why reinvent" question?
>
> Oleg.
>
On 14 April 2014 13:05, Victor Kamensky <victor.kamensky@linaro.org> wrote:
> On 14 April 2014 11:59, Oleg Nesterov <oleg@redhat.com> wrote:
>> On 04/11, Linus Torvalds wrote:
>>>
>>> On Fri, Apr 11, 2014 at 10:24 AM, Oleg Nesterov <oleg@redhat.com> wrote:
>>> > +static void arch_uprobe_copy_ixol(struct xol_area *area, unsigned long vaddr,
>>> > +					struct arch_uprobe *auprobe)
>>> > +{
>>> > +#ifndef ARCH_UPROBE_XXX
>>> > +	copy_to_page(area->page, vaddr, &auprobe->ixol, sizeof(&auprobe->ixol));
>>> > +	/*
>>> > +	 * We probably need flush_icache_user_range() but it needs vma.
>>> > +	 * If this doesn't work define ARCH_UPROBE_XXX.
>>> > +	 */
>>> > +	flush_dcache_page(area->page);
>>> > +#else
>>> > +	struct mm_struct *mm = current->mm;
>>> > +	struct vm_area_struct *vma;
>>> > +
>>> > +	down_read(&mm->mmap_sem);
>>> > +	vma = find_exact_vma(mm, area->vaddr, area->vaddr + PAGE_SIZE);
>>> > +	if (vma) {
>>> > +		void *kaddr = kmap_atomic(area->page);
>>> > +		copy_to_user_page(vma, area->page,
>>> > +				  vaddr, kaddr + (vaddr & ~PAGE_MASK),
>>> > +				  &auprobe->ixol, sizeof(&auprobe->ixol));
>>> > +		kunmap_atomic(kaddr);
>>> > +	}
>>> > +	up_read(&mm->mmap_sem);
>>> > +#endif
>>>
>>> Yeah, no, this is wrong.
>>
>> Yesss, agreed.
>>
>>> So I really think we should just have a fixed
>>> "flush_icache_page(page,vaddr)" function.
>>> ...
>>> Then the uprobe case can just do
>>>
>>>     copy_to_page()
>>>     flush_dcache_page()
>>>     flush_icache_page()
>>
>> And I obviously like this idea because (iiuc) it more or less matches
>> flush_icache_page_xxx() I tried to suggest.
>
> Would not page granularity be too expensive? Note that you need to do
> that on each probe hit, and you are flushing a whole data and
> instruction page every time. IMHO it will work correctly when you flush
> just the few dcache/icache lines that correspond to the xol slot that
> got modified. Note that copy_to_user_page takes a len that describes
> the size of the area that has to be flushed. Given that we are flushing
> the xol area page in this case, and nothing except one xol slot is of
> any interest to the current task, and if the CPU can flush one dcache
> and icache page as quickly as it can flush a range, my remark may not
> matter.
>
> I personally would prefer if we could have a function like
> copy_to_user_page but without the requirement to pass a vma to it.

I was trying to collect some experimental data around this discussion. I
did not find anything super surprising and I am not sure how much it
matters, but since I collected it already, I will just share it anyway.
The results cover only one architecture, so they should be taken with a
grain of salt.

The test was conducted on ARM h/w: an Arndale board with a 2-core Exynos
5250 CPU, and a Pandaboard ES with an OMAP 4460.

The uprobes/SystemTap test was arranged in the following way. A
SystemTap module counted the number of times a function was called. The
SystemTap action is very simple, a counter increment, which is close to
a noop and lets us see the tracing overhead. The traced user-land
function had approximately 8000 instructions (an unoptimized empty loop
of 1000 iterations, with each iteration being 8 instructions). That
function was constantly called in a loop 1 million times, and that
interval was timed. SystemTap/uprobes tracing was enabled and it was
observed how the targeted user-land execution time changed.

Test scenarios and variations
-----------------------------

Here are the scenarios where measurements took place:

vanilla - no tracing; 1 million calls of the function that executes 8000
instructions

Oleg's fix - Oleg's fix proposed in [1]. Basically it uses
copy_to_user_page and it does a dynamic look-up of the xol area vma on
every trace

my arm specific fix - this one was proposed as [2]. It is close to the
discussed possible solution, if we could have something similar to
copy_to_user_page but which does not require a vma. My code tried to
share the ARM backend of copy_to_user_page with the xol access flush
function

Oleg's fix + forced broadcast - one of my concerns is the situation
where an smp function call broadcast has to happen to flush the icache.
On both of my boards that was not needed, so to simulate such a
situation I changed the ARM backend of copy_to_user_page to do
smp_call_function(flush_ptrace_access_other

The tested application had two possible dimensions:

1) the number of threads that run the loop over the traced function, to
see how tracing copes with multicore; the default is one thread, but the
test could run another loop on the second core.

2) the number of mappings in the target process; the target process
could have 1000 file mappings, to create a bunch of vmas. This tests how
much the dynamic look-up of the xol area vma matters.

Results
-------

The numbers shown in the table are times in microseconds to execute the
tested function with and without tracing present. Please note that the
tracing overhead includes all pieces related to tracing, not only the
cache flush that is under discussion. Those pieces are: taking the arch
exception layer; the uprobes layer; the uprobes arch specific layer
(before/after); xol look-up, update and cache flush; the uprobes single
stepping logic; the systemtap module callback generated for the .stp
tracing script; etc.

                    Arndale         Pandaboard ES

vanilla             5.0  (100%)     11.5 (100%)

Oleg's fix          9.8  (196%)     28.1 (244%)

Oleg's fix          10.0 (200%)     28.7 (250%)
+ 1000 mappings

Victor's fix        9.4  (188%)     26.4 (230%)

Oleg's fix          13.7 (274%)     39.8 (346%)
+ broadcast
1 thread

Oleg's fix          14.1 (282%)     41.6 (361%)
+ broadcast
2 threads

Observations
------------

x) generally uprobes tracing is a bit expensive; one trace roughly costs
around 10000 instructions/cycles

x) the way the cache is flushed matters somewhat, but because of the big
overall tracing overhead those differences may not matter

x) comparing 'Oleg's fix' vs 'Oleg's fix + 1000 mappings' shows that the
vma look-up is noticeable but the difference is marginal. No surprise
here: the rb tree search is fast.

x) the need to broadcast the icache flush has the most noticeable
impact. I am not sure how essential that is. Both tested platforms,
Exynos 5250 and OMAP 4460, did not need that operation. I am not sure
what CPU would really have this issue ...

x) the fix that I did for ARM, which shares ARM code with
copy_to_user_page but does not need a vma, performs best. Essentially it
differs from 'Oleg's fix' in that there is no vma lookup at all. But I
have to admit the resulting code looks a bit ugly. I wonder whether the
gain matters enough ... maybe not.

Dynamic xol slots vs xol slots cached to uprobes
------------------------------------------------

This section could be a bit off topic, but it introduces an interesting
data point. When I looked at the current uprobes single-step
out-of-line code and compared it with the code from the past (utrace
times), I noticed the main essential difference is in how xol slots are
handled. Currently, for each hit, the uprobes code allocates an xol slot
and needs a dcache/icache flush. But in the past the xol slot was
attached/cached to the uprobe entry, and if there were enough xol slots
the dcache/icache flush would happen only once and later tracing would
not need it at all. For cases where there were not enough xol slots, an
lru algorithm was used to rotate xol slots vs uprobes.

The previous uprobes mechanism was more immune to the cost of modifying
the instruction stream, because modifying the instruction stream was a
one-time operation and after that, under normal circumstances, traced
apps did not touch the instructions at all. I am quite sure the
semi-static xol slot allocation scheme had its own set of issues.

I've tried to hack up the cached xol slot scheme and measure the time
difference it brings.

               Arndale

hack static    8.9 (188%)
xol slot

8.9 microseconds gives an idea of all the other overheads during uprobes
tracing except the xol allocation and the icache/dcache flush. I.e. the
cost of dynamically allocating an xol slot, the dcache/icache flush, and
the impact of the cache flush on the application is between 0.5 and 1.1
microseconds, as long as no cache operation broadcast is involved. I.e.
the cost is not that big; as long as a modern CPU does not need cache
flush broadcasts, the dynamic xol scheme looks OK to me.

Raw Results and Test Source Code
--------------------------------

I am not publishing my test source code and raw results here because
they are quite big. The raw results were collected with the target test
running under perf, so it can be seen how the different schemes affect
cache and tlb misses. If anyone is interested in the source and raw data
please let me know and I will post it here.

Thanks,
Victor

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/246595.html
[2] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/245743.html

> Thanks,
> Victor
>
>> But we need a short term solution for arm. And unless I misunderstood
>> Russell (this is quite possible), arm needs to disable preemption around
>> copy + flush.
>>
>> Russell, so what do you think we can do for arm right now? Does the patch
>> above (and subsequent discussion) answer the "why reinvent" question?
>>
>> Oleg.
>>
On 04/14, Victor Kamensky wrote:
>
> On 14 April 2014 11:59, Oleg Nesterov <oleg@redhat.com> wrote:
> > On 04/11, Linus Torvalds wrote:
> >>
> >> So I really think we should just have a fixed
> >> "flush_icache_page(page,vaddr)" function.
> >> ...
> >> Then the uprobe case can just do
> >>
> >>     copy_to_page()
> >>     flush_dcache_page()
> >>     flush_icache_page()
> >
> > And I obviously like this idea because (iiuc) it more or less matches
> > flush_icache_page_xxx() I tried to suggest.
>
> Would not page granularity be too expensive? Note that you need to do
> that on each probe hit, and you are flushing a whole data and
> instruction page every time. IMHO it will work correctly when you flush
> just the few dcache/icache lines that correspond to the xol slot that
> got modified. Note that copy_to_user_page takes a len that describes
> the size of the area that has to be flushed. Given that we are flushing
> the xol area page in this case, and nothing except one xol slot is of
> any interest to the current task, and if the CPU can flush one dcache
> and icache page as quickly as it can flush a range, my remark may not
> matter.

We can add "vaddr, len" to the argument list.

> I personally would prefer if we could have a function like
> copy_to_user_page but without the requirement to pass a vma to it.

I won't argue, but you need to convince the maintainers.

And to remind, we need something simple/nonintrusive for arm right now.
Again, I won't argue if we turn copy_to_page() + flush_dcache_page()
into a __weak arch_uprobe_copy_ixol(), and add the necessary hacks into
arm's implementation. This is up to you and Russell.

But. Please do not add copy_to_user_page() into copy_to_page() (as your
patch did). This is certainly not what uprobe_write_opcode() wants, we
do not want or need "flush" in this case. The same for
__create_xol_area().

Note also that currently copy_to_user_page() is always called under
mmap_sem. I do not know if arm actually needs ->mmap_sem, but if you
propose it as a generic solution we should probably take this lock.

Also. Even if we have copy_to_user_page_no_vma() or change
copy_to_user_page() to accept vma => NULL, I am not sure this will work
fine on arm when the probed application unmaps the xol_area and mmaps
something else at the same vaddr. I mean, in this case we do not care if
the application crashes, but please verify that something really bad
can't happen.

Let me repeat just in case, I know nothing about arm/. I can't even
understand how, say, flush_pfn_alias() works, and how it should work if
2 threads call it at the same time with the same vaddr (or
CACHE_COLOUR(vaddr)).

Oleg.
On 04/14, Victor Kamensky wrote:
>
> Oleg's fix - Oleg's fix proposed in [1]. Basically it uses
> copy_to_user_page and it does a dynamic look-up of the xol area vma on
> every trace

I guess I was not clear. No, I didn't really try to propose this change,
I do not like it ;) I showed this hack in reply to multiple and
persistent requests to reuse the ptrace solution we already have.

> my arm specific fix - this one was proposed as [2].

I didn't even try to read the changes in arm/, I can't understand them
anyway. I leave this to you and Russell. But, once again, please do not
add arch_uprobe_flush_xol_access(), add arch_uprobe_copy_ixol().

> x) the fix that I did for ARM, which shares ARM code with
> copy_to_user_page but does not need a vma, performs best.

The patch which adds copy_to_user_page(vma => NULL) into copy_to_page()?
Please see the comments in my previous email.

> When I looked at the current uprobes single-step out-of-line code and
> compared it with the code from the past (utrace times), I noticed the
> main essential difference is in how xol slots are handled. Currently,
> for each hit, the uprobes code allocates an xol slot and needs a
> dcache/icache flush. But in the past the xol slot was attached/cached
> to the uprobe entry

Can't comment, I am not familiar with the old implementation. But yes,
the current implementation is not perfect. Once again, it would be nice
to remove this vma. Even if this is not possible, we can try to share
this memory. We do not even need lru, we can make it "per cpu" and avoid
the broadcasts.

On x86 this is simple, we have __switch_to_xtra() which can re-copy
->ixol[] and do flush_icache_range() if UTASK_SSTEP. Not sure this is
possible on arm and other arches. But let's not discuss this right now,
this is a bit off-topic currently.

Oleg.
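The per-CPU idea Oleg floats here (re-copy the ->ixol[] bytes into the current CPU's slot on switch-in, flushing only locally) can be sketched as a userspace mock. Everything below is hypothetical: the per-CPU slot array, the `switch_in` hook standing in for a `__switch_to_xtra()`-style path, and the counter standing in for `flush_icache_range()`.

```c
#include <assert.h>
#include <string.h>

#define NR_CPUS   4
#define SLOT_SIZE 16

/* One xol slot per CPU instead of a per-hit allocation. */
static unsigned char percpu_slot[NR_CPUS][SLOT_SIZE];

struct task {
	int sstep;                      /* UTASK_SSTEP-style flag */
	unsigned char ixol[SLOT_SIZE];  /* instruction to single-step */
};

static int range_flushes;  /* counts simulated local flush_icache_range() */

/* Switch-in hook: if the incoming task is mid single-step, re-copy its
 * ixol bytes into *this* CPU's slot and flush just that range locally.
 * No cross-CPU broadcast is ever needed, because each CPU only ever
 * executes from its own slot. */
static void switch_in(struct task *t, int cpu)
{
	if (!t->sstep)
		return;
	memcpy(percpu_slot[cpu], t->ixol, SLOT_SIZE);
	range_flushes++;  /* simulated flush_icache_range() on this CPU */
}
```

The design point: the cost moves from every probe hit to every migration of a single-stepping task, and the flush is always CPU-local, which sidesteps the broadcast problem measured in Victor's benchmark.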
On 15 April 2014 08:46, Oleg Nesterov <oleg@redhat.com> wrote:
> On 04/14, Victor Kamensky wrote:
>>
>> On 14 April 2014 11:59, Oleg Nesterov <oleg@redhat.com> wrote:
>> > On 04/11, Linus Torvalds wrote:
>> >>
>> >> So I really think we should just have a fixed
>> >> "flush_icache_page(page,vaddr)" function.
>> >> ...
>> >> Then the uprobe case can just do
>> >>
>> >>     copy_to_page()
>> >>     flush_dcache_page()
>> >>     flush_icache_page()
>> >
>> > And I obviously like this idea because (iiuc) it more or less matches
>> > flush_icache_page_xxx() I tried to suggest.
>>
>> Would not page granularity be too expensive? Note that you need to do
>> that on each probe hit, and you are flushing a whole data and
>> instruction page every time. IMHO it will work correctly when you
>> flush just the few dcache/icache lines that correspond to the xol slot
>> that got modified. Note that copy_to_user_page takes a len that
>> describes the size of the area that has to be flushed. Given that we
>> are flushing the xol area page in this case, and nothing except one
>> xol slot is of any interest to the current task, and if the CPU can
>> flush one dcache and icache page as quickly as it can flush a range,
>> my remark may not matter.
>
> We can add "vaddr, len" to the argument list.
>
>> I personally would prefer if we could have a function like
>> copy_to_user_page but without the requirement to pass a vma to it.
>
> I won't argue, but you need to convince the maintainers.
>
> And to remind, we need something simple/nonintrusive for arm right now.
> Again, I won't argue if we turn copy_to_page() + flush_dcache_page()
> into a __weak arch_uprobe_copy_ixol(), and add the necessary hacks into
> arm's implementation. This is up to you and Russell.

For the short term arm specific solution I will follow up on [1]. Yes, I
will incorporate your request to make it arch_uprobe_copy_ixol() instead
of arch_uprobe_flush_xol_access, will address Dave Long's comments about
checkpatch, and will remove the special handling for the broadcast
situation (FLAG_UA_BROADCAST), since in further discussion it was
established that a task can migrate while doing uprobes xol single
stepping.

I don't think my patch does those things that you describe below.
Anyway, I will repost a new version of the short term arm specific fix
proposal today PST and we will see what Russell, you and everyone else
will say about it.

[1] http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/245952.html
http://lists.infradead.org/pipermail/linux-arm-kernel/2014-April/245743.html

Thanks,
Victor

> But. Please do not add copy_to_user_page() into copy_to_page() (as your
> patch did). This is certainly not what uprobe_write_opcode() wants, we
> do not want or need "flush" in this case. The same for
> __create_xol_area().
>
> Note also that currently copy_to_user_page() is always called under
> mmap_sem. I do not know if arm actually needs ->mmap_sem, but if you
> propose it as a generic solution we should probably take this lock.
>
> Also. Even if we have copy_to_user_page_no_vma() or change
> copy_to_user_page() to accept vma => NULL, I am not sure this will work
> fine on arm when the probed application unmaps the xol_area and mmaps
> something else at the same vaddr. I mean, in this case we do not care
> if the application crashes, but please verify that something really bad
> can't happen.
>
> Let me repeat just in case, I know nothing about arm/. I can't even
> understand how, say, flush_pfn_alias() works, and how it should work if
> 2 threads call it at the same time with the same vaddr (or
> CACHE_COLOUR(vaddr)).
>
> Oleg.
>
--- a/kernel/events/uprobes.c
+++ b/kernel/events/uprobes.c
@@ -1274,6 +1274,33 @@ static unsigned long xol_take_insn_slot(struct xol_area *area)
 	return slot_addr;
 }
 
+static void arch_uprobe_copy_ixol(struct xol_area *area, unsigned long vaddr,
+				  struct arch_uprobe *auprobe)
+{
+#ifndef ARCH_UPROBE_XXX
+	copy_to_page(area->page, vaddr, &auprobe->ixol, sizeof(&auprobe->ixol));
+	/*
+	 * We probably need flush_icache_user_range() but it needs vma.
+	 * If this doesn't work define ARCH_UPROBE_XXX.
+	 */
+	flush_dcache_page(area->page);
+#else
+	struct mm_struct *mm = current->mm;
+	struct vm_area_struct *vma;
+
+	down_read(&mm->mmap_sem);
+	vma = find_exact_vma(mm, area->vaddr, area->vaddr + PAGE_SIZE);
+	if (vma) {
+		void *kaddr = kmap_atomic(area->page);
+		copy_to_user_page(vma, area->page,
+				  vaddr, kaddr + (vaddr & ~PAGE_MASK),
+				  &auprobe->ixol, sizeof(&auprobe->ixol));
+		kunmap_atomic(kaddr);
+	}
+	up_read(&mm->mmap_sem);
+#endif
+}
+
 /*
  * xol_get_insn_slot - allocate a slot for xol.
  * Returns the allocated slot address or 0.
@@ -1291,15 +1318,7 @@ static unsigned long xol_get_insn_slot(struct uprobe *uprobe)
 	if (unlikely(!xol_vaddr))
 		return 0;
 
-	/* Initialize the slot */
-	copy_to_page(area->page, xol_vaddr,
-			&uprobe->arch.ixol, sizeof(uprobe->arch.ixol));
-	/*
-	 * We probably need flush_icache_user_range() but it needs vma.
-	 * This should work on supported architectures too.
-	 */
-	flush_dcache_page(area->page);
-
+	arch_uprobe_copy_ixol(area, xol_vaddr, &uprobe->arch);
 	return xol_vaddr;
 }