Message ID | 20190103061324.16607-1-trond.myklebust@hammerspace.com (mailing list archive)
---|---
State | New, archived
Series | [v2] SUNRPC: Fix TCP receive code on archs with flush_dcache_page()
Hi Trond,

On Thu, Jan 3, 2019 at 7:14 AM Trond Myklebust <trondmy@gmail.com> wrote:
> After receiving data into the page cache, we need to call flush_dcache_page()
> for the architectures that define it.
>
> Fixes: 277e4ab7d530b ("SUNRPC: Simplify TCP receive code by switching...")
> Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> Cc: stable@vger.kernel.org # v4.20

Thanks for your patch!

> --- a/net/sunrpc/xprtsock.c
> +++ b/net/sunrpc/xprtsock.c
> @@ -48,6 +48,7 @@
>  #include <net/udp.h>
>  #include <net/tcp.h>
>  #include <linux/bvec.h>
> +#include <linux/highmem.h>
>  #include <linux/uio.h>
>
>  #include <trace/events/sunrpc.h>
> @@ -380,6 +381,27 @@ xs_read_discard(struct socket *sock, struct msghdr *msg, int flags,
>  	return sock_recvmsg(sock, msg, flags);
>  }
>
> +#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
> +static void
> +xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
> +{
> +	struct bvec_iter bi, __start = {

As for_each_bvec() assigns __start to bi, and you don't need __start
afterwards, both variables can be merged into a single one.
But perhaps that would make too many assumptions about the
implementation of for_each_bvec()?

> +		.bi_size = count,
> +	};
> +	struct bio_vec bv;
> +
> +	bvec_iter_advance(bvec, &__start, seek & PAGE_MASK);
> +
> +	for_each_bvec(bv, bvec, bi, __start)
> +		flush_dcache_page(bv.bv_page);
> +}
> +#else
> +static inline void
> +xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
> +{
> +}
> +#endif
> +
>  static ssize_t
>  xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
> 		struct xdr_buf *buf, size_t count, size_t seek, size_t *read)
> @@ -413,6 +435,7 @@ xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
>  				seek + buf->page_base);
>  		if (ret <= 0)
>  			goto sock_err;
> +		xs_flush_bvec(buf->bvec, ret, seek + buf->page_base);
>  		offset += ret - buf->page_base;
>  		if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC))
>  			goto out;

I don't understand the code well enough to see why the call to
xs_flush_bvec() is needed in this branch only, but it does fix TCP NFS
on RBTX4927, so
Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>

Gr{oetje,eeting}s,

Geert
On Thu, 2019-01-03 at 11:16 +0100, Geert Uytterhoeven wrote:
> Hi Trond,
>
> On Thu, Jan 3, 2019 at 7:14 AM Trond Myklebust <trondmy@gmail.com> wrote:
> > After receiving data into the page cache, we need to call
> > flush_dcache_page() for the architectures that define it.
> >
> > Fixes: 277e4ab7d530b ("SUNRPC: Simplify TCP receive code by switching...")
> > Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
> > Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> > Cc: stable@vger.kernel.org # v4.20
>
> Thanks for your patch!
>
> > --- a/net/sunrpc/xprtsock.c
> > +++ b/net/sunrpc/xprtsock.c
> > @@ -48,6 +48,7 @@
> >  #include <net/udp.h>
> >  #include <net/tcp.h>
> >  #include <linux/bvec.h>
> > +#include <linux/highmem.h>
> >  #include <linux/uio.h>
> >
> >  #include <trace/events/sunrpc.h>
> > @@ -380,6 +381,27 @@ xs_read_discard(struct socket *sock, struct msghdr *msg, int flags,
> >  	return sock_recvmsg(sock, msg, flags);
> >  }
> >
> > +#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
> > +static void
> > +xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
> > +{
> > +	struct bvec_iter bi, __start = {
>
> As for_each_bvec() assigns __start to bi, and you don't need __start
> afterwards, both variables can be merged into a single one.
> But perhaps that would make too many assumptions about the
> implementation of for_each_bvec()?

No, that's a good suggestion. I've sent out a (hopefully) final v3 with
that change.

> > +		.bi_size = count,
> > +	};
> > +	struct bio_vec bv;
> > +
> > +	bvec_iter_advance(bvec, &__start, seek & PAGE_MASK);
> > +
> > +	for_each_bvec(bv, bvec, bi, __start)
> > +		flush_dcache_page(bv.bv_page);
> > +}
> > +#else
> > +static inline void
> > +xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
> > +{
> > +}
> > +#endif
> > +
> >  static ssize_t
> >  xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
> > 		struct xdr_buf *buf, size_t count, size_t seek, size_t *read)
> > @@ -413,6 +435,7 @@ xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
> >  				seek + buf->page_base);
> >  		if (ret <= 0)
> >  			goto sock_err;
> > +		xs_flush_bvec(buf->bvec, ret, seek + buf->page_base);
> >  		offset += ret - buf->page_base;
> >  		if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC))
> >  			goto out;
>
> I don't understand the code well enough to see why the call to
> xs_flush_bvec() is needed in this branch only, but it does fix TCP
> NFS on RBTX4927, so
> Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>

Thanks!

Trond
On Thu, 2019-01-03 at 14:07 +0000, Trond Myklebust wrote:
> On Thu, 2019-01-03 at 11:16 +0100, Geert Uytterhoeven wrote:
> >
> > I don't understand the code well enough to see why the call to
> > xs_flush_bvec() is needed in this branch only, but it does fix TCP
> > NFS on RBTX4927, so
> > Tested-by: Geert Uytterhoeven <geert@linux-m68k.org>

Sorry, I forgot to answer the implicit question there. This is the only
place where we may find ourselves reading directly from a socket into
the page cache, which is why we don't need xs_flush_bvec() in the other
branches. The kvec entries in struct xdr_buf should always point to
private memory buffers that can't be mapped into user space.
diff --git a/net/sunrpc/xprtsock.c b/net/sunrpc/xprtsock.c
index f0b3700cec95..a72207af45d4 100644
--- a/net/sunrpc/xprtsock.c
+++ b/net/sunrpc/xprtsock.c
@@ -48,6 +48,7 @@
 #include <net/udp.h>
 #include <net/tcp.h>
 #include <linux/bvec.h>
+#include <linux/highmem.h>
 #include <linux/uio.h>
 
 #include <trace/events/sunrpc.h>
@@ -380,6 +381,27 @@ xs_read_discard(struct socket *sock, struct msghdr *msg, int flags,
 	return sock_recvmsg(sock, msg, flags);
 }
 
+#if ARCH_IMPLEMENTS_FLUSH_DCACHE_PAGE
+static void
+xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
+{
+	struct bvec_iter bi, __start = {
+		.bi_size = count,
+	};
+	struct bio_vec bv;
+
+	bvec_iter_advance(bvec, &__start, seek & PAGE_MASK);
+
+	for_each_bvec(bv, bvec, bi, __start)
+		flush_dcache_page(bv.bv_page);
+}
+#else
+static inline void
+xs_flush_bvec(const struct bio_vec *bvec, size_t count, size_t seek)
+{
+}
+#endif
+
 static ssize_t
 xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
 		struct xdr_buf *buf, size_t count, size_t seek, size_t *read)
@@ -413,6 +435,7 @@ xs_read_xdr_buf(struct socket *sock, struct msghdr *msg, int flags,
 				seek + buf->page_base);
 		if (ret <= 0)
 			goto sock_err;
+		xs_flush_bvec(buf->bvec, ret, seek + buf->page_base);
 		offset += ret - buf->page_base;
 		if (offset == count || msg->msg_flags & (MSG_EOR|MSG_TRUNC))
 			goto out;
After receiving data into the page cache, we need to call flush_dcache_page()
for the architectures that define it.

Fixes: 277e4ab7d530b ("SUNRPC: Simplify TCP receive code by switching...")
Reported-by: Geert Uytterhoeven <geert@linux-m68k.org>
Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Cc: stable@vger.kernel.org # v4.20
---
v2: fix the argument to flush_dcache_page()

 net/sunrpc/xprtsock.c | 23 +++++++++++++++++++++++
 1 file changed, 23 insertions(+)