[v12,24/24] nfs: add FAQ section to Documentation/filesystems/nfs/localio.rst

Message ID: 20240819181750.70570-25-snitzer@kernel.org
State: New
Series: nfs/nfsd: add support for localio

Commit Message

Mike Snitzer Aug. 19, 2024, 6:17 p.m. UTC
From: Trond Myklebust <trond.myklebust@hammerspace.com>

Add a FAQ section to give answers to questions that have been raised
during review of the localio feature.

Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
Co-developed-by: Mike Snitzer <snitzer@kernel.org>
Signed-off-by: Mike Snitzer <snitzer@kernel.org>
---
 Documentation/filesystems/nfs/localio.rst | 77 +++++++++++++++++++++++
 1 file changed, 77 insertions(+)

Comments

Jeff Layton Aug. 21, 2024, 7:03 p.m. UTC | #1
On Mon, 2024-08-19 at 14:17 -0400, Mike Snitzer wrote:
> From: Trond Myklebust <trond.myklebust@hammerspace.com>
> 
> Add a FAQ section to give answers to questions that have been raised
> during review of the localio feature.
> 
> Signed-off-by: Trond Myklebust <trond.myklebust@hammerspace.com>
> Co-developed-by: Mike Snitzer <snitzer@kernel.org>
> Signed-off-by: Mike Snitzer <snitzer@kernel.org>
> [...]

I'd just squash this into patch #19.
Mike Snitzer Aug. 21, 2024, 8:12 p.m. UTC | #2
On Wed, Aug 21, 2024 at 03:03:07PM -0400, Jeff Layton wrote:
> [...]
> 
> I'd just squash this into patch #19.

That'd use the fact Trond is the author.

Does linux have a shortage on commit ids I'm unaware of? ;)
Mike Snitzer Aug. 21, 2024, 8:14 p.m. UTC | #3
On Wed, Aug 21, 2024 at 04:12:49PM -0400, Mike Snitzer wrote:
> On Wed, Aug 21, 2024 at 03:03:07PM -0400, Jeff Layton wrote:
> > [...]
> > 
> > I'd just squash this into patch #19.
> 
> That'd use the fact Trond is the author.

s/use/lose/

> 
> Does linux have a shortage on commit ids I'm unaware of? ;)
> 

Anyway, I'd prefer the FAQ be left split out as a separate commit
given the author is different.
Jeff Layton Aug. 21, 2024, 11:46 p.m. UTC | #4
On Wed, 2024-08-21 at 16:14 -0400, Mike Snitzer wrote:
> On Wed, Aug 21, 2024 at 04:12:49PM -0400, Mike Snitzer wrote:
> > On Wed, Aug 21, 2024 at 03:03:07PM -0400, Jeff Layton wrote:
> > > [...]
> > > 
> > > I'd just squash this into patch #19.
> > 
> > That'd use the fact Trond is the author.
> 
> s/use/lose/
> 
> > 
> > Does linux have a shortage on commit ids I'm unaware of? ;)
> > 
> 
> Anyway, I'd prefer the FAQ be left split out as a separate commit
> given the author is different.

No worries. I don't feel as strongly about this one.

Patch

diff --git a/Documentation/filesystems/nfs/localio.rst b/Documentation/filesystems/nfs/localio.rst
index d8bdab88f1db..acd8f3e5d87a 100644
--- a/Documentation/filesystems/nfs/localio.rst
+++ b/Documentation/filesystems/nfs/localio.rst
@@ -40,6 +40,83 @@ fio for 20 secs with 24 libaio threads, 128k directio reads, qd of 8,
 - Without LOCALIO:
   read: IOPS=12.0k, BW=1495MiB/s (1568MB/s)(29.2GiB/20015msec)
 
+FAQ
+===
+
+1. What are the use cases for LOCALIO?
+
+   a. Workloads where the NFS client and server are on the same host
+      realize improved IO performance. In particular, it is common when
+      running containerised workloads for jobs to find themselves
+      running on the same host as the knfsd server being used for
+      storage.
+
+2. What are the requirements for LOCALIO?
+
+   a. Bypass use of the network RPC protocol as much as possible. This
+      includes bypassing XDR and RPC for open, read, write and commit
+      operations.
+   b. Allow client and server to autonomously discover if they are
+      running local to each other without making any assumptions about
+      the local network topology.
+   c. Support the use of containers by being compatible with relevant
+      namespaces (e.g. network, user, mount).
+   d. Support all versions of NFS. NFSv3 is of particular importance
+      because it has wide enterprise usage and pNFS flexfiles makes use
+      of it for the data path.
+
+3. Why doesn’t LOCALIO just compare IP addresses or hostnames when
+   deciding if the NFS client and server are co-located on the same
+   host?
+
+   Since one of the main use cases is containerised workloads, we cannot
+   assume that IP addresses will be shared between the client and
+   server. This sets up a requirement for a handshake protocol that
+   needs to go over the same connection as the NFS traffic in order to
+   identify that the client and the server really are running on the
+   same host. The handshake uses a secret that is sent over the wire,
+   and can be verified by both parties by comparing with a value stored
+   in shared kernel memory if they are truly co-located.
+
+4. Does LOCALIO improve pNFS flexfiles?
+
+   Yes, LOCALIO complements pNFS flexfiles by allowing it to take
+   advantage of NFS client and server locality. Policy that initiates
+   client IO as close as possible to the server where the data is stored
+   naturally benefits from the data path optimization LOCALIO provides.
+
+5. Why not develop a new pNFS layout to enable LOCALIO?
+
+   A new pNFS layout could be developed, but doing so would put the
+   onus on the server to somehow discover that the client is co-located
+   when deciding to hand out the layout.
+   There is value in a simpler approach (as provided by LOCALIO) that
+   allows the NFS client to negotiate and leverage locality without
+   requiring more elaborate modeling and discovery of such locality in a
+   more centralized manner.
+
+6. Why is having the client perform a server-side file OPEN, without
+   using RPC, beneficial?  Is the benefit pNFS specific?
+
+   Avoiding the use of XDR and RPC for file opens is beneficial to
+   performance regardless of whether pNFS is used. However, adding a
+   requirement to go over the wire to do an open and/or close ends up
+   negating any benefit of avoiding the wire for doing the I/O itself
+   when we’re dealing with small files. There is no benefit to replacing
+   the READ or WRITE with a new open and/or close operation that still
+   needs to go over the wire.
+
+7. Why is LOCALIO only supported with UNIX Authentication (AUTH_UNIX)?
+
+   Strong authentication is usually tied to the connection itself. It
+   works by establishing a context that is cached by the server, and
+   that acts as the key for discovering the authorisation token, which
+   can then be passed to rpc.mountd to complete the authentication
+   process. On the other hand, in the case of AUTH_UNIX, the credential
+   that was passed over the wire is used directly as the key in the
+   upcall to rpc.mountd. This simplifies the authentication process, and
+   so makes AUTH_UNIX easier to support.
+
 RPC
 ===