OASIS Virtual I/O Device (VIRTIO) TC


[PATCH v10 00/13] packed ring layout spec

  • 1.  [PATCH v10 00/13] packed ring layout spec

    Posted 03-09-2018 21:24
    This is a proposal to implement an alternative ring layout. The idea
    is to have a r/w descriptor in a ring structure, replacing the used
    and available ring, index and descriptor buffer. This is more
    efficient and easier for devices to implement than the 1.0 layout.

    Additionally, a new feature flag is proposed that makes devices
    promise to process descriptors in-order. With this feature drivers
    can also be made simpler and more efficient. Discussion and
    performance analysis of this is in Michael Tsirkin's KVM Forum 2016
    and 2017 presentations.

    Fixes: https://github.com/oasis-tcs/virtio-spec/issues/3

    ---

    This revision addresses review comments on v9. Thanks a lot to all
    reviewers of earlier revisions! I plan to start voting on this
    shortly.

    A compiled version can be found under
    https://github.com/oasis-tcs/virtio-docs.git
    See virtio-v1.1-packed-wd10-diff.pdf and virtio-v1.1-packed-wd10.pdf
    for redline and clean versions, respectively. If you are interested
    in changes from v9, these are in virtio-v1.1-packed-w09-to-wd10-diff.pdf
    in the same directory.

    Note: please do not try to edit the pdf and post comments in the
    edited file. Please post comments in a text format, as pdfs are not
    archived with the list.

    TODO: support for actual passthrough devices will likely require more
    new features, such as a requirement for stronger memory barriers.

    Changes since v9:
    - corrected pseudo-code to work correctly without IN_ORDER (since
      that's what the accompanying text says)
    - new bit-field notation
    - updated format for the event suppression structure
    - prefixed packed ring structures with pvirtq_ for consistency and to
      avoid confusion with the split ring structures
    - deferred NOTIFICATION_DATA patches - will post separately; they
      need more review by the s390 editor, and proof of concept code is
      not ready yet (needs host kernel support)
    Note: should this proposal be accepted and approved, one or more
    claims disclosed to the TC admin and listed on the Virtio TC IPR page
    https://github.com/oasis-tcs/virtio-admin/blob/master/IPR.md might
    become Essential Claims.

    Michael S. Tsirkin (13):
      introduction: document bitfield notation
      content: move 1.0 queue format out to a separate section
      content: move ring text out to a separate file
      content: move virtqueue operation description
      content: len -> used length, used ring -> vq
      content: generalize transport ring part naming
      content: generalize rest of text
      split-ring: generalize text
      split-ring: typo: aligment
      packed virtqueues: more efficient virtqueue layout
      content: in-order buffer use
      packed-ring: add in order support
      split-ring: in order feature

     conformance.tex  |   5 +-
     content.tex      | 808 ++++++++----------------------------------------------
     introduction.tex |  41 +++
     packed-ring.tex  | 714 ++++++++++++++++++++++++++++++++++++++++++++++++
     split-ring.tex   | 689 +++++++++++++++++++++++++++++++++++++++++++++++
     5 files changed, 1563 insertions(+), 694 deletions(-)
     create mode 100644 packed-ring.tex
     create mode 100644 split-ring.tex

    --
    MST
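As a rough illustration of the "r/w descriptor in a ring structure" idea, the packed-ring descriptor can be sketched as a single 16-byte structure. This is an editor's sketch following the pvirtq_ naming mentioned in the changelog, not the normative layout in the compiled draft:

```c
#include <stdint.h>

/* Illustrative sketch of a packed-ring descriptor: one read/write
 * structure replaces the split design's descriptor table plus the
 * separate available and used rings.  The normative field layout is
 * defined in packed-ring.tex of the draft. */
struct pvirtq_desc {
    uint64_t addr;   /* buffer guest-physical address */
    uint32_t len;    /* buffer length in bytes */
    uint16_t id;     /* buffer ID, written back by the device */
    uint16_t flags;  /* AVAIL/USED wrap bits, WRITE, NEXT, INDIRECT */
};
```

Because driver and device both read and write the same slots, a single cache line carries both the request and its completion.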


  • 2.  [PATCH v10 02/13] content: move 1.0 queue format out to a separate section

    Posted 03-09-2018 21:24
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    ---
    content.tex | 25 ++++++++++++++++++++++++-
    1 file changed, 24 insertions(+), 1 deletion(-)

    diff --git a/content.tex b/content.tex
    index c7ef7fd..4483a4b 100644
    --- a/content.tex
    +++ b/content.tex
    @@ -230,7 +230,30 @@ result.
    The mechanism for bulk data transport on virtio devices is
    pretentiously called a virtqueue. Each device can have zero or more
    virtqueues\footnote{For example, the simplest network device has one virtqueue for
    -transmit and one for receive.}. Each queue has a 16-bit queue size
    +transmit and one for receive.}.
    +
    +Driver makes requests available to device by adding
    +an available buffer to the queue - i.e. adding a buffer
    +describing the request to a virtqueue, and optionally triggering
    +a driver event - i.e. sending a notification to the device.
    +
    +Device executes the requests and - when complete - adds
    +a used buffer to the queue - i.e. lets the driver
    +know by marking the buffer as used. Device can then trigger
    +a device event - i.e. send an interrupt to the driver.
    +
    +For queue operation detail, see
    +\ref{sec:Basic Facilities of a Virtio Device / Split Virtqueues}~
    +\nameref{sec:Basic Facilities of a Virtio Device / Split Virtqueues}.
    +
    +\section{Split Virtqueues}\label{sec:Basic Facilities of a Virtio Device / Split Virtqueues}
    +The split virtqueue format is the original format used by legacy
    +virtio devices. The split virtqueue format separates the
    +virtqueue into several parts, where each part is write-able by
    +either the driver or the device, but not both. Multiple
    +locations need to be updated when making a buffer available
    +and when marking it as used.
    +
    +
    +Each queue has a 16-bit queue size
    parameter, which sets the number of entries and implies the total
    size of the queue.

    --
    MST
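The available-buffer flow this patch describes can be sketched driver-side as follows. This is a minimal illustrative sketch: the fixed ring[8] stands in for the real queue_size-sized array, the helper name is not from the spec, and the required memory barriers are omitted:

```c
#include <stdint.h>

struct virtq_desc  { uint64_t addr; uint32_t len; uint16_t flags; uint16_t next; };
struct virtq_avail { uint16_t flags; uint16_t idx; uint16_t ring[8]; /* really queue_size entries */ };

/* Driver-side sketch of "adding an available buffer": fill a free
 * descriptor, publish its index in the available ring, then bump the
 * free-running idx so the device sees a new entry. */
static void add_avail(struct virtq_desc *desc, struct virtq_avail *avail,
                      uint16_t queue_size, uint16_t head,
                      uint64_t addr, uint32_t len, uint16_t flags)
{
    desc[head].addr  = addr;
    desc[head].len   = len;
    desc[head].flags = flags;
    avail->ring[avail->idx % queue_size] = head;
    avail->idx++;  /* 16-bit counter; wraps naturally */
}
```

The device-side "used buffer" path mirrors this with the used ring, which is exactly the multiple-location update the new Split Virtqueues section calls out.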


  • 3.  [PATCH v10 13/13] split-ring: in order feature

    Posted 03-09-2018 21:24
    For a split ring, require that drivers use descriptors in order too.
    This allows devices to skip reading the available ring.

    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    ---
    split-ring.tex | 18 ++++++++++++++++++
    1 file changed, 18 insertions(+)

    diff --git a/split-ring.tex b/split-ring.tex
    index 87ecee2..df278fe 100644
    --- a/split-ring.tex
    +++ b/split-ring.tex
    @@ -203,6 +203,10 @@ struct virtq_desc {
    The number of descriptors in the table is defined by the queue size
    for this virtqueue: this is the maximum possible descriptor chain length.

    +If VIRTIO_F_IN_ORDER has been negotiated, driver uses
    +descriptors in ring order: starting from offset 0 in the table,
    +and wrapping around at the end of the table.
    +
    \begin{note}
    The legacy \hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]}
    referred to this structure as vring_desc, and the constants as
    @@ -218,6 +222,12 @@ purposes).
    Drivers MUST NOT add a descriptor chain over than $2^{32}$ bytes long in total;
    this implies that loops in the descriptor chain are forbidden!

    +If VIRTIO_F_IN_ORDER has been negotiated, and when making a
    +descriptor with VRING_DESC_F_NEXT set in \field{flags} at offset
    +$x$ in the table available to the device, driver MUST set
    +\field{next} to $0$ for the last descriptor in the table
    +(where $x = queue\_size - 1$) and to $x + 1$ for the rest of the descriptors.
    +
    \subsubsection{Indirect Descriptors}\label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}

    Some devices benefit by concurrently dispatching a large number
    @@ -247,6 +257,10 @@ chained by \field{next}. An indirect descriptor without a valid \field{next}
    A single indirect descriptor
    table can include both device-readable and device-writable descriptors.

    +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors
    +use sequential indices, in-order: index 0 followed by index 1
    +followed by index 2, etc.
    +
    \drivernormative{\paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    The driver MUST NOT set the VIRTQ_DESC_F_INDIRECT flag unless the
    VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST NOT
    @@ -259,6 +273,10 @@ the device.
    A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and VIRTQ_DESC_F_NEXT
    in \field{flags}.

    +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors
    +MUST appear sequentially, with \field{next} taking the value
    +of 1 for the 1st descriptor, 2 for the 2nd one, etc.
    +
    \devicenormative{\paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    The device MUST ignore the write-only flag (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that refers to an indirect table.

    --
    MST
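The next-field rule in the driver normative section above reduces to a one-line computation; a sketch (the helper name is illustrative, not from the spec):

```c
#include <stdint.h>

/* With VIRTIO_F_IN_ORDER, a chained descriptor at table offset x must
 * carry next == x + 1, wrapping to 0 for the last table slot
 * (x == queue_size - 1), per the rule quoted above. */
static uint16_t in_order_next(uint16_t x, uint16_t queue_size)
{
    return (x == queue_size - 1) ? 0 : (uint16_t)(x + 1);
}
```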




  • 4.  [PATCH v10 12/13] packed-ring: add in order support

    Posted 03-09-2018 21:24
    Support in-order requests for packed rings.
    This allows selective write-out of used descriptors.

    Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    ---
    packed-ring.tex | 24 ++++++++++++++++++++++++
    1 file changed, 24 insertions(+)

    diff --git a/packed-ring.tex b/packed-ring.tex
    index ebdba09..4b3d9d9 100644
    --- a/packed-ring.tex
    +++ b/packed-ring.tex
    @@ -272,6 +272,30 @@ Buffer ID is also reserved and is ignored by the device.
    In Descriptors with VIRTQ_DESC_F_INDIRECT set VIRTQ_DESC_F_WRITE
    is reserved and is ignored by the device.

    +\subsection{In-order use of descriptors}
    +\label{sec:Packed Virtqueues / In-order use of descriptors}
    +
    +Some devices always use descriptors in the same order in which
    +they have been made available. These devices can offer the
    +VIRTIO_F_IN_ORDER feature. If negotiated, this knowledge allows
    +devices to notify the use of a batch of buffers to the driver by
    +only writing out a single used descriptor with the Buffer ID
    +corresponding to the last descriptor in the batch.
    +
    +Device then skips forward in the ring according to the size of
    +the batch. Driver needs to look up the used Buffer ID and
    +calculate the batch size to be able to advance to where the next
    +used descriptor will be written by the device.
    +
    +This will result in the used descriptor overwriting the first
    +available descriptor in the batch, the used descriptor for the
    +next batch overwriting the first available descriptor in the next
    +batch, etc.
    +
    +The skipped buffers (for which no used descriptor was written)
    +are assumed to have been used (read or written) by the
    +device completely.
    +
    \subsection{Multi-buffer requests}
    \label{sec:Packed Virtqueues / Multi-buffer requests}
    Some devices combine multiple buffers as part of processing of a

    --
    MST
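The driver-side bookkeeping the patch describes ("look up the used Buffer ID and calculate the batch size") might look like the following sketch; the struct and helper names are illustrative assumptions, not from the spec:

```c
#include <stdint.h>

/* Driver's in-order record of buffers it made available: each entry
 * remembers a Buffer ID and how many ring slots that buffer occupied. */
struct pending { uint16_t id; uint16_t ndesc; };

/* When the device writes one used descriptor carrying the Buffer ID of
 * the last buffer in a batch, walk the pending list up to that ID,
 * summing slot counts to find how far the ring position advances.
 * Returns the number of buffers now complete. */
static uint16_t consume_batch(const struct pending *pend, uint16_t npend,
                              uint16_t last_used_id, uint16_t *advance)
{
    uint16_t used = 0, skip = 0;
    for (uint16_t i = 0; i < npend; i++) {
        skip += pend[i].ndesc;  /* slots the device skipped over */
        used++;
        if (pend[i].id == last_used_id)
            break;
    }
    *advance = skip;  /* where the next used descriptor will appear */
    return used;
}
```

The `advance` result is exactly the "skip forward in the ring" distance the text assigns to the device; driver and device stay in sync because both derive it from the same in-order layout.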


  • 6.  RE: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 03-28-2018 08:24
    Hi Michael et al

    > Behalf Of Michael S. Tsirkin
    > Sent: 9. marts 2018 22:24
    >
    > For a split ring, require that drivers use descriptors in order too.
    > This allows devices to skip reading the available ring.
    >
    > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > ---
    [snip]
    >
    > +If VIRTIO_F_IN_ORDER has been negotiated, and when making a descriptor
    > +with VRING_DESC_F_NEXT set in \field{flags} at offset $x$ in the table
    > +available to the device, driver MUST set \field{next} to $0$ for the
    > +last descriptor in the table (where $x = queue\_size - 1$) and to $x +
    > +1$ for the rest of the descriptors.
    > +
    > \subsubsection{Indirect Descriptors}\label{sec:Basic Facilities of a Virtio
    > Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    >
    > Some devices benefit by concurrently dispatching a large number @@ -247,6
    > +257,10 @@ chained by \field{next}. An indirect descriptor without a valid
    > \field{next} A single indirect descriptor table can include both device-
    > readable and device-writable descriptors.
    >
    > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors use
    > +sequential indices, in-order: index 0 followed by index 1 followed by
    > +index 2, etc.
    > +
    > \drivernormative{\paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio
    > Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    > The driver MUST NOT set the VIRTQ_DESC_F_INDIRECT flag unless the
    > VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST NOT
    > @@ -259,6 +273,10 @@ the device.
    > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > VIRTQ_DESC_F_NEXT in \field{flags}.
    >
    > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors MUST
    > +appear sequentially, with \field{next} taking the value of 1 for the
    > +1st descriptor, 2 for the 2nd one, etc.
    > +
    > \devicenormative{\paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio
    > Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    > The device MUST ignore the write-only flag
    > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that refers to an
    > indirect table.
    >

    The use of VIRTIO_F_IN_ORDER for split-ring can eliminate some accesses to the virtq_avail.ring and virtq_used.ring. However I'm wondering if the proposed descriptor ordering for multi-element buffers couldn't be tweaked to be more HW friendly. Currently even with the VIRTIO_F_IN_ORDER negotiated, there is no way of knowing if, or how many chained descriptors follow the descriptor pointed to by the virtq_avail.idx. A chain has to be inspected one descriptor at a time until virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward for HW offload, where you want to DMA all available descriptors in one shot, instead of iterating based on the contents of received DMA data. As currently defined, HW would have to find a compromise between likely chain length, and cost of additional DMA transfers. This leads to a performance penalty for all chained descriptors, and in case the length assumption is wrong the impact can be significant.

    Now, what if the VIRTIO_F_IN_ORDER instead required chained buffers to place the last element at the lowest index, and the head-element (to which virtq_avail.idx points) at the highest index? Then all the chained element descriptors would be included in a DMA of the descriptor table from the previous virtq_avail.idx+1 to the current virtq_avail.idx. The "backward" order of the chained descriptors shouldn't pose an issue as such (at least not in HW).
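For reference, the one-descriptor-at-a-time chain walk described above can be sketched as follows (illustrative names; with IN_ORDER the chain occupies consecutive table slots, yet its length is still only discovered by reading each descriptor's flags):

```c
#include <stdint.h>

#define VIRTQ_DESC_F_NEXT 1

struct virtq_desc { uint64_t addr; uint32_t len; uint16_t flags; uint16_t next; };

/* Walk a descriptor chain starting at slot `head`, following next
 * until VIRTQ_DESC_F_NEXT is clear.  In HW this is the serialized,
 * data-dependent DMA pattern the message objects to. */
static uint16_t chain_length(const struct virtq_desc *table, uint16_t head)
{
    uint16_t n = 1, i = head;
    while (table[i].flags & VIRTQ_DESC_F_NEXT) {
        i = table[i].next;  /* spec forbids loops, so this terminates */
        n++;
    }
    return n;
}
```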

    Best Regards,

    -Lars



  • 7.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 03-28-2018 14:39
    On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > Hi Michael et al
    >
    > > Behalf Of Michael S. Tsirkin
    > > Sent: 9. marts 2018 22:24
    > >
    > > For a split ring, require that drivers use descriptors in order too.
    > > This allows devices to skip reading the available ring.
    > >
    > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > ---
    > [snip]
    > >
    > > +If VIRTIO_F_IN_ORDER has been negotiated, and when making a descriptor
    > > +with VRING_DESC_F_NEXT set in \field{flags} at offset $x$ in the table
    > > +available to the device, driver MUST set \field{next} to $0$ for the
    > > +last descriptor in the table (where $x = queue\_size - 1$) and to $x +
    > > +1$ for the rest of the descriptors.
    > > +
    > > \subsubsection{Indirect Descriptors}\label{sec:Basic Facilities of a Virtio
    > > Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    > >
    > > Some devices benefit by concurrently dispatching a large number @@ -247,6
    > > +257,10 @@ chained by \field{next}. An indirect descriptor without a valid
    > > \field{next} A single indirect descriptor table can include both device-
    > > readable and device-writable descriptors.
    > >
    > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors use
    > > +sequential indices, in-order: index 0 followed by index 1 followed by
    > > +index 2, etc.
    > > +
    > > \drivernormative{\paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio
    > > Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    > > The driver MUST NOT set the VIRTQ_DESC_F_INDIRECT flag unless the
    > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST NOT
    > > @@ -259,6 +273,10 @@ the device.
    > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > >
    > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors MUST
    > > +appear sequentially, with \field{next} taking the value of 1 for the
    > > +1st descriptor, 2 for the 2nd one, etc.
    > > +
    > > \devicenormative{\paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio
    > > Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors}
    > > The device MUST ignore the write-only flag
    > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that refers to an
    > > indirect table.
    > >
    >
    > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate some accesses to the virtq_avail.ring and virtq_used.ring. However I'm wondering if the proposed descriptor ordering for multi-element buffers couldn't be tweaked to be more HW friendly. Currently even with the VIRTIO_F_IN_ORDER negotiated, there is no way of knowing if, or how many chained descriptors follow the descriptor pointed to by the virtq_avail.idx. A chain has to be inspected one descriptor at a time until virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward for HW offload, where you want to DMA all available descriptors in one shot, instead of iterating based on the contents of received DMA data. As currently defined, HW would have to find a compromise between likely chain length, and cost of additional DMA transfers. This leads to a performance penalty for all chained descriptors, and in case the length assumption is wrong the impact can be significant.
    >
    > Now, what if the VIRTIO_F_IN_ORDER instead required chained buffers to place the last element at the lowest index, and the head-element (to which virtq_avail.idx points) at the highest index? Then all the chained element descriptors would be included in a DMA of the descriptor table from the previous virtq_avail.idx+1 to the current virtq_avail.idx. The "backward" order of the chained descriptors shouldn't pose an issue as such (at least not in HW).
    >
    > Best Regards,
    >
    > -Lars

    virtq_avail.idx is still an index into the available ring.

    I don't really see how you can use virtq_avail.idx to guess the
    placement of a descriptor.

    I suspect the best way to optimize this is to include the
    relevant data with the VIRTIO_F_NOTIFICATION_DATA feature.


    > ---------------------------------------------------------------------
    > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
    > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org



  • 9.  RE: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 03-28-2018 16:12
    Missed replying to the lists. Sorry.

    > From: Michael S. Tsirkin <mst@redhat.com>
    > Sent: 28. marts 2018 16:39
    >
    > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > > Hi Michael et al
    > >
    > > > Behalf Of Michael S. Tsirkin
    > > > Sent: 9. marts 2018 22:24
    > > >
    > > > For a split ring, require that drivers use descriptors in order too.
    > > > This allows devices to skip reading the available ring.
    > > >
    > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > ---
    > > [snip]
    > > >
    > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when making a
    > > > +descriptor with VRING_DESC_F_NEXT set in \field{flags} at offset
    > > > +$x$ in the table available to the device, driver MUST set
    > > > +\field{next} to $0$ for the last descriptor in the table (where $x
    > > > += queue\_size - 1$) and to $x + 1$ for the rest of the descriptors.
    > > > +
    > > > \subsubsection{Indirect Descriptors}\label{sec:Basic Facilities of
    > > > a Virtio Device / Virtqueues / The Virtqueue Descriptor Table /
    > > > Indirect Descriptors}
    > > >
    > > > Some devices benefit by concurrently dispatching a large number @@
    > > > -247,6
    > > > +257,10 @@ chained by \field{next}. An indirect descriptor without a
    > > > +valid
    > > > \field{next} A single indirect descriptor table can include both
    > > > device- readable and device-writable descriptors.
    > > >
    > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors use
    > > > +sequential indices, in-order: index 0 followed by index 1 followed
    > > > +by index 2, etc.
    > > > +
    > > > \drivernormative{\paragraph}{Indirect Descriptors}{Basic Facilities
    > > > of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table /
    > > > Indirect Descriptors} The driver MUST NOT set the
    > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST
    > NOT
    > > > @@ -259,6 +273,10 @@ the device.
    > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > > >
    > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors MUST
    > > > +appear sequentially, with \field{next} taking the value of 1 for
    > > > +the 1st descriptor, 2 for the 2nd one, etc.
    > > > +
    > > > \devicenormative{\paragraph}{Indirect Descriptors}{Basic Facilities
    > > > of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table /
    > > > Indirect Descriptors} The device MUST ignore the write-only flag
    > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that refers to
    > > > an indirect table.
    > > >
    > >
    > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate some accesses
    > to the virtq_avail.ring and virtq_used.ring. However I'm wondering if the
    > proposed descriptor ordering for multi-element buffers couldn't be tweaked
    > to be more HW friendly. Currently even with the VIRTIO_F_IN_ORDER
    > negotiated, there is no way of knowing if, or how many chained descriptors
    > follow the descriptor pointed to by the virtq_avail.idx. A chain has to be
    > inspected one descriptor at a time until
    > virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward for HW offload,
    > where you want to DMA all available descriptors in one shot, instead of
    > iterating based on the contents of received DMA data. As currently defined,
    > HW would have to find a compromise between likely chain length, and cost
    > of additional DMA transfers. This leads to a performance penalty for all
    > chained descriptors, and in case the length assumption is wrong the impact
    > can be significant.
    > >
    > > Now, what if the VIRTIO_F_IN_ORDER instead required chained buffers to
    > place the last element at the lowest index, and the head-element (to which
    > virtq_avail.idx points) at the highest index? Then all the chained element
    > descriptors would be included in a DMA of the descriptor table from the
    > previous virtq_avail.idx+1 to the current virtq_avail.idx. The "backward"
    > order of the chained descriptors shouldn't pose an issue as such (at least not
    > in HW).
    > >
    > > Best Regards,
    > >
    > > -Lars
    >
    > virtq_avail.idx is still an index into the available ring.
    >
    > I don't really see how you can use virtq_avail.idx to guess the placement of a
    > descriptor.
    >
    > I suspect the best way to optimize this is to include the relevant data with the
    > VIRTIO_F_NOTIFICATION_DATA feature.
    >

    Argh, naturally.

    For HW offload I'd want to avoid notifications for buffer transfer from host to device, and hoped to just poll virtq_avail.idx directly.

    A split virtqueue with VIRTIO_F_IN_ORDER will maintain virtq_avail.idx==virtq_avail.ring[idx] as long as there is no chaining. It would be nice to allow negotiating away chaining, i.e. add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees not to use chaining, and as a result (of IN_ORDER and NO_CHAIN) both device and driver can ignore the virtq_avail.ring[].
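Under the combination proposed here (IN_ORDER plus the hypothetical NO_CHAIN flag, which is a suggestion in this message, not a spec feature), the head slot follows directly from the polled index; a sketch of that arithmetic:

```c
#include <stdint.h>

/* With in-order use and no chaining, the i-th buffer made available
 * occupies descriptor table slot i % queue_size.  A device polling
 * virtq_avail.idx can therefore derive the most recent head slot
 * without reading virtq_avail.ring[].  Unsigned arithmetic handles
 * the 16-bit wrap of the free-running index. */
static uint16_t head_slot(uint16_t avail_idx_seen, uint16_t queue_size)
{
    return (uint16_t)(avail_idx_seen - 1) % queue_size;
}
```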




  • 10.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 03-29-2018 14:42
    On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote:
    > Missed replying to the lists. Sorry.
    >
    > > From: Michael S. Tsirkin <mst@redhat.com>
    > > Sent: 28. marts 2018 16:39
    > >
    > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > > > Hi Michael et al
    > > >
    > > > > Behalf Of Michael S. Tsirkin
    > > > > Sent: 9. marts 2018 22:24
    > > > >
    > > > > For a split ring, require that drivers use descriptors in order too.
    > > > > This allows devices to skip reading the available ring.
    > > > >
    > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > > ---
    > > > [snip]
    > > > >
    > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when making a
    > > > > +descriptor with VRING_DESC_F_NEXT set in \field{flags} at offset
    > > > > +$x$ in the table available to the device, driver MUST set
    > > > > +\field{next} to $0$ for the last descriptor in the table (where $x
    > > > > += queue\_size - 1$) and to $x + 1$ for the rest of the descriptors.
    > > > > +
    > > > > \subsubsection{Indirect Descriptors}\label{sec:Basic Facilities of
    > > > > a Virtio Device / Virtqueues / The Virtqueue Descriptor Table /
    > > > > Indirect Descriptors}
    > > > >
    > > > > Some devices benefit by concurrently dispatching a large number @@
    > > > > -247,6
    > > > > +257,10 @@ chained by \field{next}. An indirect descriptor without a
    > > > > +valid
    > > > > \field{next} A single indirect descriptor table can include both
    > > > > device- readable and device-writable descriptors.
    > > > >
    > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors use
    > > > > +sequential indices, in-order: index 0 followed by index 1 followed
    > > > > +by index 2, etc.
    > > > > +
    > > > > \drivernormative{\paragraph}{Indirect Descriptors}{Basic Facilities
    > > > > of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table /
    > > > > Indirect Descriptors} The driver MUST NOT set the
    > > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST
    > > NOT
    > > > > @@ -259,6 +273,10 @@ the device.
    > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > > > >
    > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect descriptors MUST
    > > > > +appear sequentially, with \field{next} taking the value of 1 for
    > > > > +the 1st descriptor, 2 for the 2nd one, etc.
    > > > > +
    > > > > \devicenormative{\paragraph}{Indirect Descriptors}{Basic Facilities
    > > > > of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table /
    > > > > Indirect Descriptors} The device MUST ignore the write-only flag
    > > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that refers to
    > > > > an indirect table.
    > > > >
    > > >
    > > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate some accesses
    > > to the virtq_avail.ring and virtq_used.ring. However I'm wondering if the
    > > proposed descriptor ordering for multi-element buffers couldn't be tweaked
    > > to be more HW friendly. Currently even with the VIRTIO_F_IN_ORDER
    > > negotiated, there is no way of knowing if, or how many chained descriptors
    > > follow the descriptor pointed to by the virtq_avail.idx. A chain has to be
    > > inspected one descriptor at a time until
    > > virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward for HW offload,
    > > where you want to DMA all available descriptors in one shot, instead of
    > > iterating based on the contents of received DMA data. As currently defined,
    > > HW would have to find a compromise between likely chain length, and cost
    > > of additional DMA transfers. This leads to a performance penalty for all
    > > chained descriptors, and in case the length assumption is wrong the impact
    > > can be significant.
    > > >
    > > > Now, what if the VIRTIO_F_IN_ORDER instead required chained buffers to
    > > place the last element at the lowest index, and the head-element (to which
    > > virtq_avail.idx points) at the highest index? Then all the chained element
    > > descriptors would be included in a DMA of the descriptor table from the
    > > previous virtq_avail.idx+1 to the current virtq_avail.idx. The "backward"
    > > order of the chained descriptors shouldn't pose an issue as such (at least not
    > > in HW).
    > > >
    > > > Best Regards,
    > > >
    > > > -Lars
    > >
    > > virtq_avail.idx is still an index into the available ring.
    > >
    > > I don't really see how you can use virtq_avail.idx to guess the placement of a
    > > descriptor.
    > >
    > > I suspect the best way to optimize this is to include the relevant data with the
    > > VIRTIO_F_NOTIFICATION_DATA feature.
    > >
    >
    > Argh, naturally.

    BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies the index right now.

    Do you have an opinion on whether we should change that for in-order?

    > For HW offload I'd want to avoid notifications for buffer transfer from host to device, and hoped to just poll virtq_avail.idx directly.
    >
    > A split virtqueue with VIRTIO_F_IN_ORDER will maintain virtq_avail.idx==virtq_avail.ring[idx] as long as there is no chaining. It would be nice to allow negotiating away chaining, i.e. add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees not to use chaining, and as a result (of IN_ORDER and NO_CHAIN) both device and driver can ignore the virtq_avail.ring[].

    My point was that device can just assume no chains, and then fall back
    on doing extra reads upon encountering a chain.
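    A device-side sketch of that fallback, in C (the struct layout follows the 1.0 split ring; the function and parameter names are illustrative, not from the spec):

```c
#include <stdint.h>

#define VIRTQ_DESC_F_NEXT 1

struct virtq_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

/* Walk the descriptor table from the first unprocessed slot and consume
 * `nbuffers` complete buffers (nbuffers = avail_idx - last_avail_idx).
 * A descriptor without VIRTQ_DESC_F_NEXT ends one buffer; chains simply
 * cost extra descriptor reads.  The available ring is never read.
 * Returns the first slot past the consumed buffers. */
static uint16_t consume_in_order(const struct virtq_desc *table,
                                 uint16_t qsize, uint16_t slot,
                                 uint16_t nbuffers)
{
    while (nbuffers > 0) {
        if (!(table[slot % qsize].flags & VIRTQ_DESC_F_NEXT))
            nbuffers--; /* reached the tail of one buffer */
        slot++;         /* free-running; wraps modulo qsize on access */
    }
    return slot;
}
```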







  • 12.  RE: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 03-29-2018 18:23




  • 13.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 03-29-2018 19:13
    On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote:
    >
    >
    > >



  • 15.  RE: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-03-2018 07:20
    > From: virtio-dev@lists.oasis-open.org <virtio-dev@lists.oasis-open.org> On
    > Behalf Of Michael S. Tsirkin
    > Sent: 29. marts 2018 21:13
    >
    > On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote:
    > >
    > >
    > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > Sent: 29. marts 2018 16:42
    > > >
    > > > On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote:
    > > > > Missed replying to the lists. Sorry.
    > > > >
    > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > Sent: 28. marts 2018 16:39
    > > > > >
    > > > > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > > > > > > Hi Michael et al
    > > > > > >
    > > > > > > > Behalf Of Michael S. Tsirkin
    > > > > > > > Sent: 9. marts 2018 22:24
    > > > > > > >
    > > > > > > > For a split ring, require that drivers use descriptors in order too.
    > > > > > > > This allows devices to skip reading the available ring.
    > > > > > > >
    > > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > > > > > ---
    > > > > > > [snip]
    > > > > > > >
    > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when making a
    > > > > > > > +descriptor with VRING_DESC_F_NEXT set in \field{flags} at
    > > > > > > > +offset $x$ in the table available to the device, driver
    > > > > > > > +MUST set \field{next} to $0$ for the last descriptor in the
    > > > > > > > +table (where $x = queue\_size - 1$) and to $x + 1$ for the
    > > > > > > > +rest of the
    > > > descriptors.
    > > > > > > > +
    > > > > > > > \subsubsection{Indirect Descriptors}\label{sec:Basic
    > > > > > > > Facilities of a Virtio Device / Virtqueues / The Virtqueue
    > > > > > > > Descriptor Table / Indirect Descriptors}
    > > > > > > >
    > > > > > > > Some devices benefit by concurrently dispatching a large
    > > > > > > > number @@
    > > > > > > > -247,6
    > > > > > > > +257,10 @@ chained by \field{next}. An indirect descriptor
    > > > > > > > +without a valid
    > > > > > > > \field{next} A single indirect descriptor table can
    > > > > > > > include both
    > > > > > > > device- readable and device-writable descriptors.
    > > > > > > >
    > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > +descriptors use sequential indices, in-order: index 0
    > > > > > > > +followed by index 1 followed by index 2, etc.
    > > > > > > > +
    > > > > > > > \drivernormative{\paragraph}{Indirect Descriptors}{Basic
    > > > > > > > Facilities of a Virtio Device / Virtqueues / The Virtqueue
    > > > > > > > Descriptor Table / Indirect Descriptors} The driver MUST NOT
    > > > > > > > set the
    > > > > > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver
    > MUST
    > > > > > NOT
    > > > > > > > @@ -259,6 +273,10 @@ the device.
    > > > > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > > > > > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > > > > > > >
    > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > +descriptors MUST appear sequentially, with \field{next}
    > > > > > > > +taking the value of
    > > > > > > > +1 for the 1st descriptor, 2 for the 2nd one, etc.
    > > > > > > > +
    > > > > > > > \devicenormative{\paragraph}{Indirect Descriptors}{Basic
    > > > > > > > Facilities of a Virtio Device / Virtqueues / The Virtqueue
    > > > > > > > Descriptor Table / Indirect Descriptors} The device MUST
    > > > > > > > ignore the write-only flag
    > > > > > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that
    > > > > > > > refers to an indirect table.
    > > > > > > >
    > > > > > >
    > > > > > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate some
    > > > > > > accesses
    > > > > > to the virtq_avail.ring and virtq_used.ring. However I'm
    > > > > > wondering if the proposed descriptor ordering for multi-element
    > > > > > buffers couldn't be tweaked to be more HW friendly. Currently
    > > > > > even with the VIRTIO_F_IN_ORDER negotiated, there is no way of
    > > > > > knowing if, or how many chained descriptors follow the
    > > > > > descriptor pointed to by the virtq_avail.idx. A chain has to be
    > > > > > inspected one descriptor at a time until
    > > > > > virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward for HW
    > > > > > offload, where you want to DMA all available descriptors in one
    > > > > > shot, instead of iterating based on the contents of received DMA
    > > > > > data. As currently defined, HW would have to find a compromise
    > between likely chain length, and cost of additional DMA transfers.
    > > > > > This leads to a performance penalty for all chained descriptors,
    > > > > > and in case the length assumption is wrong the impact can be
    > significant.
    > > > > > >
    > > > > > > Now, what if the VIRTIO_F_IN_ORDER instead required chained
    > > > > > > buffers to
    > > > > > place the last element at the lowest index, and the head-element
    > > > > > (to which virtq_avail.idx points) at the highest index? Then all
    > > > > > the chained element descriptors would be included in a DMA of
    > > > > > the descriptor table from the previous virtq_avail.idx+1 to the
    > > > > > current
    > > > virtq_avail.idx. The "backward"
    > > > > > order of the chained descriptors shouldn't pose an issue as such
    > > > > > (at least not in HW).
    > > > > > >
    > > > > > > Best Regards,
    > > > > > >
    > > > > > > -Lars
    > > > > >
    > > > > > virtq_avail.idx is still an index into the available ring.
    > > > > >
    > > > > > I don't really see how you can use virtq_avail.idx to guess the
    > > > > > placement of a descriptor.
    > > > > >
    > > > > > I suspect the best way to optimize this is to include the
    > > > > > relevant data with the VIRTIO_F_NOTIFICATION_DATA feature.
    > > > > >
    > > > >
    > > > > Argh, naturally.
    > > >
    > > > BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies the
    > > > index right now.
    > > >
    > > > Do you have an opinion on whether we should change that for in-order?
    > > >
    > >
    > > Maybe I should think more about this; however, adding the last element
    > > descriptor index would be useful to accelerate interfaces that frequently
    > > use chaining (from a HW DMA perspective at least).
    > >
    > > > > For HW offload I'd want to avoid notifications for buffer transfer
    > > > > from host
    > > > to device, and hoped to just poll virtq_avail.idx directly.
    > > > >
    > > > > A split virtqueue with VIRTIO_F_IN_ORDER will maintain
    > > > virtq_avail.idx==virtq_avail.ring[idx] as long as there is no
    > > > chaining. It would be nice to allow negotiating away chaining, i.e
    > > > add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees not to use
    > > > chaining, and as a result (of IN_ORDER and NO_CHAIN) both device and
    > > > driver can ignore the virtq_avail.ring[].
    > > >
    > > > My point was that device can just assume no chains, and then fall
    > > > back on doing extra reads upon encountering a chain.
    > > >
    > >
    > > Yes, you are correct that the HW can speculatively use virtq_avail.idx as the
    > > direct index to the descriptor table, and if it encounters a chain, revert to
    > > using the virtq_avail.ring[] in the traditional way, and this would work
    > > without the feature-bit.
    >
    > Sorry that was not my idea.
    >
    > Device should not need to read the ring at all.
    > It reads the descriptor table and counts the descriptors without the next bit.
    > Once the count reaches the available index, it stops.
    >

    Agreed, that would work as well, with the benefit of keeping the ring out of
    the loop.

    >
    > > However the driver would not be able to optimize away the writing of
    > > the virtq_avail.ring[] (=cache miss)
    >
    >
    > BTW writing is a separate question (there is no provision in the spec to skip
    > writes) but device does not have to read the ring.
    >

    Yes, I understand the spec currently does not allow writes to be skipped, but
    I'm wondering if that ought to be reconsidered for optimization features such
    as IN_ORDER and NO_CHAIN? By opting for such features, both driver and
    device acknowledge their willingness to accept reduced flexibility for
    improved performance. Why not then make sure they get the biggest bang for
    their buck? I would expect up to 20% improvement over PCIe (virtio-net,
    single 64B packet), if the device does not have to write to virtq_used.ring[] on
    transmit, and bandwidth over PCIe is a very precious resource in e.g. virtual
    switch offload with east-west acceleration (for a discussion see Intel's white
    paper 335625-001).

    > Without device accesses the ring will not be invalidated in cache, so
    > hopefully no misses.
    >
    > > unless a NO_CHAIN feature has
    > > been negotiated.
    > > The IN_ORDER by itself has already eliminated the need to maintain the
    > > TX virtq_used.ring[], since the buffer order is always known by the
    > > driver.
    > > With a NO_CHAIN feature-bit both RX and TX virtq_avail.ring[] related
    > > cache-misses could be eliminated. I.e.
    > > looping a packet over a split virtqueue would just experience 7 driver
    > > cache misses, down from 10 in Virtio v1.0. Multi-element buffers would
    > > still be possible provided INDIRECT is negotiated.
    >
    >
    > NO_CHAIN might be a valid optimization, it is just unfortunately somewhat
    > narrow in that devices that need to mix write and read descriptors in the
    > same ring (e.g. storage) can not use this feature.
    >

    Yes, if there were a way of making indirect buffers support it, that would be
    ideal. However, I don't see how that can be done without inline headers in
    elements to hold their written length.

    At the same time, storage would not be hurt by it even if it is unable to
    benefit from this particular optimization, and as long as there is a substantial
    use case/space that benefits from an optimization, it ought to be considered.
    I believe virtual switching offload with virtio-net devices over PCIe is such a
    key use-case.





  • 16.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-03-2018 11:48
    On Tue, Apr 03, 2018 at 07:19:47AM +0000, Lars Ganrot wrote:
    > > From: virtio-dev@lists.oasis-open.org <virtio-dev@lists.oasis-open.org> On
    > > Behalf Of Michael S. Tsirkin
    > > Sent: 29. marts 2018 21:13
    > >
    > > On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote:
    > > >
    > > >
    > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > Sent: 29. marts 2018 16:42
    > > > >
    > > > > On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote:
    > > > > > Missed replying to the lists. Sorry.
    > > > > >
    > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > Sent: 28. marts 2018 16:39
    > > > > > >
    > > > > > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > > > > > > > Hi Michael et al
    > > > > > > >
    > > > > > > > > Behalf Of Michael S. Tsirkin
    > > > > > > > > Sent: 9. marts 2018 22:24
    > > > > > > > >
    > > > > > > > > For a split ring, require that drivers use descriptors in order too.
    > > > > > > > > This allows devices to skip reading the available ring.
    > > > > > > > >
    > > > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > > > > > > ---
    > > > > > > > [snip]
    > > > > > > > >
    > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when making a
    > > > > > > > > +descriptor with VRING_DESC_F_NEXT set in \field{flags} at
    > > > > > > > > +offset $x$ in the table available to the device, driver
    > > > > > > > > +MUST set \field{next} to $0$ for the last descriptor in the
    > > > > > > > > +table (where $x = queue\_size - 1$) and to $x + 1$ for the
    > > > > > > > > +rest of the
    > > > > descriptors.
    > > > > > > > > +
    > > > > > > > > \subsubsection{Indirect Descriptors}\label{sec:Basic
    > > > > > > > > Facilities of a Virtio Device / Virtqueues / The Virtqueue
    > > > > > > > > Descriptor Table / Indirect Descriptors}
    > > > > > > > >
    > > > > > > > > Some devices benefit by concurrently dispatching a large
    > > > > > > > > number @@
    > > > > > > > > -247,6
    > > > > > > > > +257,10 @@ chained by \field{next}. An indirect descriptor
    > > > > > > > > +without a valid
    > > > > > > > > \field{next} A single indirect descriptor table can
    > > > > > > > > include both
    > > > > > > > > device- readable and device-writable descriptors.
    > > > > > > > >
    > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > +descriptors use sequential indices, in-order: index 0
    > > > > > > > > +followed by index 1 followed by index 2, etc.
    > > > > > > > > +
    > > > > > > > > \drivernormative{\paragraph}{Indirect Descriptors}{Basic
    > > > > > > > > Facilities of a Virtio Device / Virtqueues / The Virtqueue
    > > > > > > > > Descriptor Table / Indirect Descriptors} The driver MUST NOT
    > > > > > > > > set the
    > > > > > > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > > > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver
    > > MUST
    > > > > > > NOT
    > > > > > > > > @@ -259,6 +273,10 @@ the device.
    > > > > > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > > > > > > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > > > > > > > >
    > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > +descriptors MUST appear sequentially, with \field{next}
    > > > > > > > > +taking the value of
    > > > > > > > > +1 for the 1st descriptor, 2 for the 2nd one, etc.
    > > > > > > > > +
    > > > > > > > > \devicenormative{\paragraph}{Indirect Descriptors}{Basic
    > > > > > > > > Facilities of a Virtio Device / Virtqueues / The Virtqueue
    > > > > > > > > Descriptor Table / Indirect Descriptors} The device MUST
    > > > > > > > > ignore the write-only flag
    > > > > > > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor that
    > > > > > > > > refers to an indirect table.
    > > > > > > > >
    > > > > > > >
    > > > > > > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate some
    > > > > > > > accesses
    > > > > > > to the virtq_avail.ring and virtq_used.ring. However I'm
    > > > > > > wondering if the proposed descriptor ordering for multi-element
    > > > > > > buffers couldn't be tweaked to be more HW friendly. Currently
    > > > > > > even with the VIRTIO_F_IN_ORDER negotiated, there is no way of
    > > > > > > knowing if, or how many chained descriptors follow the
    > > > > > > descriptor pointed to by the virtq_avail.idx. A chain has to be
    > > > > > > inspected one descriptor at a time until
    > > > > > > virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward for HW
    > > > > > > offload, where you want to DMA all available descriptors in one
    > > > > > > shot, instead of iterating based on the contents of received DMA
    > > > > > > data. As currently defined, HW would have to find a compromise
    > > between likely chain length, and cost of additional DMA transfers.
    > > > > > > This leads to a performance penalty for all chained descriptors,
    > > > > > > and in case the length assumption is wrong the impact can be
    > > significant.
    > > > > > > >
    > > > > > > > Now, what if the VIRTIO_F_IN_ORDER instead required chained
    > > > > > > > buffers to
    > > > > > > place the last element at the lowest index, and the head-element
    > > > > > > (to which virtq_avail.idx points) at the highest index? Then all
    > > > > > > the chained element descriptors would be included in a DMA of
    > > > > > > the descriptor table from the previous virtq_avail.idx+1 to the
    > > > > > > current
    > > > > virtq_avail.idx. The "backward"
    > > > > > > order of the chained descriptors shouldn't pose an issue as such
    > > > > > > (at least not in HW).
    > > > > > > >
    > > > > > > > Best Regards,
    > > > > > > >
    > > > > > > > -Lars
    > > > > > >
    > > > > > > virtq_avail.idx is still an index into the available ring.
    > > > > > >
    > > > > > > I don't really see how you can use virtq_avail.idx to guess the
    > > > > > > placement of a descriptor.
    > > > > > >
    > > > > > > I suspect the best way to optimize this is to include the
    > > > > > > relevant data with the VIRTIO_F_NOTIFICATION_DATA feature.
    > > > > > >
    > > > > >
    > > > > > Argh, naturally.
    > > > >
    > > > > BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies the
    > > > > index right now.
    > > > >
    > > > > Do you have an opinion on whether we should change that for in-order?
    > > > >
    > > >
    > > > Maybe I should think more about this, however adding the last element
    > > descriptor index, would be useful to accelerate interfaces that frequently
    > > use chaining (from a HW DMA perspective at least).
    > > >
    > > > > > For HW offload I'd want to avoid notifications for buffer transfer
    > > > > > from host
    > > > > to device, and hoped to just poll virtq_avail.idx directly.
    > > > > >
    > > > > > A split virtqueue with VIRTIO_F_IN_ORDER will maintain
    > > > > virtq_avail.idx==virtq_avail.ring[idx] as long as there is no
    > > > > chaining. It would be nice to allow negotiating away chaining, i.e.
    > > > > add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees not to use
    > > > > chaining, and as a result (of IN_ORDER and NO_CHAIN) both device and
    > > > > driver can ignore the virtq_avail.ring[].
    > > > >
    > > > > My point was that device can just assume no chains, and then fall
    > > > > back on doing extra reads upon encountering a chain.
    > > > >
    > > >
    > > > Yes, you are correct that the HW can speculatively use virtq_avail.idx as the
    > > > direct index to the descriptor table, and if it encounters a chain, revert to
    > > > using the virtq_avail.ring[] in the traditional way, and this would work
    > > > without the feature-bit.
    > >
    > > Sorry that was not my idea.
    > >
    > > Device should not need to read the ring at all.
    > > It reads the descriptor table and counts the descriptors without the next bit.
    > > Once the count reaches the available index, it stops.
    > >
    >
    > Agreed, that would work as well, with the benefit of keeping the ring out of
    > the loop.
    >
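    The scan described above can be sketched roughly as follows (device side; the struct layout follows the split-ring spec, but the helper function is illustrative only, not from the spec or any implementation):

```c
/* Rough sketch of the scheme above: under VIRTIO_F_IN_ORDER the device
 * never reads virtq_avail.ring[]. It walks the descriptor table
 * sequentially and counts descriptors WITHOUT the VIRTQ_DESC_F_NEXT bit
 * (i.e. chain tails) until the count matches the number of buffers the
 * available index says were added. The helper name is hypothetical. */
#include <stdint.h>

#define VIRTQ_DESC_F_NEXT 1

struct virtq_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

/* Number of descriptors covering 'nbufs' buffers, scanning from 'start'. */
static uint16_t in_order_scan(const struct virtq_desc *table,
                              uint16_t queue_size,
                              uint16_t start, uint16_t nbufs)
{
    uint16_t ndesc = 0;

    while (nbufs > 0) {
        const struct virtq_desc *d =
            &table[(uint16_t)(start + ndesc) % queue_size];
        ndesc++;
        if (!(d->flags & VIRTQ_DESC_F_NEXT))
            nbufs--;            /* reached the tail of one buffer */
    }
    return ndesc;
}
```

    A real device would DMA the descriptor table in batches on the assumption of no chains, and issue a further read only when a batch turns out to end mid-chain.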
    > >
    > > > However the driver would not be able to optimize away the writing of
    > > > the virtq_avail.ring[] (=cache miss)
    > >
    > >
    > > BTW writing is a separate question (there is no provision in the spec to skip
    > > writes) but device does not have to read the ring.
    > >
    >
    > Yes, I understand the spec currently does not allow writes to be skipped, but
    > I'm wondering if that ought to be reconsidered for optimization features such
    > as IN_ORDER and NO_CHAIN?

    Why not just use the packed ring then?

    > By opting for such features, both driver and
    > device acknowledge their willingness to accept reduced flexibility for
    > improved performance. Why not then make sure they get the biggest bang for
    > their buck? I would expect up to 20% improvement over PCIe (virtio-net,
    > single 64B packet), if the device does not have to write to virtq_used.ring[] on
    > transmit, and bandwidth over PCI is a very precious resource in e.g. virtual
    > switch offload with east-west acceleration (for a discussion see Intel's white-
    > paper 335625-001).

    Haven't looked at it yet but we also need to consider the complexity,
    see below.

    > > Without device accesses the ring will not be invalidated in cache so no misses
    > > hopefully.
    > >
    > > > unless a NO_CHAIN feature has
    > > > been negotiated.
    > > > The IN_ORDER by itself has already eliminated the need to maintain the
    > > > TX virtq_used.ring[], since the buffer order is always known by the
    > > > driver.
    > > > With a NO_CHAIN feature-bit both RX and TX virtq_avail.ring[] related
    > > > cache-misses could be eliminated. I.e.
    > > > looping a packet over a split virtqueue would just experience 7 driver
    > > > cache misses, down from 10 in Virtio v1.0. Multi-element buffers would
    > > > still be possible provided INDIRECT is negotiated.
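    In code terms, the identity relied on here is trivial (hypothetical: NO_CHAIN is only a proposal in this thread, not a spec feature): with in-order, single-descriptor buffers, the head of buffer number n is always n modulo the queue size, so virtq_avail.ring[] carries no information.

```c
/* Hypothetical: assumes the proposed NO_CHAIN bit plus VIRTIO_F_IN_ORDER.
 * Buffer number n then always occupies descriptor n % queue_size, so both
 * driver and device can compute the head without touching the ring. */
#include <stdint.h>

static uint16_t head_of_buffer(uint16_t buffer_seq, uint16_t queue_size)
{
    return buffer_seq & (queue_size - 1); /* queue size is a power of 2 */
}
```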
    > >
    > >
    > > NO_CHAIN might be a valid optimization, it is just unfortunately somewhat
    > > narrow in that devices that need to mix write and read descriptors in the
    > > same ring (e.g. storage) can not use this feature.
    > >
    >
    > Yes, if there was a way of making indirect buffers support it, that would be
    > ideal. However I don't see how that can be done without inline headers in
    > elements to hold their written length.

    Kind of like it's done with the packed ring?

    > At the same time storage would not be hurt by it even if they are unable to
    > benefit from this particular optimization,

    It will be hurt if it uses shared code paths which potentially
    take up more cache, or if bugs are introduced.

    > and as long as there is a substantial
    > use case/space that benefit from an optimization, it ought to be considered.
    > I believe virtual switching offload with virtio-net devices over PCIe is such a
    > key use-case.

    It looks like the packed ring addresses the need nicely,
    while being device-independent.





  • 17.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-03-2018 11:48


  • 18.  RE: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-04-2018 15:03
    > From: Michael S. Tsirkin <mst@redhat.com>
    > Sent: 3. april 2018 13:48
    > To: Lars Ganrot <lga@napatech.com>
    > Cc: virtio@lists.oasis-open.org; virtio-dev@lists.oasis-open.org
    > Subject: Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature
    >
    > On Tue, Apr 03, 2018 at 07:19:47AM +0000, Lars Ganrot wrote:
    > > > From: virtio-dev@lists.oasis-open.org
    > > > <virtio-dev@lists.oasis-open.org> On Behalf Of Michael S. Tsirkin
    > > > Sent: 29. marts 2018 21:13
    > > >
    > > > On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote:
    > > > >
    > > > >
    > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > Sent: 29. marts 2018 16:42
    > > > > >
    > > > > > On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote:
    > > > > > > Missed replying to the lists. Sorry.
    > > > > > >
    > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > Sent: 28. marts 2018 16:39
    > > > > > > >
    > > > > > > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > > > > > > > > Hi Michael et al
    > > > > > > > >
    > > > > > > > > > Behalf Of Michael S. Tsirkin
    > > > > > > > > > Sent: 9. marts 2018 22:24
    > > > > > > > > >
    > > > > > > > > > For a split ring, require that drivers use descriptors in order
    > too.
    > > > > > > > > > This allows devices to skip reading the available ring.
    > > > > > > > > >
    > > > > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > > > > > > > ---
    > > > > > > > > [snip]
    > > > > > > > > >
    > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when
    > > > > > > > > > +making a descriptor with VRING_DESC_F_NEXT set in
    > > > > > > > > > +\field{flags} at offset $x$ in the table available to
    > > > > > > > > > +the device, driver MUST set \field{next} to $0$ for the
    > > > > > > > > > +last descriptor in the table (where $x = queue\_size -
    > > > > > > > > > +1$) and to $x + 1$ for the rest of the
    > > > > > descriptors.
    > > > > > > > > > +
    > > > > > > > > > \subsubsection{Indirect Descriptors}\label{sec:Basic
    > > > > > > > > > Facilities of a Virtio Device / Virtqueues / The
    > > > > > > > > > Virtqueue Descriptor Table / Indirect Descriptors}
    > > > > > > > > >
    > > > > > > > > > Some devices benefit by concurrently dispatching a
    > > > > > > > > > large number @@
    > > > > > > > > > -247,6
    > > > > > > > > > +257,10 @@ chained by \field{next}. An indirect
    > > > > > > > > > +descriptor without a valid
    > > > > > > > > > \field{next} A single indirect descriptor table can
    > > > > > > > > > include both
    > > > > > > > > > device- readable and device-writable descriptors.
    > > > > > > > > >
    > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > > +descriptors use sequential indices, in-order: index 0
    > > > > > > > > > +followed by index 1 followed by index 2, etc.
    > > > > > > > > > +
    > > > > > > > > > \drivernormative{\paragraph}{Indirect
    > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device /
    > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table / Indirect
    > > > > > > > > > Descriptors} The driver MUST NOT set the
    > > > > > > > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > > > > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The
    > driver
    > > > MUST
    > > > > > > > NOT
    > > > > > > > > > @@ -259,6 +273,10 @@ the device.
    > > > > > > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > > > > > > > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > > > > > > > > >
    > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > > +descriptors MUST appear sequentially, with \field{next}
    > > > > > > > > > +taking the value of
    > > > > > > > > > +1 for the 1st descriptor, 2 for the 2nd one, etc.
    > > > > > > > > > +
    > > > > > > > > > \devicenormative{\paragraph}{Indirect
    > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device /
    > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table / Indirect
    > > > > > > > > > Descriptors} The device MUST ignore the write-only flag
    > > > > > > > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor
    > > > > > > > > > that refers to an indirect table.
    > > > > > > > > >
    > > > > > > > >
    > > > > > > > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate
    > > > > > > > > some accesses
    > > > > > > > to the virtq_avail.ring and virtq_used.ring. However I'm
    > > > > > > > wondering if the proposed descriptor ordering for
    > > > > > > > multi-element buffers couldn't be tweaked to be more HW
    > > > > > > > friendly. Currently even with the VIRTIO_F_IN_ORDER
    > > > > > > > negotiated, there is no way of knowing if, or how many
    > > > > > > > chained descriptors follow the descriptor pointed to by the
    > > > > > > > virtq_avail.idx. A chain has to be inspected one descriptor
    > > > > > > > at a time until virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This
    > > > > > > > is awkward for HW offload, where you want to DMA all
    > > > > > > > available descriptors in one shot, instead of iterating
    > > > > > > > based on the contents of received DMA data. As currently
    > > > > > > > defined, HW would have to find a compromise
    > > > between likely chain length, and cost of additional DMA transfers.
    > > > > > > > This leads to a performance penalty for all chained
    > > > > > > > descriptors, and in case the length assumption is wrong the
    > > > > > > > impact can be
    > > > significant.
    > > > > > > > >
    > > > > > > > > Now, what if the VIRTIO_F_IN_ORDER instead required
    > > > > > > > > chained buffers to
    > > > > > > > place the last element at the lowest index, and the
    > > > > > > > head-element (to which virtq_avail.idx points) at the
    > > > > > > > highest index? Then all the chained element descriptors
    > > > > > > > would be included in a DMA of the descriptor table from the
    > > > > > > > previous virtq_avail.idx+1 to the current
    > > > > > virtq_avail.idx. The "backward"
    > > > > > > > order of the chained descriptors shouldn't pose an issue as
    > > > > > > > such (at least not in HW).
    > > > > > > > >
    > > > > > > > > Best Regards,
    > > > > > > > >
    > > > > > > > > -Lars
    > > > > > > >
    > > > > > > > virtq_avail.idx is still an index into the available ring.
    > > > > > > >
    > > > > > > > I don't really see how you can use virtq_avail.idx to guess
    > > > > > > > the placement of a descriptor.
    > > > > > > >
    > > > > > > > I suspect the best way to optimize this is to include the
    > > > > > > > relevant data with the VIRTIO_F_NOTIFICATION_DATA feature.
    > > > > > > >
    > > > > > >
    > > > > > > Argh, naturally.
    > > > > >
    > > > > > BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies the
    > > > > > index right now.
    > > > > >
    > > > > > Do you have an opinion on whether we should change that for in-
    > order?
    > > > > >
    > > > >
    > > > > Maybe I should think more about this, however adding the last
    > > > > element
    > > > descriptor index, would be useful to accelerate interfaces that
    > > > frequently use chaining (from a HW DMA perspective at least).
    > > > >
    > > > > > > For HW offload I'd want to avoid notifications for buffer
    > > > > > > transfer from host
    > > > > > to device, and hoped to just poll virtq_avail.idx directly.
    > > > > > >
    > > > > > > A split virtqueue with VIRTIO_F_IN_ORDER will maintain
    > > > > > virtq_avail.idx==virtq_avail.ring[idx] as long as there is no
    > > > > > chaining. It would be nice to allow negotiating away chaining,
    > > > > > i.e. add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees
    > > > > > not to use chaining, and as a result (of IN_ORDER and NO_CHAIN)
    > > > > > both device and driver can ignore the virtq_avail.ring[].
    > > > > >
    > > > > > My point was that device can just assume no chains, and then
    > > > > > fall back on doing extra reads upon encountering a chain.
    > > > > >
    > > > >
    > > > > Yes, you are correct that the HW can speculatively use
    > > > > virtq_avail.idx as the direct index to the descriptor table, and if
    > > > > it encounters a chain, revert to using the virtq_avail.ring[] in
    > > > > the traditional way, and this would work without the feature-bit.
    > > >
    > > > Sorry that was not my idea.
    > > >
    > > > Device should not need to read the ring at all.
    > > > It reads the descriptor table and counts the descriptors without the next
    > bit.
    > > > Once the count reaches the available index, it stops.
    > > >
    > >
    > > Agreed, that would work as well, with the benefit of keeping the ring
    > > out of the loop.
    > >
    > > >
    > > > > However the driver would not be able to optimize away the writing
    > > > > of the virtq_avail.ring[] (=cache miss)
    > > >
    > > >
    > > > BTW writing is a separate question (there is no provision in the
    > > > spec to skip
    > > > writes) but device does not have to read the ring.
    > > >
    > >
    > > Yes, I understand the spec currently does not allow writes to be
    > > skipped, but I'm wondering if that ought to be reconsidered for
    > > optimization features such as IN_ORDER and NO_CHAIN?
    >
    > Why not just use the packed ring then?
    >

    Device notification. While the packed ring solves some of the issues in
    the split ring, it also comes at a cost. In my view the two complement
    each other, however the required use of driver to device notifications
    in the packed ring for all driver to device transfers over PCIe (to handle
    the update granularity issue with Qwords as pointed out by Ilya on 14th
    Jan) will limit performance (latency and throughput) in our experience.
    We want to use device polling.

    Btw, won't the notification add one extra cache miss for all TX over PCIe
    transport?
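    The polling mode referred to here can be sketched as follows (device side; the virtq_avail layout is from the split-ring spec, while the helper and its name are illustrative only):

```c
/* Device-side polling instead of driver notifications: spin on
 * virtq_avail.idx and process buffers when it advances. The free-running
 * 16-bit index makes wrap-around handling a simple subtraction. */
#include <stdint.h>

struct virtq_avail {
    uint16_t flags;
    uint16_t idx;       /* free-running index, written by the driver */
    uint16_t ring[];
};

/* Return how many buffers became available since *last_seen. */
static uint16_t poll_avail(const volatile struct virtq_avail *avail,
                           uint16_t *last_seen)
{
    uint16_t idx = avail->idx;  /* real HW needs a read barrier here */
    uint16_t n = (uint16_t)(idx - *last_seen);

    *last_seen = idx;
    return n;
}
```

    A device would call this in its poll loop and, with IN_ORDER, go straight to the descriptor table for any newly available buffers.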

    > > By opting for such features, both driver and device acknowledge their
    > > willingness to accept reduced flexibility for improved performance.
    > > Why not then make sure they get the biggest bang for their buck? I
    > > would expect up to 20% improvement over PCIe (virtio-net, single 64B
    > > packet), if the device does not have to write to virtq_used.ring[] on
    > > transmit, and bandwidth over PCI is a very precious resource in e.g.
    > > virtual switch offload with east-west acceleration (for a discussion
    > > see Intel's white- paper 335625-001).
    >
    > Haven't looked at it yet but we also need to consider the complexity, see
    > below.
    >
    > > > Without device accesses the ring will not be invalidated in cache so no
    > > > misses hopefully.
    > > >
    > > > > unless a NO_CHAIN feature has
    > > > > been negotiated.
    > > > > The IN_ORDER by itself has already eliminated the need to maintain
    > > > > the TX virtq_used.ring[], since the buffer order is always known
    > > > > by the driver.
    > > > > With a NO_CHAIN feature-bit both RX and TX virtq_avail.ring[]
    > > > > related cache-misses could be eliminated. I.e.
    > > > > looping a packet over a split virtqueue would just experience 7
    > > > > driver cache misses, down from 10 in Virtio v1.0. Multi-element
    > > > > buffers would still be possible provided INDIRECT is negotiated.
    > > >
    > > >
    > > > NO_CHAIN might be a valid optimization, it is just unfortunately
    > > > somewhat narrow in that devices that need to mix write and read
    > > > descriptors in the same ring (e.g. storage) can not use this feature.
    > > >
    > >
    > > Yes, if there was a way of making indirect buffers support it, that
    > > would be ideal. However I don't see how that can be done without
    > > inline headers in elements to hold their written length.
    >
    > Kind of like it's done with the packed ring?
    >
    > > At the same time storage would not be hurt by it even if they are
    > > unable to benefit from this particular optimization,
    >
    > It will be hurt if it uses shared code paths which potentially take up more
    > cache, or if bugs are introduced.
    >
    > > and as long as there is a substantial
    > > use case/space that benefit from an optimization, it ought to be
    > considered.
    > > I believe virtual switching offload with virtio-net devices over PCIe
    > > is such a key use-case.
    >
    > It looks like the packed ring addresses the need nicely, while being device-
    > independent.
    >
    >
    > > >
    > > > > >
    > > > > >
    > > > > > > >
    > > > > > > > > ----------------------------------------------------------
    > > > > > > > > ----
    > > > > > > > > ----
    > > > > > > > > --- To unsubscribe, e-mail:
    > > > > > > > > virtio-dev-unsubscribe@lists.oasis-open.org
    > > > > > > > > For additional commands, e-mail:
    > > > > > > > > virtio-dev-help@lists.oasis-open.org
    > > > > > >
    > > > > > > --------------------------------------------------------------
    > > > > > > ----
    > > > > > > --- To unsubscribe, e-mail:
    > > > > > > virtio-dev-unsubscribe@lists.oasis-open.org
    > > > > > > For additional commands, e-mail:
    > > > > > > virtio-dev-help@lists.oasis-open.org
    > > > > Disclaimer: This email and any files transmitted with it may
    > > > > contain
    > > > confidential information intended for the addressee(s) only. The
    > > > information is not to be surrendered or copied to unauthorized
    > > > persons. If you have received this communication in error, please
    > > > notify the sender immediately and delete this e-mail from your system.
    > > >
    > >



  • 19.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-04-2018 16:08
    On Wed, Apr 04, 2018 at 03:03:16PM +0000, Lars Ganrot wrote:
    > > From: Michael S. Tsirkin <mst@redhat.com>
    > > Sent: 3. april 2018 13:48
    > > To: Lars Ganrot <lga@napatech.com>
    > > Cc: virtio@lists.oasis-open.org; virtio-dev@lists.oasis-open.org
    > > Subject: Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature
    > >
    > > On Tue, Apr 03, 2018 at 07:19:47AM +0000, Lars Ganrot wrote:
    > > > > From: virtio-dev@lists.oasis-open.org
    > > > > <virtio-dev@lists.oasis-open.org> On Behalf Of Michael S. Tsirkin
    > > > > Sent: 29. marts 2018 21:13
    > > > >
    > > > > On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote:
    > > > > >
    > > > > >
    > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > Sent: 29. marts 2018 16:42
    > > > > > >
    > > > > > > On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote:
    > > > > > > > Missed replying to the lists. Sorry.
    > > > > > > >
    > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > > Sent: 28. marts 2018 16:39
    > > > > > > > >
    > > > > > > > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote:
    > > > > > > > > > Hi Michael et al
    > > > > > > > > >
    > > > > > > > > > > Behalf Of Michael S. Tsirkin
    > > > > > > > > > > Sent: 9. marts 2018 22:24
    > > > > > > > > > >
    > > > > > > > > > > For a split ring, require that drivers use descriptors in order
    > > too.
    > > > > > > > > > > This allows devices to skip reading the available ring.
    > > > > > > > > > >
    > > > > > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > > > > > > > > ---
    > > > > > > > > > [snip]
    > > > > > > > > > >
    > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when
    > > > > > > > > > > +making a descriptor with VRING_DESC_F_NEXT set in
    > > > > > > > > > > +\field{flags} at offset $x$ in the table available to
    > > > > > > > > > > +the device, driver MUST set \field{next} to $0$ for the
    > > > > > > > > > > +last descriptor in the table (where $x = queue\_size -
    > > > > > > > > > > +1$) and to $x + 1$ for the rest of the
    > > > > > > descriptors.
    > > > > > > > > > > +
    > > > > > > > > > > \subsubsection{Indirect Descriptors}\label{sec:Basic
    > > > > > > > > > > Facilities of a Virtio Device / Virtqueues / The
    > > > > > > > > > > Virtqueue Descriptor Table / Indirect Descriptors}
    > > > > > > > > > >
    > > > > > > > > > > Some devices benefit by concurrently dispatching a
    > > > > > > > > > > large number @@
    > > > > > > > > > > -247,6
    > > > > > > > > > > +257,10 @@ chained by \field{next}. An indirect
    > > > > > > > > > > +descriptor without a valid
    > > > > > > > > > > \field{next} A single indirect descriptor table can
    > > > > > > > > > > include both
    > > > > > > > > > > device- readable and device-writable descriptors.
    > > > > > > > > > >
    > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > > > +descriptors use sequential indices, in-order: index 0
    > > > > > > > > > > +followed by index 1 followed by index 2, etc.
    > > > > > > > > > > +
    > > > > > > > > > > \drivernormative{\paragraph}{Indirect
    > > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device /
    > > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table / Indirect
    > > > > > > > > > > Descriptors} The driver MUST NOT set the
    > > > > > > > > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > > > > > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The
    > > driver
    > > > > MUST
    > > > > > > > > NOT
    > > > > > > > > > > @@ -259,6 +273,10 @@ the device.
    > > > > > > > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and
    > > > > > > > > > > VIRTQ_DESC_F_NEXT in \field{flags}.
    > > > > > > > > > >
    > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > > > +descriptors MUST appear sequentially, with \field{next}
    > > > > > > > > > > +taking the value of
    > > > > > > > > > > +1 for the 1st descriptor, 2 for the 2nd one, etc.
    > > > > > > > > > > +
    > > > > > > > > > > \devicenormative{\paragraph}{Indirect
    > > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device /
    > > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table / Indirect
    > > > > > > > > > > Descriptors} The device MUST ignore the write-only flag
    > > > > > > > > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the descriptor
    > > > > > > > > > > that refers to an indirect table.
    > > > > > > > > > >
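The in-order chaining rule quoted in the patch above (next = x + 1, with the last table slot, x = queue_size - 1, wrapping to 0) can be sketched in C. This is an illustrative sketch with a hypothetical helper name, not normative spec code:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VIRTQ_DESC_F_NEXT 1

struct virtq_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

/* Hypothetical helper: place a chain of 'count' buffers into the
 * descriptor table starting at slot 'head'.  Under VIRTIO_F_IN_ORDER,
 * every chained descriptor at offset x has next == x + 1, except the
 * last table slot (x == queue_size - 1), whose next wraps to 0.  The
 * chain tail has the NEXT flag clear. */
static void fill_in_order_chain(struct virtq_desc *desc, uint16_t queue_size,
                                uint16_t head, uint16_t count,
                                const uint64_t *addrs, const uint32_t *lens)
{
    for (uint16_t i = 0; i < count; i++) {
        uint16_t x = (uint16_t)((head + i) % queue_size);

        desc[x].addr = addrs[i];
        desc[x].len  = lens[i];
        if (i + 1 < count) {              /* not the tail of the chain */
            desc[x].flags = VIRTQ_DESC_F_NEXT;
            desc[x].next  = (x == queue_size - 1) ? 0 : (uint16_t)(x + 1);
        } else {                          /* chain tail: NEXT clear */
            desc[x].flags = 0;
            desc[x].next  = 0;
        }
    }
}
```

With this layout, a device that knows the head slot can fetch the whole chain as one contiguous (possibly wrapping) region of the descriptor table.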
    > > > > > > > > >
    > > > > > > > > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate
    > > > > > > > > > some accesses
    > > > > > > > > to the virtq_avail.ring and virtq_used.ring. However I'm
    > > > > > > > > wondering if the proposed descriptor ordering for
    > > > > > > > > multi-element buffers couldn't be tweaked to be more HW
    > > > > > > > > friendly. Currently even with the VIRTIO_F_IN_ORDER
    > > > > > > > > negotiated, there is no way of knowing if, or how many
    > > > > > > > > chained descriptors follow the descriptor pointed to by the
    > > > > > > > > virtq_avail.idx. A chain has to be inspected one descriptor
    > > > > > > > > at a time until virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This
    > > > > > > > > is awkward for HW offload, where you want to DMA all
    > > > > > > > > available descriptors in one shot, instead of iterating
    > > > > > > > > based on the contents of received DMA data. As currently
    > > > > > > > > defined, HW would have to find a compromise
    > > > > between likely chain length, and cost of additional DMA transfers.
    > > > > > > > > This leads to a performance penalty for all chained
    > > > > > > > > descriptors, and in case the length assumption is wrong the
    > > > > > > > > impact can be
    > > > > significant.
    > > > > > > > > >
    > > > > > > > > > Now, what if the VIRTIO_F_IN_ORDER instead required
    > > > > > > > > > chained buffers to
    > > > > > > > > place the last element at the lowest index, and the
    > > > > > > > > head-element (to which virtq_avail.idx points) at the
    > > > > > > > > highest index? Then all the chained element descriptors
    > > > > > > > > would be included in a DMA of the descriptor table from the
    > > > > > > > > previous virtq_avail.idx+1 to the current
    > > > > > > virtq_avail.idx. The "backward"
    > > > > > > > > order of the chained descriptors shouldn't pose an issue as
    > > > > > > > > such (at least not in HW).
    > > > > > > > > >
    > > > > > > > > > Best Regards,
    > > > > > > > > >
    > > > > > > > > > -Lars
    > > > > > > > >
    > > > > > > > > virtq_avail.idx is still an index into the available ring.
    > > > > > > > >
    > > > > > > > > I don't really see how you can use virtq_avail.idx to guess
    > > > > > > > > the placement of a descriptor.
    > > > > > > > >
    > > > > > > > > I suspect the best way to optimize this is to include the
    > > > > > > > > relevant data with the VIRTIO_F_NOTIFICATION_DATA feature.
    > > > > > > > >
    > > > > > > >
    > > > > > > > Argh, naturally.
    > > > > > >
    > > > > > > BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies the
    > > > > > > index right now.
    > > > > > >
    > > > > > > Do you have an opinion on whether we should change that for in-
    > > order?
    > > > > > >
    > > > > >
    > > > > > Maybe I should think more about this, however adding the last
    > > > > > element
    > > > > descriptor index, would be useful to accelerate interfaces that
    > > > > frequently use chaining (from a HW DMA perspective at least).
    > > > > >
    > > > > > > > For HW offload I'd want to avoid notifications for buffer
    > > > > > > > transfer from host
    > > > > > > to device, and hoped to just poll virtq_avail.idx directly.
    > > > > > > >
    > > > > > > > A split virtqueue with VIRTIO_F_IN_ORDER will maintain
    > > > > > > virtq_avail.idx==virtq_avail.ring[idx] as long as there is no
    > > > > > > chaining. It would be nice to allow negotiating away chaining,
    > > > > > > i.e. add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees
    > > > > > > not to use chaining, and as a result (of IN_ORDER and NO_CHAIN)
    > > > > > > both device and driver can ignore the virtq_avail.ring[].
    > > > > > >
    > > > > > > My point was that device can just assume no chains, and then
    > > > > > > fall back on doing extra reads upon encountering a chain.
    > > > > > >
    > > > > >
    > > > > > Yes, you are correct that the HW can speculatively use
    > > > > >virtq_avail.idx as the direct index to the descriptor table, and if
    > > > > >it encounters a chain, revert to using the virtq_avail.ring[] in
    > > > > >the traditional way, and this would work without the feature-bit.
    > > > >
    > > > > Sorry that was not my idea.
    > > > >
    > > > > Device should not need to read the ring at all.
    > > > > It reads the descriptor table and counts the descriptors without the next
    > > bit.
    > > > > Once the count reaches the available index, it stops.
    > > > >
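The device-side scan described above (read the descriptor table, count descriptors with the NEXT bit clear, stop when the count catches up with the available index) might look like the following sketch, assuming VIRTIO_F_IN_ORDER and hypothetical names:

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define VIRTQ_DESC_F_NEXT 1

struct virtq_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

/* Hypothetical device-side helper: starting at table slot 'start',
 * walk forward until 'nbuffers' chain tails (descriptors with NEXT
 * clear) have been seen -- i.e. until the buffer count reaches the
 * delta published via the available index.  The available ring itself
 * is never read.  Returns the number of descriptor slots consumed. */
static uint16_t in_order_scan(const struct virtq_desc *desc,
                              uint16_t queue_size, uint16_t start,
                              uint16_t nbuffers)
{
    uint16_t i = start, used = 0, buffers = 0;

    while (buffers < nbuffers) {
        used++;
        if (!(desc[i].flags & VIRTQ_DESC_F_NEXT))
            buffers++;                     /* reached a chain tail */
        i = (uint16_t)((i + 1) % queue_size);
    }
    return used;
}
```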
    > > >
    > > > Agreed, that would work as well, with the benefit of keeping the ring
    > > > out of the loop.
    > > >
    > > > >
    > > > > > However the driver would not be able to optimize away the writing
    > > > > > of the virtq_avail.ring[] (=cache miss)
    > > > >
    > > > >
    > > > > BTW writing is a separate question (there is no provision in the
    > > > > spec to skip
    > > > > writes) but device does not have to read the ring.
    > > > >
    > > >
    > > > Yes, I understand the spec currently does not allow writes to be
    > > > skipped, but I'm wondering if that ought to be reconsidered for
    > > > optimization features such as IN_ORDER and NO_CHAIN?
    > >
    > > Why not just use the packed ring then?
    > >
    >
    > Device notification. While the packed ring solves some of the issues in
    > the split ring, it also comes at a cost. In my view the two complement
    > each other, however the required use of driver to device notifications
    > in the packed ring for all driver to device transfers over PCIe (to handle
    > the update granularity issue with Qwords as pointed out by Ilya on 14th
    > Jan) will limit performance (latency and throughput) in our experience.
    > We want to use device polling.

    You can poll the descriptor for sure.

    I think you refer to this:

    As an example of update ordering, assume that the block of data is in host memory, and a host CPU
    writes first to location A and then to a different location B. A Requester reading that data block
    with a single read transaction is not guaranteed to observe those updates in order. In other words,
    the Requester may observe an updated value in location B and an old value in location A, regardless
    of the placement of locations A and B within the data block. Unless a Completer makes its own
    guarantees (outside this specification) with respect to update ordering, a Requester that relies on
    update ordering must observe the update to location B via one read transaction before initiating a
    subsequent read to location A to return its updated value.

    One question would be whether placing a memory barrier (such as sfence on x86)
    after writing out A will guarantee update ordering.

    Do you know anything about it?
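    For reference, the write pattern under discussion looks roughly like the
    sketch below (hypothetical names). A release fence (sfence on x86, or
    C11 atomic_thread_fence(memory_order_release)) orders the CPU's stores
    into host memory; whether a PCIe Requester reading both locations in a
    single read transaction then observes them in that order is exactly the
    open question, and the PCIe text quoted above says it is not guaranteed
    absent Completer-specific guarantees:

```c
#include <assert.h>
#include <stdatomic.h>
#include <stdint.h>

struct virtq_desc {
    uint64_t addr;
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

/* Hypothetical driver-side publish: write the descriptor contents
 * (location A), fence, then write the index that makes them visible
 * (location B).  The fence orders the CPU's stores; it does not by
 * itself guarantee the update ordering a PCIe Requester observes
 * within a single read transaction. */
static void publish_desc(struct virtq_desc *slot,
                         const struct virtq_desc *val,
                         volatile uint16_t *avail_idx, uint16_t new_idx)
{
    *slot = *val;                                  /* location A */
    atomic_thread_fence(memory_order_release);     /* sfence-equivalent */
    *avail_idx = new_idx;                          /* location B */
}
```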



    > Btw, won't the notification add one extra cache miss for all TX over PCIe
    > transport?

    It's a posted write, these are typically not cached.

    > > > By opting for such features, both driver and device acknowledge their
    > > > willingness to accept reduced flexibility for improved performance.
    > > > Why not then make sure they get the biggest bang for their buck? I
    > > > would expect up to 20% improvement over PCIe (virtio-net, single 64B
    > > > packet), if the device does not have to write to virtq_used.ring[] on
    > > > transmit, and bandwidth over PCI is a very precious resource in e.g.
    > > > virtual switch offload with east-west acceleration (for a discussion
    > > > see Intel's white- paper 335625-001).
    > >
    > > Haven't looked at it yet but we also need to consider the complexity, see
    > > below.
    > >
    > > > > Without device accesses the ring will not be invalidated in cache so no
    > > > > misses hopefully.
    > > > >
    > > > > > unless a NO_CHAIN feature has
    > > > > > been negotiated.
    > > > > > The IN_ORDER by itself has already eliminated the need to maintain
    > > > > > the TX virtq_used.ring[], since the buffer order is always known
    > > > > > by the driver.
    > > > > > With a NO_CHAIN feature-bit both RX and TX virtq_avail.ring[]
    > > > > > related cache-misses could be eliminated. I.e.
    > > > > > looping a packet over a split virtqueue would just experience 7
    > > > > > driver cache misses, down from 10 in Virtio v1.0. Multi-element
    > > > > > buffers would still be possible provided INDIRECT is negotiated.
    > > > >
    > > > >
    > > > > NO_CHAIN might be a valid optimization, it is just unfortunately
    > > > > somewhat narrow in that devices that need to mix write and read
    > > > > descriptors in the same ring (e.g. storage) can not use this feature.
    > > > >
    > > >
    > > > Yes, if there was a way of making indirect buffers support it, that
    > > > would be ideal. However I don't see how that can be done without
    > > > inline headers in elements to hold their written length.
    > >
    > > Kind of like it's done with the packed ring?
    > >
    > > > At the same time storage would not be hurt by it even if they are
    > > > unable to benefit from this particular optimization,
    > >
    > > It will be hurt if it uses shared code paths which potentially take up more
    > > cache, or if bugs are introduced.
    > >
    > > > and as long as there is a substantial
    > > > use case/space that benefit from an optimization, it ought to be
    > > considered.
    > > > I believe virtual switching offload with virtio-net devices over PCIe
    > > > is such a key use-case.
    > >
    > > It looks like the packed ring addresses the need nicely, while being device-
    > > independent.
    > >
    > >
    > > > >
    > > > > > >
    > > > > > >
    > > > > > > > >



  • 20.  Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-04-2018 16:08
    On Wed, Apr 04, 2018 at 03:03:16PM +0000, Lars Ganrot wrote: > > From: Michael S. Tsirkin <mst@redhat.com> > > Sent: 3. april 2018 13:48 > > To: Lars Ganrot <lga@napatech.com> > > Cc: virtio@lists.oasis-open.org; virtio-dev@lists.oasis-open.org > > Subject: Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature > > > > On Tue, Apr 03, 2018 at 07:19:47AM +0000, Lars Ganrot wrote: > > > > From: virtio-dev@lists.oasis-open.org > > > > <virtio-dev@lists.oasis-open.org> On Behalf Of Michael S. Tsirkin > > > > Sent: 29. marts 2018 21:13 > > > > > > > > On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote: > > > > > > > > > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > > > Sent: 29. marts 2018 16:42 > > > > > > > > > > > > On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote: > > > > > > > Missed replying to the lists. Sorry. > > > > > > > > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com> > > > > > > > > Sent: 28. marts 2018 16:39 > > > > > > > > > > > > > > > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot wrote: > > > > > > > > > Hi Michael et al > > > > > > > > > > > > > > > > > > > Behalf Of Michael S. Tsirkin > > > > > > > > > > Sent: 9. marts 2018 22:24 > > > > > > > > > > > > > > > > > > > > For a split ring, require that drivers use descriptors in order > > too. > > > > > > > > > > This allows devices to skip reading the available ring. > > > > > > > > > > > > > > > > > > > > Signed-off-by: Michael S. 
Tsirkin <mst@redhat.com> > > > > > > > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com> > > > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> > > > > > > > > > > --- > > > > > > > > > [snip] > > > > > > > > > > > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when > > > > > > > > > > +making a descriptor with VRING_DESC_F_NEXT set in > > > > > > > > > > +field{flags} at offset $x$ in the table available to > > > > > > > > > > +the device, driver MUST set field{next} to $0$ for the > > > > > > > > > > +last descriptor in the table (where $x = queue\_size - > > > > > > > > > > +1$) and to $x + 1$ for the rest of the > > > > > > descriptors. > > > > > > > > > > + > > > > > > > > > > subsubsection{Indirect Descriptors}label{sec:Basic > > > > > > > > > > Facilities of a Virtio Device / Virtqueues / The > > > > > > > > > > Virtqueue Descriptor Table / Indirect Descriptors} > > > > > > > > > > > > > > > > > > > > Some devices benefit by concurrently dispatching a > > > > > > > > > > large number @@ > > > > > > > > > > -247,6 > > > > > > > > > > +257,10 @@ chained by field{next}. An indirect > > > > > > > > > > +descriptor without a valid > > > > > > > > > > field{next} A single indirect descriptor table can > > > > > > > > > > include both > > > > > > > > > > device- readable and device-writable descriptors. > > > > > > > > > > > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect > > > > > > > > > > +descriptors use sequential indices, in-order: index 0 > > > > > > > > > > +followed by index 1 followed by index 2, etc. 
> > > > > > > > > > + > > > > > > > > > > drivernormative{paragraph}{Indirect > > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device / > > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table / Indirect > > > > > > > > > > Descriptors} The driver MUST NOT set the > > > > > > > > VIRTQ_DESC_F_INDIRECT flag unless the > > > > > > > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The > > driver > > > > MUST > > > > > > > > NOT > > > > > > > > > > @@ -259,6 +273,10 @@ the device. > > > > > > > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and > > > > > > > > > > VIRTQ_DESC_F_NEXT in field{flags}. > > > > > > > > > > > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect > > > > > > > > > > +descriptors MUST appear sequentially, with field{next} > > > > > > > > > > +taking the value of > > > > > > > > > > +1 for the 1st descriptor, 2 for the 2nd one, etc. > > > > > > > > > > + > > > > > > > > > > devicenormative{paragraph}{Indirect > > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device / > > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table / Indirect > > > > > > > > > > Descriptors} The device MUST ignore the write-only flag > > > > > > > > > > (field{flags}&VIRTQ_DESC_F_WRITE) in the descriptor > > > > > > > > > > that refers to an indirect table. > > > > > > > > > > > > > > > > > > > > > > > > > > > > The use of VIRTIO_F_IN_ORDER for split-ring can eliminate > > > > > > > > > some accesses > > > > > > > > to the virtq_avail.ring and virtq_used.ring. However I'm > > > > > > > > wondering if the proposed descriptor ordering for > > > > > > > > multi-element buffers couldn't be tweaked to be more HW > > > > > > > > friendly. Currently even with the VIRTIO_F_IN_ORDER > > > > > > > > negotiated, there is no way of knowing if, or how many > > > > > > > > chained descriptors follow the descriptor pointed to by the > > > > > > > > virtq_avail.idx. 
A chain has to be inspected one descriptor > > > > > > > > at a time until virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This > > > > > > > > is awkward for HW offload, where you want to DMA all > > > > > > > > available descriptors in one shot, instead of iterating > > > > > > > > based on the contents of received DMA data. As currently > > > > > > > > defined, HW would have to find a compromise > > > > between likely chain length, and cost of additional DMA transfers. > > > > > > > > This leads to a performance penalty for all chained > > > > > > > > descriptors, and in case the length assumption is wrong the > > > > > > > > impact can be > > > > significant. > > > > > > > > > > > > > > > > > > Now, what if the VIRTIO_F_IN_ORDER instead required > > > > > > > > > chained buffers to > > > > > > > > place the last element at the lowest index, and the > > > > > > > > head-element (to which virtq_avail.idx points) at the > > > > > > > > highest index? Then all the chained element descriptors > > > > > > > > would be included in a DMA of the descriptor table from the > > > > > > > > previous virtq_avail.idx+1 to the current > > > > > > virtq_avail.idx. The "backward" > > > > > > > > order of the chained descriptors shouldn't pose an issue as > > > > > > > > such (at least not in HW). > > > > > > > > > > > > > > > > > > Best Regards, > > > > > > > > > > > > > > > > > > -Lars > > > > > > > > > > > > > > > > virtq_avail.idx is still an index into the available ring. > > > > > > > > > > > > > > > > I don't really see how you can use virtq_avail.idx to guess > > > > > > > > the placement of a descriptor. > > > > > > > > > > > > > > > > I suspect the best way to optimize this is to include the > > > > > > > > relevant data with the VIRTIO_F_NOTIFICATION_DATA feature. > > > > > > > > > > > > > > > > > > > > > > Argh, naturally. > > > > > > > > > > > > BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies the > > > > > > index right now. 
> > > > > > > > > > > > Do you have an opinion on whether we should change that for in- > > order? > > > > > > > > > > > > > > > > Maybe I should think more about this, however adding the last > > > > > element > > > > descriptor index, would be useful to accelerate interfaces that > > > > frequently use chaining (from a HW DMA perspective at least). > > > > > > > > > > > > For HW offload I'd want to avoid notifications for buffer > > > > > > > transfer from host > > > > > > to device, and hoped to just poll virtq_avail.idx directly. > > > > > > > > > > > > > > A split virtqueue with VITRIO_F_IN_ORDER will maintain > > > > > > virtq_avail.idx==virtq_avail.ring[idx] as long as there is no > > > > > > chaining. It would be nice to allow negotiating away chaining, > > > > > > i.e add a VIRTIO_F_NO_CHAIN. If negotiated, the driver agrees > > > > > > not to use chaining, and as a result (of IN_ORDER and NO_CHAIN) > > > > > > both device and driver can ignore the virtq_avail.ring[]. > > > > > > > > > > > > My point was that device can just assume no chains, and then > > > > > > fall back on doing extra reads upon encountering a chain. > > > > > > > > > > > > > > > > Yes, you are correct that the HW can speculatively use > > > > >virtq_avail.idx as the direct index to the descriptor table, and if > > > > >it encounters a chain, revert to using the virtq_avail.ring[] in > > > > >the traditional way, and this would work without the feature-bit. > > > > > > > > Sorry that was not my idea. > > > > > > > > Device should not need to read the ring at all. > > > > It reads the descriptor table and counts the descriptors without the next > > bit. > > > > Once the count reaches the available index, it stops. > > > > > > > > > > Agreed, that would work as well, with the benefit of keeping the ring > > > out of the loop. 
> > > > > > > > > > > > However the driver would not be able to optimize away the writing > > > > > of the virtq_avail.ring[] (=cache miss) > > > > > > > > > > > > BTW writing is a separate question (there is no provision in the > > > > spec to skip > > > > writes) but device does not have to read the ring. > > > > > > > > > > Yes, I understand the spec currently does not allow writes to be > > > skipped, but I'm wondering if that ought to be reconsidered for > > > optimization features such as IN_ORDER and NO_CHAIN? > > > > Why not just use the packed ring then? > > > > Device notification. While the packed ring solves some of the issues in > the split ring, it also comes at a cost. In my view the two complement > each other, however the required use of driver to device notifications > in the packed ring for all driver to device transfers over PCIe (to handle > the update granularity issue with Qwords as pointed out by Ilya on 14:th > Jan) will limit performance (latency and throughput) in our experience. > We want to use device polling. You can poll the descriptor for sure. I think you refer to this: As an example of update ordering, assume that the block of data is in host memory, and a host CPU writes first to location A and then to a different location B. A Requester reading that data block with a single read transaction is not guaranteed to observe those updates in order. In other words, the Requester may observe an updated value in location B and an old value in location A, regardless of the placement of locations A and B within the data block. Unless a Completer makes its own guarantees (outside this specification) with respect to update ordering, a Requester that relies on update ordering must observe the update to location B via one read transaction before initiating a subsequent read to location A to return its updated value. One question would be whether placing a memory barrier (such as sfence on x86) after writing out A will guarantee update ordering. 
Do you know anything about it? > Btw, won't the notification add one extra cache miss for all TX over PCIe > transport? It's a posted write, these are typically not cached. > > > By opting for such features, both driver and device acknowledge their > > > willingness to accept reduced flexibility for improved performance. > > > Why not then make sure they get the biggest bang for their buck? I > > > would expect up to 20% improvement over PCIe (virtio-net, single 64B > > > packet), if the device does not have to write to virtq_used.ring[] on > > > transmit, and bandwidth over PCI is a very precious resource in e.g. > > > virtual switch offload with east-west acceleration (for a discussion > > > see Intel's white- paper 335625-001). > > > > Haven't looked at it yet but we also need to consider the complexity, see > > below. > > > > > > Without device accesses ring will not be invaliated in cache so no > > > > misses hopefully. > > > > > > > > > unless a NO_CHAIN feature has > > > > > been negotiated. > > > > > The IN_ORDER by itself has already eliminated the need to maintain > > > > > the TX virtq_used.ring[], since the buffer order is always known > > > > > by the driver. > > > > > With a NO_CHAIN feature-bit both RX and TX virtq_avail.ring[] > > > > > related cache-misses could be eliminated. I.e. > > > > > looping a packet over a split virtqueue would just experience 7 > > > > > driver cache misses, down from 10 in Virtio v1.0. Multi-element > > > > > buffers would still be possible provided INDIRECT is negotiated. > > > > > > > > > > > > NO_CHAIN might be a valid optimization, it is just unfortunately > > > > somewhat narrow in that devices that need to mix write and read > > > > descriptors in the same ring (e.g. storage) can not use this feature. > > > > > > > > > > Yes, if there was a way of making indirect buffers support it, that > > > would be ideal. 
However I don't see how that can be done without > > > inline headers in elements to hold their written length. > > > > Kind of like it's done with with packed ring? > > > > > At the same time storage would not be hurt by it even if they are > > > unable to benefit from this particular optimization, > > > > It will be hurt if it uses shared code paths which potentially take up more > > cache, or if bugs are introduced. > > > > > and as long as there is a substantial > > > use case/space that benefit from an optimization, it ought to be > > considered. > > > I believe virtual switching offload with virtio-net devices over PCIe > > > is such a key use-case. > > > > It looks like the packed ring addresses the need nicely, while being device- > > independent. > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > > ---------------------------------------------------------- > > > > > > > > > ---- > > > > > > > > > ---- > > > > > > > > > --- To unsubscribe, e-mail: > > > > > > > > > virtio-dev-unsubscribe@lists.oasis-open.org > > > > > > > > > For additional commands, e-mail: > > > > > > > > > virtio-dev-help@lists.oasis-open.org > > > > > > > > > > > > > > -------------------------------------------------------------- > > > > > > > ---- > > > > > > > --- To unsubscribe, e-mail: > > > > > > > virtio-dev-unsubscribe@lists.oasis-open.org > > > > > > > For additional commands, e-mail: > > > > > > > virtio-dev-help@lists.oasis-open.org > > > > > Disclaimer: This email and any files transmitted with it may > > > > > contain > > > > confidential information intended for the addressee(s) only. The > > > > information is not to be surrendered or copied to unauthorized > > > > persons. If you have received this communication in error, please > > > > notify the sender immediately and delete this e-mail from your system. 


  • 21.  RE: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature

    Posted 04-05-2018 07:19
    > From: Michael S. Tsirkin <mst@redhat.com>
    > Sent: 4. april 2018 18:08
    > To: Lars Ganrot <lga@napatech.com>
    > Cc: virtio@lists.oasis-open.org; virtio-dev@lists.oasis-open.org
    > Subject: Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order feature
    >
    > On Wed, Apr 04, 2018 at 03:03:16PM +0000, Lars Ganrot wrote:
    > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > Sent: 3. april 2018 13:48
    > > > To: Lars Ganrot <lga@napatech.com>
    > > > Cc: virtio@lists.oasis-open.org; virtio-dev@lists.oasis-open.org
    > > > Subject: Re: [virtio-dev] [PATCH v10 13/13] split-ring: in order
    > > > feature
    > > >
    > > > On Tue, Apr 03, 2018 at 07:19:47AM +0000, Lars Ganrot wrote:
    > > > > > From: virtio-dev@lists.oasis-open.org
    > > > > > <virtio-dev@lists.oasis-open.org> On Behalf Of Michael S.
    > > > > > Tsirkin
    > > > > > Sent: 29. marts 2018 21:13
    > > > > >
    > > > > > On Thu, Mar 29, 2018 at 06:23:28PM +0000, Lars Ganrot wrote:
    > > > > > >
    > > > > > >
    > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > Sent: 29. marts 2018 16:42
    > > > > > > >
    > > > > > > > On Wed, Mar 28, 2018 at 04:12:10PM +0000, Lars Ganrot wrote:
    > > > > > > > > Missed replying to the lists. Sorry.
    > > > > > > > >
    > > > > > > > > > From: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > > > Sent: 28. marts 2018 16:39
    > > > > > > > > >
    > > > > > > > > > On Wed, Mar 28, 2018 at 08:23:38AM +0000, Lars Ganrot
    > wrote:
    > > > > > > > > > > Hi Michael et al
    > > > > > > > > > >
    > > > > > > > > > > > Behalf Of Michael S. Tsirkin
    > > > > > > > > > > > Sent: 9. marts 2018 22:24
    > > > > > > > > > > >
    > > > > > > > > > > > For a split ring, require that drivers use
    > > > > > > > > > > > descriptors in order
    > > > too.
    > > > > > > > > > > > This allows devices to skip reading the available ring.
    > > > > > > > > > > >
    > > > > > > > > > > > Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
    > > > > > > > > > > > Reviewed-by: Cornelia Huck <cohuck@redhat.com>
    > > > > > > > > > > > Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com>
    > > > > > > > > > > > ---
    > > > > > > > > > > [snip]
    > > > > > > > > > > >
    > > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, and when
    > > > > > > > > > > > +making a descriptor with VRING_DESC_F_NEXT set in
    > > > > > > > > > > > +\field{flags} at offset $x$ in the table available
    > > > > > > > > > > > +to the device, driver MUST set \field{next} to $0$
    > > > > > > > > > > > +for the last descriptor in the table (where $x =
    > > > > > > > > > > > +queue\_size -
    > > > > > > > > > > > +1$) and to $x + 1$ for the rest of the
    > > > > > > > descriptors.
    > > > > > > > > > > > +
    > > > > > > > > > > > \subsubsection{Indirect
    > > > > > > > > > > > Descriptors}\label{sec:Basic Facilities of a Virtio
    > > > > > > > > > > > Device / Virtqueues / The Virtqueue Descriptor Table
    > > > > > > > > > > > / Indirect Descriptors}
    > > > > > > > > > > >
    > > > > > > > > > > > Some devices benefit by concurrently dispatching a
    > > > > > > > > > > > large number @@
    > > > > > > > > > > > -247,6
    > > > > > > > > > > > +257,10 @@ chained by \field{next}. An indirect
    > > > > > > > > > > > +descriptor without a valid
    > > > > > > > > > > > \field{next} A single indirect descriptor table
    > > > > > > > > > > > can include both
    > > > > > > > > > > > device- readable and device-writable descriptors.
    > > > > > > > > > > >
    > > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > > > > +descriptors use sequential indices, in-order: index
    > > > > > > > > > > > +0 followed by index 1 followed by index 2, etc.
    > > > > > > > > > > > +
    > > > > > > > > > > > \drivernormative{\paragraph}{Indirect
    > > > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device /
    > > > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table /
    > > > > > > > > > > > Indirect Descriptors} The driver MUST NOT set the
    > > > > > > > > > VIRTQ_DESC_F_INDIRECT flag unless the
    > > > > > > > > > > > VIRTIO_F_INDIRECT_DESC feature was negotiated. The
    > > > driver
    > > > > > MUST
    > > > > > > > > > NOT
    > > > > > > > > > > > @@ -259,6 +273,10 @@ the device.
    > > > > > > > > > > > A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT
    > > > > > > > > > > > and VIRTQ_DESC_F_NEXT in \field{flags}.
    > > > > > > > > > > >
    > > > > > > > > > > > +If VIRTIO_F_IN_ORDER has been negotiated, indirect
    > > > > > > > > > > > +descriptors MUST appear sequentially, with
    > > > > > > > > > > > +\field{next} taking the value of
    > > > > > > > > > > > +1 for the 1st descriptor, 2 for the 2nd one, etc.
    > > > > > > > > > > > +
    > > > > > > > > > > > \devicenormative{\paragraph}{Indirect
    > > > > > > > > > > > Descriptors}{Basic Facilities of a Virtio Device /
    > > > > > > > > > > > Virtqueues / The Virtqueue Descriptor Table /
    > > > > > > > > > > > Indirect Descriptors} The device MUST ignore the
    > > > > > > > > > > > write-only flag
    > > > > > > > > > > > (\field{flags}\&VIRTQ_DESC_F_WRITE) in the
    > > > > > > > > > > > descriptor that refers to an indirect table.
    > > > > > > > > > > >
    > > > > > > > > > >
    > > > > > > > > > > The use of VIRTIO_F_IN_ORDER for split-ring can
    > > > > > > > > > > eliminate some accesses
    > > > > > > > > > to the virtq_avail.ring and virtq_used.ring. However I'm
    > > > > > > > > > wondering if the proposed descriptor ordering for
    > > > > > > > > > multi-element buffers couldn't be tweaked to be more HW
    > > > > > > > > > friendly. Currently even with the VIRTIO_F_IN_ORDER
    > > > > > > > > > negotiated, there is no way of knowing if, or how many
    > > > > > > > > > chained descriptors follow the descriptor pointed to by
    > > > > > > > > > the virtq_avail.idx. A chain has to be inspected one
    > > > > > > > > > descriptor at a time until
    > > > > > > > > > virtq_desc.flags[VIRTQ_DESC_F_NEXT]=0. This is awkward
    > > > > > > > > > for HW offload, where you want to DMA all available
    > > > > > > > > > descriptors in one shot, instead of iterating based on
    > > > > > > > > > the contents of received DMA data. As currently defined,
    > > > > > > > > > HW would have to find a compromise
    > > > > > between likely chain length, and cost of additional DMA transfers.
    > > > > > > > > > This leads to a performance penalty for all chained
    > > > > > > > > > descriptors, and in case the length assumption is wrong
    > > > > > > > > > the impact can be
    > > > > > significant.
    > > > > > > > > > >
    > > > > > > > > > > Now, what if the VIRTIO_F_IN_ORDER instead required
    > > > > > > > > > > chained buffers to
    > > > > > > > > > place the last element at the lowest index, and the
    > > > > > > > > > head-element (to which virtq_avail.idx points) at the
    > > > > > > > > > highest index? Then all the chained element descriptors
    > > > > > > > > > would be included in a DMA of the descriptor table from
    > > > > > > > > > the previous virtq_avail.idx+1 to the current
    > > > > > > > virtq_avail.idx. The "backward"
    > > > > > > > > > order of the chained descriptors shouldn't pose an issue
    > > > > > > > > > as such (at least not in HW).
    > > > > > > > > > >
    > > > > > > > > > > Best Regards,
    > > > > > > > > > >
    > > > > > > > > > > -Lars
    > > > > > > > > >
    > > > > > > > > > virtq_avail.idx is still an index into the available ring.
    > > > > > > > > >
    > > > > > > > > > I don't really see how you can use virtq_avail.idx to
    > > > > > > > > > guess the placement of a descriptor.
    > > > > > > > > >
    > > > > > > > > > I suspect the best way to optimize this is to include
    > > > > > > > > > the relevant data with the VIRTIO_F_NOTIFICATION_DATA
    > feature.
    > > > > > > > > >
    > > > > > > > >
    > > > > > > > > Argh, naturally.
    > > > > > > >
    > > > > > > > BTW, for split rings VIRTIO_F_NOTIFICATION_DATA just copies
    > > > > > > > the index right now.
    > > > > > > >
    > > > > > > > Do you have an opinion on whether we should change that for
    > > > > > > > in-
    > > > order?
    > > > > > > >
    > > > > > >
    > > > > > > Maybe I should think more about this, however adding the last
    > > > > > > element
    > > > > > descriptor index, would be useful to accelerate interfaces that
    > > > > > frequently use chaining (from a HW DMA perspective at least).
    > > > > > >
    > > > > > > > > For HW offload I'd want to avoid notifications for buffer
    > > > > > > > > transfer from host
    > > > > > > > to device, and hoped to just poll virtq_avail.idx directly.
    > > > > > > > >
    > > > > > > > > A split virtqueue with VITRIO_F_IN_ORDER will maintain
    > > > > > > > virtq_avail.idx==virtq_avail.ring[idx] as long as there is
    > > > > > > > no chaining. It would be nice to allow negotiating away
    > > > > > > > chaining, i.e add a VIRTIO_F_NO_CHAIN. If negotiated, the
    > > > > > > > driver agrees not to use chaining, and as a result (of
    > > > > > > > IN_ORDER and NO_CHAIN) both device and driver can ignore the
    > virtq_avail.ring[].
    > > > > > > >
    > > > > > > > My point was that device can just assume no chains, and then
    > > > > > > > fall back on doing extra reads upon encountering a chain.
    > > > > > > >
    > > > > > >
    > > > > > > Yes, you are correct that the HW can speculatively use
    > > > > > >virtq_avail.idx as the direct index to the descriptor table,
    > > > > > >and if it encounters a chain, revert to using the
    > > > > > >virtq_avail.ring[] in the traditional way, and this would work without
    > the feature-bit.
    > > > > >
    > > > > > Sorry that was not my idea.
    > > > > >
    > > > > > Device should not need to read the ring at all.
    > > > > > It reads the descriptor table and counts the descriptors without
    > > > > > the next
    > > > bit.
    > > > > > Once the count reaches the available index, it stops.
    > > > > >
    > > > >
    > > > > Agreed, that would work as well, with the benefit of keeping the
    > > > > ring out of the loop.
    > > > >
    > > > > >
    > > > > > > However the driver would not be able to optimize away the
    > > > > > > writing of the virtq_avail.ring[] (=cache miss)
    > > > > >
    > > > > >
    > > > > > BTW writing is a separate question (there is no provision in the
    > > > > > spec to skip
    > > > > > writes) but device does not have to read the ring.
    > > > > >
    > > > >
    > > > > Yes, I understand the spec currently does not allow writes to be
    > > > > skipped, but I'm wondering if that ought to be reconsidered for
    > > > > optimization features such as IN_ORDER and NO_CHAIN?
    > > >
    > > > Why not just use the packed ring then?
    > > >
    > >
    > > Device notification. While the packed ring solves some of the issues
    > > in the split ring, it also comes at a cost. In my view the two
    > > complement each other, however the required use of driver to device
    > > notifications in the packed ring for all driver to device transfers
    > > over PCIe (to handle the update granularity issue with Qwords as
    > > pointed out by Ilya on 14:th
    > > Jan) will limit performance (latency and throughput) in our experience.
    > > We want to use device polling.
    >
    > You can poll the descriptor for sure.
    >
    > I think you refer to this:
    >

    Not quite. On Jan 14 2018, Ilya Lesokhin, in his mail "[virtio-dev] PCIe
    ordering and new VIRTIO packed ring format", highlighted the following
    section on observed update granularity in PCIe rev 2.0 (paragraph 2.4.2):

    ## As an example of update granularity, if a host CPU writes a QWORD to host
    ## memory, a Requester reading that QWORD from host memory may observe
    ## a portion of the QWORD updated and another portion of it containing
    ## the old value.

    To which you on Jan 16 2018 responded:

    ## This is a very good point. This consideration is one of the reasons I included
    ## last valid descriptor in the driver notification. My guess would be that such
    ## hardware should never use driver event suppression. As a result, driver will
    ## always send notifications after each batch of descriptors. Device can use
    ## that to figure out which descriptors to fetch. Luckily, with pass-through
    ## device memory can be mapped directly into the VM, so the notification
    ## will not trigger a VM exit. It would be interesting to find out whether specific
    ## host systems give a stronger guarantee than what is required by the PCIE
    ## spec. If so we could add e.g. a feature bit to let the device know it's safe to
    ## read beyond the index supplied in the kick notification. Drivers would detect
    ## this and use it to reduce the overhead.

    As I understand it, the notification is required for safe operation, unless the
    host can be determined (how?) to uphold a stronger guarantee for update
    granularity than the PCIe specification requires.

    > As an example of update ordering, assume that the block of data is in host
    > memory, and a host CPU writes first to location A and then to a different
    > location B. A Requester reading that data block with a single read transaction
    > is not guaranteed to observe those updates in order. In other words, the
    > Requester may observe an updated value in location B and an old value in
    > location A, regardless of the placement of locations A and B within the data
    > block. Unless a Completer makes its own guarantees (outside this
    > specification) with respect to update ordering, a Requester that relies on
    > update ordering must observe the update to location B via one read
    > transaction before initiating a subsequent read to location A to return its
    > updated value.
    >
    > One question would be whether placing a memory barrier (such as sfence on
    > x86) after writing out A will guarantee update ordering.
    >
    > Do you know anything about it?
    >

    My knowledge of PCIe is not deep enough, but the PCIe update granularity
    should not be affected by barriers, since it relates to a single write.

    >
    >
    > > Btw, won't the notification add one extra cache miss for all TX over
    > > PCIe transport?
    >
    > It's a posted write, these are typically not cached.
    >
    > > > > By opting for such features, both driver and device acknowledge
    > > > > their willingness to accept reduced flexibility for improved
    > performance.
    > > > > Why not then make sure they get the biggest bang for their buck? I
    > > > > would expect up to 20% improvement over PCIe (virtio-net, single
    > > > > 64B packet), if the device does not have to write to
    > > > > virtq_used.ring[] on transmit, and bandwidth over PCI is a very precious
    > resource in e.g.
    > > > > virtual switch offload with east-west acceleration (for a
    > > > > discussion see Intel's white-paper 335625-001).
    > > >
    > > > Haven't looked at it yet but we also need to consider the
    > > > complexity, see below.
    > > >
    > > > > > Without device accesses ring will not be invalidated in cache so
    > > > > > no misses hopefully.
    > > > > >
    > > > > > > unless a NO_CHAIN feature has
    > > > > > > been negotiated.
    > > > > > > The IN_ORDER by itself has already eliminated the need to
    > > > > > > maintain the TX virtq_used.ring[], since the buffer order is
    > > > > > > always known by the driver.
    > > > > > > With a NO_CHAIN feature-bit both RX and TX virtq_avail.ring[]
    > > > > > > related cache-misses could be eliminated. I.e.
    > > > > > > looping a packet over a split virtqueue would just experience
    > > > > > > 7 driver cache misses, down from 10 in Virtio v1.0.
    > > > > > > Multi-element buffers would still be possible provided INDIRECT is
    > negotiated.
    > > > > >
    > > > > >
    > > > > > NO_CHAIN might be a valid optimization, it is just unfortunately
    > > > > > somewhat narrow in that devices that need to mix write and read
    > > > > > descriptors in the same ring (e.g. storage) can not use this feature.
    > > > > >
    > > > >
    > > > > Yes, if there was a way of making indirect buffers support it,
    > > > > that would be ideal. However I don't see how that can be done
    > > > > without inline headers in elements to hold their written length.
    > > >
    > > > Kind of like it's done with the packed ring?
    > > >
    > > > > At the same time storage would not be hurt by it even if they are
    > > > > unable to benefit from this particular optimization,
    > > >
    > > > It will be hurt if it uses shared code paths which potentially take
    > > > up more cache, or if bugs are introduced.
    > > >
    > > > > and as long as there is a substantial use case/space that benefit
    > > > > from an optimization, it ought to be
    > > > considered.
    > > > > I believe virtual switching offload with virtio-net devices over
    > > > > PCIe is such a key use-case.
    > > >
    > > > It looks like the packed ring addresses the need nicely, while being
    > > > device-independent.



  • 22.  [PATCH v10 04/13] content: move virtqueue operation description

    Posted 03-09-2018 21:24
    virtqueue operation description is specific to the virtqueue format. Move it out to split-ring.tex and update all references. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> --- conformance.tex 4 +- content.tex 171 +++------------------------------------------------- split-ring.tex 181 ++++++++++++++++++++++++++++++++++++++++++++++++++++++-- 3 files changed, 185 insertions(+), 171 deletions(-) diff --git a/conformance.tex b/conformance.tex index f59e360..55d17b4 100644 --- a/conformance.tex +++ b/conformance.tex @@ -40,9 +40,9 @@ A driver MUST conform to the following normative statements: item
    ef{drivernormative:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} item
    ef{drivernormative:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} item
    ef{drivernormative:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} +item
    ef{drivernormative:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Updating idx} +item
    ef{drivernormative:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Notifying The Device} item
    ef{drivernormative:General Initialization And Device Operation / Device Initialization} -item
    ef{drivernormative:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Updating idx} -item
    ef{drivernormative:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Notifying The Device} item
    ef{drivernormative:General Initialization And Device Operation / Device Cleanup} item
    ef{drivernormative:Reserved Feature Bits} end{itemize} diff --git a/content.tex b/content.tex index 5b4c4e9..3b4579e 100644 --- a/content.tex +++ b/content.tex @@ -337,167 +337,14 @@ And Device Operation / Device Initialization / Set DRIVER-OK}. section{Device Operation}label{sec:General Initialization And Device Operation / Device Operation} -There are two parts to device operation: supplying new buffers to -the device, and processing used buffers from the device. - -egin{note} As an -example, the simplest virtio network device has two virtqueues: the -transmit virtqueue and the receive virtqueue. The driver adds -outgoing (device-readable) packets to the transmit virtqueue, and then -frees them after they are used. Similarly, incoming (device-writable) -buffers are added to the receive virtqueue, and processed after -they are used. -end{note} - -subsection{Supplying Buffers to The Device}label{sec:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device} - -The driver offers buffers to one of the device's virtqueues as follows: - -egin{enumerate} -itemlabel{itm:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Place Buffers} The driver places the buffer into free descriptor(s) in the - descriptor table, chaining as necessary (see
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table}~
    ameref{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table}). - -itemlabel{itm:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Place Index} The driver places the index of the head of the descriptor chain - into the next ring entry of the available ring. - -item Steps
    ef{itm:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Place Buffers} and
    ef{itm:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Place Index} MAY be performed repeatedly if batching - is possible. - -item The driver performs suitable a memory barrier to ensure the device sees - the updated descriptor table and available ring before the next - step. - -item The available field{idx} is increased by the number of - descriptor chain heads added to the available ring. - -item The driver performs a suitable memory barrier to ensure that it updates - the field{idx} field before checking for notification suppression. - -item If notifications are not suppressed, the driver notifies the device - of the new available buffers. -end{enumerate} - -Note that the above code does not take precautions against the -available ring buffer wrapping around: this is not possible since -the ring buffer is the same size as the descriptor table, so step -(1) will prevent such a condition. - -In addition, the maximum queue size is 32768 (the highest power -of 2 which fits in 16 bits), so the 16-bit field{idx} value can always -distinguish between a full and empty buffer. +When operating the device, each field in the device configuration +space can be changed by either the driver or the device. -What follows is the requirements of each stage in more detail. - -subsubsection{Placing Buffers Into The Descriptor Table}label{sec:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Placing Buffers Into The Descriptor Table} - -A buffer consists of zero or more device-readable physically-contiguous -elements followed by zero or more physically-contiguous -device-writable elements (each has at least one element). 
This -algorithm maps it into the descriptor table to form a descriptor -chain: - -for each buffer element, b: - -egin{enumerate} -item Get the next free descriptor table entry, d -item Set field{d.addr} to the physical address of the start of b -item Set field{d.len} to the length of b. -item If b is device-writable, set field{d.flags} to VIRTQ_DESC_F_WRITE, - otherwise 0. -item If there is a buffer element after this: - egin{enumerate} - item Set field{d.next} to the index of the next free descriptor - element. - item Set the VIRTQ_DESC_F_NEXT bit in field{d.flags}. - end{enumerate} -end{enumerate} - -In practice, field{d.next} is usually used to chain free -descriptors, and a separate count kept to check there are enough -free descriptors before beginning the mappings. - -subsubsection{Updating The Available Ring}label{sec:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Updating The Available Ring} - -The descriptor chain head is the first d in the algorithm -above, ie. the index of the descriptor table entry referring to the first -part of the buffer. A naive driver implementation MAY do the following (with the -appropriate conversion to-and-from little-endian assumed): - -egin{lstlisting} -avail->ring[avail->idx % qsz] = head; -end{lstlisting} +Whenever such a configuration change is triggered by the device, +driver is notified. This makes it possible for drivers to +cache device configuration, avoiding expensive configuration +reads unless notified. 
-However, in general the driver MAY add many descriptor chains before it updates -field{idx} (at which point they become visible to the -device), so it is common to keep a counter of how many the driver has added: - -egin{lstlisting} -avail->ring[(avail->idx + added++) % qsz] = head; -end{lstlisting} - -subsubsection{Updating field{idx}}label{sec:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Updating idx} - -field{idx} always increments, and wraps naturally at -65536: - -egin{lstlisting} -avail->idx += added; -end{lstlisting} - -Once available field{idx} is updated by the driver, this exposes the -descriptor and its contents. The device MAY -access the descriptor chains the driver created and the -memory they refer to immediately. - -drivernormative{paragraph}{Updating idx}{General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Updating idx} -The driver MUST perform a suitable memory barrier before the field{idx} update, to ensure the -device sees the most up-to-date copy. - -subsubsection{Notifying The Device}label{sec:General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Notifying The Device} - -The actual method of device notification is bus-specific, but generally -it can be expensive. So the device MAY suppress such notifications if it -doesn't need them, as detailed in section
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression}. - -The driver has to be careful to expose the new field{idx} -value before checking if notifications are suppressed. - -drivernormative{paragraph}{Notifying The Device}{General Initialization And Device Operation / Device Operation / Supplying Buffers to The Device / Notifying The Device} -The driver MUST perform a suitable memory barrier before reading field{flags} or -field{avail_event}, to avoid missing a notification. - -subsection{Receiving Used Buffers From The Device}label{sec:General Initialization And Device Operation / Device Operation / Receiving Used Buffers From The Device} - -Once the device has used buffers referred to by a descriptor (read from or written to them, or -parts of both, depending on the nature of the virtqueue and the -device), it interrupts the driver as detailed in section
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression}. - -egin{note} -For optimal performance, a driver MAY disable interrupts while processing -the used ring, but beware the problem of missing interrupts between -emptying the ring and reenabling interrupts. This is usually handled by -re-checking for more used buffers after interrups are re-enabled: - -egin{lstlisting} -virtq_disable_interrupts(vq); - -for (;;) { - if (vq->last_seen_used != le16_to_cpu(virtq->used.idx)) { - virtq_enable_interrupts(vq); - mb(); - - if (vq->last_seen_used != le16_to_cpu(virtq->used.idx)) - break; - - virtq_disable_interrupts(vq); - } - - struct virtq_used_elem *e = virtq.used->ring[vq->last_seen_used%vsz]; - process_buffer(e); - vq->last_seen_used++; -} -end{lstlisting} -end{note} subsection{Notification of Device Configuration Changes}label{sec:General Initialization And Device Operation / Device Operation / Notification of Device Configuration Changes} @@ -3017,9 +2864,7 @@ If VIRTIO_NET_HDR_F_NEEDS_CSUM is not set, the device MUST NOT rely on the packet checksum being correct. paragraph{Packet Transmission Interrupt}label{sec:Device Types / Network Device / Device Operation / Packet Transmission / Packet Transmission Interrupt} -Often a driver will suppress transmission interrupts using the -VIRTQ_AVAIL_F_NO_INTERRUPT flag - (see
    ef{sec:General Initialization And Device Operation / Device Operation / Receiving Used Buffers From The Device}~
    ameref{sec:General Initialization And Device Operation / Device Operation / Receiving Used Buffers From The Device}) +Often a driver will suppress transmission virtqueue interrupts and check for used packets in the transmit path of following packets. @@ -3079,7 +2924,7 @@ if VIRTIO_NET_F_MRG_RXBUF is not negotiated.} When a packet is copied into a buffer in the receiveq, the optimal path is to disable further interrupts for the receiveq -(see
    ef{sec:General Initialization And Device Operation / Device Operation / Receiving Used Buffers From The Device}~
ameref{sec:General Initialization And Device Operation / Device Operation / Receiving Used Buffers From The Device}) and process +and process packets until no more are found, then re-enable them. Processing incoming packets involves: diff --git a/split-ring.tex b/split-ring.tex index 418f63d..404660b 100644 --- a/split-ring.tex +++ b/split-ring.tex @@ -1,11 +1,12 @@ section{Split Virtqueues}label{sec:Basic Facilities of a Virtio Device / Split Virtqueues} -The split virtqueue format is the original format used by legacy -virtio devices. The split virtqueue format separates the -virtqueue into several parts, where each part is write-able by -either the driver or the device, but not both. Multiple -locations need to be updated when making a buffer available -and when marking it as used. +The split virtqueue format was the only format supported +by version 1.0 (and earlier) of this standard. +The split virtqueue format separates the virtqueue into several +parts, where each part is write-able by either the driver or the +device, but not both. Multiple parts and/or locations within +a part need to be updated when making a buffer +available and when marking it as used. Each queue has a 16-bit queue size parameter, which sets the number of entries and implies the total size @@ -496,3 +497,171 @@ include/uapi/linux/virtio_ring.h. This was explicitly licensed by IBM and Red Hat under the (3-clause) BSD license so that it can be freely used by all other projects, and is reproduced (with slight variation) in
    ef{sec:virtio-queue.h}~
ameref{sec:virtio-queue.h}. + +subsection{Virtqueue Operation}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Operation} + +There are two parts to virtqueue operation: supplying new +available buffers to the device, and processing used buffers from +the device. + +egin{note} As an +example, the simplest virtio network device has two virtqueues: the +transmit virtqueue and the receive virtqueue. The driver adds +outgoing (device-readable) packets to the transmit virtqueue, and then +frees them after they are used. Similarly, incoming (device-writable) +buffers are added to the receive virtqueue, and processed after +they are used. +end{note} + +What follows describes the requirements of each of these two parts +in more detail when using the split virtqueue format. + +subsection{Supplying Buffers to The Device}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device} + +The driver offers buffers to one of the device's virtqueues as follows: + +egin{enumerate} +itemlabel{itm:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Place Buffers} The driver places the buffer into free descriptor(s) in the + descriptor table, chaining as necessary (see
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table}~
    ameref{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table}). + +itemlabel{itm:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Place Index} The driver places the index of the head of the descriptor chain + into the next ring entry of the available ring. + +item Steps
    ef{itm:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Place Buffers} and
ef{itm:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Place Index} MAY be performed repeatedly if batching + is possible. + +item The driver performs a suitable memory barrier to ensure the device sees + the updated descriptor table and available ring before the next + step. + +item The available field{idx} is increased by the number of + descriptor chain heads added to the available ring. + +item The driver performs a suitable memory barrier to ensure that it updates + the field{idx} field before checking for notification suppression. + +item If notifications are not suppressed, the driver notifies the device + of the new available buffers. +end{enumerate} + +Note that the above steps do not take precautions against the +available ring buffer wrapping around: this is not possible since +the ring buffer is the same size as the descriptor table, so step +(1) will prevent such a condition. + +In addition, the maximum queue size is 32768 (the highest power +of 2 which fits in 16 bits), so the 16-bit field{idx} value can always +distinguish between a full and empty buffer. + +What follows describes the requirements of each stage in more detail. + +subsubsection{Placing Buffers Into The Descriptor Table}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Placing Buffers Into The Descriptor Table} + +A buffer consists of zero or more device-readable physically-contiguous +elements followed by zero or more physically-contiguous +device-writable elements (each buffer has at least one element). This +algorithm maps it into the descriptor table to form a descriptor +chain: + +for each buffer element, b: + +egin{enumerate} +item Get the next free descriptor table entry, d +item Set field{d.addr} to the physical address of the start of b +item Set field{d.len} to the length of b. +item If b is device-writable, set field{d.flags} to VIRTQ_DESC_F_WRITE, + otherwise 0.
+item If there is a buffer element after this: + egin{enumerate} + item Set field{d.next} to the index of the next free descriptor + element. + item Set the VIRTQ_DESC_F_NEXT bit in field{d.flags}. + end{enumerate} +end{enumerate} + +In practice, field{d.next} is usually used to chain free +descriptors, and a separate count kept to check there are enough +free descriptors before beginning the mappings. + +subsubsection{Updating The Available Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Updating The Available Ring} + +The descriptor chain head is the first d in the algorithm +above, ie. the index of the descriptor table entry referring to the first +part of the buffer. A naive driver implementation MAY do the following (with the +appropriate conversion to-and-from little-endian assumed): + +egin{lstlisting} +avail->ring[avail->idx % qsz] = head; +end{lstlisting} + +However, in general the driver MAY add many descriptor chains before it updates +field{idx} (at which point they become visible to the +device), so it is common to keep a counter of how many the driver has added: + +egin{lstlisting} +avail->ring[(avail->idx + added++) % qsz] = head; +end{lstlisting} + +subsubsection{Updating field{idx}}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Updating idx} + +field{idx} always increments, and wraps naturally at +65536: + +egin{lstlisting} +avail->idx += added; +end{lstlisting} + +Once available field{idx} is updated by the driver, this exposes the +descriptor and its contents. The device MAY +access the descriptor chains the driver created and the +memory they refer to immediately. + +drivernormative{paragraph}{Updating idx}{Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Updating idx} +The driver MUST perform a suitable memory barrier before the field{idx} update, to ensure the +device sees the most up-to-date copy. 
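The "write ring entries, barrier, publish field{idx}" sequence of the last two subsections can be sketched in C. This is an illustrative sketch only: the fixed QSZ, the host-endian fields (le16 conversions omitted) and the wmb() macro are simplifying assumptions, not spec definitions.

```c
#include <assert.h>
#include <stdint.h>

#define QSZ 4  /* example queue size; always a power of 2 */

/* Host-endian stand-in for the available ring. */
struct virtq_avail {
    uint16_t flags;
    uint16_t idx;
    uint16_t ring[QSZ];
};

/* Illustrative write barrier; a real driver uses a platform primitive. */
#define wmb() __atomic_thread_fence(__ATOMIC_RELEASE)

/* Add 'added' descriptor chain heads, then expose them by updating idx. */
static void virtq_publish(struct virtq_avail *avail,
                          const uint16_t *heads, uint16_t added)
{
    for (uint16_t i = 0; i < added; i++)
        avail->ring[(uint16_t)(avail->idx + i) % QSZ] = heads[i];
    wmb();               /* ring entries must be visible before idx */
    avail->idx += added; /* free-running; wraps naturally at 65536 */
}
```

Note how field{idx} is free-running: the modulo by the queue size happens only when indexing the ring, never on the published counter itself.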
+ +subsubsection{Notifying The Device}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Notifying The Device} + +The actual method of device notification is bus-specific, but generally +it can be expensive. So the device MAY suppress such notifications if it +doesn't need them, as detailed in section
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression}. + +The driver has to be careful to expose the new field{idx} +value before checking if notifications are suppressed. + +drivernormative{paragraph}{Notifying The Device}{Basic Facilities of a Virtio Device / Virtqueues / Supplying Buffers to The Device / Notifying The Device} +The driver MUST perform a suitable memory barrier before reading field{flags} or +field{avail_event}, to avoid missing a notification. + +subsection{Receiving Used Buffers From The Device}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Receiving Used Buffers From The Device} + +Once the device has used buffers referred to by a descriptor (read from or written to them, or +parts of both, depending on the nature of the virtqueue and the +device), it interrupts the driver as detailed in section
ef{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression}. + +egin{note} +For optimal performance, a driver MAY disable interrupts while processing +the used ring, but beware the problem of missing interrupts between +emptying the ring and reenabling interrupts. This is usually handled by +re-checking for more used buffers after interrupts are re-enabled: + +egin{lstlisting} +virtq_disable_interrupts(vq); + +for (;;) { + if (vq->last_seen_used == le16_to_cpu(virtq->used.idx)) { + virtq_enable_interrupts(vq); + mb(); + + if (vq->last_seen_used == le16_to_cpu(virtq->used.idx)) + break; + + virtq_disable_interrupts(vq); + } + + struct virtq_used_elem *e = &virtq->used.ring[vq->last_seen_used % vsz]; + process_buffer(e); + vq->last_seen_used++; +} +end{lstlisting} +end{note} -- MST


  • 23.  [PATCH v10 03/13] content: move ring text out to a separate file

    Posted 03-09-2018 21:24
    Will be easier to manage this way. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> --- content.tex 499 +-------------------------------------------------------- split-ring.tex 498 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++ 2 files changed, 499 insertions(+), 498 deletions(-) create mode 100644 split-ring.tex diff --git a/content.tex b/content.tex index 4483a4b..5b4c4e9 100644 --- a/content.tex +++ b/content.tex @@ -244,504 +244,7 @@ a device event - i.e. send an interrupt to the driver. For queue operation detail, see
    ef{sec:Basic Facilities of a Virtio Device / Split Virtqueues}~
    ameref{sec:Basic Facilities of a Virtio Device / Split Virtqueues}. -section{Split Virtqueues}label{sec:Basic Facilities of a Virtio Device / Split Virtqueues} -The split virtqueue format is the original format used by legacy -virtio devices. The split virtqueue format separates the -virtqueue into several parts, where each part is write-able by -either the driver or the device, but not both. Multiple -locations need to be updated when making a buffer available -and when marking it as used. - - -Each queue has a 16-bit queue size -parameter, which sets the number of entries and implies the total size -of the queue. - -Each virtqueue consists of three parts: - -egin{itemize} -item Descriptor Table -item Available Ring -item Used Ring -end{itemize} - -where each part is physically-contiguous in guest memory, -and has different alignment requirements. - -The memory aligment and size requirements, in bytes, of each part of the -virtqueue are summarized in the following table: - -egin{tabular}{ l l l } -hline -Virtqueue Part & Alignment & Size \ -hline hline -Descriptor Table & 16 & $16 * $(Queue Size) \ -hline -Available Ring & 2 & $6 + 2 * $(Queue Size) \ - hline -Used Ring & 4 & $6 + 8 * $(Queue Size) \ - hline -end{tabular} - -The Alignment column gives the minimum alignment for each part -of the virtqueue. - -The Size column gives the total number of bytes for each -part of the virtqueue. - -Queue Size corresponds to the maximum number of buffers in the -virtqueuefootnote{For example, if Queue Size is 4 then at most 4 buffers -can be queued at any given time.}. Queue Size value is always a -power of 2. The maximum Queue Size value is 32768. This value -is specified in a bus-specific way. - -When the driver wants to send a buffer to the device, it fills in -a slot in the descriptor table (or chains several together), and -writes the descriptor index into the available ring. It then -notifies the device. 
When the device has finished a buffer, it -writes the descriptor index into the used ring, and sends an interrupt. - -drivernormative{subsection}{Virtqueues}{Basic Facilities of a Virtio Device / Virtqueues} -The driver MUST ensure that the physical address of the first byte -of each virtqueue part is a multiple of the specified alignment value -in the above table. - -subsection{Legacy Interfaces: A Note on Virtqueue Layout}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout} - -For Legacy Interfaces, several additional -restrictions are placed on the virtqueue layout: - -Each virtqueue occupies two or more physically-contiguous pages -(usually defined as 4096 bytes, but depending on the transport; -henceforth referred to as Queue Align) -and consists of three parts: - -egin{tabular}{ l l l } -hline -Descriptor Table & Available Ring (ldots paddingldots) & Used Ring \ -hline -end{tabular} - -The bus-specific Queue Size field controls the total number of bytes -for the virtqueue. -When using the legacy interface, the transitional -driver MUST retrieve the Queue Size field from the device -and MUST allocate the total number of bytes for the virtqueue -according to the following formula (Queue Align given in qalign and -Queue Size given in qsz): - -egin{lstlisting} -#define ALIGN(x) (((x) + qalign) & ~qalign) -static inline unsigned virtq_size(unsigned int qsz) -{ - return ALIGN(sizeof(struct virtq_desc)*qsz + sizeof(u16)*(3 + qsz)) - + ALIGN(sizeof(u16)*3 + sizeof(struct virtq_used_elem)*qsz); -} -end{lstlisting} - -This wastes some space with padding. -When using the legacy interface, both transitional -devices and drivers MUST use the following virtqueue layout -structure to locate elements of the virtqueue: - -egin{lstlisting} -struct virtq { - // The actual descriptors (16 bytes each) - struct virtq_desc desc[ Queue Size ]; - - // A ring of available descriptor heads with free-running index. 
- struct virtq_avail avail; - - // Padding to the next Queue Align boundary. - u8 pad[ Padding ]; - - // A ring of used descriptor heads with free-running index. - struct virtq_used used; -}; -end{lstlisting} - -subsection{Legacy Interfaces: A Note on Virtqueue Endianness}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Endianness} - -Note that when using the legacy interface, transitional -devices and drivers MUST use the native -endian of the guest as the endian of fields and in the virtqueue. -This is opposed to little-endian for non-legacy interface as -specified by this standard. -It is assumed that the host is already aware of the guest endian. - -subsection{Message Framing}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing} -The framing of messages with descriptors is -independent of the contents of the buffers. For example, a network -transmit buffer consists of a 12 byte header followed by the network -packet. This could be most simply placed in the descriptor table as a -12 byte output descriptor followed by a 1514 byte output descriptor, -but it could also consist of a single 1526 byte output descriptor in -the case where the header and packet are adjacent, or even three or -more descriptors (possibly with loss of efficiency in that case). - -Note that, some device implementations have large-but-reasonable -restrictions on total descriptor size (such as based on IOV_MAX in the -host OS). This has not been a problem in practice: little sympathy -will be given to drivers which create unreasonably-sized descriptors -such as by dividing a network packet into 1500 single-byte -descriptors! - -devicenormative{subsubsection}{Message Framing}{Basic Facilities of a Virtio Device / Message Framing} -The device MUST NOT make assumptions about the particular arrangement -of descriptors. The device MAY have a reasonable limit of descriptors -it will allow in a chain. 
- -drivernormative{subsubsection}{Message Framing}{Basic Facilities of a Virtio Device / Message Framing} -The driver MUST place any device-writable descriptor elements after -any device-readable descriptor elements. - -The driver SHOULD NOT use an excessive number of descriptors to -describe a buffer. - -subsubsection{Legacy Interface: Message Framing}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing / Legacy Interface: Message Framing} - -Regrettably, initial driver implementations used simple layouts, and -devices came to rely on it, despite this specification wording. In -addition, the specification for virtio_blk SCSI commands required -intuiting field lengths from frame boundaries (see -
    ef{sec:Device Types / Block Device / Device Operation / Legacy Interface: Device Operation}~
    ameref{sec:Device Types / Block Device / Device Operation / Legacy Interface: Device Operation}) - -Thus when using the legacy interface, the VIRTIO_F_ANY_LAYOUT -feature indicates to both the device and the driver that no -assumptions were made about framing. Requirements for -transitional drivers when this is not negotiated are included in -each device section. - -subsection{The Virtqueue Descriptor Table}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table} - -The descriptor table refers to the buffers the driver is using for -the device. field{addr} is a physical address, and the buffers -can be chained via field{next}. Each descriptor describes a -buffer which is read-only for the device (``device-readable'') or write-only for the device (``device-writable''), but a chain of -descriptors can contain both device-readable and device-writable buffers. - -The actual contents of the memory offered to the device depends on the -device type. Most common is to begin the data with a header -(containing little-endian fields) for the device to read, and postfix -it with a status tailer for the device to write. - -egin{lstlisting} -struct virtq_desc { - /* Address (guest-physical). */ - le64 addr; - /* Length. */ - le32 len; - -/* This marks a buffer as continuing via the next field. */ -#define VIRTQ_DESC_F_NEXT 1 -/* This marks a buffer as device write-only (otherwise device read-only). */ -#define VIRTQ_DESC_F_WRITE 2 -/* This means the buffer contains a list of buffer descriptors. */ -#define VIRTQ_DESC_F_INDIRECT 4 - /* The flags as indicated above. */ - le16 flags; - /* Next field if flags & NEXT */ - le16 next; -}; -end{lstlisting} - -The number of descriptors in the table is defined by the queue size -for this virtqueue: this is the maximum possible descriptor chain length. 
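As an illustration of the structure above, a driver might lay out the common header-plus-status pattern as a two-descriptor chain. This is a sketch: the addresses are invented guest-physical values, and host-endian integer types stand in for the spec's little-endian le64/le32/le16 fields.

```c
#include <assert.h>
#include <stdint.h>

#define VIRTQ_DESC_F_NEXT  1
#define VIRTQ_DESC_F_WRITE 2

/* Host-endian stand-in for struct virtq_desc. */
struct virtq_desc {
    uint64_t addr;  /* guest-physical buffer address */
    uint32_t len;
    uint16_t flags;
    uint16_t next;
};

/* Build a chain: a 16-byte device-readable header at descriptor 0,
 * chained to a 1-byte device-writable status at descriptor 1.
 * Addresses are made-up values for illustration only. */
static void build_two_element_chain(struct virtq_desc *desc)
{
    desc[0].addr  = 0x10000;
    desc[0].len   = 16;
    desc[0].flags = VIRTQ_DESC_F_NEXT;  /* device-readable, chain continues */
    desc[0].next  = 1;

    desc[1].addr  = 0x20000;
    desc[1].len   = 1;
    desc[1].flags = VIRTQ_DESC_F_WRITE; /* device-writable, ends the chain */
    desc[1].next  = 0;                  /* ignored when F_NEXT is not set */
}
```

The chain head (index 0 here) is what the driver later writes into the available ring.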
- -egin{note} -The legacy hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]} -referred to this structure as vring_desc, and the constants as -VRING_DESC_F_NEXT, etc, but the layout and values were identical. -end{note} - -devicenormative{subsubsection}{The Virtqueue Descriptor Table}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table} -A device MUST NOT write to a device-readable buffer, and a device SHOULD NOT -read a device-writable buffer (it MAY do so for debugging or diagnostic -purposes). - -drivernormative{subsubsection}{The Virtqueue Descriptor Table}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table} -Drivers MUST NOT add a descriptor chain more than $2^{32}$ bytes long in total; -this implies that loops in the descriptor chain are forbidden! - -subsubsection{Indirect Descriptors}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors} - -Some devices benefit by concurrently dispatching a large number -of large requests. The VIRTIO_F_INDIRECT_DESC feature allows this (see
    ef{sec:virtio-queue.h}~
    ameref{sec:virtio-queue.h}). To increase -ring capacity the driver can store a table of indirect -descriptors anywhere in memory, and insert a descriptor in main -virtqueue (with field{flags}&VIRTQ_DESC_F_INDIRECT on) that refers to memory buffer -containing this indirect descriptor table; field{addr} and field{len} -refer to the indirect table address and length in bytes, -respectively. - -The indirect table layout structure looks like this -(field{len} is the length of the descriptor that refers to this table, -which is a variable, so this code won't compile): - -egin{lstlisting} -struct indirect_descriptor_table { - /* The actual descriptors (16 bytes each) */ - struct virtq_desc desc[len / 16]; -}; -end{lstlisting} - -The first indirect descriptor is located at start of the indirect -descriptor table (index 0), additional indirect descriptors are -chained by field{next}. An indirect descriptor without a valid field{next} -(with field{flags}&VIRTQ_DESC_F_NEXT off) signals the end of the descriptor. -A single indirect descriptor -table can include both device-readable and device-writable descriptors. - -drivernormative{paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors} -The driver MUST NOT set the VIRTQ_DESC_F_INDIRECT flag unless the -VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST NOT -set the VIRTQ_DESC_F_INDIRECT flag within an indirect descriptor (ie. only -one table per descriptor). - -A driver MUST NOT create a descriptor chain longer than the Queue Size of -the device. - -A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and VIRTQ_DESC_F_NEXT -in field{flags}. - -devicenormative{paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors} -The device MUST ignore the write-only flag (field{flags}&VIRTQ_DESC_F_WRITE) in the descriptor that refers to an indirect table. 
- -The device MUST handle the case of zero or more normal chained -descriptors followed by a single descriptor with field{flags}&VIRTQ_DESC_F_INDIRECT. - -egin{note} -While unusual (most implementations either create a chain solely using -non-indirect descriptors, or use a single indirect element), such a -layout is valid. -end{note} - -subsection{The Virtqueue Available Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Available Ring} - -egin{lstlisting} -struct virtq_avail { -#define VIRTQ_AVAIL_F_NO_INTERRUPT 1 - le16 flags; - le16 idx; - le16 ring[ /* Queue Size */ ]; - le16 used_event; /* Only if VIRTIO_F_EVENT_IDX */ -}; -end{lstlisting} - -The driver uses the available ring to offer buffers to the -device: each ring entry refers to the head of a descriptor chain. It is only -written by the driver and read by the device. - -field{idx} field indicates where the driver would put the next descriptor -entry in the ring (modulo the queue size). This starts at 0, and increases. - -egin{note} -The legacy hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]} -referred to this structure as vring_avail, and the constant as -VRING_AVAIL_F_NO_INTERRUPT, but the layout and value were identical. -end{note} - -subsection{Virtqueue Interrupt Suppression}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} - -If the VIRTIO_F_EVENT_IDX feature bit is not negotiated, -the field{flags} field in the available ring offers a crude mechanism for the driver to inform -the device that it doesn't want interrupts when buffers are used. Otherwise -field{used_event} is a more performant alternative where the driver -specifies how far the device can progress before interrupting. - -Neither of these interrupt suppression methods are reliable, as they -are not synchronized with the device, but they serve as -useful optimizations. 
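With VIRTIO_F_EVENT_IDX, the "has the event index been crossed" decision can be written as a single modulo-2^16 range check. The helper below follows the vring_need_event() logic from the Linux include/uapi/linux/virtio_ring.h header that this section later references; it is reproduced here as a sketch rather than as normative text.

```c
#include <assert.h>
#include <stdint.h>

/* Returns nonzero if an event (interrupt or notification) is needed after
 * the ring index moved from old_idx to new_idx, given the other side's
 * event index. All arithmetic is modulo 2^16, matching the 16-bit idx. */
static int vring_need_event(uint16_t event_idx, uint16_t new_idx,
                            uint16_t old_idx)
{
    return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old_idx);
}
```

The device would call this with field{used_event} and the used field{idx} before and after an update; the driver, symmetrically, with field{avail_event} and the available field{idx}.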
- -drivernormative{subsubsection}{Virtqueue Interrupt Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} -If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: -egin{itemize} -item The driver MUST set field{flags} to 0 or 1. -item The driver MAY set field{flags} to 1 to advise -the device that interrupts are not needed. -end{itemize} - -Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: -egin{itemize} -item The driver MUST set field{flags} to 0. -item The driver MAY use field{used_event} to advise the device that interrupts are unnecessary until the device writes entry with an index specified by field{used_event} into the used ring (equivalently, until field{idx} in the -used ring will reach the value field{used_event} + 1). -end{itemize} - -The driver MUST handle spurious interrupts from the device. - -devicenormative{subsubsection}{Virtqueue Interrupt Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} - -If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: -egin{itemize} -item The device MUST ignore the field{used_event} value. -item After the device writes a descriptor index into the used ring: - egin{itemize} - item If field{flags} is 1, the device SHOULD NOT send an interrupt. - item If field{flags} is 0, the device MUST send an interrupt. - end{itemize} -end{itemize} - -Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: -egin{itemize} -item The device MUST ignore the lower bit of field{flags}. -item After the device writes a descriptor index into the used ring: - egin{itemize} - item If the field{idx} field in the used ring (which determined - where that descriptor index was placed) was equal to - field{used_event}, the device MUST send an interrupt. - item Otherwise the device SHOULD NOT send an interrupt. 
- end{itemize} -end{itemize} - -egin{note} -For example, if field{used_event} is 0, then a device using - VIRTIO_F_EVENT_IDX would interrupt after the first buffer is - used (and again after the 65536th buffer, etc). -end{note} - -subsection{The Virtqueue Used Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} - -egin{lstlisting} -struct virtq_used { -#define VIRTQ_USED_F_NO_NOTIFY 1 - le16 flags; - le16 idx; - struct virtq_used_elem ring[ /* Queue Size */]; - le16 avail_event; /* Only if VIRTIO_F_EVENT_IDX */ -}; - -/* le32 is used here for ids for padding reasons. */ -struct virtq_used_elem { - /* Index of start of used descriptor chain. */ - le32 id; - /* Total length of the descriptor chain which was used (written to) */ - le32 len; -}; -end{lstlisting} - -The used ring is where the device returns buffers once it is done with -them: it is only written to by the device, and read by the driver. - -Each entry in the ring is a pair: field{id} indicates the head entry of the -descriptor chain describing the buffer (this matches an entry -placed in the available ring by the guest earlier), and field{len} the total -number of bytes written into the buffer. - -egin{note} -field{len} is particularly useful -for drivers using untrusted buffers: if a driver does not know exactly -how much has been written by the device, the driver would have to zero -the buffer in advance to ensure no data leakage occurs. - -For example, a network driver may hand a received buffer directly to -an unprivileged userspace application. If the network device has not -overwritten the bytes which were in that buffer, this could leak the -contents of freed memory from other processes to the application. -end{note} - -field{idx} field indicates where the device would put the next descriptor -entry in the ring (modulo the queue size). This starts at 0, and increases.
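On the device side, returning a buffer mirrors the driver's available-ring update: write the element, then publish field{idx} after a write barrier, with field{len} set before field{idx} becomes visible. The fixed QSZ, host-endian fields and wmb() macro below are illustrative assumptions, not spec definitions.

```c
#include <assert.h>
#include <stdint.h>

#define QSZ 4  /* example queue size */

struct virtq_used_elem {
    uint32_t id;   /* head of the used descriptor chain */
    uint32_t len;  /* bytes the device wrote into the buffer */
};

/* Host-endian stand-in for the used ring (le16/le32 conversions omitted). */
struct virtq_used {
    uint16_t flags;
    uint16_t idx;
    struct virtq_used_elem ring[QSZ];
};

/* Illustrative write barrier; a real implementation uses a platform
 * primitive. */
#define wmb() __atomic_thread_fence(__ATOMIC_RELEASE)

/* Device side: return one descriptor chain to the driver. */
static void virtq_push_used(struct virtq_used *used,
                            uint32_t head, uint32_t len)
{
    struct virtq_used_elem *e = &used->ring[used->idx % QSZ];
    e->id  = head;
    e->len = len;  /* len is written before idx is published */
    wmb();
    used->idx++;   /* free-running 16-bit index */
}
```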
- -egin{note} -The legacy hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]} -referred to these structures as vring_used and vring_used_elem, and -the constant as VRING_USED_F_NO_NOTIFY, but the layout and value were -identical. -end{note} - -subsubsection{Legacy Interface: The Virtqueue Used -Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues -/ The Virtqueue Used Ring/ Legacy Interface: The Virtqueue Used -Ring} - -Historically, many drivers ignored the field{len} value, as a -result, many devices set field{len} incorrectly. Thus, when -using the legacy interface, it is generally a good idea to ignore -the field{len} value in used ring entries if possible. Specific -known issues are listed per device type. - -devicenormative{subsubsection}{The Virtqueue Used Ring}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} - -The device MUST set field{len} prior to updating the used field{idx}. - -The device MUST write at least field{len} bytes to descriptor, -beginning at the first device-writable buffer, -prior to updating the used field{idx}. - -The device MAY write more than field{len} bytes to descriptor. - -egin{note} -There are potential error cases where a device might not know what -parts of the buffers have been written. This is why field{len} is -permitted to be an underestimate: that's preferable to the driver believing -that uninitialized memory has been overwritten when it has not. -end{note} - -drivernormative{subsubsection}{The Virtqueue Used Ring}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} - -The driver MUST NOT make assumptions about data in device-writable buffers -beyond the first field{len} bytes, and SHOULD ignore this data. 
- -subsection{Virtqueue Notification Suppression}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} - -The device can suppress notifications in a manner analogous to the way -drivers can suppress interrupts as detailed in section
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression}. -The device manipulates field{flags} or field{avail_event} in the used ring the -same way the driver manipulates field{flags} or field{used_event} in the available ring. - -drivernormative{subsubsection}{Virtqueue Notification Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} - -The driver MUST initialize field{flags} in the used ring to 0 when -allocating the used ring. - -If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: -egin{itemize} -item The driver MUST ignore the field{avail_event} value. -item After the driver writes a descriptor index into the available ring: - egin{itemize} - item If field{flags} is 1, the driver SHOULD NOT send a notification. - item If field{flags} is 0, the driver MUST send a notification. - end{itemize} -end{itemize} - -Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: -egin{itemize} -item The driver MUST ignore the lower bit of field{flags}. -item After the driver writes a descriptor index into the available ring: - egin{itemize} - item If the field{idx} field in the available ring (which determined - where that descriptor index was placed) was equal to - field{avail_event}, the driver MUST send a notification. - item Otherwise the driver SHOULD NOT send a notification. - end{itemize} -end{itemize} - -devicenormative{subsubsection}{Virtqueue Notification Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} -If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: -egin{itemize} -item The device MUST set field{flags} to 0 or 1. -item The device MAY set field{flags} to 1 to advise -the driver that notifications are not needed. -end{itemize} - -Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: -egin{itemize} -item The device MUST set field{flags} to 0. 
-item The device MAY use field{avail_event} to advise the driver that notifications are unnecessary until the driver writes entry with an index specified by field{avail_event} into the available ring (equivalently, until field{idx} in the -available ring will reach the value field{avail_event} + 1). -end{itemize} - -The device MUST handle spurious notifications from the driver. - -subsection{Helpers for Operating Virtqueues}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Helpers for Operating Virtqueues} - -The Linux Kernel Source code contains the definitions above and -helper routines in a more usable form, in -include/uapi/linux/virtio_ring.h. This was explicitly licensed by IBM -and Red Hat under the (3-clause) BSD license so that it can be -freely used by all other projects, and is reproduced (with slight -variation) in
    ef{sec:virtio-queue.h}~
    ameref{sec:virtio-queue.h}. +input{split-ring.tex} chapter{General Initialization And Device Operation}label{sec:General Initialization And Device Operation} diff --git a/split-ring.tex b/split-ring.tex new file mode 100644 index 0000000..418f63d --- /dev/null +++ b/split-ring.tex @@ -0,0 +1,498 @@ +section{Split Virtqueues}label{sec:Basic Facilities of a Virtio Device / Split Virtqueues} +The split virtqueue format is the original format used by legacy +virtio devices. It separates the +virtqueue into several parts, where each part is writable by +either the driver or the device, but not both. Multiple +locations need to be updated when making a buffer available +and when marking it as used. + + +Each queue has a 16-bit queue size +parameter, which sets the number of entries and implies the total size +of the queue. + +Each virtqueue consists of three parts: + +egin{itemize} +item Descriptor Table +item Available Ring +item Used Ring +end{itemize} + +where each part is physically-contiguous in guest memory, +and has different alignment requirements. + +The memory alignment and size requirements, in bytes, of each part of the +virtqueue are summarized in the following table: + +egin{tabular}{ l l l } +hline +Virtqueue Part & Alignment & Size \ +hline hline +Descriptor Table & 16 & $16 * $(Queue Size) \ +hline +Available Ring & 2 & $6 + 2 * $(Queue Size) \ + hline +Used Ring & 4 & $6 + 8 * $(Queue Size) \ + hline +end{tabular} + +The Alignment column gives the minimum alignment for each part +of the virtqueue. + +The Size column gives the total number of bytes for each +part of the virtqueue. + +Queue Size corresponds to the maximum number of buffers in the +virtqueuefootnote{For example, if Queue Size is 4 then at most 4 buffers +can be queued at any given time.}. The Queue Size value is always a +power of 2. The maximum Queue Size value is 32768. This value +is specified in a bus-specific way.
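Not spec text, just a reviewer's sanity check: the size column of the table above translates directly into the following arithmetic (function names are illustrative):

```c
#include <stdint.h>

/* Sizes in bytes of the three split-virtqueue parts, per the table
 * above.  qsz is the Queue Size (a power of 2, at most 32768). */
static inline uint32_t desc_table_size(uint16_t qsz) { return 16 * (uint32_t)qsz; }
static inline uint32_t avail_ring_size(uint16_t qsz) { return 6 + 2 * (uint32_t)qsz; }
static inline uint32_t used_ring_size(uint16_t qsz)  { return 6 + 8 * (uint32_t)qsz; }
```

The 6-byte constants are flags + idx plus the trailing event field; the per-entry costs are 2 bytes (avail) and 8 bytes (used element).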
+ +When the driver wants to send a buffer to the device, it fills in +a slot in the descriptor table (or chains several together), and +writes the descriptor index into the available ring. It then +notifies the device. When the device has finished a buffer, it +writes the descriptor index into the used ring, and sends an interrupt. + +drivernormative{subsection}{Virtqueues}{Basic Facilities of a Virtio Device / Virtqueues} +The driver MUST ensure that the physical address of the first byte +of each virtqueue part is a multiple of the specified alignment value +in the above table. + +subsection{Legacy Interfaces: A Note on Virtqueue Layout}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Layout} + +For Legacy Interfaces, several additional +restrictions are placed on the virtqueue layout: + +Each virtqueue occupies two or more physically-contiguous pages +(usually defined as 4096 bytes, but depending on the transport; +henceforth referred to as Queue Align) +and consists of three parts: + +egin{tabular}{ l l l } +hline +Descriptor Table & Available Ring (ldots paddingldots) & Used Ring \ +hline +end{tabular} + +The bus-specific Queue Size field controls the total number of bytes +for the virtqueue. +When using the legacy interface, the transitional +driver MUST retrieve the Queue Size field from the device +and MUST allocate the total number of bytes for the virtqueue +according to the following formula (Queue Align given in qalign and +Queue Size given in qsz): + +egin{lstlisting} +#define ALIGN(x) (((x) + qalign) & ~qalign) +static inline unsigned virtq_size(unsigned int qsz) +{ + return ALIGN(sizeof(struct virtq_desc)*qsz + sizeof(u16)*(3 + qsz)) + + ALIGN(sizeof(u16)*3 + sizeof(struct virtq_used_elem)*qsz); +} +end{lstlisting} + +This wastes some space with padding. 
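Reading the ALIGN macro above as "round up to a multiple of Queue Align" (i.e. treating qalign as a power of two), the legacy size formula can be sanity-checked with this stand-alone re-implementation; 16 and 8 stand in for sizeof(struct virtq_desc) and sizeof(struct virtq_used_elem):

```c
#include <stdint.h>

/* Round x up to a multiple of qalign; assumes qalign is a power of 2.
 * This is the round-up the spec's ALIGN macro appears to intend. */
static inline uint32_t align_up(uint32_t x, uint32_t qalign)
{
    return (x + qalign - 1) & ~(qalign - 1);
}

/* Legacy layout total: descriptor table plus available ring, padded
 * to Queue Align, followed by the used ring. */
static inline uint32_t virtq_size(uint32_t qsz, uint32_t qalign)
{
    return align_up(16 * qsz + 2 * (3 + qsz), qalign)   /* desc + avail */
         + align_up(2 * 3 + 8 * qsz, qalign);           /* used */
}
```

For example, a 256-entry queue with 4096-byte Queue Align needs 12288 bytes: 4096 + 518 rounded up to 8192, plus 2054 rounded up to 4096.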
+When using the legacy interface, both transitional +devices and drivers MUST use the following virtqueue layout +structure to locate elements of the virtqueue: + +egin{lstlisting} +struct virtq { + // The actual descriptors (16 bytes each) + struct virtq_desc desc[ Queue Size ]; + + // A ring of available descriptor heads with free-running index. + struct virtq_avail avail; + + // Padding to the next Queue Align boundary. + u8 pad[ Padding ]; + + // A ring of used descriptor heads with free-running index. + struct virtq_used used; +}; +end{lstlisting} + +subsection{Legacy Interfaces: A Note on Virtqueue Endianness}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Legacy Interfaces: A Note on Virtqueue Endianness} + +Note that when using the legacy interface, transitional +devices and drivers MUST use the native +endian of the guest for the fields in the virtqueue. +This is in contrast to the little-endian fields of the non-legacy +interface specified by this standard. +It is assumed that the host is already aware of the guest endian. + +subsection{Message Framing}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing} +The framing of messages with descriptors is +independent of the contents of the buffers. For example, a network +transmit buffer consists of a 12 byte header followed by the network +packet. This could be most simply placed in the descriptor table as a +12 byte output descriptor followed by a 1514 byte output descriptor, +but it could also consist of a single 1526 byte output descriptor in +the case where the header and packet are adjacent, or even three or +more descriptors (possibly with loss of efficiency in that case). + +Note that some device implementations have large-but-reasonable +restrictions on total descriptor size (such as based on IOV_MAX in the +host OS).
This has not been a problem in practice: little sympathy +will be given to drivers which create unreasonably-sized descriptors +such as by dividing a network packet into 1500 single-byte +descriptors! + +devicenormative{subsubsection}{Message Framing}{Basic Facilities of a Virtio Device / Message Framing} +The device MUST NOT make assumptions about the particular arrangement +of descriptors. The device MAY have a reasonable limit of descriptors +it will allow in a chain. + +drivernormative{subsubsection}{Message Framing}{Basic Facilities of a Virtio Device / Message Framing} +The driver MUST place any device-writable descriptor elements after +any device-readable descriptor elements. + +The driver SHOULD NOT use an excessive number of descriptors to +describe a buffer. + +subsubsection{Legacy Interface: Message Framing}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Message Framing / Legacy Interface: Message Framing} + +Regrettably, initial driver implementations used simple layouts, and +devices came to rely on it, despite this specification wording. In +addition, the specification for virtio_blk SCSI commands required +intuiting field lengths from frame boundaries (see +
    ef{sec:Device Types / Block Device / Device Operation / Legacy Interface: Device Operation}~
    ameref{sec:Device Types / Block Device / Device Operation / Legacy Interface: Device Operation}) + +Thus when using the legacy interface, the VIRTIO_F_ANY_LAYOUT +feature indicates to both the device and the driver that no +assumptions were made about framing. Requirements for +transitional drivers when this is not negotiated are included in +each device section. + +subsection{The Virtqueue Descriptor Table}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table} + +The descriptor table refers to the buffers the driver is using for +the device. field{addr} is a physical address, and the buffers +can be chained via field{next}. Each descriptor describes a +buffer which is read-only for the device (``device-readable'') or write-only for the device (``device-writable''), but a chain of +descriptors can contain both device-readable and device-writable buffers. + +The actual contents of the memory offered to the device depends on the +device type. Most common is to begin the data with a header +(containing little-endian fields) for the device to read, and postfix +it with a status tailer for the device to write. + +egin{lstlisting} +struct virtq_desc { + /* Address (guest-physical). */ + le64 addr; + /* Length. */ + le32 len; + +/* This marks a buffer as continuing via the next field. */ +#define VIRTQ_DESC_F_NEXT 1 +/* This marks a buffer as device write-only (otherwise device read-only). */ +#define VIRTQ_DESC_F_WRITE 2 +/* This means the buffer contains a list of buffer descriptors. */ +#define VIRTQ_DESC_F_INDIRECT 4 + /* The flags as indicated above. */ + le16 flags; + /* Next field if flags & NEXT */ + le16 next; +}; +end{lstlisting} + +The number of descriptors in the table is defined by the queue size +for this virtqueue: this is the maximum possible descriptor chain length. 
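To make the descriptor table above concrete, here is a sketch (not spec text) of a driver filling in a two-descriptor chain, a device-readable header followed by a device-writable buffer; host-endian integers stand in for the le* on-wire types for brevity:

```c
#include <stdint.h>

#define VIRTQ_DESC_F_NEXT  1
#define VIRTQ_DESC_F_WRITE 2

/* Host-endian stand-in for the little-endian struct virtq_desc. */
struct virtq_desc {
    uint64_t addr;   /* guest-physical address of the buffer */
    uint32_t len;    /* length in bytes */
    uint16_t flags;
    uint16_t next;   /* valid only when flags & VIRTQ_DESC_F_NEXT */
};

/* Chain a 12-byte device-readable header descriptor to a
 * device-writable payload descriptor; returns the head index. */
static uint16_t chain_two(struct virtq_desc *table,
                          uint64_t hdr_pa, uint64_t buf_pa, uint32_t buf_len)
{
    table[0] = (struct virtq_desc){ hdr_pa, 12, VIRTQ_DESC_F_NEXT, 1 };
    table[1] = (struct virtq_desc){ buf_pa, buf_len, VIRTQ_DESC_F_WRITE, 0 };
    return 0;
}
```

The head index (0 here) is what the driver would then publish in the available ring.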
+ +egin{note} +The legacy hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]} +referred to this structure as vring_desc, and the constants as +VRING_DESC_F_NEXT, etc, but the layout and values were identical. +end{note} + +devicenormative{subsubsection}{The Virtqueue Descriptor Table}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table} +A device MUST NOT write to a device-readable buffer, and a device SHOULD NOT +read a device-writable buffer (it MAY do so for debugging or diagnostic +purposes). + +drivernormative{subsubsection}{The Virtqueue Descriptor Table}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table} +Drivers MUST NOT add a descriptor chain more than $2^{32}$ bytes long in total; +this implies that loops in the descriptor chain are forbidden! + +subsubsection{Indirect Descriptors}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors} + +Some devices benefit by concurrently dispatching a large number +of large requests. The VIRTIO_F_INDIRECT_DESC feature allows this (see
    ef{sec:virtio-queue.h}~
    ameref{sec:virtio-queue.h}). To increase +ring capacity the driver can store a table of indirect +descriptors anywhere in memory, and insert a descriptor in the main +virtqueue (with field{flags}&VIRTQ_DESC_F_INDIRECT on) that refers to a memory buffer +containing this indirect descriptor table; field{addr} and field{len} +refer to the indirect table address and length in bytes, +respectively. + +The indirect table layout structure looks like this +(field{len} here is the field{len} value of the descriptor that refers +to this table, which is a variable, so this code won't compile): + +egin{lstlisting} +struct indirect_descriptor_table { + /* The actual descriptors (16 bytes each) */ + struct virtq_desc desc[len / 16]; +}; +end{lstlisting} + +The first indirect descriptor is located at the start of the indirect +descriptor table (index 0); additional indirect descriptors are +chained by field{next}. An indirect descriptor without a valid field{next} +(with field{flags}&VIRTQ_DESC_F_NEXT off) signals the end of the descriptor chain. +A single indirect descriptor +table can include both device-readable and device-writable descriptors. + +drivernormative{paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors} +The driver MUST NOT set the VIRTQ_DESC_F_INDIRECT flag unless the +VIRTIO_F_INDIRECT_DESC feature was negotiated. The driver MUST NOT +set the VIRTQ_DESC_F_INDIRECT flag within an indirect descriptor (i.e. only +one table per descriptor). + +A driver MUST NOT create a descriptor chain longer than the Queue Size of +the device. + +A driver MUST NOT set both VIRTQ_DESC_F_INDIRECT and VIRTQ_DESC_F_NEXT +in field{flags}. + +devicenormative{paragraph}{Indirect Descriptors}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Descriptor Table / Indirect Descriptors} +The device MUST ignore the write-only flag (field{flags}&VIRTQ_DESC_F_WRITE) in the descriptor that refers to an indirect table.
+ +The device MUST handle the case of zero or more normal chained +descriptors followed by a single descriptor with field{flags}&VIRTQ_DESC_F_INDIRECT. + +egin{note} +While unusual (most implementations either create a chain solely using +non-indirect descriptors, or use a single indirect element), such a +layout is valid. +end{note} + +subsection{The Virtqueue Available Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Available Ring} + +egin{lstlisting} +struct virtq_avail { +#define VIRTQ_AVAIL_F_NO_INTERRUPT 1 + le16 flags; + le16 idx; + le16 ring[ /* Queue Size */ ]; + le16 used_event; /* Only if VIRTIO_F_EVENT_IDX */ +}; +end{lstlisting} + +The driver uses the available ring to offer buffers to the +device: each ring entry refers to the head of a descriptor chain. It is only +written by the driver and read by the device. + +The field{idx} field indicates where the driver would put the next descriptor +entry in the ring (modulo the queue size). This starts at 0, and increases. + +egin{note} +The legacy hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]} +referred to this structure as vring_avail, and the constant as +VRING_AVAIL_F_NO_INTERRUPT, but the layout and value were identical. +end{note} + +subsection{Virtqueue Interrupt Suppression}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} + +If the VIRTIO_F_EVENT_IDX feature bit is not negotiated, +the field{flags} field in the available ring offers a crude mechanism for the driver to inform +the device that it doesn't want interrupts when buffers are used. Otherwise +field{used_event} is a more performant alternative where the driver +specifies how far the device can progress before interrupting. + +Neither of these interrupt suppression methods is reliable, as they +are not synchronized with the device, but they serve as +useful optimizations.
+ +drivernormative{subsubsection}{Virtqueue Interrupt Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} +If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: +egin{itemize} +item The driver MUST set field{flags} to 0 or 1. +item The driver MAY set field{flags} to 1 to advise +the device that interrupts are not needed. +end{itemize} + +Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: +egin{itemize} +item The driver MUST set field{flags} to 0. +item The driver MAY use field{used_event} to advise the device that interrupts are unnecessary until the device writes an entry with the index specified by field{used_event} into the used ring (equivalently, until field{idx} in the +used ring reaches the value field{used_event} + 1). +end{itemize} + +The driver MUST handle spurious interrupts from the device. + +devicenormative{subsubsection}{Virtqueue Interrupt Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression} + +If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: +egin{itemize} +item The device MUST ignore the field{used_event} value. +item After the device writes a descriptor index into the used ring: + egin{itemize} + item If field{flags} is 1, the device SHOULD NOT send an interrupt. + item If field{flags} is 0, the device MUST send an interrupt. + end{itemize} +end{itemize} + +Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: +egin{itemize} +item The device MUST ignore the lower bit of field{flags}. +item After the device writes a descriptor index into the used ring: + egin{itemize} + item If the field{idx} field in the used ring (which determined + where that descriptor index was placed) was equal to + field{used_event}, the device MUST send an interrupt. + item Otherwise the device SHOULD NOT send an interrupt.
+ end{itemize} +end{itemize} + +egin{note} +For example, if field{used_event} is 0, then a device using + VIRTIO_F_EVENT_IDX would interrupt after the first buffer is + used (and again after the 65536th buffer, etc). +end{note} + +subsection{The Virtqueue Used Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} + +egin{lstlisting} +struct virtq_used { +#define VIRTQ_USED_F_NO_NOTIFY 1 + le16 flags; + le16 idx; + struct virtq_used_elem ring[ /* Queue Size */]; + le16 avail_event; /* Only if VIRTIO_F_EVENT_IDX */ +}; + +/* le32 is used here for ids for padding reasons. */ +struct virtq_used_elem { + /* Index of start of used descriptor chain. */ + le32 id; + /* Total length of the descriptor chain which was used (written to) */ + le32 len; +}; +end{lstlisting} + +The used ring is where the device returns buffers once it is done with +them: it is only written to by the device, and read by the driver. + +Each entry in the ring is a pair: field{id} indicates the head entry of the +descriptor chain describing the buffer (this matches an entry +placed in the available ring by the guest earlier), and field{len} the total +number of bytes written into the buffer. + +egin{note} +field{len} is particularly useful +for drivers using untrusted buffers: if a driver does not know exactly +how much has been written by the device, the driver would have to zero +the buffer in advance to ensure no data leakage occurs. + +For example, a network driver may hand a received buffer directly to +an unprivileged userspace application. If the network device has not +overwritten the bytes which were in that buffer, this could leak the +contents of freed memory from other processes to the application. +end{note} + +The field{idx} field indicates where the device would put the next descriptor +entry in the ring (modulo the queue size). This starts at 0, and increases.
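As an aside for implementers (not spec text): the EVENT_IDX rules above are usually evaluated with a single mod-2^16 comparison that also copes with batched index updates and wrap-around, in the spirit of the vring_need_event() helper in the Linux virtio_ring.h header:

```c
#include <stdint.h>

/* After the device advances the used ring's idx from old_idx to
 * new_idx, decide whether the position named by used_event was
 * crossed; all arithmetic is modulo 2^16, matching the free-running
 * 16-bit ring indices. */
static int need_interrupt(uint16_t used_event, uint16_t new_idx, uint16_t old_idx)
{
    return (uint16_t)(new_idx - used_event - 1) < (uint16_t)(new_idx - old_idx);
}
```

The same comparison works on the driver side for avail_event, with the roles of the rings swapped.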
+ +egin{note} +The legacy hyperref[intro:Virtio PCI Draft]{[Virtio PCI Draft]} +referred to these structures as vring_used and vring_used_elem, and +the constant as VRING_USED_F_NO_NOTIFY, but the layout and value were +identical. +end{note} + +subsubsection{Legacy Interface: The Virtqueue Used +Ring}label{sec:Basic Facilities of a Virtio Device / Virtqueues +/ The Virtqueue Used Ring/ Legacy Interface: The Virtqueue Used +Ring} + +Historically, many drivers ignored the field{len} value; as a +result, many devices set field{len} incorrectly. Thus, when +using the legacy interface, it is generally a good idea to ignore +the field{len} value in used ring entries if possible. Specific +known issues are listed per device type. + +devicenormative{subsubsection}{The Virtqueue Used Ring}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} + +The device MUST set field{len} prior to updating the used field{idx}. + +The device MUST write at least field{len} bytes to the descriptor, +beginning at the first device-writable buffer, +prior to updating the used field{idx}. + +The device MAY write more than field{len} bytes to the descriptor. + +egin{note} +There are potential error cases where a device might not know what +parts of the buffers have been written. This is why field{len} is +permitted to be an underestimate: that's preferable to the driver believing +that uninitialized memory has been overwritten when it has not. +end{note} + +drivernormative{subsubsection}{The Virtqueue Used Ring}{Basic Facilities of a Virtio Device / Virtqueues / The Virtqueue Used Ring} + +The driver MUST NOT make assumptions about data in device-writable buffers +beyond the first field{len} bytes, and SHOULD ignore this data.
+ +subsection{Virtqueue Notification Suppression}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} + +The device can suppress notifications in a manner analogous to the way +drivers can suppress interrupts as detailed in section
    ef{sec:Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Interrupt Suppression}. +The device manipulates field{flags} or field{avail_event} in the used ring the +same way the driver manipulates field{flags} or field{used_event} in the available ring. + +drivernormative{subsubsection}{Virtqueue Notification Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} + +The driver MUST initialize field{flags} in the used ring to 0 when +allocating the used ring. + +If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: +egin{itemize} +item The driver MUST ignore the field{avail_event} value. +item After the driver writes a descriptor index into the available ring: + egin{itemize} + item If field{flags} is 1, the driver SHOULD NOT send a notification. + item If field{flags} is 0, the driver MUST send a notification. + end{itemize} +end{itemize} + +Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: +egin{itemize} +item The driver MUST ignore the lower bit of field{flags}. +item After the driver writes a descriptor index into the available ring: + egin{itemize} + item If the field{idx} field in the available ring (which determined + where that descriptor index was placed) was equal to + field{avail_event}, the driver MUST send a notification. + item Otherwise the driver SHOULD NOT send a notification. + end{itemize} +end{itemize} + +devicenormative{subsubsection}{Virtqueue Notification Suppression}{Basic Facilities of a Virtio Device / Virtqueues / Virtqueue Notification Suppression} +If the VIRTIO_F_EVENT_IDX feature bit is not negotiated: +egin{itemize} +item The device MUST set field{flags} to 0 or 1. +item The device MAY set field{flags} to 1 to advise +the driver that notifications are not needed. +end{itemize} + +Otherwise, if the VIRTIO_F_EVENT_IDX feature bit is negotiated: +egin{itemize} +item The device MUST set field{flags} to 0. 
+item The device MAY use field{avail_event} to advise the driver that notifications are unnecessary until the driver writes an entry with the index specified by field{avail_event} into the available ring (equivalently, until field{idx} in the +available ring reaches the value field{avail_event} + 1). +end{itemize} + +The device MUST handle spurious notifications from the driver. + +subsection{Helpers for Operating Virtqueues}label{sec:Basic Facilities of a Virtio Device / Virtqueues / Helpers for Operating Virtqueues} + +The Linux kernel source code contains the definitions above and +helper routines in a more usable form, in +include/uapi/linux/virtio_ring.h. This was explicitly licensed by IBM +and Red Hat under the (3-clause) BSD license so that it can be +freely used by all other projects, and is reproduced (with slight +variation) in
    ef{sec:virtio-queue.h}~
    ameref{sec:virtio-queue.h}. -- MST


  • 24.  [PATCH v10 07/13] content: generalize rest of text

    Posted 03-09-2018 21:24
    Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> --- content.tex 14 ++++++-------- 1 file changed, 6 insertions(+), 8 deletions(-) diff --git a/content.tex b/content.tex index 9fc9673..5634c7d 100644 --- a/content.tex +++ b/content.tex @@ -1467,8 +1467,7 @@ All register values are organized as Little Endian. } hline mmioreg{QueueNum}{Virtual queue size}{0x038}{W}{% - Queue size is the number of elements in the queue, therefore in each - of the Descriptor Table, the Available Ring and the Used Ring. + Queue size is the number of elements in the queue. Writing to this register notifies the device what size of the queue the driver will use. This applies to the queue selected by writing to field{QueueSel}. @@ -1491,9 +1490,9 @@ All register values are organized as Little Endian. caused the device interrupt to be asserted. The following events are possible: egin{description} - item[Used Ring Update] - bit 0 - the interrupt was asserted - because the device has updated the Used - Ring in at least one of the active virtual queues. + item[Used Buffer Update] - bit 0 - the interrupt was asserted + because the device has used a buffer + in at least one of the active virtual queues. item [Configuration Change] - bit 1 - the interrupt was asserted because the configuration of the device has changed. end{description} @@ -1642,9 +1641,8 @@ The driver will typically initialize the virtual queue in the following way: field{QueueNumMax}. If the returned value is zero (0x0) the queue is not available. -item Allocate and zero the queue pages, making sure the memory - is physically contiguous. It is recommended to align the - Used Ring to an optimal boundary (usually the page size). +item Allocate and zero the queue memory, making sure the memory + is physically contiguous. item Notify the device about the queue size by writing the size to field{QueueNum}. -- MST
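A sketch (not part of the patch) of the MMIO queue-init steps this hunk touches, against a simulated register file; the QueueNum offset 0x038 appears in the text above, while the QueueSel and QueueNumMax offsets are assumed from the same virtio-mmio register block:

```c
#include <stdint.h>

/* Assumed virtio-mmio register offsets; 0x038 is given in the text. */
#define QUEUE_SEL      0x030
#define QUEUE_NUM_MAX  0x034
#define QUEUE_NUM      0x038

/* Minimal simulated register file standing in for a real MMIO window. */
static uint32_t regs[0x100 / 4];

static void     mmio_write(uint32_t off, uint32_t val) { regs[off / 4] = val; }
static uint32_t mmio_read(uint32_t off)                { return regs[off / 4]; }

/* Steps from the init sequence: select the queue, read the maximum
 * supported size (zero means the queue is unavailable), then notify
 * the device of the size the driver will actually use. */
static int setup_queue(uint32_t index, uint32_t want)
{
    mmio_write(QUEUE_SEL, index);
    uint32_t max = mmio_read(QUEUE_NUM_MAX);
    if (max == 0 || want > max)
        return -1;                /* unavailable, or size too large */
    mmio_write(QUEUE_NUM, want);
    return 0;
}
```

A real driver would also allocate and zero the physically-contiguous queue memory between the check and the QueueNum write, as the patched text describes.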


  • 25.  [PATCH v10 11/13] content: in-order buffer use

    Posted 03-09-2018 21:24
    Using descriptors in-order is sometimes beneficial. Add an option for that - per-format detail allowing more optimizations will be added by follow-up patches. Signed-off-by: Michael S. Tsirkin <mst@redhat.com> Reviewed-by: Cornelia Huck <cohuck@redhat.com> Reviewed-by: Stefan Hajnoczi <stefanha@redhat.com> --- content.tex 15 +++++++++++++++ 1 file changed, 15 insertions(+) diff --git a/content.tex b/content.tex index 73f40b7..c57a918 100644 --- a/content.tex +++ b/content.tex @@ -245,6 +245,15 @@ a device event - i.e. send an interrupt to the driver. Device reports the number of bytes it has written to memory for each buffer it uses. This is referred to as ``used length''. +Device is not generally required to use buffers in +the same order in which they have been made available +by the driver. + +Some devices always use descriptors in the same order in which +they have been made available. These devices can offer the +VIRTIO_F_IN_ORDER feature. If negotiated, this knowledge +might allow optimizations or simplify driver and/or device code. + Each virtqueue can consist of up to 3 parts: egin{itemize} item Descriptor Area - used for describing buffers @@ -5248,6 +5257,9 @@ Descriptors} and
    ef{sec:Packed Virtqueues / Indirect Flag: Scatter-Gather Supp item[VIRTIO_F_RING_PACKED(34)] This feature indicates support for the packed virtqueue layout as described in
    ef{sec:Basic Facilities of a Virtio Device / Packed Virtqueues}~
    ameref{sec:Basic Facilities of a Virtio Device / Packed Virtqueues}. + item[VIRTIO_F_IN_ORDER(35)] This feature indicates + that all buffers are used by the device in the same + order in which they have been made available. end{description} drivernormative{section}{Reserved Feature Bits}{Reserved Feature Bits} @@ -5273,6 +5285,9 @@ translates bus addresses from the device into physical addresses in memory. A device MAY fail to operate further if VIRTIO_F_IOMMU_PLATFORM is not accepted. +If VIRTIO_F_IN_ORDER has been negotiated, a device MUST use +buffers in the same order in which they have been made available. + section{Legacy Interface: Reserved Feature Bits}label{sec:Reserved Feature Bits / Legacy Interface: Reserved Feature Bits} Transitional devices MAY offer the following: -- MST
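To illustrate the kind of driver simplification this patch enables (purely hypothetical code, not from the patch): with VIRTIO_F_IN_ORDER negotiated, completions arrive in submission order, so a driver can match them with two free-running counters instead of an id-indexed lookup table:

```c
#include <stdint.h>

/* Illustrative in-order bookkeeping: heads are handed out and
 * expected back sequentially when VIRTIO_F_IN_ORDER is negotiated. */
struct inorder_state {
    uint16_t next_free;   /* next head the driver will make available */
    uint16_t next_used;   /* next head the driver expects back */
};

/* Driver makes a buffer available; heads are allocated in order. */
static uint16_t submit(struct inorder_state *s, uint16_t qsz)
{
    return s->next_free++ % qsz;
}

/* Device used a buffer: under IN_ORDER its id must be the oldest
 * outstanding head, so a single comparison validates it. */
static int complete(struct inorder_state *s, uint16_t qsz, uint16_t used_id)
{
    if (used_id != s->next_used % qsz)
        return -1;        /* would violate VIRTIO_F_IN_ORDER */
    s->next_used++;
    return 0;
}
```

The per-format details the cover letter mentions (e.g. batched used entries) would layer on top of this idea.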