/* SPDX-License-Identifier: GPL-2.0 */
#include <linux/blk-mq.h>
 * ********************** FC-NVME LS API ********************
 * Data structures used by both FC-NVME hosts and FC-NVME
 * targets to perform FC-NVME LS requests or transmit
 * struct nvmefc_ls_req - Request structure passed from the transport
 *	to the LLDD to perform a NVME-FC LS request and obtain
 * Used by nvme-fc transport (host) to send LS's such as
 * Used by the nvmet-fc transport (controller) to send
 * @rqstaddr: pointer to request buffer
 * @rqstdma: PCI DMA address of request buffer
 * @rqstlen: Length, in bytes, of request buffer
 * @rspaddr: pointer to response buffer
 * @rspdma: PCI DMA address of response buffer
 * @rsplen: Length, in bytes, of response buffer
 * request. The length of the buffer corresponds to the
 * negative errno on failure (example: -ENXIO).
 * struct nvmefc_ls_rsp - Structure passed from the transport to the LLDD
 *	to request the transmit of the NVME-FC LS response to a
 *	NVME-FC LS request. The structure originates in the LLDD
 *	FC exchange context for the NVME-FC LS request that was
 * Used by the LLDD to pass the nvmet-fc transport (controller)
 * Used by the LLDD to pass the nvme-fc transport (host)
 * from the FC link. The address of the structure is passed to the nvmet-fc
 * or nvme-fc layer via the xxx_rcv_ls_req() transport routines.
 * the transport and the LLDD can de-allocate the structure.
 * When the structure is used for the LLDD->xmt_ls_rsp() call, the
 * transport layer will fully set the fields in order to specify the
 * response payload buffer and its length as well as the done routine
 * to be called upon completion of the transmit. The transport layer
 * Values set by the transport layer prior to calling the LLDD xmt_ls_rsp
 * @rspbuf: pointer to the LS response buffer
 * @rspdma: PCI DMA address of the LS response buffer
 * @rsplen: Length, in bytes, of the LS response buffer
 * @nvme_fc_private: pointer to an internal transport-specific structure
 * ********************** LLDD FC-NVME Host API ********************
 * struct nvme_fc_port_info - port-specific ids and FC connection-specific
 * struct nvmefc_fcp_req - Request structure passed from NVME-FC transport
 * Values set by the NVME-FC layer prior to calling the LLDD fcp_io
 * @cmdaddr: pointer to the FCP CMD IU buffer
 * @rspaddr: pointer to the FCP RSP IU buffer
 * @cmddma: PCI DMA address of the FCP CMD IU buffer
 * @rspdma: PCI DMA address of the FCP RSP IU buffer
 * @cmdlen: Length, in bytes, of the FCP CMD IU buffer
 * @rsplen: Length, in bytes, of the FCP RSP IU buffer
 * while processing the operation. The length of the buffer
 * negative errno value upon failure (ex: -EIO). Note: this is
 * status of the FCP operation at the NVME-FC level.
 * struct nvme_fc_local_port - structure used between NVME-FC transport and
 * @port_num: NVME-FC transport host port number
 * The length of the buffer corresponds to the local_priv_sz
/* static/read-only fields */
 * struct nvme_fc_remote_port - structure used between NVME-FC transport and
 * @port_num: NVME-FC transport remote subsystem port number
 * @localport: pointer to the NVME-FC local host port the subsystem is
 * The length of the buffer corresponds to the remote_priv_sz
 * struct nvme_fc_port_template - structure containing static entrypoints and
 * NVME-FC transport remembers template reference and may
 * @create_queue: Upon creating a host<->controller association, queues are
 * at the block-level is also passed in. The LLDD should use the
 * host<->controller association teardown, this routine is called
 * @ls_req: Called to issue a FC-NVME FC-4 LS service request.
 * @fcp_io: called to issue a FC-NVME I/O request. The I/O may be for
 * fully describe the io: the buffer containing the FC-NVME CMD IU
 * and the buffer to place the FC-NVME RSP IU into. The LLDD will
 * @xmt_ls_rsp: Called to transmit the response to a FC-NVME FC-4 LS service.
 * The nvmefc_ls_rsp structure is the same LLDD-supplied exchange
 * non-zero errno status), and upon completion of the transmit, call
 * memory that it would like fc nvme layer to allocate on the LLDD's
 * the localport->private pointer.
 * memory that it would like fc nvme layer to allocate on the LLDD's
 * the remoteport->private pointer.
 * memory that it would like fc nvme layer to allocate on the LLDD's
 * specified by the ls_request->private pointer.
 * memory that it would like fc nvme layer to allocate on the LLDD's
 * specified by the fcp_request->private pointer.
/* initiator-based functions */
 * Routine called to pass a NVME-FC LS request, received by the lldd,
 * to the nvme-fc transport.
 * If the return value is non-zero: the transport has not accepted the
 * LS. The lldd should ABTS-LS the LS.
 * calling the ops->xmt_ls_rsp() routine to transmit a response, the LLDD
 * noop the transmission of the rsp and call the lsrsp->done() routine
 * If the return value is non-zero: Returns the appid associated with VM
 * *************** LLDD FC-NVME Target/Subsystem API ***************
 * struct nvmet_fc_port_info - port-specific ids and FC connection-specific
/* Operations that NVME-FC layer may request the LLDD to perform for FCP */
 * struct nvmefc_tgt_fcp_req - Structure used between LLDD and NVMET-FC
 *	layer to represent the exchange context and
 *	the specific FC-NVME IU operation(s) to perform
 *	for a FC-NVME FCP IO.
 * Structure used between LLDD and nvmet-fc layer to represent the exchange
 * context for a FC-NVME FCP I/O operation (e.g. a nvme sqe, the sqe-related
 * from the FC link. The address of the structure is passed to the nvmet-fc
 * layer via the nvmet_fc_rcv_fcp_req() call. The address of the structure
 * op done() routine, allowing the nvmet-fc layer to release dma resources.
 * further access will be made by the nvmet-fc layer and the LLDD can
 * de-allocate the structure.
 * When the structure is used for an FCP target operation, the nvmet-fc
 * layer will fully set the fields in order to specify the scattergather
 * upon completion of the operation. The nvmet-fc layer will also set a
 * Values set by the NVMET-FC layer prior to calling the LLDD fcp_op
 * @hwqid: Specifies the hw queue index (0..N-1, where N is the
 * @offset: Indicates the DATA_OUT/DATA_IN payload offset to be transferred.
 * @rspaddr: pointer to the FCP RSP IU buffer to be transmitted
 * @rspdma: PCI DMA address of the FCP RSP IU buffer
 * @rsplen: Length, in bytes, of the FCP RSP IU buffer
 * @nvmet_fc_private: pointer to an internal NVMET-FC layer structure used
 *	as part of the NVMET-FC processing. The LLDD is not to
	u32 offset;
 * struct nvmet_fc_target_port - structure used between NVME-FC transport and
 * @port_num: NVME-FC transport subsystem port number
 * The length of the buffer corresponds to the target_priv_sz
/* static/read-only fields */
 * struct nvmet_fc_target_template - structure containing static entrypoints
 *	registrations. NVME-FC transport remembers template
 * @xmt_ls_rsp: Called to transmit the response to a FC-NVME FC-4 LS service.
 * The nvmefc_ls_rsp structure is the same LLDD-supplied exchange
 * non-zero errno status), and upon completion of the transmit, call
 * to the block layer.
 * The nvmefc_tgt_fcp_req structure is the same LLDD-supplied
 * more FC sequences (preferably 1). Note: the fc-nvme layer
 * Note: the FC-NVME layer may call the WRITEDATA operation
 * may retransmit the FCP_RSP iu if necessary per FC-NVME. Upon
 * FCP_RSP iu if FCP_CONF is not received per FC-NVME. Upon
 * the LLDD-supplied exchange structure must remain valid until the
 * operations, the fc-nvme layer may immediately convert, in the same
 * Returns 0 on success, -<errno> on failure (Ex: -EIO)
 * The command may be in-between operations (nothing active in LLDD)
 * is now free to re-use the rcv buffer associated with the
 * @ls_req: Called to issue a FC-NVME FC-4 LS service request.
 * LS request is identified by the hosthandle argument. The nvmet-fc
 * transport is only allowed to issue FC-NVME LS's on behalf of an
 * Entrypoint is Optional - but highly recommended.
 * memory that it would like fc nvme layer to allocate on the LLDD's
 * the targetport->private pointer.
 * memory that it would like nvmet-fc layer to allocate on the LLDD's
 * specified by the ls_request->private pointer.
 * Routine called to pass a NVME-FC LS request, received by the lldd,
 * to the nvmet-fc transport.
 * If the return value is non-zero: the transport has not accepted the
 * LS. The lldd should ABTS-LS the LS.
 * calling the ops->xmt_ls_rsp() routine to transmit a response, the LLDD
 * noop the transmission of the rsp and call the lsrsp->done() routine
 * connectivity to a NVME-FC host port with which there had been active
 * hosthandle. The hosthandle is given to the nvmet-fc transport
 * The nvmet-fc transport will cache the hostport value with the
 * When the LLDD calls this routine, the nvmet-fc transport will
 * port, the nvmet-fc transport will call the ops->host_release()
 * callback. As of the callback, the nvmet-fc transport will no
 * If nvmet_fc_rcv_fcp_req returns non-zero, the transport has not accepted
 * the FCP cmd. The lldd should ABTS the cmd.