Searched full:fabrics (Results 1 – 25 of 73) sorted by relevance

/linux-6.14.4/drivers/nvme/host/

  Kconfig
    48: tristate "NVM Express over Fabrics RDMA host driver"
    53: This provides support for the NVMe over Fabrics protocol using
    57: To configure a NVMe over Fabrics controller use the nvme-cli tool
    63: tristate "NVM Express over Fabrics FC host driver"
    69: This provides support for the NVMe over Fabrics protocol using
    73: To configure a NVMe over Fabrics controller use the nvme-cli tool
    79: tristate "NVM Express over Fabrics TCP host driver"
    86: This provides support for the NVMe over Fabrics protocol using
    90: To configure a NVMe over Fabrics controller use the nvme-cli tool
    96: bool "NVMe over Fabrics TCP TLS encryption support"
    [all …]

  fabrics.c
    3: * NVMe over Fabrics common host code.
    14: #include "fabrics.h"
    152: * nvmf_reg_read32() - NVMe Fabrics "Property Get" API function.
    157: * register (see the fabrics section of the NVMe standard).
    165: * NVMe fabrics space.)
    197: * nvmf_reg_read64() - NVMe Fabrics "Property Get" API function.
    202: * register (see the fabrics section of the NVMe standard).
    210: * NVMe fabrics space.)
    242: * nvmf_reg_write32() - NVMe Fabrics "Property Write" API function.
    247: * register (see the fabrics section of the NVMe standard).
    [all …]

  Makefile
    7: obj-$(CONFIG_NVME_FABRICS) += nvme-fabrics.o
    24: nvme-fabrics-y += fabrics.o

  fabrics.h
    3: * NVMe over Fabrics common host code.
    74: * @mask: Used by the fabrics library to parse through sysfs options
    144: * fabric implementation of NVMe fabrics.
    145: * @entry: Used by the fabrics library to add the new
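
The fabrics.c hits above document nvmf_reg_read32()/nvmf_reg_read64()/nvmf_reg_write32() as the Fabrics "Property Get" and "Property Write" helpers: fabrics properties reuse the offsets of the memory-mapped PCIe register map, so a 32-bit read of the controller status is a Property Get at offset 0x1c. Below is a self-contained toy sketch of that convention, not the kernel API; the register offsets come from the NVMe specification, while struct toy_ctrl and toy_reg_read32() are invented for illustration.

/*
 * Toy model of the Fabrics "Property Get" convention. Real code goes
 * through a transport (RDMA/FC/TCP); here the property space is just
 * a struct. Offsets mirror the PCIe register map per the NVMe spec.
 */
#include <stdint.h>
#include <stdio.h>

#define NVME_REG_CAP  0x00  /* 64-bit capabilities (read64 variant) */
#define NVME_REG_VS   0x08  /* 32-bit version */
#define NVME_REG_CC   0x14  /* 32-bit controller configuration */
#define NVME_REG_CSTS 0x1c  /* 32-bit controller status */

/* Hypothetical stand-in for a fabrics controller's property space. */
struct toy_ctrl {
	uint64_t cap;
	uint32_t vs, cc, csts;
};

/* read32-style accessor in the spirit of nvmf_reg_read32() above. */
static int toy_reg_read32(struct toy_ctrl *c, uint32_t off, uint32_t *val)
{
	switch (off) {
	case NVME_REG_VS:   *val = c->vs;   return 0;
	case NVME_REG_CC:   *val = c->cc;   return 0;
	case NVME_REG_CSTS: *val = c->csts; return 0;
	default:            return -1;  /* not a 32-bit property */
	}
}

int main(void)
{
	struct toy_ctrl c = { .vs = 0x20000 /* NVMe 2.0 */, .csts = 0x1 };
	uint32_t csts;

	if (toy_reg_read32(&c, NVME_REG_CSTS, &csts) == 0)
		printf("CSTS = %#x (ready=%u)\n", csts, csts & 1);
	return 0;
}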

/linux-6.14.4/drivers/nvme/target/

  Kconfig
    35: NVMe Over Fabrics protocol. It allows for hosts to manage and
    53: tristate "NVMe over Fabrics RDMA target support"
    64: tristate "NVMe over Fabrics FC target driver"
    75: tristate "NVMe over Fabrics FC Transport Loopback Test driver"
    88: tristate "NVMe over Fabrics TCP target support"
    98: bool "NVMe over Fabrics TCP target TLS encryption support"
    110: bool "NVMe over Fabrics In-band Authentication in target side"
    114: This enables support for NVMe over Fabrics In-band Authentication in

  fabrics-cmd.c
    3: * NVMe Fabrics command implementation.
    92: switch (cmd->fabrics.fctype) {  (in nvmet_fabrics_admin_cmd_data_len())
    108: switch (cmd->fabrics.fctype) {  (in nvmet_parse_fabrics_admin_cmd())
    125: cmd->fabrics.fctype);  (in nvmet_parse_fabrics_admin_cmd())
    137: switch (cmd->fabrics.fctype) {  (in nvmet_fabrics_io_cmd_data_len())
    153: switch (cmd->fabrics.fctype) {  (in nvmet_parse_fabrics_io_cmd())
    164: cmd->fabrics.fctype);  (in nvmet_parse_fabrics_io_cmd())
    195: /* for fabrics, this value applies to only the I/O Submission Queues */  (in nvmet_install_queue())
    377: cmd->fabrics.fctype != nvme_fabrics_type_connect)  (in nvmet_connect_cmd_data_len())
    389: cmd->fabrics.opcode);  (in nvmet_parse_connect_cmd())
    [all …]
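
The fabrics-cmd.c hits above all revolve around one pattern: the target parses an incoming capsule by switching on cmd->fabrics.fctype. A minimal, hypothetical reduction of that dispatch follows; the fctype constants and the 0x7f Fabrics opcode are the real values from include/linux/nvme.h, while the struct and handler are stubs.

/* Trimmed-down sketch of the nvmet_parse_fabrics_*_cmd() dispatch. */
#include <stdint.h>
#include <stdio.h>

enum {
	nvme_fabrics_type_property_set = 0x00,
	nvme_fabrics_type_connect      = 0x01,
	nvme_fabrics_type_property_get = 0x04,
};

/* Hypothetical capsule header; the kernel uses nvmf_common_command. */
struct toy_fabrics_cmd { uint8_t opcode, fctype; };

static const char *toy_parse_fabrics_cmd(const struct toy_fabrics_cmd *cmd)
{
	switch (cmd->fctype) {
	case nvme_fabrics_type_property_set: return "property set";
	case nvme_fabrics_type_property_get: return "property get";
	case nvme_fabrics_type_connect:      return "connect";
	default:                             return "invalid fctype";
	}
}

int main(void)
{
	/* 0x7f is the Fabrics command opcode; fctype selects the subcommand. */
	struct toy_fabrics_cmd cmd = { .opcode = 0x7f, .fctype = 0x01 };
	printf("%s\n", toy_parse_fabrics_cmd(&cmd));
	return 0;
}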

  Makefile
    13: nvmet-y += core.o configfs.o admin-cmd.o fabrics-cmd.o \
    18: nvmet-$(CONFIG_NVME_TARGET_AUTH) += fabrics-cmd-auth.o auth.o

  passthru.c
    3: * NVMe Over Fabrics Target Passthrough command implementation.
    121: * We export aerl limit for the fabrics controller, update this when  (in nvmet_passthru_override_id_ctrl())
    162: /* Support multipath connections with fabrics */  (in nvmet_passthru_override_id_ctrl())
    429: * hosts that connect via fabrics. This could potentially be  (in nvmet_parse_passthru_io_cmd())

/linux-6.14.4/Documentation/devicetree/bindings/arm/tegra/

  nvidia,tegra234-cbb.yaml
    16: The Tegra234 SoC has different fabrics based on CBB 2.0 architecture
    17: which include cluster fabrics BPMP, AON, PSC, SCE, RCE, DCE, FSI and

/linux-6.14.4/include/linux/

  nvme.h
    1518: * Fabrics subcommands.
    1542: * If not fabrics command, fctype will be ignored.
    1562: * Note that cntlid of value 0 is considered illegal in the fabrics world.
    1910: struct nvmf_common_command fabrics;  (member)
    1948: return "Fabrics Cmd";  (in nvme_get_fabrics_opcode_str())
    1962: return nvme_get_fabrics_opcode_str(cmd->fabrics.fctype);  (in nvme_fabrics_opcode_str())
    1986: * Why can't we simply have a Fabrics In and Fabrics out command?  (in nvme_is_write())
    1989: return cmd->fabrics.fctype & 1;  (in nvme_is_write())
    2093: * I/O Command Set Specific - Fabrics commands:
    2154: * Used by Admin and Fabrics commands to return data:
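
The nvme.h hits at 1986 and 1989 show how nvme_is_write() decides data direction for a fabrics capsule: bit 0 of fctype serves as the direction flag, the same convention regular NVMe opcodes use in their low bit. A standalone illustration, assuming only what the excerpt shows: the fctype values are real, while struct toy_cmd and toy_is_write() are stand-ins.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

enum {
	nvme_fabrics_type_property_set = 0x00,  /* value travels in the capsule */
	nvme_fabrics_type_connect      = 0x01,  /* host-to-controller data */
	nvme_fabrics_type_property_get = 0x04,  /* controller-to-host data */
};

/* Hypothetical capsule header for the demo. */
struct toy_cmd { uint8_t opcode, fctype; };

/* Mirrors hit 1989: odd fctype means the command writes data to the target. */
static bool toy_is_write(const struct toy_cmd *cmd)
{
	return cmd->fctype & 1;
}

int main(void)
{
	struct toy_cmd connect = { 0x7f, nvme_fabrics_type_connect };
	struct toy_cmd get     = { 0x7f, nvme_fabrics_type_property_get };

	printf("connect is_write=%d, property_get is_write=%d\n",
	       toy_is_write(&connect), toy_is_write(&get));
	return 0;
}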

/linux-6.14.4/include/target/

  target_core_fabric.h
    45: * Optionally used by fabrics to allow demo-mode login, but not
    71: * Used only for SCSI fabrics that contain multi-value TransportIDs
    72: * (like iSCSI). All other SCSI fabrics should set this to NULL.

/linux-6.14.4/drivers/target/tcm_fc/

  Kconfig
    6: Say Y here to enable the TCM FC plugin for accessing FC fabrics in TCM

/linux-6.14.4/drivers/infiniband/ulp/isert/

  Kconfig
    6: Support for iSCSI Extensions for RDMA (iSER) Target on Infiniband fabrics.

/linux-6.14.4/sound/aoa/

  Makefile
    4: obj-$(CONFIG_SND_AOA) += fabrics/

  Kconfig
    12: source "sound/aoa/fabrics/Kconfig"

/linux-6.14.4/Documentation/devicetree/bindings/interconnect/

  qcom,msm8974.yaml
    14: bandwidth requirements between various network-on-chip fabrics.

  qcom,msm8939.yaml
    14: adjusting the bandwidth requirements between the various NoC fabrics.

  qcom,msm8953.yaml
    14: bandwidth requirements between the various NoC fabrics.

  qcom,qcm2290.yaml
    14: bandwidth requirements between the various NoC fabrics.

  qcom,sdm660.yaml
    14: bandwidth requirements between the various NoC fabrics.

  interconnect.txt
    16: consumers, such as in the case where two network-on-chip fabrics interface

  qcom,msm8996.yaml
    14: bandwidth requirements between the various NoC fabrics.

  qcom,sm6115.yaml
    14: bandwidth requirements between the various NoC fabrics.

/linux-6.14.4/Documentation/nvme/

  nvme-pci-endpoint-target.rst
    10: using a NVMe fabrics target controller configured with the PCI transport type.
    18: using NVMe over fabrics: the controller represents the interface to an NVMe
    22: existing physical NVMe device or a NVMe fabrics host controller (e.g. a NVMe TCP

/linux-6.14.4/Documentation/driver-api/

  edac.rst
    194: An AMD heterogeneous system is built by connecting the data fabrics of
    198: The MI200 accelerators are data center GPUs. They have 2 data fabrics,