Lines Matching +full:support +full:- +full:nesting

1 /* SPDX-License-Identifier: GPL-2.0 WITH Linux-syscall-note */
2 /* Copyright (c) 2021-2022, NVIDIA CORPORATION & AFFILIATES.
23 * - ENOTTY: The IOCTL number itself is not supported at all
24 * - E2BIG: The IOCTL number is supported, but the provided structure has
25 * a non-zero value in a part the kernel does not understand.
26 * - EOPNOTSUPP: The IOCTL number is supported, and the structure is
27 * understood, however a known field has a value the kernel does not
28 * understand or support.
29 * - EINVAL: Everything about the IOCTL was understood, but a field is not
30 * correct.
31 * - ENOENT: An ID or IOVA provided does not exist.
32 * - ENOMEM: Out of memory.
33 * - EOVERFLOW: Mathematics overflowed.
61 * struct iommu_destroy - ioctl(IOMMU_DESTROY)
74 * struct iommu_ioas_alloc - ioctl(IOMMU_IOAS_ALLOC)
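A minimal userspace sketch (not part of the header; it assumes a kernel that exposes /dev/iommu with this UAPI installed) showing the zero-fill/size extensibility convention and the errno table above, using IOMMU_IOAS_ALLOC and IOMMU_DESTROY:

#include <fcntl.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/iommufd.h>

int main(void)
{
        int fd = open("/dev/iommu", O_RDWR);
        if (fd < 0) {
                perror("open /dev/iommu");
                return 1;
        }

        /* Zero-fill the struct and pass its size so newer kernels can extend it */
        struct iommu_ioas_alloc alloc = { .size = sizeof(alloc) };
        if (ioctl(fd, IOMMU_IOAS_ALLOC, &alloc)) {
                perror("IOMMU_IOAS_ALLOC");     /* errno follows the table above */
                return 1;
        }
        printf("allocated IOAS id %u\n", alloc.out_ioas_id);

        /* Any iommufd object is freed by ID through the common IOMMU_DESTROY */
        struct iommu_destroy destroy = {
                .size = sizeof(destroy),
                .id = alloc.out_ioas_id,
        };
        if (ioctl(fd, IOMMU_DESTROY, &destroy)) {
                perror("IOMMU_DESTROY");
                return 1;
        }
        close(fd);
        return 0;
}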
90 * struct iommu_iova_range - ioctl(IOMMU_IOVA_RANGE)
102 * struct iommu_ioas_iova_ranges - ioctl(IOMMU_IOAS_IOVA_RANGES)
122 * the total number of iovas filled in. The ioctl will return -EMSGSIZE and set
123 * num_iovas to the required value if num_iovas is too small. In this case the
124 * caller should allocate a larger output array and re-issue the ioctl.
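A sketch of the two-pass query pattern described above, assuming the num_iovas/allowed_iovas field layout in this header:

#include <errno.h>
#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

/* Returns a malloc'ed array of ranges and sets *n_out, or NULL on error */
static struct iommu_iova_range *query_iova_ranges(int fd, __u32 ioas_id, __u32 *n_out)
{
        struct iommu_iova_range *ranges = NULL;
        struct iommu_ioas_iova_ranges cmd = {
                .size = sizeof(cmd),
                .ioas_id = ioas_id,
        };

        while (ioctl(fd, IOMMU_IOAS_IOVA_RANGES, &cmd)) {
                if (errno != EMSGSIZE) {
                        free(ranges);
                        return NULL;
                }
                /* Array too small: kernel set num_iovas to the required count */
                ranges = realloc(ranges, cmd.num_iovas * sizeof(*ranges));
                if (!ranges)
                        return NULL;
                cmd.allowed_iovas = (uintptr_t)ranges;
        }
        *n_out = cmd.num_iovas;
        return ranges;
}

Userspace should re-run this query after every device attach or detach, since the allowed ranges can change.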
146 * struct iommu_ioas_allow_iovas - ioctl(IOMMU_IOAS_ALLOW_IOVAS)
178 * enum iommufd_ioas_map_flags - Flags for map and copy
191 * struct iommu_ioas_map - ioctl(IOMMU_IOAS_MAP)
221 * struct iommu_ioas_map_file - ioctl(IOMMU_IOAS_MAP_FILE)
245 * struct iommu_ioas_copy - ioctl(IOMMU_IOAS_COPY)
276 * struct iommu_ioas_unmap - ioctl(IOMMU_IOAS_UNMAP)
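A sketch tying IOMMU_IOAS_MAP and IOMMU_IOAS_UNMAP together: map an anonymous buffer at a kernel-chosen IOVA (IOMMU_IOAS_MAP_FIXED_IOVA left clear), then unmap it. The fd (/dev/iommu) and ioas_id are assumed to exist already.

#include <stdint.h>
#include <sys/ioctl.h>
#include <sys/mman.h>
#include <linux/iommufd.h>

static int map_then_unmap(int fd, __u32 ioas_id, size_t len)
{
        void *buf = mmap(NULL, len, PROT_READ | PROT_WRITE,
                         MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (buf == MAP_FAILED)
                return -1;

        struct iommu_ioas_map map = {
                .size = sizeof(map),
                .flags = IOMMU_IOAS_MAP_READABLE | IOMMU_IOAS_MAP_WRITEABLE,
                .ioas_id = ioas_id,
                .user_va = (uintptr_t)buf,
                .length = len,
                /* @iova is an output here because FIXED_IOVA is not set */
        };
        if (ioctl(fd, IOMMU_IOAS_MAP, &map))
                return -1;

        struct iommu_ioas_unmap unmap = {
                .size = sizeof(unmap),
                .ioas_id = ioas_id,
                .iova = map.iova,
                .length = len,
        };
        return ioctl(fd, IOMMU_IOAS_UNMAP, &unmap);
}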
296 * enum iommufd_option - ioctl(IOMMU_OPTION_RLIMIT_MODE) and
305 * PAGE_SIZE. This can be useful for benchmarking. This is a per-IOAS
314 * enum iommufd_option_ops - ioctl(IOMMU_OPTION_OP_SET) and
325 * struct iommu_option - iommu option multiplexer
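A sketch of the option multiplexer for the per-IOAS huge-page knob mentioned above, assuming the option_id/op/object_id/val64 field names in this header:

#include <sys/ioctl.h>
#include <linux/iommufd.h>

/* val = 0 maps everything at PAGE_SIZE (useful for benchmarking);
 * val = 1 (default) allows contiguous pages to be combined. */
static int set_huge_pages(int fd, __u32 ioas_id, __u64 val)
{
        struct iommu_option opt = {
                .size = sizeof(opt),
                .option_id = IOMMU_OPTION_HUGE_PAGES,
                .op = IOMMU_OPTION_OP_SET,
                .object_id = ioas_id,   /* per-IOAS option */
                .val64 = val,
        };
        return ioctl(fd, IOMMU_OPTION, &opt);
}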
348 * enum iommufd_vfio_ioas_op - IOMMU_VFIO_IOAS_* ioctls
360 * struct iommu_vfio_ioas - ioctl(IOMMU_VFIO_IOAS)
367 * The VFIO compatibility support uses a single ioas because VFIO APIs do not
368 * support the ID field. Set or Get the IOAS that VFIO compatibility will use.
372 * this ioctl. SET or CLEAR does not destroy any auto-created IOAS.
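A sketch of driving the compatibility IOAS explicitly, since VFIO itself cannot pass an ID (assuming the op/ioas_id fields in this header):

#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int vfio_compat_ioas_set(int fd, __u32 ioas_id)
{
        struct iommu_vfio_ioas cmd = {
                .size = sizeof(cmd),
                .ioas_id = ioas_id,
                .op = IOMMU_VFIO_IOAS_SET,
        };
        return ioctl(fd, IOMMU_VFIO_IOAS, &cmd);
}

static int vfio_compat_ioas_get(int fd, __u32 *ioas_id)
{
        struct iommu_vfio_ioas cmd = {
                .size = sizeof(cmd),
                .op = IOMMU_VFIO_IOAS_GET,
        };
        if (ioctl(fd, IOMMU_VFIO_IOAS, &cmd))
                return -1;      /* e.g. ENOENT when no compat IOAS exists yet */
        *ioas_id = cmd.ioas_id;
        return 0;
}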
383 * enum iommufd_hwpt_alloc_flags - Flags for HWPT allocation
385 * the parent HWPT in a nesting configuration.
386 * @IOMMU_HWPT_ALLOC_DIRTY_TRACKING: Dirty tracking support for device IOMMU is
392 * Any domain attached to the non-PASID part of the
395 * If IOMMU does not support PASID it will return
396 * error (-EOPNOTSUPP).
406 * enum iommu_hwpt_vtd_s1_flags - Intel VT-d stage-1 page table
419 * struct iommu_hwpt_vtd_s1 - Intel VT-d stage-1 page table
422 * @pgtbl_addr: The base address of the stage-1 page table.
423 * @addr_width: The address width of the stage-1 page table
434 * struct iommu_hwpt_arm_smmuv3 - ARM SMMUv3 nested STE
438 * the translation. Must be little-endian.
440 * - word-0: V, Cfg, S1Fmt, S1ContextPtr, S1CDMax
441 * - word-1: EATS, S1DSS, S1CIR, S1COR, S1CSH, S1STALLD
443 * -EIO will be returned if @ste is not legal or contains any non-allowed field.
444 * Cfg can be used to select a S1, Bypass or Abort configuration. A Bypass
445 * nested domain will translate the same as the nesting parent. The S1 will
446 * install a Context Descriptor Table pointing at userspace memory translated
447 * by the nesting parent.
454 * enum iommu_hwpt_data_type - IOMMU HWPT Data Type
456 * @IOMMU_HWPT_DATA_VTD_S1: Intel VT-d stage-1 page table
466 * struct iommu_hwpt_alloc - ioctl(IOMMU_HWPT_ALLOC)
478 * @__reserved2: Padding to 64-bit alignment. Must be 0.
484 * A kernel-managed HWPT will be created with the mappings from the given
487 * nesting configuration by passing IOMMU_HWPT_ALLOC_NEST_PARENT via @flags.
489 * A user-managed nested HWPT will be created from a given vIOMMU (wrapping a
492 * case, the @data_type must be set to a pre-defined type corresponding to an
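A sketch of the two-step allocation the text describes: a kernel-managed nesting parent on an IOAS, then a user-managed nested HWPT carrying Intel VT-d stage-1 data. The dev_id is assumed to come from binding a device to the iommufd (e.g. through VFIO), and the stage-1 table address and width are guest-provided placeholders.

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_nested_vtd_s1(int fd, __u32 dev_id, __u32 ioas_id,
                               __u64 s1_pgtbl_addr, __u32 s1_addr_width,
                               __u32 *out_nested_hwpt)
{
        struct iommu_hwpt_alloc parent = {
                .size = sizeof(parent),
                .flags = IOMMU_HWPT_ALLOC_NEST_PARENT,
                .dev_id = dev_id,
                .pt_id = ioas_id,               /* kernel-managed stage-2 mappings */
        };
        if (ioctl(fd, IOMMU_HWPT_ALLOC, &parent))
                return -1;

        struct iommu_hwpt_vtd_s1 vtd = {
                .pgtbl_addr = s1_pgtbl_addr,    /* guest stage-1 page table base */
                .addr_width = s1_addr_width,
        };
        struct iommu_hwpt_alloc nested = {
                .size = sizeof(nested),
                .dev_id = dev_id,
                .pt_id = parent.out_hwpt_id,    /* nest on the parent HWPT */
                .data_type = IOMMU_HWPT_DATA_VTD_S1,
                .data_len = sizeof(vtd),
                .data_uptr = (uintptr_t)&vtd,
        };
        if (ioctl(fd, IOMMU_HWPT_ALLOC, &nested))
                return -1;

        *out_nested_hwpt = nested.out_hwpt_id;
        return 0;
}

The ARM SMMUv3 path is the same shape, with IOMMU_HWPT_DATA_ARM_SMMUV3 and a struct iommu_hwpt_arm_smmuv3 carrying the STE words instead.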
517 * enum iommu_hw_info_vtd_flags - Flags for VT-d hw_info
518 * @IOMMU_HW_INFO_VTD_ERRATA_772415_SPR17: If set, disallow read-only mappings
520 … https://www.intel.com/content/www/us/en/content-details/772415/content-details.ht…
527 * struct iommu_hw_info_vtd - Intel VT-d hardware information
532 * @cap_reg: Value of Intel VT-d capability register defined in VT-d spec
534 * @ecap_reg: Value of the Intel VT-d extended capability register defined in VT-d spec
537 * User needs to understand the Intel VT-d specification to decode the
548 * struct iommu_hw_info_arm_smmuv3 - ARM SMMUv3 hardware information
553 * @idr: Implemented features for ARM SMMU Non-secure programming interface
569 * - S1P should be assumed to be true if a NESTED HWPT can be created
570 * - VFIO/iommufd only support platforms with COHACC, it should be assumed to be
571 *   true.
572 * - ATS is a per-device property. If the VMM describes any devices as ATS
578 * architecture are not currently supported by the kernel for nesting: HTTU,
590 * enum iommu_hw_info_type - IOMMU Hardware Info Types
593 * @IOMMU_HW_INFO_TYPE_INTEL_VTD: Intel VT-d iommu info type
604 * @IOMMU_HW_CAP_DIRTY_TRACKING: IOMMU hardware support for dirty tracking
617 * struct iommu_hw_info - ioctl(IOMMU_GET_HW_INFO)
623 * @data_uptr: User pointer to a user-space buffer used by the kernel to fill
635 * a guest stage-1 page table can be compatible with the physical iommu.
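A sketch of IOMMU_GET_HW_INFO, assuming the data_uptr/out_data_type/out_capabilities fields in this header; the output buffer is sized for the Intel VT-d variant and only decoded when the kernel reports that type:

#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int show_hw_info(int fd, __u32 dev_id)
{
        struct iommu_hw_info_vtd vtd = {};
        struct iommu_hw_info cmd = {
                .size = sizeof(cmd),
                .dev_id = dev_id,
                .data_len = sizeof(vtd),
                .data_uptr = (uintptr_t)&vtd,
        };
        if (ioctl(fd, IOMMU_GET_HW_INFO, &cmd))
                return -1;

        if (cmd.out_capabilities & IOMMU_HW_CAP_DIRTY_TRACKING)
                printf("dirty tracking supported\n");
        if (cmd.out_data_type == IOMMU_HW_INFO_TYPE_INTEL_VTD)
                printf("VT-d cap=%llx ecap=%llx\n",
                       (unsigned long long)vtd.cap_reg,
                       (unsigned long long)vtd.ecap_reg);
        return 0;
}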
658 * enum iommufd_hwpt_set_dirty_tracking_flags - Flags for steering dirty
667 * struct iommu_hwpt_set_dirty_tracking - ioctl(IOMMU_HWPT_SET_DIRTY_TRACKING)
685 * enum iommufd_hwpt_get_dirty_bitmap_flags - Flags for getting dirty bits
698 * struct iommu_hwpt_get_dirty_bitmap - ioctl(IOMMU_HWPT_GET_DIRTY_BITMAP)
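A sketch of the dirty-tracking pair, assuming the field names in this header: enable tracking on a HWPT that was allocated with IOMMU_HWPT_ALLOC_DIRTY_TRACKING, then fetch a bitmap with one bit per 4 KiB page of the requested IOVA window:

#include <stdint.h>
#include <stdlib.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int read_dirty(int fd, __u32 hwpt_id, __u64 iova, __u64 length)
{
        struct iommu_hwpt_set_dirty_tracking set = {
                .size = sizeof(set),
                .flags = IOMMU_HWPT_DIRTY_TRACKING_ENABLE,
                .hwpt_id = hwpt_id,
        };
        if (ioctl(fd, IOMMU_HWPT_SET_DIRTY_TRACKING, &set))
                return -1;

        /* One bit per page over [iova, iova + length) */
        __u64 page_size = 4096;
        size_t bits = length / page_size;
        __u64 *bitmap = calloc((bits + 63) / 64, sizeof(__u64));
        if (!bitmap)
                return -1;

        struct iommu_hwpt_get_dirty_bitmap get = {
                .size = sizeof(get),
                .hwpt_id = hwpt_id,
                .iova = iova,
                .length = length,
                .page_size = page_size,
                .data = (uintptr_t)bitmap,
        };
        int ret = ioctl(fd, IOMMU_HWPT_GET_DIRTY_BITMAP, &get);
        free(bitmap);
        return ret;
}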
731 * enum iommu_hwpt_invalidate_data_type - IOMMU HWPT Cache Invalidation
742 * enum iommu_hwpt_vtd_s1_invalidate_flags - Flags for Intel VT-d
743 * stage-1 cache invalidation
745 * to all-levels page structure cache or just
753 * struct iommu_hwpt_vtd_s1_invalidate - Intel VT-d cache invalidation
761 * The Intel VT-d specific invalidation data for user-managed stage-1 cache
762 * invalidation in a nested domain. Userspace uses this structure to
763 * tell the impacted cache scope after modifying the stage-1 page table.
778 * struct iommu_viommu_arm_smmuv3_invalidate - ARM SMMUv3 cache invalidation
780 * @cmd: 128-bit cache invalidation command that runs in SMMU CMDQ.
781 * Must be little-endian.
793 * -EIO will be returned if the command is not supported.
800 * struct iommu_hwpt_invalidate - ioctl(IOMMU_HWPT_INVALIDATE)
803 * @data_uptr: User pointer to an array of driver-specific cache invalidation
813 * Invalidate iommu cache for user-managed page table or vIOMMU. Modifications
814 * on a user-managed page table should be followed by this operation, if a HWPT
818 * Each ioctl can support one or more cache invalidation requests in the array
819 * that has a total size of @entry_len * @entry_num.
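A sketch of a single-entry invalidation using the Intel VT-d stage-1 data type documented above; other data types supply their own per-entry layouts:

#include <stdint.h>
#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int invalidate_vtd_s1(int fd, __u32 hwpt_id, __u64 addr, __u64 npages)
{
        struct iommu_hwpt_vtd_s1_invalidate inv = {
                .addr = addr,
                .npages = npages,
        };
        struct iommu_hwpt_invalidate cmd = {
                .size = sizeof(cmd),
                .hwpt_id = hwpt_id,
                .data_uptr = (uintptr_t)&inv,
                .data_type = IOMMU_HWPT_INVALIDATE_DATA_VTD_S1,
                .entry_len = sizeof(inv),
                .entry_num = 1,
        };
        /* On return, entry_num reports how many entries were processed */
        return ioctl(fd, IOMMU_HWPT_INVALIDATE, &cmd);
}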
837 * enum iommu_hwpt_pgfault_flags - flags for struct iommu_hwpt_pgfault
848 * enum iommu_hwpt_pgfault_perm - perm bits for struct iommu_hwpt_pgfault
865 * struct iommu_hwpt_pgfault - iommu page fault data
875 * transfer, it could fill in 10MB and the OS could pre-fault in
877 * @cookie: kernel-managed cookie identifying a group of fault messages. The
894 * enum iommufd_page_response_code - Return status of fault handlers
908 * struct iommu_hwpt_page_response - IOMMU page fault response
909 * @cookie: The kernel-managed cookie reported in the fault message.
918 * struct iommu_fault_alloc - ioctl(IOMMU_FAULT_QUEUE_ALLOC)
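A sketch of the fault path, under the assumption that pgfault messages are read from and page responses written to the returned fault fd, and that the faulting HWPT was allocated with IOMMU_HWPT_FAULT_ID_VALID and its @fault_id set to out_fault_id; a real handler would loop and actually resolve the fault before responding:

#include <stdint.h>
#include <sys/ioctl.h>
#include <unistd.h>
#include <linux/iommufd.h>

static int handle_one_fault(int iommufd)
{
        struct iommu_fault_alloc alloc = { .size = sizeof(alloc) };
        if (ioctl(iommufd, IOMMU_FAULT_QUEUE_ALLOC, &alloc))
                return -1;

        struct iommu_hwpt_pgfault fault;
        if (read(alloc.out_fault_fd, &fault, sizeof(fault)) != sizeof(fault))
                return -1;

        /* ... fix up the guest stage-1 mapping covering fault.addr here ... */

        struct iommu_hwpt_page_response resp = {
                .cookie = fault.cookie,         /* echo the kernel-managed cookie */
                .code = IOMMUFD_PAGE_RESP_SUCCESS,
        };
        if (write(alloc.out_fault_fd, &resp, sizeof(resp)) != sizeof(resp))
                return -1;
        return 0;
}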
935 * enum iommu_viommu_type - Virtual IOMMU Type
945 * struct iommu_viommu_alloc - ioctl(IOMMU_VIOMMU_ALLOC)
950 * @hwpt_id: ID of a nesting parent HWPT to associate to
954 * virtualization support that is a security-isolated slice of the real IOMMU HW
957 * - Security namespace for guest owned ID, e.g. guest-controlled cache tags
958 * - Non-device-affiliated event reporting, e.g. invalidation queue errors
959 * - Access to a sharable nesting parent pagetable across physical IOMMUs
960 * - Virtualization of various platform IDs, e.g. RIDs and others
961 * - Delivery of paravirtualized invalidation
962 * - Direct assigned invalidation queues
963 * - Direct assigned interrupts
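A sketch of allocating such a vIOMMU on top of a nesting parent HWPT; the ARM SMMUv3 type is used here for illustration, and IOMMU_VIOMMU_TYPE_DEFAULT also exists:

#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_viommu(int fd, __u32 dev_id, __u32 nest_parent_hwpt_id,
                        __u32 *out_viommu_id)
{
        struct iommu_viommu_alloc cmd = {
                .size = sizeof(cmd),
                .type = IOMMU_VIOMMU_TYPE_ARM_SMMUV3,
                .dev_id = dev_id,
                .hwpt_id = nest_parent_hwpt_id, /* must be a nesting parent HWPT */
        };
        if (ioctl(fd, IOMMU_VIOMMU_ALLOC, &cmd))
                return -1;
        *out_viommu_id = cmd.out_viommu_id;
        return 0;
}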
976 * struct iommu_vdevice_alloc - ioctl(IOMMU_VDEVICE_ALLOC)
982 * of AMD IOMMU, and vRID of a nested Intel VT-d to a Context Table
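A sketch of binding a physical device into the vIOMMU under its guest-visible ID; the virt_id value (a vSID, vDeviceID or vRID, per the text above) is whatever the VMM exposes to the guest:

#include <sys/ioctl.h>
#include <linux/iommufd.h>

static int alloc_vdevice(int fd, __u32 viommu_id, __u32 dev_id, __u64 virt_id,
                         __u32 *out_vdevice_id)
{
        struct iommu_vdevice_alloc cmd = {
                .size = sizeof(cmd),
                .viommu_id = viommu_id,
                .dev_id = dev_id,
                .virt_id = virt_id,     /* guest-visible ID for this device */
        };
        if (ioctl(fd, IOMMU_VDEVICE_ALLOC, &cmd))
                return -1;
        *out_vdevice_id = cmd.out_vdevice_id;
        return 0;
}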
997 * struct iommu_ioas_change_process - ioctl(IOMMU_IOAS_CHANGE_PROCESS)