The side effects described below are stated for a uniprocessor
implementation, and the side effects to a particular processor are
those for the interface invocation on that processor.  The SMP cases
are a simple extension, in that you just extend the definition such
that the side effect for a particular interface occurs on all
processors in the system.  This need not make SMP cache/TLB flushing
inefficient; many optimizations are possible.  For example, if it can
be proven that a user address space has never executed on a cpu (see
mm_cpumask()), one need not perform a flush for this address space on
that cpu.
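As a rough illustration of that optimization, here is a minimal
sketch of an SMP mm-wide TLB flush that skips cpus the address space
has never run on.  The ipi_flush_tlb_mm() callback and the
local_flush_tlb_mm() primitive are hypothetical; only
on_each_cpu_mask() and mm_cpumask() are real kernel interfaces::

	/* Hypothetical per-cpu callback; a real port supplies its own. */
	static void ipi_flush_tlb_mm(void *info)
	{
		local_flush_tlb_mm((struct mm_struct *)info);
	}

	void smp_flush_tlb_mm(struct mm_struct *mm)
	{
		/*
		 * Only cpus that 'mm' has ever run on can hold stale
		 * translations for it, so restrict the IPI to
		 * mm_cpumask(mm) instead of broadcasting.
		 */
		on_each_cpu_mask(mm_cpumask(mm), ipi_flush_tlb_mm, mm, 1);
	}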
First, the TLB flushing interfaces, since they are the simplest.  The
"TLB" is abstracted under Linux as something the cpu uses to cache
virtual-->physical address translations obtained from the software
page tables.  If the software page tables change, stale translations
may remain in this "TLB" cache, so after page table modifications the
kernel invokes one of the flush methods below:
3) ``void flush_tlb_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

	Here we are flushing a specific range of (user) virtual
	address translations from the TLB.  After running, this
	interface must make sure that any previous page table
	modifications for the address space 'vma->vm_mm' in the range
	'start' to 'end-1' will be visible to the cpu.  That is, after
	running, there will be no entries in the TLB for 'vma->vm_mm'
	for virtual addresses in the range 'start' to 'end-1'.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized translations from the TLB, instead of having the kernel
	call flush_tlb_page (see below) for each entry which may be
	modified.
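	A naive fallback, for illustration only, simply loops over
	the range with per-page flushes; a real port would use a
	ranged hardware flush where one exists::

		void flush_tlb_range(struct vm_area_struct *vma,
				     unsigned long start, unsigned long end)
		{
			unsigned long addr;

			/* Flush one PAGE_SIZE translation at a time. */
			for (addr = start; addr < end; addr += PAGE_SIZE)
				flush_tlb_page(vma, addr);
		}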
4) ``void flush_tlb_page(struct vm_area_struct *vma, unsigned long addr)``

	This time we need to remove the PAGE_SIZE sized translation
	from the TLB.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction TLB' in
	split-tlb type setups).

	After running, this interface must make sure that any previous
	page table modification for address space 'vma->vm_mm' for
	user virtual address 'addr' will be visible to the cpu.  That
	is, after running, there will be no TLB entries for
	'vma->vm_mm' for virtual address 'addr'.
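	The VM_EXEC test matters on split-TLB hardware.  A minimal
	sketch, assuming hypothetical local_flush_dtlb_page() and
	local_flush_itlb_page() primitives::

		void flush_tlb_page(struct vm_area_struct *vma,
				    unsigned long addr)
		{
			local_flush_dtlb_page(vma->vm_mm, addr);

			/*
			 * Only executable regions can have entries
			 * in the instruction TLB.
			 */
			if (vma->vm_flags & VM_EXEC)
				local_flush_itlb_page(vma->vm_mm, addr);
		}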
5) ``void update_mmu_cache_range(struct vm_fault *vmf,
   struct vm_area_struct *vma, unsigned long address, pte_t *ptep,
   unsigned int nr)``

	At the end of every page fault, this routine is invoked to tell
	the architecture specific code that translations now exist
	in the software page tables for address space "vma->vm_mm"
	at virtual address "address" for "nr" consecutive pages.
	This routine is also invoked in various other places which pass
	a NULL "vmf".

	A port may use this information in any way it so chooses.
	For example, it could use this event to pre-load TLB
	translations for software managed TLB configurations.
	The sparc64 port currently does this.
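	A condensed sketch of such a pre-load, assuming a hypothetical
	tlb_preload_entry() primitive for a software managed TLB::

		void update_mmu_cache_range(struct vm_fault *vmf,
				struct vm_area_struct *vma,
				unsigned long address, pte_t *ptep,
				unsigned int nr)
		{
			unsigned int i;

			/*
			 * Seed the TLB with the just-installed
			 * translations so the first user access does
			 * not immediately take a TLB miss.
			 */
			for (i = 0; i < nr; i++)
				tlb_preload_entry(vma->vm_mm,
						  address + i * PAGE_SIZE,
						  ptep[i]);
		}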
Next, we have the cache flushing interfaces.  In general, when Linux
is changing an existing virtual-->physical mapping to a new value,
the sequence will be in one of the following forms::

	1) flush_cache_mm(mm);
	   change_all_page_tables_of(mm);
	   flush_tlb_mm(mm);

	2) flush_cache_range(vma, start, end);
	   change_range_of_page_tables(mm, start, end);
	   flush_tlb_range(vma, start, end);

	3) flush_cache_page(vma, addr, pfn);
	   set_pte(pte_pointer, new_pte_val);
	   flush_tlb_page(vma, addr);
The cache level flush will always be first, because this allows
us to properly handle systems whose caches are strict and require
a virtual-->physical translation to exist for a virtual address
when that address is flushed from the cache.

The cache flushing routines below need only deal with cache flushing
to the extent that it is necessary for a particular cpu.  Mostly,
these routines must be implemented for cpus which have virtually
indexed caches which must be flushed when virtual-->physical
translations are changed or removed.  So, for example, the physically
indexed, physically tagged caches of IA32 processors have no need to
implement these interfaces, since their caches are fully synchronized
and have no dependency on translation information.
3) ``void flush_cache_range(struct vm_area_struct *vma,
   unsigned long start, unsigned long end)``

	Here we are flushing a specific range of (user) virtual
	addresses from the cache.  After running, there will be no
	entries in the cache for 'vma->vm_mm' for virtual addresses in
	the range 'start' to 'end-1'.

	The interface is provided in hopes that the port can find
	a suitably efficient method for removing multiple page
	sized regions from the cache, instead of having the kernel
	call flush_cache_page (see below) for each entry which may be
	modified.
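	One common efficiency trick, sketched here with hypothetical
	DCACHE_SIZE, flush_dcache_all() and flush_user_dcache_page()
	helpers, is to fall back to a full flush once the range
	exceeds the cache size::

		void flush_cache_range(struct vm_area_struct *vma,
				       unsigned long start,
				       unsigned long end)
		{
			unsigned long addr;

			/*
			 * Past the cache size, flushing the whole
			 * virtually indexed cache is cheaper than
			 * walking the range page by page.
			 */
			if (end - start >= DCACHE_SIZE) {
				flush_dcache_all();
				return;
			}
			for (addr = start; addr < end; addr += PAGE_SIZE)
				flush_user_dcache_page(vma->vm_mm, addr);
		}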
4) ``void flush_cache_page(struct vm_area_struct *vma, unsigned long addr, unsigned long pfn)``

	This time we need to remove a PAGE_SIZE sized range
	from the cache.  The 'vma' is the backing structure used by
	Linux to keep track of mmap'd regions for a process; the
	address space is available via vma->vm_mm.  Also, one may
	test (vma->vm_flags & VM_EXEC) to see if this region is
	executable (and thus could be in the 'instruction cache' in
	"Harvard" type cache layouts).

	The 'pfn' indicates the physical page frame (shift this value
	left by PAGE_SHIFT to get the physical address) that 'addr'
	translates to.  After running, there will be no entries in the
	cache for 'vma->vm_mm' for virtual address 'addr' which
	translates to 'pfn'.
``void flush_cache_vmap(unsigned long start, unsigned long end)``
``void flush_cache_vunmap(unsigned long start, unsigned long end)``

	Here in these two interfaces we are flushing a specific range
	of (kernel) virtual addresses from the cache.  After running,
	there will be no entries in the cache for the kernel address
	space for virtual addresses in the range 'start' to 'end-1'.

	The first of these two routines is invoked after the page
	table entries for the range are installed; the second is
	invoked before the page table entries for the range are
	deleted.
There exists another whole class of cpu cache issues which currently
require a whole different set of interfaces to handle properly.
The biggest problem is that of virtual aliasing in the data cache
of a processor.
Is your port susceptible to virtual aliasing in its D-cache?
Well, if your D-cache is virtually indexed, is larger in size than
PAGE_SIZE, and does not prevent multiple cache lines for the same
physical address from existing at once, you have this problem.

If your D-cache has this problem, first define asm/shmparam.h SHMLBA
properly; it should essentially be the size of your virtually
addressed D-cache (or, if the size is variable, the largest possible
size).  This setting will force the SYSv IPC layer to only allow user
processes to mmap shared memory at addresses which are a multiple of
this value.
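For instance, a hypothetical port with a 16KB virtually indexed
D-cache and 4KB pages might define, in asm/shmparam.h::

	/*
	 * Shared mappings must be aligned to the D-cache size so
	 * that every alias of a page lands on the same cache lines
	 * (the same "color").
	 */
	#define SHMLBA	(4 * PAGE_SIZE)	/* 16KB with 4KB pages */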
Next, you have to solve the D-cache aliasing issue for all
other cases.  Please keep in mind the fact that, for a given page
mapped into some user address space, there is always at least one
more mapping: that of the kernel in its linear mapping starting at
PAGE_OFFSET.  So immediately, once the first user maps a given
physical page into its address space, by implication the D-cache
aliasing problem has the potential to exist, since the kernel already
maps this page at its virtual address.
  ``void copy_user_page(void *to, void *from, unsigned long addr, struct page *page)``
  ``void clear_user_page(void *to, unsigned long addr, struct page *page)``

	These two routines store data in user anonymous or COW
	pages.  They allow a port to efficiently avoid D-cache alias
	issues between userspace and the kernel.

	For example, a port may temporarily map 'from' and 'to' to
	kernel virtual addresses during the copy.  The virtual address
	for these two pages is chosen in such a way that the kernel
	load/store instructions happen to virtual addresses which are
	of the same "color" as the user mapping of the page.  Sparc64,
	for example, uses this technique.

	The 'addr' parameter tells the virtual address where the
	user will ultimately have this page mapped, and the 'page'
	parameter gives a pointer to the struct page of the target.

	If D-cache aliasing is not an issue, these two routines may
	simply call memcpy/memset directly and do nothing more.
  ``void flush_dcache_folio(struct folio *folio)``

	This routine must be called when:

	  a) the kernel did write to a page that is in the page cache
	     and / or in high memory
	  b) the kernel is about to read from a page cache page and
	     user space shared/writable mappings of this page
	     potentially exist.

	.. note::

	      This routine need only be called for page cache pages
	      which can potentially ever be mapped into the address
	      space of a user process.  So for example, VFS layer code
	      handling vfs symlinks in the page cache need not call
	      this interface at all.

	The phrase "kernel writes to a page cache page" means,
	specifically, that the kernel executes store instructions that
	dirty data in that page at the kernel virtual mapping of that
	page.  It is important to flush here to handle D-cache
	aliasing, to make sure these kernel stores are visible to user
	space mappings of that page.

	If D-cache aliasing is not an issue, this routine may simply
	be defined as a nop on that architecture.
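	On a fully coherent (e.g. PIPT) architecture the nop can look
	like this (a sketch; the exact way a port declares the nop
	varies)::

		static inline void flush_dcache_folio(struct folio *folio)
		{
			/*
			 * Caches are physically indexed and tagged:
			 * kernel and user views of the page are always
			 * coherent, so there is nothing to flush.
			 */
		}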
	There is a bit set aside in folio->flags (PG_arch_1) as
	"architecture private".  The kernel guarantees that, for
	pagecache pages, it will clear this bit when such a page first
	enters the pagecache.  This allows the actual flush to be
	deferred, perhaps indefinitely, when no user process currently
	maps the page.

	The idea is, first at flush_dcache_folio() time, if
	folio_flush_mapping() returns a mapping, and mapping_mapped()
	on that mapping returns %false, just mark the architecture
	private page flag bit.  Later, in update_mmu_cache_range(), a
	check is made of this flag bit, and if set, the flush is done
	and the flag bit is cleared.
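	A condensed sketch of the deferring half of that scheme, in
	the style of the sparc64 implementation (the
	__flush_dcache_folio() helper is hypothetical)::

		void flush_dcache_folio(struct folio *folio)
		{
			struct address_space *mapping;

			mapping = folio_flush_mapping(folio);

			/*
			 * No user mappings yet: defer the flush and
			 * let update_mmu_cache_range() perform it when
			 * the first user mapping is established.
			 */
			if (mapping && !mapping_mapped(mapping)) {
				set_bit(PG_arch_1, &folio->flags);
				return;
			}

			__flush_dcache_folio(folio);
		}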
  ``void flush_anon_page(struct vm_area_struct *vma, struct page *page, unsigned long vmaddr)``

	When the kernel needs to access the contents of an anonymous
	page, it calls this function (currently only
	get_user_pages()).  Note: flush_dcache_folio() deliberately
	doesn't work for an anonymous page.  The default
	implementation is a nop (and should remain so for all coherent
	architectures).  For incoherent architectures, it should flush
	the cache of the page at vmaddr.
The final category of APIs is for I/O to deliberately aliased
address ranges inside the kernel, as set up via the vmap/vmalloc
API.  Anything in the kernel doing I/O to a vmap area must manually
manage coherency: flush the vmap range before starting the I/O and
invalidate it after the I/O returns.

  ``void flush_kernel_vmap_range(void *vaddr, int size)``

	Flushes the kernel cache for a given virtual address range in
	the vmap area.  This makes sure that any data the kernel
	modified in the vmap range is made visible to the physical
	page, so the area is safe to perform I/O on.

  ``void invalidate_kernel_vmap_range(void *vaddr, int size)``

	Invalidates the cache for a given virtual address range in
	the vmap area, which prevents the processor from making the
	cache stale by speculatively reading data while the I/O was
	occurring to the physical pages.
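A sketch of the resulting driver-side pattern (do_device_io() is a
hypothetical stand-in for the actual I/O)::

	static void vmap_io_example(void *vaddr, int size)
	{
		/*
		 * Push dirty cache lines out so the device sees the
		 * kernel's stores through the vmap alias.
		 */
		flush_kernel_vmap_range(vaddr, size);

		do_device_io(vaddr, size);

		/*
		 * Drop any lines the cpu speculatively loaded while
		 * the device was writing the physical pages.
		 */
		invalidate_kernel_vmap_range(vaddr, size);
	}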