/aosp_15_r20/external/antlr/runtime/C/src/ |
H A D | antlr3collections.c | 128 // All we have to do is create the hashtable tracking structure in antlr3HashTableNew() 212 /* Allow sparse tables, though we don't create them as such at present in antlr3HashFree() 222 /* Save next entry - we do not want to access memory in entry after we in antlr3HashFree() 236 /* Free the key memory - we know that we allocated this in antlr3HashFree() 246 entry = nextEntry; /* Load next pointer to see if we should free it */ in antlr3HashFree() 254 /* Now we can free the bucket memory in antlr3HashFree() 259 /* Now we free the memory for the table itself in antlr3HashFree() 281 /* First we need to know the hash of the provided key in antlr3HashRemoveI() 285 /* Knowing the hash, we can find the bucket in antlr3HashRemoveI() 289 /* Now, we traverse the entries in the bucket until in antlr3HashRemoveI() [all …]
|
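The antlr3HashFree() excerpt above walks each bucket's entry chain, saving the next pointer before freeing the current entry, then releases the key, the bucket array, and finally the table itself. A minimal C++ sketch of that teardown order, with illustrative types rather than the real ANTLR3 structures:

    #include <cstddef>
    #include <cstdlib>

    // Illustrative chained hash table; field names are assumptions, not the ANTLR3 API.
    struct Entry  { char *key; void *data; Entry *next; };
    struct Bucket { Entry *entries; };
    struct Table  { Bucket *buckets; size_t modulo; };

    void tableFree(Table *table) {
        for (size_t b = 0; b < table->modulo; ++b) {
            Entry *entry = table->buckets[b].entries;
            while (entry != nullptr) {
                Entry *nextEntry = entry->next;  // save the next entry before freeing this one
                std::free(entry->key);           // we allocated the key copy ourselves
                std::free(entry);
                entry = nextEntry;               // load next pointer to see if we should free it
            }
        }
        std::free(table->buckets);               // now we can free the bucket memory
        std::free(table);                        // and the memory for the table itself
    }

The order matters: each key and entry must be released before the bucket array, and the bucket array before the table header, or the walk would read freed memory.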
/aosp_15_r20/external/antlr/runtime/Cpp/include/ |
H A D | antlr3collections.inl | 138 /* Now we need to allocate the root node. This makes it easier 139 * to use the tree as we don't have to do anything special 144 /* Now we seed the root node with the index being the 145 * highest left most bit we want to test, which limits the 151 /* And as we have nothing in here yet, we set both child pointers 159 * we use calloc() to initialise it. 172 /* the nodes are all gone now, so we need only free the memory 189 * then by definition (as the bit index decreases as we descent the trie) 190 * we have reached a 'backward' pointer. A backward pointer means we 192 * and it must either be the key we are looking for, or if not then it [all …]
|
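The antlr3collections.inl excerpt describes the defining PATRICIA-trie invariant: the tested bit index strictly decreases on the way down, so the first child whose index fails to decrease is a "backward" pointer, and the node it reaches is the only possible match for the key. A minimal lookup sketch under those assumptions (node layout and names are illustrative, not the ANTLR3 C++ runtime):

    #include <cstdint>

    struct TrieNode {
        int            bitIndex;   // bit tested at this node; decreases as we descend
        std::uint32_t  key;
        TrieNode      *leftN;
        TrieNode      *rightN;     // the root initially points both children at itself
    };

    static bool bitSet(std::uint32_t key, int bit) { return (key >> bit) & 1u; }

    const TrieNode *trieGet(const TrieNode *root, std::uint32_t key) {
        const TrieNode *parent = root;
        const TrieNode *node   = root->leftN;
        // Descend while the bit index keeps decreasing; an index that does not
        // decrease means we just followed a backward pointer.
        while (node->bitIndex < parent->bitIndex) {
            parent = node;
            node   = bitSet(key, node->bitIndex) ? node->rightN : node->leftN;
        }
        // The backward pointer leads either to the key we are looking for, or,
        // if the keys differ, the key is simply not in the trie.
        return node->key == key ? node : nullptr;
    }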
/aosp_15_r20/external/swiftshader/third_party/llvm-16.0/llvm/lib/Target/X86/ |
H A D | X86SpeculativeLoadHardening.cpp | 143 // We mostly have one conditional branch, and in extremely rare cases have 235 // We have to insert the new block immediately after the current one as we in splitEdge() 236 // don't know what layout-successor relationships the successor has and we in splitEdge() 247 // we might have *broken* fallthrough and so need to inject a new in splitEdge() 257 // Update the unconditional branch now that we've added one. in splitEdge() 275 // If this is the only edge to the successor, we can just replace it in the in splitEdge() 276 // CFG. Otherwise we need to add a new entry in the CFG for the new in splitEdge() 324 /// FIXME: It's really frustrating that we have to do this, but SSA-form in MIR 325 /// isn't what you might expect. We may have multiple entries in PHI nodes for 326 /// a single predecessor. This makes CFG-updating extremely complex, so here we [all …]
|
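The splitEdge() excerpt describes inserting a fresh block between a predecessor and one successor so hardening code can be placed on that edge alone, then repairing branches, fallthrough, and the successor's incoming-edge bookkeeping. A generic CFG sketch of the same operation, independent of LLVM's MachineIR types (all names are illustrative):

    #include <algorithm>
    #include <vector>

    // Illustrative CFG block; not LLVM's MachineBasicBlock.
    struct Block {
        std::vector<Block *> preds;
        std::vector<Block *> succs;
    };

    // Insert newBB on the pred -> succ edge without disturbing pred's other successors.
    void splitEdge(Block *pred, Block *succ, Block *newBB) {
        std::replace(pred->succs.begin(), pred->succs.end(), succ, newBB);
        newBB->preds.push_back(pred);
        newBB->succs.push_back(succ);        // the new block falls through to the old successor
        std::replace(succ->preds.begin(), succ->preds.end(), pred, newBB);
        // In real MIR we would also insert or retarget the branch instructions and
        // update any PHI entries in succ that still name pred as the incoming block.
    }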
/aosp_15_r20/external/llvm/docs/tutorial/ |
H A D | LangImpl09.rst | 12 LLVM <index.html>`_" tutorial. In chapters 1 through 8, we've built a 19 source that the programmer wrote. In LLVM we generally use a format 23 The short summary of this chapter is that we'll go through the 27 Caveat: For now we can't debug via the JIT, so we'll need to compile 29 we'll make a few modifications to the running of the language and 30 how programs are compiled. This means that we'll have a source file 32 interactive JIT. It does involve a limitation that we can only 36 Here's the sample program we'll be compiling: 54 locations more difficult. In LLVM IR we keep the original source location 61 tutorial we're going to avoid optimization (as you'll see with one of the [all …]
|
/aosp_15_r20/packages/inputmethods/LatinIME/java/src/com/android/inputmethod/latin/inputlogic/ |
D | InputLogic.java | 75 // TODO : Remove this member when we can. 114 * @param latinIME the instance of the parent LatinIME. We should remove this when we can. 158 // so we try using some heuristics to find out about these and fix them. in startInput() 192 // If we had a composition in progress, we need to commit the word so that the in onOrientationChange() 213 // Normally this class just gets out of scope after the process ends, but in unit tests, we 280 // We still want to log a suggestion click. in onPickSuggestionManually() 293 // Manual pick affects the contents of the editor, so we take note of this. It's important in onPickSuggestionManually() 308 // TODO: We should not need the following branch. We should be able to take the same in onPickSuggestionManually() 309 // code path as for other kinds, use commitChosenWord, and do everything normally. We will in onPickSuggestionManually() 310 // however need to reset the suggestion strip right away, because we know we can't take in onPickSuggestionManually() [all …]
|
/aosp_15_r20/external/llvm/docs/ |
H A D | MergeFunctions.rst | 22 explains how we could combine equal functions correctly, keeping the module valid. 31 cover only common cases, and thus avoid cases where, after minor code changes, we 39 code fundamentals. In this article we assume the reader is familiar with 45 We will use such terms as 77 again and again, and yet you don't understand why we implemented it that way. 79 We hope that after this article the reader can easily debug and improve 98 Do we need to merge functions? The obvious answer is: yes, that's quite a possible 99 case, since usually we *do* have duplicates. And it would be good to get rid of 100 them. But how do we detect such duplicates? The idea is this: we split functions 101 into small bricks (parts), then we compare the number of bricks, and if it is equal, [all …]
|
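The MergeFunctions excerpt describes the comparison strategy: split each function into small "bricks", reject immediately when the brick counts differ, and only then compare brick by brick. A tiny sketch of that two-stage check (Brick is a placeholder for whatever per-instruction representation the pass really compares):

    #include <algorithm>
    #include <vector>

    // Placeholder for a small comparable piece of a function (an instruction,
    // its operands, ...); not the real MergeFunctions representation.
    struct Brick {
        int opcode;
        int operandHash;
        bool operator==(const Brick &o) const {
            return opcode == o.opcode && operandHash == o.operandHash;
        }
    };

    bool functionsLookEqual(const std::vector<Brick> &a, const std::vector<Brick> &b) {
        if (a.size() != b.size())
            return false;                                   // cheap reject: brick counts differ
        return std::equal(a.begin(), a.end(), b.begin());   // otherwise compare brick by brick
    }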
/aosp_15_r20/external/libxml2/result/ |
H A D | ent9 | 7 <p> WE need lot of garbage now to trigger the problem</p> 8 <p> WE need lot of garbage now to trigger the problem</p> 9 <p> WE need lot of garbage now to trigger the problem</p> 10 <p> WE need lot of garbage now to trigger the problem</p> 11 <p> WE need lot of garbage now to trigger the problem</p> 12 <p> WE need lot of garbage now to trigger the problem</p> 13 <p> WE need lot of garbage now to trigger the problem</p> 14 <p> WE need lot of garbage now to trigger the problem</p> 15 <p> WE need lot of garbage now to trigger the problem</p> 16 <p> WE need lot of garbage now to trigger the problem</p> [all …]
|
H A D | ent9.rdr | 11 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 16 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 21 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 26 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 31 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 36 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 41 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 46 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 51 2 3 #text 0 1 WE need lot of garbage now to trigger the problem 56 2 3 #text 0 1 WE need lot of garbage now to trigger the problem [all …]
|
/aosp_15_r20/external/libxml2/test/ |
H A D | ent9 | 6 <p> WE need lot of garbage now to trigger the problem</p> 7 <p> WE need lot of garbage now to trigger the problem</p> 8 <p> WE need lot of garbage now to trigger the problem</p> 9 <p> WE need lot of garbage now to trigger the problem</p> 10 <p> WE need lot of garbage now to trigger the problem</p> 11 <p> WE need lot of garbage now to trigger the problem</p> 12 <p> WE need lot of garbage now to trigger the problem</p> 13 <p> WE need lot of garbage now to trigger the problem</p> 14 <p> WE need lot of garbage now to trigger the problem</p> 15 <p> WE need lot of garbage now to trigger the problem</p> [all …]
|
/aosp_15_r20/external/cronet/third_party/libxml/fuzz/seed_corpus/ |
H A D | ef6238d1f01ecc4837c37d151e0073d64fa64021 | 6 <p> WE need lot of garbage now to trigger the problem</p> 7 <p> WE need lot of garbage now to trigger the problem</p> 8 <p> WE need lot of garbage now to trigger the problem</p> 9 <p> WE need lot of garbage now to trigger the problem</p> 10 <p> WE need lot of garbage now to trigger the problem</p> 11 <p> WE need lot of garbage now to trigger the problem</p> 12 <p> WE need lot of garbage now to trigger the problem</p> 13 <p> WE need lot of garbage now to trigger the problem</p> 14 <p> WE need lot of garbage now to trigger the problem</p> 15 <p> WE need lot of garbage now to trigger the problem</p> [all …]
|
/aosp_15_r20/external/libpcap/ |
H A D | configure.ac | 36 # LIBS: inherited from the environment; we add libraries required by 40 # we're finished doing configuration tests for the modules. 66 # or libpcap.pc, as, in all platforms on which we run, if a dynamic 107 # We require C99 or later. 124 # We only need a C++ compiler for Haiku; all code except for its 137 # We have to use different data types, because the results of 138 # a test are cached, so if we test for the size of a given type 140 # We trick autoconf by testing the size of a "void *" in C and a 171 dnl include <sys/ioccom.h>, and we were to drop support for older 174 dnl in "aclocal.m4" uses it, so we would still have to test for it [all …]
|
H A D | CMakeLists.txt | 3 # We need 3.12 or later, so that we can set policy CMP0074; see 12 # neither do we with autotools; don't do so with CMake, either, and 21 # WE KNOW WHAT WE'RE DOING, WE'RE DOING EVERYTHING THE WAY THAT NEWER 29 # We want find_file() and find_library() to honor {packagename}_ROOT, 39 # We want check_include_file() to honor CMAKE_REQUIRED_LIBRARIES; see 50 # We only need a C++ compiler for Haiku; all code except for its 53 # We do that by specifying just C in the project() call and, after 54 # that finishes, checking for Haiku and, if we're building for 56 # we don't require a C++ compiler on platforms other than Haiku. 58 # CMAKE_SYSTEM_NAME is set by project(), so we can't do this by [all …]
|
H A D | pcap-linux.c | 108 * We require TPACKET_V2 support. 114 /* check for memory mapped access availability. We assume every needed 175 * When capturing on all interfaces we use this as the buffer size. 191 int must_do_on_close; /* stuff we must do when we close */ 194 int ifindex; /* interface index of device we're bound to */ 196 int netdown; /* we got an ENETDOWN and haven't resolved it */ 198 char *mondevice; /* mac80211 monitor device we created */ 214 * Stuff to do when we close. 270 * With a pre-3.0 kernel, we cannot distinguish between packets with no 271 * VLAN tag and packets on VLAN 0, so we will mishandle some packets, and [all …]
|
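The pcap-linux.c excerpt notes that TPACKET_V2 support is required and that memory-mapped capture availability is checked up front. One way to probe for TPACKET_V2 on Linux is simply to try selecting that ring version on a packet socket; the sketch below shows that probe and is not necessarily how libpcap itself performs the check:

    // Linux-only sketch; requires <linux/if_packet.h>.
    #include <linux/if_packet.h>
    #include <sys/socket.h>

    // Returns true if the running kernel accepts TPACKET_V2 on this packet socket.
    static bool kernelHasTpacketV2(int packetSocketFd) {
        int version = TPACKET_V2;
        return setsockopt(packetSocketFd, SOL_PACKET, PACKET_VERSION,
                          &version, sizeof version) == 0;
    }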
/aosp_15_r20/external/swiftshader/third_party/llvm-10.0/llvm/lib/Target/X86/ |
H A D | X86SpeculativeLoadHardening.cpp | 142 // We mostly have one conditional branch, and in extremely rare cases have 233 // We have to insert the new block immediately after the current one as we in splitEdge() 234 // don't know what layout-successor relationships the successor has and we in splitEdge() 245 // we might have *broken* fallthrough and so need to inject a new in splitEdge() 255 // Update the unconditional branch now that we've added one. in splitEdge() 273 // If this is the only edge to the successor, we can just replace it in the in splitEdge() 274 // CFG. Otherwise we need to add a new entry in the CFG for the new in splitEdge() 322 /// FIXME: It's really frustrating that we have to do this, but SSA-form in MIR 323 /// isn't what you might expect. We may have multiple entries in PHI nodes for 324 /// a single predecessor. This makes CFG-updating extremely complex, so here we [all …]
|
/aosp_15_r20/external/libxml2/result/noent/ |
H A D | ent9 | 7 <p> WE need lot of garbage now to trigger the problem</p> 8 <p> WE need lot of garbage now to trigger the problem</p> 9 <p> WE need lot of garbage now to trigger the problem</p> 10 <p> WE need lot of garbage now to trigger the problem</p> 11 <p> WE need lot of garbage now to trigger the problem</p> 12 <p> WE need lot of garbage now to trigger the problem</p> 13 <p> WE need lot of garbage now to trigger the problem</p> 14 <p> WE need lot of garbage now to trigger the problem</p> 15 <p> WE need lot of garbage now to trigger the problem</p> 16 <p> WE need lot of garbage now to trigger the problem</p> [all …]
|
/aosp_15_r20/external/federated-compute/fcp/java_src/main/java/com/google/fcp/client/http/ |
H A D | HttpRequestHandleImpl.java | 130 // Until we have an actual connection, this is a no-op. 171 * upload request bodies. This also determines the amount of request body data we'll read from 173 * @param responseBodyChunkSizeBytes determines the amount of response body data we'll try to read 179 * to read before starting another round of decompression (in case we receive a compressed 180 * response body that we need to decompress on the fly). 240 // We mark the connection closed, to prevent any further callbacks to the native layer in close() 241 // from being issued. We do this *before* invoking the callback, just in case our in close() 243 // layer (we wouldn't want to enter an infinite loop) in close() 246 // We signal the closure/cancellation to the native layer right away, using the in close() 247 // appropriate callback for the state we were in. in close() [all …]
|
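The HttpRequestHandleImpl excerpt describes the close() ordering: mark the connection closed before invoking the closure/cancellation callback, so a callback that re-enters the handle cannot loop forever. A minimal C++ sketch of that ordering (the class shape and callback signature are assumptions; the real class is Java and talks to a native layer):

    #include <atomic>
    #include <functional>
    #include <utility>

    class RequestHandle {
     public:
        explicit RequestHandle(std::function<void(bool cancelled)> onClose)
            : onClose_(std::move(onClose)) {}

        void close(bool cancelled) {
            bool expected = false;
            // Mark ourselves closed *before* running the callback, so that a
            // re-entrant close() from inside the callback becomes a no-op
            // instead of an infinite loop.
            if (!closed_.compare_exchange_strong(expected, true))
                return;
            if (onClose_)
                onClose_(cancelled);   // signal closure/cancellation exactly once
        }

     private:
        std::atomic<bool> closed_{false};
        std::function<void(bool)> onClose_;
    };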
/aosp_15_r20/prebuilts/go/linux-x86/src/cmd/go/internal/modload/ |
D | edit.go | 51 // If we already know what go version we will end up on after the edit, and 54 // If we are changing from pruned to unpruned, then we MUST check the unpruned 58 // If we are changing from unpruned to pruned, then we would like to avoid 62 // Note that even if we don't find a go version in mustSelect, it is possible 63 // that we will switch from unpruned to pruned (but not the other way around!) 64 // after applying the edits if we find a dependency that requires a high 72 // We don't know exactly what go version we will end up at, but we know 99 // dependencies. To the extent possible, we want to preserve those implicit 100 // dependencies, so we need to treat everything in the build list as 105 // If we couldn't load the graph, we don't know what its requirements were [all …]
|
/aosp_15_r20/external/perfetto/docs/design-docs/ |
H A D | heapprofd-sampling.md | 15 probability p of being sampled. In theory we can think of each byte undergoing a 16 Bernoulli trial. The reason we use a random sampling approach, as opposed to 20 To scale the sampled bytes to the correct scale, we multiply by 1 / p, i.e. if 21 we sample a byte with probability 10%, then each byte sampled represents 10 29 1. We look at an allocation 32 chance of it being sampled at least once, we return the size of the 35 3. If the size of the allocation is smaller, then we compute the number of times 36 we would draw a sample if we sampled each byte with the given sampling rate: 38 * In practice we do this by keeping track of the arrival time of the next 39 sample. When an allocation happens, we subtract its size from the arrival [all …]
|
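The heapprofd design doc excerpt describes sampling each allocated byte with probability p, scaling every sample by 1/p, and keeping the arrival point of the next sample so that ordinary allocations need no per-byte random draw. The sketch below implements that arrival-time scheme; it omits the short-circuit for very large allocations mentioned in the doc, and heapprofd's real implementation in Perfetto differs in detail:

    #include <cstdint>
    #include <random>

    class ByteSampler {
     public:
        explicit ByteSampler(double bytesPerSample)
            : bytesPerSample_(bytesPerSample),
              dist_(1.0 / bytesPerSample),
              rng_(std::random_device{}()),
              nextSampleIn_(draw()) {}

        // Returns the scaled number of bytes this allocation accounts for
        // (0 if no sample point fell inside it).
        std::uint64_t onAllocation(std::uint64_t size) {
            std::uint64_t samples = 0;
            while (size >= nextSampleIn_) {   // one or more sample points land in this allocation
                size -= nextSampleIn_;
                nextSampleIn_ = draw();
                ++samples;
            }
            nextSampleIn_ -= size;            // the allocation moves the next sample point closer
            // Each sample stands in for 1/p = bytesPerSample bytes.
            return samples * static_cast<std::uint64_t>(bytesPerSample_);
        }

     private:
        std::uint64_t draw() {
            // Exponential inter-arrival distances give each byte an equal chance
            // of being the next sample point.
            return static_cast<std::uint64_t>(dist_(rng_)) + 1;
        }
        double bytesPerSample_;
        std::exponential_distribution<double> dist_;
        std::mt19937_64 rng_;
        std::uint64_t nextSampleIn_;
    };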
/aosp_15_r20/external/llvm/lib/CodeGen/GlobalISel/ |
H A D | RegBankSelect.cpp | 75 // We could preserve the information from these two analyses but in getAnalysisUsage() 86 // By default we assume we will have to repair something. in assignmentMatch() 112 assert(NewVRegs.begin() != NewVRegs.end() && "We should not have to repair"); in repairReg() 114 // Assume we are repairing a use and thus, the original reg will be in repairReg() 119 // If we repair a definition, swap the source and destination for in repairReg() 126 "We are about to create several defs for Dst"); in repairReg() 134 // Check if MI is legal. If not, we need to legalize all the in repairReg() 135 // instructions we are going to insert. in repairReg() 157 assert(MO.isReg() && "We should only repair register operand"); in getRepairCost() 162 // If MO does not have a register bank, we should have just been in getRepairCost() [all …]
|
/aosp_15_r20/external/mdnsresponder/mDNSCore/ |
H A D | mDNS.c | 53 …// to the compiler that the assignment is intentional, we have to just turn this warning off compl… 65 // Do we really need to define a macro for "if"? 150 // Depending on whether this is a multicast or unicast question we want to set either: in SetNextQueryTime() 191 // We allocate just one AuthEntity at a time because we need to be able in GetAuthEntity() 192 // free them all individually which normally happens when we parse /etc/hosts into in GetAuthEntity() 193 // AuthHash where we add the "new" entries and discard (free) the already added in GetAuthEntity() 194 // entries. If we allocate as chunks, we can't free them individually. in GetAuthEntity() 200 // If we still have no free records, recycle all the records we can. in GetAuthEntity() 201 …erating the entire auth is moderately expensive, so when we do it, we reclaim all the records we c… in GetAuthEntity() 285 …if (!ag) ag = GetAuthGroup(r, slot, &rr->resrec); // If we don't have a AuthGroup for this name, m… in InsertAuthRecord() [all …]
|
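The mDNS.c excerpt explains why AuthEntity records are allocated one at a time rather than in chunks (so each can be freed individually) and why the recycle pass runs only when the free list is empty (regenerating it is moderately expensive). A generic free-list sketch of that pattern; the types and the reclaim hook are illustrative, not the mDNSResponder structures:

    #include <functional>
    #include <utility>
    #include <vector>

    struct Record { Record *next = nullptr; /* payload ... */ };

    class RecordPool {
     public:
        // reclaim() should walk the owning table and return every record that is
        // no longer needed; it is invoked only when the free list runs dry.
        explicit RecordPool(std::function<std::vector<Record *>()> reclaim)
            : reclaim_(std::move(reclaim)) {}

        Record *get() {
            if (!freeList_)
                for (Record *r : reclaim_())   // recycle all the records we can, in one pass
                    put(r);
            if (freeList_) {
                Record *r = freeList_;
                freeList_ = r->next;
                return r;
            }
            // Allocate exactly one record at a time, so each one can later be
            // freed individually rather than only as part of a chunk.
            return new Record{};
        }

        void put(Record *r) { r->next = freeList_; freeList_ = r; }

     private:
        Record *freeList_ = nullptr;
        std::function<std::vector<Record *>()> reclaim_;
    };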
/aosp_15_r20/external/libcups/ |
H A D | config.h.in | 96 * Do we have domain socket support, and if so what is the default one? 131 * Do we have posix_spawn? 138 * Do we have ZLIB? 146 * Do we have PAM stuff? 156 * Do we have <shadow.h>? 163 * Do we have <crypt.h>? 186 * Do we have the long long type? 201 * Do we have the strtoll() function? 212 * Do we have the strXXX() functions? 221 * Do we have the geteuid() function? [all …]
|
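The config.h.in excerpt is a list of feature-test questions ("Do we have X?") that the configure script answers by defining macros. The sketch below shows how such macros are typically consumed by code that includes the generated config.h; the HAVE_* names follow the usual autoconf convention and are guesses here, not a statement about libcups internals:

    // Typical consumer of autoconf-style feature macros (illustrative).
    #include "config.h"

    #ifdef HAVE_CRYPT_H
    #  include <crypt.h>
    #endif

    #include <unistd.h>

    static unsigned currentUid(void) {
    #ifdef HAVE_GETEUID
        return (unsigned)geteuid();   // effective UID when the platform provides it
    #else
        return (unsigned)getuid();    // otherwise fall back to the real UID
    #endif
    }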
/aosp_15_r20/external/swiftshader/third_party/llvm-10.0/llvm/lib/Transforms/Scalar/ |
H A D | SimpleLoopUnswitch.cpp | 129 // If not an instruction with the same opcode, nothing we can do. in collectHomogenousInstGraphLoopInvariants() 145 assert(!isa<Constant>(Invariant) && "Why are we unswitching on a constant?"); in replaceLoopInvariantUses() 149 // Grab the use and walk past it so we can clobber it in the use list. in replaceLoopInvariantUses() 203 // When the loop exit is directly unswitched we just need to update the in rewritePHINodesForUnswitchedExitBlock() 204 // incoming basic block. We loop to handle weird cases with repeated in rewritePHINodesForUnswitchedExitBlock() 234 // removing each one. We have to do this weird loop manually so that we in rewritePHINodesForExitAndUnswitchedBlocks() 235 // create the same number of new incoming edges in the new PHI as we expect in rewritePHINodesForExitAndUnswitchedBlocks() 263 /// Because we've removed an exit from the loop, we may have changed the set of 269 // If the loop is already at the top level, we can't hoist it anywhere. in hoistLoopToNewParent() 291 // because it isn't in this loop we also need to update the primary loop map. in hoistLoopToNewParent() [all …]
|
/aosp_15_r20/external/rust/android-crates-io/crates/csv/src/ |
D | tutorial.rs | 43 In this section, we'll get you set up with a simple program that reads CSV data 48 We'll start by creating a new Cargo project: 92 // Import the standard library's I/O module so we can read from stdin. 102 // We will make this more friendly later! 110 Don't worry too much about what this code means; we'll dissect it in the next 117 Assuming that succeeds, let's try running our program. But first, we will need 118 some CSV data to play with! For that, we will use a random selection of 100 119 US cities, along with their population size and geographical coordinates. (We 140 throughout the examples in this tutorial. Therefore, we're going to spend a 204 a problem writing to stdout. In general, we will ignore the latter problem in [all …]
|
/aosp_15_r20/external/apache-commons-math/src/main/java/org/apache/commons/math3/ode/events/ |
H A D | FilterType.java | 95 // we are initializing the first point in selectTransformer() 103 // we are exactly at a root, we don't know if it is an increasing in selectTransformer() 104 // or a decreasing event, we remain in uninitialized state in selectTransformer() 109 // we have crossed the zero line on an ignored increasing event, in selectTransformer() 110 // we must change the transformer in selectTransformer() 113 // we are still in the same status in selectTransformer() 118 // we have crossed the zero line on an ignored increasing event, in selectTransformer() 119 // we must change the transformer in selectTransformer() 122 // we are still in the same status in selectTransformer() 127 // we have crossed the zero line on a triggered decreasing event, in selectTransformer() [all …]
|
/aosp_15_r20/external/libcups/xcode/ |
H A D | config.h | 98 * Do we have domain socket support, and if so what is the default one? 133 * Do we have posix_spawn? 140 * Do we have ZLIB? 148 * Do we have PAM stuff? 160 * Do we have <shadow.h>? 167 * Do we have <crypt.h>? 190 * Do we have the long long type? 205 * Do we have the strtoll() function? 216 * Do we have the strXXX() functions? 225 * Do we have the geteuid() function? [all …]
|