memchr
======
This library provides heavily optimized routines for string search primitives.

[Build status](https://github.com/BurntSushi/memchr/actions)
[crates.io](https://crates.io/crates/memchr)

Dual-licensed under MIT or the [UNLICENSE](https://unlicense.org/).


### Documentation

[https://docs.rs/memchr](https://docs.rs/memchr)


### Overview

* The top-level module provides routines for searching for 1, 2 or 3 bytes
  in the forward or reverse direction. When searching for more than one byte,
  positions are considered a match if the byte at that position matches any
  of the bytes.
* The `memmem` sub-module provides forward and reverse substring search
  routines.

In all such cases, routines operate on `&[u8]` without regard to encoding. This
is exactly what you want when searching either UTF-8 or arbitrary bytes.

### Compiling without the standard library

memchr links to the standard library by default, but you can disable the
`std` feature if you want to use it in a `#![no_std]` crate:

```toml
[dependencies]
memchr = { version = "2", default-features = false }
```

On `x86_64` platforms, when the `std` feature is disabled, the SSE2 accelerated
implementations will be used. When `std` is enabled, AVX2 accelerated
implementations will be used if the CPU is determined to support AVX2 at
runtime.

SIMD accelerated routines are also available on the `wasm32` and `aarch64`
targets. The `std` feature is not required to use them.

When a SIMD version is not available, this crate falls back to
[SWAR](https://en.wikipedia.org/wiki/SWAR) techniques.

### Minimum Rust version policy

This crate's minimum supported `rustc` version is `1.61.0`.

The current policy is that the minimum Rust version required to use this crate
can be increased in minor version updates.
For example, if `crate 1.0` requires
Rust 1.20.0, then `crate 1.0.z` for all values of `z` will also require Rust
1.20.0 or newer. However, `crate 1.y` for `y > 0` may require a newer minimum
version of Rust.

In general, this crate will be conservative with respect to the minimum
supported version of Rust.


### Testing strategy

Given the complexity of the code in this crate, along with the pervasive use
of `unsafe`, this crate has an extensive testing strategy that combines
multiple approaches:

* Hand-written tests.
* Exhaustive-style testing meant to exercise all possible branching and offset
  calculations.
* Property based testing through
  [`quickcheck`](https://github.com/BurntSushi/quickcheck).
* Fuzz testing through [`cargo fuzz`](https://github.com/rust-fuzz/cargo-fuzz).
* A huge suite of benchmarks that are also run as tests. Benchmarks always
  confirm that the expected result occurs.

Improvements to the testing infrastructure are very welcome.


### Algorithms used

At the time of writing, this crate's implementation of substring search
chooses between a few different algorithms depending on the situation.

* For very small haystacks,
  [Rabin-Karp](https://en.wikipedia.org/wiki/Rabin%E2%80%93Karp_algorithm)
  is used to reduce latency. Rabin-Karp has very small overhead and can often
  complete before other searchers have even been constructed.
* For small needles, a variant of the
  ["Generic SIMD"](http://0x80.pl/articles/simd-strfind.html#algorithm-1-generic-simd)
  algorithm is used. Instead of using the first and last bytes of the needle,
  a heuristic based on a background distribution of byte frequencies selects
  which bytes to use.
* In all other cases,
  [Two-Way](https://en.wikipedia.org/wiki/Two-way_string-matching_algorithm)
  is used. If possible, a prefilter based on the "Generic SIMD" algorithm
  linked above is used to find candidates quickly.
  A dynamic heuristic is used
  to detect if the prefilter is ineffective, and if so, it is disabled.


### Why is the standard library's substring search so much slower?

We'll start by establishing what the difference in performance actually
is. There are two relevant benchmark classes to consider: `prebuilt` and
`oneshot`. The `prebuilt` benchmarks are designed to measure---to the extent
possible---search time only. That is, each benchmark first builds a searcher
and then tracks only the time spent _using_ the searcher:

```
$ rebar rank benchmarks/record/x86_64/2023-08-26.csv --intersection -e memchr/memmem/prebuilt -e std/memmem/prebuilt
Engine                       Version                   Geometric mean of speed ratios  Benchmark count
------                       -------                   ------------------------------  ---------------
rust/memchr/memmem/prebuilt  2.5.0                     1.03                            53
rust/std/memmem/prebuilt     1.73.0-nightly 180dffba1  6.50                            53
```

Conversely, the `oneshot` benchmark class measures the time it takes to both
build the searcher _and_ use it:

```
$ rebar rank benchmarks/record/x86_64/2023-08-26.csv --intersection -e memchr/memmem/oneshot -e std/memmem/oneshot
Engine                      Version                   Geometric mean of speed ratios  Benchmark count
------                      -------                   ------------------------------  ---------------
rust/memchr/memmem/oneshot  2.5.0                     1.04                            53
rust/std/memmem/oneshot     1.73.0-nightly 180dffba1  5.26                            53
```

**NOTE:** Replace `rebar rank` with `rebar cmp` in the above commands to
explore the specific benchmarks and their differences.

So in both cases, this crate is quite a bit faster over a broad sampling of
benchmarks, regardless of whether you measure only search time or search time
plus construction time. The difference is a little smaller when you include
construction time in your measurements.

These two different benchmark classes make for a nice segue into one reason
why the standard library's substring search can be slower: API design. In the
standard library, the only available APIs require you to re-construct the
searcher for every search. While you can benefit from building a searcher once
and iterating over all matches in a single string, you cannot reuse that
searcher to search other strings. This might come up when, for example,
searching a file one line at a time. You'll need to re-build the searcher for
every line searched, and this can [really matter][burntsushi-bstr-blog].

**NOTE:** The `prebuilt` benchmark for the standard library can't actually
avoid measuring searcher construction at some level, because there is no API
for it. Instead, the benchmark consists of building the searcher once and then
finding all matches in a single string via an iterator. This tends to
approximate a benchmark where searcher construction isn't measured, but it
isn't perfect. While this means the comparison is not strictly
apples-to-apples, it does reflect what is maximally possible with the standard
library, and thus reflects the best that one could do in a real world scenario.

While there is more to the story than just API design here, it's important to
point out that even if the standard library's substring search were a precise
clone of this crate internally, it would still be at a disadvantage in some
workloads because of its API. (The same also applies to C's standard library
`memmem` function. There is no way to amortize construction of the searcher.
You need to pay for it on every call.)

The other reason for the difference in performance is that
the standard library has trouble using SIMD. In particular, substring search
is implemented in the `core` library, where platform specific code generally
can't exist.
That's an issue because, in order to utilize SIMD beyond SSE2
while maintaining portable binaries, one needs to use [dynamic CPU feature
detection][dynamic-cpu], and that in turn requires platform specific code.
While there is [an RFC for enabling target feature detection in
`core`][core-feature], that capability doesn't exist yet.

The bottom line here is that `core`'s substring search implementation is
limited to making use of SSE2, but not AVX.

Still, this crate does accelerate substring search even when only SSE2 is
available. The standard library could therefore adopt the techniques in this
crate just for SSE2. The reason why that hasn't happened yet isn't totally
clear to me. It likely needs a champion to push it through, since the standard
library tends to be more conservative about such changes. With that said, the
standard library does use some [SSE2 acceleration on `x86-64`][std-sse2],
added in [this PR][std-sse2-pr]. However, at the time of writing, it is only
used for short needles and doesn't use the frequency based heuristics found in
this crate.

**NOTE:** Another thing worth mentioning is that the standard library's
substring search routine requires that both the needle and haystack have type
`&str`. Unless you can assume your data is valid UTF-8, building a `&str`
comes with the overhead of UTF-8 validation. This may in turn result in
overall slower searching, depending on your workload. In contrast, the
`memchr` crate permits both the needle and the haystack to have type `&[u8]`,
and a `&[u8]` can be created from a `&str` at zero cost. Therefore, the
substring search in this crate is strictly more flexible than what the
standard library provides.

[burntsushi-bstr-blog]: https://blog.burntsushi.net/bstr/#motivation-based-on-performance
[dynamic-cpu]: https://doc.rust-lang.org/std/arch/index.html#dynamic-cpu-feature-detection
[core-feature]: https://github.com/rust-lang/rfcs/pull/3469
[std-sse2]: https://github.com/rust-lang/rust/blob/bf9229a2e366b4c311f059014a4aa08af16de5d8/library/core/src/str/pattern.rs#L1719-L1857
[std-sse2-pr]: https://github.com/rust-lang/rust/pull/103779